id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
14,695,796 | https://en.wikipedia.org/wiki/SPI1 | Transcription factor PU.1 is a protein that in humans is encoded by the SPI1 gene.
Function
This gene encodes an ETS-domain transcription factor that activates gene expression during myeloid and B-lymphoid cell development. The nuclear protein binds to a purine-rich sequence known as the PU-box found on enhancers of target genes, and regulates their expression in coordination with other transcription factors and cofactors. The protein can also regulate alternative splicing of target genes. Multiple transcript variants encoding different isoforms have been found for this gene.
The PU.1 transcription factor is essential for hematopoiesis and cell fate decisions. PU.1 can physically interact with a variety of regulatory factors such as SWI/SNF, TFIID, GATA-2, GATA-1 and c-Jun, and these protein-protein interactions can regulate PU.1-dependent cell fate decisions. PU.1 can modulate the expression of 3,000 genes in hematopoietic cells, including cytokines. It is expressed in monocytes, granulocytes, B and NK cells but is absent in T cells, reticulocytes and megakaryocytes. Its transcription is regulated by various mechanisms.
PU.1 is an essential regulator of the pro-fibrotic system. In fibrotic diseases, PU.1 expression is perturbed, resulting in upregulation of fibrosis-associated gene sets in fibroblasts. Disrupting PU.1 in fibrotic fibroblasts causes them to return from a pro-fibrotic to a resting state. PU.1 is highly expressed in extracellular matrix (ECM)-producing fibrotic fibroblasts, whereas it is downregulated in inflammatory/ECM-degrading and resting fibroblasts. The majority of PU.1-expressing cells in fibrotic conditions are fibroblasts, with a few infiltrating lymphocytes. PU.1 induces the polarization of resting and inflammatory fibroblasts into fibrotic fibroblasts.
Structure
The ETS domain is the DNA-binding module of PU.1 and other ETS-family transcription factors.
Interactions
SPI1 has been shown to interact with:
FUS,
GATA2,
IRF4, and
NONO.
References
Further reading
External links
Transcription factors | SPI1 | [
"Chemistry",
"Biology"
] | 506 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,695,887 | https://en.wikipedia.org/wiki/YY1 | YY1 (Yin Yang 1) is a transcriptional repressor protein in humans that is encoded by the YY1 gene.
Function
YY1 is a ubiquitously distributed transcription factor belonging to the GLI-Kruppel class of zinc finger proteins. The protein is involved in repressing and activating a diverse number of promoters. Hence, the YY in the name stands for "yin-yang." YY1 may direct histone deacetylases and histone acetyltransferases to a promoter in order to activate or repress the promoter, thus implicating histone modification in the function of YY1. YY1 promotes enhancer-promoter chromatin loops by forming dimers and promoting DNA interactions. Its dysregulation disrupts enhancer-promoter loops and gene expression.
Clinical significance
YY1 heterozygous deletions, missense, and nonsense mutations cause Gabriele-DeVries syndrome (GADEVS), an autosomal dominant neurodevelopmental disorder characterized by intellectual disability, dysmorphic facial features, feeding problems, intrauterine growth restriction, variable cognitive impairment, behavioral problems and other congenital malformations. A website is available in order to collect and share clinical information between clinicians and the families of affected individuals.
Interactions
YY1 has been shown to interact with:
ATF6,
EP300
FKBP3
HDAC3
Histone deacetylase 2
Myc
NOTCH1
RYBP and
SAP30
Serine—tRNA ligase
References
Further reading
External links
Transcription factors | YY1 | [
"Chemistry",
"Biology"
] | 333 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,696,512 | https://en.wikipedia.org/wiki/Glycine%20receptor%2C%20alpha%201 | Glycine receptor subunit alpha-1 is a protein that in humans is encoded by the GLRA1 gene.
Function
The inhibitory glycine receptor mediates postsynaptic inhibition in the spinal cord and other regions of the central nervous system. It is a pentameric receptor composed of alpha and beta subunits; the GLRA1 gene encodes the alpha-1 subunit, while the beta subunit is encoded by the GLRB gene.
Clinical significance
Mutations in the gene have been associated with hyperekplexia, a neurologic syndrome associated with an exaggerated startle reaction.
See also
Glycine receptor
Stiff person syndrome
Hyperekplexia
References
Further reading
External links
Ion channels | Glycine receptor, alpha 1 | [
"Chemistry"
] | 128 | [
"Neurochemistry",
"Ion channels"
] |
14,697,459 | https://en.wikipedia.org/wiki/Eshkol-Wachman%20movement%20notation | Eshkol-Wachman movement notation is a notation system for recording movement on paper or computer screen. The system was created in Israel by dance theorist Noa Eshkol and Avraham Wachman, a professor of architecture at the Technion. The system is used in many fields, including dance, physical therapy, animal behavior and early diagnosis of autism.
The Movement Notation Society, located in Holon, Israel, is the official organization devoted to Eshkol-Wachman movement notation.
Overview
Eshkol-Wachman movement notation is a system to record movement on paper or computer screen, developed by choreographer Noa Eshkol (daughter of Levi Eshkol) and architect Abraham Wachman. It was originally developed for dance to enable choreographers to write a dance down on paper that dancers could later reconstruct in its entirety, much as composers write a musical score that musicians can later play.
In comparison to most dance notation systems, Eshkol-Wachman movement notation was intended to notate any manner of movement, not only dance. As such, it is not limited to particular dance styles or even to the human form. It has been used to analyze animal behaviour as well as dance (Golani 1976).
Eshkol-Wachman movement notation treats the body as a sort of stick figure. The body is divided at its skeletal joints, and each pair of joints defines a line segment (a "limb"). For example, the foot is a limb bounded by the ankle and the end of the toe.
The relationship of those segments in three-dimensional space is described using a spherical coordinate system. If one end of a line segment is held in a fixed position, that point is the center of a sphere whose radius is the length of the line segment. Positions of the free end of the segment can be defined by two coordinate values on the surface of that sphere, analogous to latitude and longitude on a globe.
Limb positions are written somewhat like fractions, with the vertical number written over the horizontal number. The horizontal component (the lower) is read first. These two numbers are enclosed in brackets or parentheses to indicate whether the position is being described relative to an adjacent limb or to external reference points, such as a stage.
Eshkol-Wachman scores are written on grids, where each horizontal row represents the position and movement of a single limb, and each vertical column represents a unit of time. Movements are shown as transitions between initial and end coordinates.
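To make the grid layout concrete, the following minimal Python sketch models a score as rows of limbs and columns of time units, with each cell holding a position written as a vertical and a horizontal coordinate in 45-degree units. The class and field names (Position, Score, relative_to_limb) are illustrative inventions, not official EWMN terminology.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of an EWMN-style score grid: each row is a limb, each
# column a unit of time, and each cell holds either a notated position (two
# coordinates in 45-degree units) or nothing (the limb is carried passively).
# Names and fields here are hypothetical, not official EWMN terms.

@dataclass
class Position:
    horizontal: int          # 0..7, multiples of 45 degrees around the horizontal circle
    vertical: int            # multiples of 45 degrees above/below the horizontal plane
    relative_to_limb: bool   # True -> written in brackets (relative to an adjacent limb),
                             # False -> parentheses (relative to external reference points)

@dataclass
class Score:
    limbs: list[str]                          # row labels, e.g. ["right forearm"]
    columns: int                              # number of time units
    cells: dict[tuple[str, int], Position]    # (limb, time) -> notated position

    def position_at(self, limb: str, t: int) -> Optional[Position]:
        return self.cells.get((limb, t))

# A two-column fragment: the forearm moves from front-horizontal to raised 45 degrees.
score = Score(
    limbs=["right forearm"],
    columns=2,
    cells={
        ("right forearm", 0): Position(horizontal=0, vertical=0, relative_to_limb=False),
        ("right forearm", 1): Position(horizontal=0, vertical=1, relative_to_limb=False),
    },
)
print(score.position_at("right forearm", 1))
```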
History
Noa Eshkol (1924–2007) and Abraham Wachman (1931–2010) created the Eshkol-Wachman Movement Notation (EWMN) for recording movement. The original book presenting the system was published by Weidenfeld & Nicolson in 1958. EWMN is a movement notation, not a dance notation. Its user can therefore write down any form of human or animal movement without being limited to any particular style (classical ballet, for example). It gives the notator the freedom to use this system wherever movement occurs.
EWMN offers a new and original way of thinking about, observing and analyzing movement. Eshkol was a revolutionary thinker herself, and she believed that movement notation could open many new doors in fields where movement is involved. Using EWMN, she composed five dance suites, all of them to be performed without music. (When performed without music, the audience and the dancers are forced to focus on how movement by itself can evoke emotions and set the mood of a choreographed dance.) When asked to address her viewpoint, Eshkol used the example that someone does not swell their chest to express strength; instead, the action of swelling the chest causes the feeling of strength. (This is somewhat parallel to the James-Lange theory of emotion.) Eshkol stated that by analyzing movement we might begin to understand how one movement evokes a certain emotion while another movement produces an entirely different feeling. The use of movement notation can also lead to the discovery of new laws of composition in particular dance styles, similar to those found in music or other types of art where aesthetic rules are implemented.
Structure: basic elements
The following are the basic concepts of EWMN:
The body
To establish one general form that will stand conceptually for all bodies, an abstract body, similar to a 'stick figure' image, is proposed: a 'man without qualities'. Each limb is reduced to its longitudinal axis – an imaginary straight line of unchanging length. A limb, in EW, is considered to be any part of the body that lies between two adjacent joints, or between a joint and a free extremity.
Law of 'light' and 'heavy' limbs
When a person walks, the legs move, but the rest of the body (i.e. the torso, the arms, and the head) is carried along by the movement of the legs. EWMN labels this phenomenon "the law of light and heavy limbs". The structure of the body is dealt with as a branching linkage. The base is conceived as the 'heaviest' segment of the body. When a 'heavy' limb moves it carries all adjacent 'lighter' limbs passively along.
When standing upright, the feet, considered as the base of the body, are the heaviest limb. The legs are lighter (than the base), the torso lighter than the legs, etc.
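A minimal sketch of how the law of light and heavy limbs could be modeled in code: the body is a tree rooted at the heaviest segment, and actively moving any limb passively carries every lighter limb attached to it. The Limb class, its single-angle orientation, and the limb names are simplifying assumptions for illustration only.

```python
# Minimal sketch of the "law of light and heavy limbs": the body is a branching
# linkage rooted at the heaviest segment (the base). Actively moving a limb also
# moves every lighter limb attached distally to it. The class and limb names are
# illustrative, not part of EWMN itself.

class Limb:
    def __init__(self, name, orientation=0):
        self.name = name
        self.orientation = orientation  # simplified: one angle, in 45-degree units
        self.children = []              # lighter limbs carried by this one

    def attach(self, child):
        self.children.append(child)
        return child

    def rotate(self, units):
        """Actively rotate this limb; all lighter limbs are carried passively."""
        self.orientation = (self.orientation + units) % 8
        for child in self.children:
            child.rotate(units)

base = Limb("feet")                 # heaviest segment when standing upright
legs = base.attach(Limb("legs"))
torso = legs.attach(Limb("torso"))
arm = torso.attach(Limb("arm"))

legs.rotate(2)                      # moving the legs carries the torso and arm along
print(arm.orientation)              # 2 -> the arm was carried passively
```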
Manuscript page
EWMN is written, not drawn. Movements are written on a horizontally ruled notation page (resembling a spreadsheet) which represents the body. Vertical lines divide the page into columns, denoting units of time. The symbols for movements are written in order, from left to right. The rows follow a standard (default) distribution of the limb groups.
The set-up of the notation page in EW is very flexible. It allows the user to divide the body into as many (or as few) parts as necessary to adequately define the movement to be notated. Movements written in EWMN can be set to music; however, music is not required, since EWMN focuses on the recording of movement alone.
System of reference (SoR)
EWMN describes movement using a geometrical model that allows the user to observe and notate movement in an objective way, free of verbal ambiguity and emotional attachment. The notation utilizes a spherical system of coordinates, similar to latitude and longitude on a globe. Since the movements of a single axis of constant length, free to move about one fixed end, are all enclosed by a sphere, the free end will always describe a curved path on the surface of this sphere. Every limb in the body can be regarded as such an axis.
Constructing the SoR: One direction on the horizontal plane of the sphere is selected as the starting position for all measurements. This direction is labeled zero (0). By measuring off intervals of 45 degrees, eight positions are obtained. Four vertical planes, perpendicular to the horizontal circle, intersect it.
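The following sketch shows one way the 45-degree spherical coordinates could be mapped to ordinary 3D unit vectors. The conventions chosen here (horizontal position 0 pointing "front" along +x, counter-clockwise ordering, and the vertical value measured as elevation from the horizontal plane) are assumptions made for illustration and are not prescribed by the source.

```python
import math

def ewmn_to_unit_vector(horizontal_units: int, vertical_units: int) -> tuple[float, float, float]:
    """Map EWMN-style coordinates (in 45-degree units) to a 3D unit vector.

    Assumptions for illustration only: horizontal position 0 is the +x ("front")
    direction, positions increase counter-clockwise, and the vertical value
    measures elevation from the horizontal plane (+2 = straight up).
    """
    azimuth = math.radians(45 * horizontal_units)
    elevation = math.radians(45 * vertical_units)
    x = math.cos(elevation) * math.cos(azimuth)
    y = math.cos(elevation) * math.sin(azimuth)
    z = math.sin(elevation)
    return (x, y, z)

# The eight horizontal positions obtained by measuring off 45-degree intervals:
for h in range(8):
    print(h, ewmn_to_unit_vector(h, 0))
```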
Positions and movement
The position of a limb is defined by identifying it with the coordinates of the SoR. Movements of limbs are also defined, oriented and measured in relation to the SoR.
To document transitions between static positions, the system takes into consideration the type of movement, the amount of movement, and the spatial orientation and sense (clockwise or anticlockwise) of the movement.
Types of movement
Three types of movement are defined: rotatory movement, in which the limb rotates around its own axis without changing its place in space (turning a door knob, for example); plane movement, in which the limb travels the shortest path between any two positions on the SoR (as in the "jumping jacks" exercise); and conical movement, which can be seen in the waist when doing the hula hoop.
Applications
The flexibility and utility of EWMN allows it to be applied in a wide variety of fields. It has been used to record movements and forms of the hands and fingers in sign language; in the composition of dances, and the recording of folk dances; it has been used in the fields of medicine, the Feldenkrais Method and sports. The notation has also been used to record the courting behavior of jackals, and other ethological research. It was used in the field of graphic and kinetic visual art, and a computer system has been written to plot any movement that can be recorded in EWMN. The notation can easily lend itself to applications in the fields of robotics, animation or motion picture. The system was successfully used to detect the very first movement patterns which are a precursor to the development of Autism. The research carried out by Prof. Philip Teitelbaum and Osnat Teitelbaum at the University of Florida was based entirely on the use of EW to study infant movements. It shows that specific movement patterns appearing in the first few months of life can be a reliable predictor of the later development of Autism and Asperger's Syndrome.
See also
Dance in Israel
Culture in Israel
Israeli inventions and discoveries
References
Further reading
Eshkol, N. 1971. The Hand Book. Tel Aviv: The Movement Notation Society.
Eshkol, N. 1980. 50 Lessons By Dr. Moshe Feldenkrais. Tel Aviv: The Movement Notation Society.
Eshkol, N. 1990. Angles and Angels. Tel Aviv: The Movement Notation Society.
Eshkol, N. & Wachmann (sic), A. 1958. Movement Notation. London: Weidenfeld & Nicolson.
Golani, I. 1976. Homeostatic motor processes in mammalian interactions: a choreography of display. In: P.P.G. Bateson & P.H. Klopfer (eds.), Perspectives in Ethology, Volume 2, pp. 69–134. New York: Plenum Press.
Hutchinson Guest, A. 1984. Dance Notation: The Process of Recording Movement on Paper. London: Dance Books.
Hutchinson Guest, A. 1989. Choreo-Graphics: A Comparison of Dance Notation Systems from the Fifteenth Century to the Present. New York: Gordon and Breach.
Eshkol, N.; Wachman, A. Movement notation. London: Weidenfeld & Nicolson; 1958.
Eshkol, N.; Melvin, P., Michl, J., Von Foerster, H., Wachman, A. Notation of movement. USA: Biological Computer Laboratory, Dept. of Electrical Engineering, University of Illinois; 1970.
Eshkol, N. Movement Notation Survey. Israel: The Movement Notation Society; 1973.
Hoyman, Annelis S. Eshkol-Wachman Movement Notation. USA: Urbana, Illinois; 1984.
Hutchinson-Guest, A. Dance Notation. The process of recording movement on paper. London: Dance Books; 1984, 108–114.
Eshkol, N.; Harries, J. G., EWMN Part I. Israel: The Movement Notation Society; 2001.
Composition in EWMN
Eshkol, N. Right Angled Curves (Dance suite). Israel: The Movement Notation Society & Tel Aviv University; 1975.
Eshkol, N. Diminishing Series (dance suite). Israel: The Movement Notation Society & Tel Aviv University; 1978.
Eshkol, N. Rubaiyat (dance suite). Israel: The Movement Notation Society & Tel Aviv University; 1979.
Eshkol, N. Angles and Angels (dance suite). Israel: The Movement Notation Society & Tel Aviv University; 1990.
Cohen, E.; Hetz, A. Study and Studio. Israel: Jerusalem Rubin Academy of Music and Dance, 1993.
Cohen, E.; Breitbart, O. D'muyot (Wandering Figures). Jerusalem: The Rubin Academy of Music and Dance; 2001.
Sapir, T.; Reshef-Armony, S. Birds. Holon: The Movement Notation Society; 2005.
Sapir, T.; Al-Dor, N. Moving Landscape. Holon: The Movement Notation Society; 2007.
Classical forms of dance
Eshkol, N.; Nul, R. Classical ballet. Israel: Israel Music Institute; 1968.
Eshkol, N. Tomlinson's Gavot. Israel: Tel Aviv University; 1985.
Composition and graphic-kinetic art
Harries, J.G. Shapes of movement. Israel: The Movement Notation Society; 1969.
Harries, J.G. Language of shape and movement. Israel: The Movement Notation Society & Tel Aviv University; 1983.
Harries, J.G. Symmetry and Notation. Israel: Tel Aviv University; 1985.
Physical education
Arad, M.; Sonnenfeld, M., Eshkol, N. Physical training. Israel: Israel Music Institute; 1969.
Sonnenfeld, M.; Shoshani, M., Eshkol, N. Twenty-Five lessons by Dr. Moshe Feldenkrais. Israel: The Movement Notation Society; 1981.
Eshkol, N. Twenty-Five lessons by Dr. Moshe Feldenkrais. (Second edition). Israel: The Movement Notation Society; 1976.
Eshkol, N. et al. 50 lessons by Dr. M. Feldenkrais. Israel: The Movement Notation Society; 1980.
Eshkol, N. et al. 50 Lessons by Dr. M. Feldenkrais. Second Edition. Israel: The Movement Notation Society; 1989.
Studies in animal behavior
Golani, I. The golden jackal. Israel: The Movement Notation Society; 1969.
Notated record of studies in animal behavior carried out by Dr. Ilan Golani at the Zoology Department, Tel Aviv University.
Zeidel, S. In the steps of the horses. Israel: Institute of Human Ecology; 1990.
Folk dances
Shoshani, M.; Zeidel, S., Eshkol, N. Dances of Israel. Israel: Israel Music Institute; 1970.
Eshkol, N.; Harries J.G., Zeidel, S., Shoshani, M. The hand book. Israel: The Movement Notation Society; 1972.
Eshkol, N.; Zeidel, S., Sapir, T., Shoshani, M. The Yemenite dance. Israel: The Movement Notation Society; 1971.
Eshkol, N.; Bone, O., Harries, J.G., Kopit, Z., Nul, R., Sella, R., Shoshani, M. Debka. Israel: The Movement Notation Society & Tel Aviv University; 1974.
Eshkol, N.; Zeidel, S. In the steps of the Hora. Israel: The Movement Notation Society & Tel Aviv University; 1986.
Zeidel, S. Ethnic dance: variations in six. Israel: The Movement Notation Society; 1987.
Education: teaching EWMN
Eshkol, N.; Seidel, S., Sapir, T., Nul, R., Harries, J.G., Shoshani, M., Sella, R. Moving Writing Reading. Israel: The Movement Notation Society & Tel Aviv University; 1973.
Sapir, T.; Eshkol, N. Hanukka Notebook. Israel: The Movement Notation Society & Tel Aviv University; 1987.
Jones, D.; Cohen, E., Lin, H., Segev, R., Hermon, S. Dance and movement. Jerusalem: Ministry of Education and Culture; 1990.
Eshkol, N. A Children's Work. Israel: The Movement Notation Society, 1997.
Cohen, E. Movement and Eshkol-Wachman Movement Notation. Jerusalem: Ministry of Education and Culture; 1999.
Comparative analysis of movement notations
Eshkol, N.; Shoshani, M., Dagan, M. Movement notations (Part I). Israel: The Movement Notation Society & Tel Aviv University; 1979.
Eshkol, N.; Shoshani, M. Movement Notation (Part Two). Israel: The Movement Notation Society & Tel Aviv University; 1982.
A comparative study of Labanotation and Eshkol-Wachman Movement Notation.
Hutchinson-Guest, A. Choreo Graphics. New York: Gordon and Breach; 1989.
Eshkol, N.; Shoshani, M., Harries, J. G. Tavim Leriqud – CMDN. Israel: The Movement Notation Society & Tel Aviv University; 1991.
Hebrew translation of the English textbook of a Chinese dance notation method.
Sign language
Cohen, E.; Namir, L., Schlesinger, I. M. A new dictionary of sign language. The Hague, Paris: Mouton; 1977.
A dictionary of sign language.
Eshkol, N.; Harries J.G., Zeidel, S., Shoshani, M. The hand book. Israel: The Movement Notation Society; 1972.
Martial arts
Eshkol, N.; Harries, J.G., Sella, R., Sapir, T. The quest for T'ai Chi Chuan. Israel: The Movement Notation Society & Tel Aviv University; 1986.
An EW reader and study of Cheng's short form of this martial art.
Eshkol, N.; Sapir, T., Sella, R., Harries, J.G., Shoshani, M. The quest for T'ai Chi Chuan. Israel: The Movement Notation Society & Tel Aviv University; 1988.
Second and expanded edition. Includes three styles of the solo exercise of this martial art form.
Appel, A. Karate. Israel: Minimol Publisher, 1990.
General
Yanai, Z. Notation for the liberation of movement. Journal of IBM. 34–35; 1974.
A general article about EWMN.
Yanai, Z. Notacion para la liberacion de movimiento. "Ariel" Revista Trimestral de Artes y Letras de Israel. 31:114–130; 1974.
Harries, J. G. A proposed notation for visual fine art. Leonardo. 8:295–300; 1975.
Yanai, Z. Eine Schrift für freie Bewegung. "Ariel" Berichte zur Kunst und Bildung in Israel. 22:114–130; 1975.
Yanai, Z. Un systeme de notation pour la liberation de mouvement. "Ariel" Revue trimestrielle des arts et lettres en Israël. 33:114–130; 1975.
Kleinmann, S. Movement notation systems: An Introduction. Quest monograph XXIII. Winter Issue, January: 33–56; 1975.
Harries, J.G.; Richmond, G. A language of movement. New Dance. 22:14–17; 1982.
General article about EWMN.
Yanai, Z. Notation for the liberation of movement. Contact Quarterly. 82(7):7–15; 1982.
Drewes, H. Transformationen: Bewegung im Notation und digitaler Verarbeitung. PhD Dissertation. Die Blaue Eule: Essen; 2003.
Composition and graphic-kinetic art
Harries, J. G. A proposed notation for visual fine art. Leonardo. 8:295–300; 1975.
Exposition of the use of EW notation in visual art composition.
Harries, J. G. A proposed notation for visual fine art. Visual Art Mathematics and Computers. Malina, F. J., Ed. New York: Pergamon Press. 69–74; 1978.
Harries, J.G. Personal computers and notated visual art. Leonardo. 14(4):299–310; 1981.
Article on the combination of EW and computer technology in the composition and production of visual art.
Harries, J.G. Symmetry and notation: regularity and symmetry in notated computer graphics. Computer and Math with Applications (GB), Hargittai, Ed. New York: Pergamon Press. 12b(1–2): 303–314; 1986.
Harries, J.G. Symmetry in the Movements of T'ai Chi Chuan. Computers, Mathematics and Applications. 17(406):827–835; 1989.
Harries, J. G. Reflections on Rotations. Symmetry: Culture and Science. 8(3–4):115–332; 1997.
Animal behavior
Golani, I. Homeostatic motor processes in mammalian interaction: A Choreography of Display. Perspectives in Ethology. 2: 1976.
Ganor, I.; Golani, I. Coordination and integration in the hindleg step cycle of the rat: Kinematic Synergies. Brain Research 164; 1980.
Moran, G.; Fentress, J.C., Golani, I. A description of the relational patterns of movement during ritualized fighting in wolves. Animal Behavior. 29:1146–1165; 1981.
Pellis, S. M. A description of social play by the Australian magpie gymnorhina tibicen based on Eshkol-Wachman notation. Bird Behaviour. 3:61–79; 1981.
Pellis, S. M. An analysis of courtship and mating in the Cape Barren goose Cereopsis novaehollandiae latham based on Eshkol-Wachman movement notation. Bird Behaviour. 4:30–41; 1982.
Pellis, S. M. Development of head and foot coordination in the Australian Magpie gymnorhina tibicen, and the function of play. Bird Behaviour. 4: 57–62; 1983.
Pellis, S. M. What is "fixed" in a fixed action pattern? A problem of methodology. Bird Behaviour. 6: 10–15; 1985.
Whishaw, I. Q.; Kolb, B. The mating movements of male decorticate rats: evidence for subcortically generated movements by the male but regulation of approaches by the female. Behavioural Brain Research. 17: 171–191; 1985.
Pellis, S. M.; Officer, R. C. E. An analysis of some predatory behaviour patterns in four species of carnivorous marsupials (Dasyuridae), with comparative notes on the eutherian cat Felis catus. Ethology. 75: 177–196; 1987.
Pellis, S. M.; Pellis, V. C., Chesire, R. M., Rowland, N. E., Teitelbaum, P. Abnormal gait sequence in the locomotion released by atropine in catecholamine deficient akinetic rats. Proc. of the National Academy of Sciences. 84: 8750–8753; 1987.
Yaniv, Y.; Golani, I. Superiority and inferiority: A morphological analysis of free and stimulus bound behavior in honey badger (Mellivora capensis) interactions. Ethology. 74:89–116; 1987.
Eilam, D.; Golani, I. The ontogeny of exploratory behavior in the house rat (Rattus rattus): The mobility gradient. Developmental Psychobiology. 21(7):679–710; 1988.
Pellis, S. M.; O'Brien, D. P., Pellis, V. C., Teitelbaum, P., Wolgin, D. L., Kennedy, S. Escalation of feline predation along a gradient from avoidance through "play" to killing. Behavioral Neuroscience. 102:760–777; 1988.
Faulkes, Z. Sand crab digging: The neuroethology and evolution of a "new" behavior. B.Sc., University of Lethbridge; 1988.
Pellis, S. M. Fighting: the problem of selecting appropriate behavior patterns. Blanchard, R. J.; Brain, P. F., Blanchard, D. C., Parmigiani, S., Eds. Ethoexperimental Approaches to the Study of Behavior. 361–374; 1989.
Teitelbaum, P.; Pellis, S. M., DeVietti, T. L. Disintegration into stereotypy induced by drugs or brain damage: A micro-descriptive behavioral analysis. Cooper, S.J.; Dourish, C. T., Eds. Neurobiology of Behavioral Stereotypy. 169–199; 1990.
Whishaw, I. Q.; Pellis, S. M. The structure of skilled forelimb reaching in the rat: A proximally driven stereotyped movement with a single rotatory component. Behavioural Brain Research. 41: 49–59; 1990.
Whishaw, I. Q.; Pellis, S. M., Gorny, B. P., Pellis, V. C. The impairments in reaching and the movements of compensation in rats with motor cortex lesions: A videorecording and movement notation analysis. Behavioural Brain Research. 42: 77–91; 1991.
Golani, I. A Mobility Gradient in the Organization of Vertebrate Movement: The Perception of Movement Through Symbolic Language. Behavioral and Brain Sciences. 15(2): 249–308; 1992.
Whishaw, I. Q.; Pellis, S. M., Gorny, B. P. Skilled reaching in rats and humans: Evidence for parallel development or homology. Behavioural Brain Research. 47: 59–70; 1992.
Whishaw, I. Q.; Dringenberg, H. C., Pellis, S. M. Forelimb use in free feeding by rats: Motor cortex aids limb and digit positioning. Behavioural Brain Research. 48: 113–125; 1992.
Whishaw, I. Q.; Pellis, S. M., Gorny, B. P. Medial frontal cortex lesions impair the aiming component of rat reaching. Behavioural Brain Research. 50: 93–104; 1992.
Whishaw, I. Q.; Pellis, S. M., Pellis, V. C. A behavioral study of the contributions of cells and fibers of passage in the red nucleus of the rat to postural righting, skilled movements, and learning. Behavioural Brain Research. 52: 29–44; 1992.
Whishaw, I. Q.; Pellis, S. M., Gorny, B., Kolb, B., Tetzlaff, W. Proximal and distal impairments in rat forelimb use in reaching following pyramidal tract lesions. Behavioural Brain Research. 56: 59–76; 1993.
Whishaw, I. Q.; Gorny, B., Tran-Nguyen, L. T. L., Castañeda, E., Miklyaeva, E. I., Pellis, S. M. Doing two things at once: Impairments in movement and posture underlie the adult skilled reaching deficit of neonatally dopamine-depleted rats. Behavioural Brain Research. 61: 65–77; 1994.
Field, E. F.; Whishaw, I. Q., Pellis, S. M. An analysis of sex differences in the movement patterns used during the food wrenching and dodging paradigm. Journal of Comparative Psychology. 110: 298–306; 1996.
Ivanco, T. L.; Pellis, S. M., Whishaw, I. Q. Skilled movements in prey catching and in reaching by rats (Rattus norvegicus) and opossums (Monodelphis domestica): Relations to anatomical differences in motor systems. Behavioural Brain Research. 79: 163–182; 1996.
Pellis, S. M. Righting and the modular organization of motor programs. Ossenkopp, K.P.; Kavaliers, M., Sanberg, P.R., Eds. Measuring Movement and Locomotion: From Invertebrates to Humans. 115–133; 1996.
Faulkes, Z.; Paul, D. H. Digging in sand crabs (Decapoda, Anomura, Hippoidea): Interleg coordination. Journal of Experimental Biology. 200: 793–805; 1997.
Field, E. F.; Whishaw, I. Q., Pellis, S. M. The organization of sex-typical patterns of defense during food protection in the rat: The role of the opponent's sex. Aggressive Behavior. 23: 197–214; 1997.
Field, E. F.; Whishaw, I. Q., Pellis, S. M. A kinematic analysis of sex-typical movement patterns used during evasive dodging to protect a food item: The role of gonadal androgens. Behavioral Neuroscience. 111: 808–815; 1997.
Iwaniuk, A. N.; Nelson, J. E., Ivanco, T. L., Pellis, S. M., Whishaw, I. Q. Reaching, grasping and manipulation of food objects by two tree kangaroo species, Dendrolagus lumholtzi and Dendrolagus matschiei. Australian Journal of Zoology. 46: 235–248; 1998.
Whishaw, I. Q.; Woodward, N. C., Miklyaeva, E., Pellis, S. M. Analysis of limb use by control rats and unilateral DA-depleted rats in the Montoya staircase test: Movements, impairments and compensatory strategies. Behavioural Brain Research. 89: 167–177; 1998.
Whishaw, I. Q.; Sarna, J., Pellis, S. M. Evidence for rodent-common and species-typical limb and digit use in eating derived from a comparative analysis of ten rodent species. Behavioural Brain Research. 96: 79–91; 1998.
Iwaniuk, A. N.; Whishaw, I. Q. How skilled are the skilled limb movements of the raccoon (Procyon lotor)? Behavioural Brain Research. 99: 35–44; 1999.
Pasztor, T. J.; Smith, L. K., MacDonald, N. L., Michener, G. R., Pellis, S. M. Sexual and aggressive play fighting of sibling Richardson's ground squirrels. Aggressive Behavior. 27: 323–337; 2001.
Whishaw, I. Q.; Gorny, B., Foroud, A., Kleim, J. A. Long-Evans and Sprague-Dawley rats have similar skilled reaching success and limb representations in motor cortex but different movements: some cautionary insights into the selection of rat strains for neurobiological motor research. Behavioural Brain Research. 145: 221–232; 2003.
Gharbawie, O. A.; Whishaw, P. A., Whishaw, I. Q. The topography of three-dimensional exploration: a new quantification of vertical and horizontal exploration, postural support, and exploratory bouts in the cylinder test. Behavioural Brain Research. 151: 125–135; 2004.
Neurological syndromes
Cohen, E.; Sekeles, C. Integrated treatment of Down's Syndrome children through music and movement. Proceedings of the Fourth International Conference of DACI. 2: 1988.
Teitelbaum, P.; Maurer, R.G., Fryman, J., Teitelbaum, O.B., Vilensky, J., Creedon, M.P. Dimensions of disintegration in the stereotyped locomotion characteristic in Parkinsonism. American Psychological Association. 1994.
Teitelbaum, P.; Behrman, A., Fryman, J., Cauraugh, J., Maurer, R.G., Teitelbaum, O.B. Principles for the design of walking robots derived from the study of people with Parkinson's Disease. Paper submitted to the Conference on Simulation of Animal Behavior, Brighton, England, August 8–12, 1994.
Teitelbaum, P.; Teitelbaum, O.B., Nye, J., Fryman, J., Maurer, R.G. Movement analysis in infancy may be useful for early diagnosis of Autism. Proc. National Academy of Science. 95: 13982–13987; 1998.
Whishaw, I. Q.; Suchowersky, O., Davis, L., Sarna, J., Metz, G. A., Pellis, S. M. A qualitative analysis of reaching-to-grasp movements in human Parkinson's disease (PD) reveals impairments in coordination and rotational movements of pronation and supination: a comparison to deficits in animal models of PD. Behavioural Brain Research. 133: 165–176; 2002.
Teaching EWMN
Cohen, E. On teaching Eshkol-Wachman Movement Notation to academic students. Proceedings of the Second International Congress on Movement Notation, Hong Kong. 1990.
Shoshani, M. An analysis of the use of Eshkol-Wachman Movement Notation for dance composition. Dance Study Dep., Surrey University, UK. 1994.
External links
Eshkol-Wachman Movement Notation – Eshkol-Wachman Movement Notation Center
Movement and notation – Home of EW Notator software
Sharon Lockhart | Noa Eshkol Exhibition at The Jewish Museum (New York)
Dance notation
Ethology
Autism screening and assessment tools | Eshkol-Wachman movement notation | [
"Biology"
] | 7,101 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
14,698,621 | https://en.wikipedia.org/wiki/Neuropilin%201 | Neuropilin-1 is a protein that in humans is encoded by the NRP1 gene. In humans, the neuropilin 1 gene is located at 10p11.22. This is one of two human neuropilins.
Function
NRP1 is a membrane-bound coreceptor to a tyrosine kinase receptor for both vascular endothelial growth factor (for example, VEGFA) and semaphorin (for example, SEMA3A) family members. NRP1 plays versatile roles in angiogenesis, axon guidance, cell survival, migration, and invasion.[supplied by OMIM]
Interactions
Neuropilin 1 has been shown to interact with Vascular endothelial growth factor A.
Role in COVID-19
Research has shown that neuropilin 1 facilitates entry of SARS-CoV-2 into cells, making it a possible target for future antiviral drugs.
Implication in cancer
Neuropilin 1 has been implicated in the vascularization and progression of cancers. NRP1 expression has been shown to be elevated in a number of human patient tumor samples, including brain, prostate, breast, colon, and lung cancers and NRP1 levels are positively correlated with metastasis.
In prostate cancer NRP1 has been demonstrated to be an androgen-suppressed gene, upregulated during the adaptive response of prostate tumors to androgen-targeted therapies and a prognostic biomarker of clinical metastasis and lethal PCa. In vitro and in vivo mouse studies have shown membrane bound NRP1 to be proangiogenic and that NRP1 promotes the vascularization of prostate tumors.
Elevated NRP1 expression is also correlated with the invasiveness of non-small cell lung cancer both in vitro and in vivo.
Target for cancer therapies
As a co-receptor for VEGF, NRP1 is a potential target for cancer therapies. A synthetic peptide, EG3287, was generated in 2005 and has been shown to block NRP1 activity. EG3287 has been shown to induce apoptosis in tumor cells with elevated NRP1 expression. A patent for EG3287 was filed in 2002 and approved in 2003. As of 2015 there were no clinical trials ongoing or completed for EG3287 as a human cancer therapy.
Soluble NRP1 has the opposite effect of membrane bound NRP1 and has anti-VEGF activity. In vivo mouse studies have shown that injections of sNRP-1 inhibits progression of acute myeloid leukemia in mice.
References
Further reading
Proteins | Neuropilin 1 | [
"Chemistry"
] | 542 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,698,685 | https://en.wikipedia.org/wiki/Metal-induced%20gap%20states | In bulk semiconductor band structure calculations, it is assumed that the crystal lattice (which features a periodic potential due to the atomic structure) of the material is infinite. When the finite size of a crystal is taken into account, the wavefunctions of electrons are altered and states that are forbidden within the bulk semiconductor gap are allowed at the surface. Similarly, when a metal is deposited onto a semiconductor (by thermal evaporation, for example), the wavefunction of an electron in the semiconductor must match that of an electron in the metal at the interface. Since the Fermi levels of the two materials must match at the interface, there exist gap states that decay deeper into the semiconductor.
Band-bending at the metal-semiconductor interface
As mentioned above, when a metal is deposited onto a semiconductor, even when the metal film is as thin as a single atomic layer, the Fermi levels of the metal and semiconductor must match. This pins the Fermi level in the semiconductor to a position in the bulk gap. The resulting band bending at the interface depends on the work function of the metal (high or low) and on whether the semiconductor is n-type or p-type.
Volker Heine was one of the first to estimate the length of the tail end of metal electron states extending into the semiconductor's energy gap. He calculated the variation in surface state energy by matching wavefunctions of a free-electron metal to gapped states in an undoped semiconductor, showing that in most cases the position of the surface state energy is quite stable regardless of the metal used.
Branching point
It is somewhat crude to suggest that the metal-induced gap states (MIGS) are tail ends of metal states that leak into the semiconductor. Since the mid-gap states do exist within some depth of the semiconductor, they must be a mixture (a Fourier series) of valence and conduction band states from the bulk. The resulting positions of these states, as calculated by C. Tejedor, F. Flores and E. Louis, and J. Tersoff, must be closer to either the valence or conduction band, thus acting as acceptor or donor dopants, respectively. The point that divides these two types of MIGS is called the branching point, E_B. Tersoff argued that
E_B = (Ē_v + E_c^ind)/2, with Ē_v = E_v − Δ_so/3,
where Δ_so is the spin-orbit splitting of E_v at the Γ point and E_c^ind is the indirect conduction band minimum.
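As a numerical illustration of the branching-point expression above, the short sketch below evaluates it with commonly quoted silicon band parameters. The parameter values are illustrative assumptions and are not taken from the article.

```python
# Numerical sketch of the branching point E_B = (E_v_bar + E_c_ind) / 2,
# with E_v_bar = E_v - Delta_so / 3, as given above. The silicon parameters
# (energies in eV, measured from the valence band maximum E_v = 0) are
# illustrative textbook values, not figures from the article.

def branching_point(E_v: float, delta_so: float, E_c_indirect: float) -> float:
    E_v_bar = E_v - delta_so / 3.0      # spin-orbit-averaged valence band edge
    return 0.5 * (E_v_bar + E_c_indirect)

E_v = 0.0          # valence band maximum (reference energy)
delta_so = 0.044   # spin-orbit splitting of Si at the Gamma point, ~0.044 eV (assumed)
E_c_ind = 1.12     # indirect conduction band minimum of Si, ~1.12 eV (assumed)

E_B = branching_point(E_v, delta_so, E_c_ind)
print(f"Branching point above E_v: {E_B:.3f} eV")   # ~0.55 eV
```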
Metal–semiconductor contact point barrier height
In order for the Fermi levels to match at the interface, there must be charge transfer between the metal and semiconductor. The amount of charge transfer was formulated by Linus Pauling and later revised to be
δq = 0.16|X_M − X_S| + 0.035|X_M − X_S|²,
where X_M and X_S are the electronegativities of the metal and semiconductor, respectively. The charge transfer produces a dipole at the interface and thus a potential barrier called the Schottky barrier height. In the same derivation of the branching point mentioned above, Tersoff derived the barrier height in terms of a parameter that is adjustable for the specific metal and depends mostly on its electronegativity X_M. Tersoff showed that the experimentally measured barrier heights fit his theoretical model for Au in contact with 10 common semiconductors, including Si, Ge, GaP, and GaAs.
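A small worked example of the charge-transfer estimate above, using tabulated Pauling electronegativities for gold and silicon; the specific values are assumptions chosen only to illustrate the arithmetic.

```python
# Worked example of the revised (Pauling-style) charge-transfer estimate quoted
# above: delta_q = 0.16*|X_M - X_S| + 0.035*|X_M - X_S|**2. The electronegativity
# values below (Pauling scale) are common tabulated numbers used for illustration.

def charge_transfer(x_metal: float, x_semiconductor: float) -> float:
    dx = abs(x_metal - x_semiconductor)
    return 0.16 * dx + 0.035 * dx ** 2

X_Au = 2.54   # gold (assumed tabulated value)
X_Si = 1.90   # silicon (assumed tabulated value)

print(f"delta_q for Au on Si: {charge_transfer(X_Au, X_Si):.3f} electron charges")
# 0.16*0.64 + 0.035*0.64**2 ≈ 0.117
```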
Another derivation of the contact barrier height in terms of experimentally measurable parameters was worked out by Federico Garcia-Moliner and Fernando Flores, who considered the density of states and dipole contributions more rigorously. In their treatment the barrier height depends on:
the charge densities of both materials,
the density of surface states,
the work function of the metal,
the sum of dipole contributions (including dipole corrections to the jellium model),
the semiconductor gap, and
Ef – Ev in the semiconductor.
The barrier height can thus be calculated by theoretically deriving or experimentally measuring each parameter. Garcia-Moliner and Flores also discuss two limits:
the Bardeen limit, where the high density of interface states pins the Fermi level at that of the semiconductor regardless of the metal's work function; and
the Schottky limit, where the barrier height varies strongly with the characteristics of the metal, including the particular lattice structure as accounted for in the dipole contributions.
Applications
When a bias voltage is applied across the interface of an n-type semiconductor and a metal, the Fermi level in the semiconductor is shifted with respect to the metal's and the band bending decreases. In effect, the capacitance across the depletion layer in the semiconductor is bias-voltage dependent and goes as C ∝ (V_bi − V)^(−1/2), where V_bi is the built-in potential and V is the applied bias. This makes the metal/semiconductor junction useful in varactor devices used frequently in electronics.
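The sketch below illustrates the (V_bi − V)^(−1/2) bias dependence of the depletion capacitance using the standard abrupt-junction expression; the doping level, permittivity, and built-in potential are assumed, silicon-like values, not figures from the article.

```python
import math

# Sketch of the bias-dependent depletion capacitance of a metal / n-type
# semiconductor junction: C per unit area = sqrt(q*eps*N_d / (2*(V_bi - V))),
# i.e. C goes as (V_bi - V)**(-1/2). The parameter values are illustrative
# silicon-like assumptions.

q = 1.602e-19          # elementary charge (C)
eps0 = 8.854e-12       # vacuum permittivity (F/m)
eps = 11.7 * eps0      # silicon permittivity (assumed)
N_d = 1e22             # donor density: 1e16 cm^-3 expressed in m^-3 (assumed)
V_bi = 0.7             # built-in potential in volts (assumed)

def depletion_capacitance(V_bias: float) -> float:
    """Capacitance per unit area (F/m^2) for an applied bias V_bias < V_bi."""
    return math.sqrt(q * eps * N_d / (2.0 * (V_bi - V_bias)))

for V in (-2.0, -1.0, 0.0, 0.3):
    print(f"V = {V:+.1f} V  ->  C = {depletion_capacitance(V):.3e} F/m^2")
```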
References
Electronic band structures
Semiconductor structures | Metal-induced gap states | [
"Physics",
"Chemistry",
"Materials_science"
] | 912 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
14,698,871 | https://en.wikipedia.org/wiki/Geography%20of%20kendo | Kendo originated in Japan, but is today practiced worldwide.
The size and depth of kendo skill varies widely from country to country. Some countries have few kendo practitioners, while Japan has several million.
Generally, kendo has stronger traditions in countries with strong historical ties to Japan, like Korea and Taiwan, as well as countries with large Japanese immigrant communities such as the United States, Canada and Brazil. While the term kendo is used all over the world, the term Kumdo is used in Korea.
International organisations
The following international organisations administer, manage, promote, or have an interest in the development of kendo.
The International Kendo Federation (FIK) is the international federation of most national and regional kendo organisations. The FIK was established in 1970 to provide a link between the All Japan Kendo Federation and the developing international kendo community. Seventeen national or regional federations were the founding affiliates. The number of affiliated and recognised organisations has increased over the years to 57 (as of May 2015).
FIK affiliated national and regional kendo organisations are listed on the FIK website.
The FIK has conducted the World Kendo Championships, every three years since it was established. The international competition is contested by individual and team representatives of the FIK affiliates.
Dai Nippon Butoku Kai (DNBK) was established in Kyoto, Japan, in 1953. Today, the new axiom of DNBK stresses preservation of classical martial arts tradition and the promotion of education and community service through martial arts training.
International Martial Arts Federation (IMAF) was established in Kyoto, Japan, in 1952. Among the objectives of IMAF are the expansion of interest in Japanese martial arts, the establishment of communication, friendship, understanding and harmony among member chapters, the development of the minds and bodies of members, and the promotion of global understanding and personal growth.
Zen Nihon Sōgō-Budō Renmei (ZNSBR) The organization known as the Zen Nihon Sōgō-Budō Renmei (All Japan [Comprehensive] Budō Federation or SōBuRen) has as its mission the preservation of all Japanese martial arts. This includes (pre-Meiji) Nihon koryū bujutsu/bugei and (post-Meiji) gendai budō. It operates under the protection of the Japanese Imperial House as an entity that protects Japanese culture. The organization was created by Suzuki Masafumi-kanchō in 1969. After the passing of its founder, Ishikawa Takashi-kanchō, Toyama-ryū iaidō hanshi 9th dan and jūkendo hanshi 9th dan, replaced him as President of the Zen Nihon Sōgō-Budō Renmei.
National and regional organisations
Many national and regional organisations manage and promote kendo, some are affiliated to international kendo organisations, while other organisations are independent of international kendo organisations.
Asia
Hong Kong Kendo Association (香港劍道協會)
Malaysia Kendo Association (MKA)
Singapore Kendo Club
Kendo India Federation (KIF)
Indonesian Kendo Association
Shinbukan: Iran Kendo and Iaido Association
Israel Kendo and Budo Federation (IKBF) The federation represents Kendo, Iaido and Jodo in Israel.
All Japan Kendo Federation (AJKF or ZNKR) The AJKF was founded in 1952, immediately following the restoration of Japanese independence after the Second World War and the subsequent lifting of the ban on 'martial arts' in Japan.
Korea Kumdo Association
Chinese Kendo Network (中国剑道网) This group represents Kendo, Iaido, and Jodo in Mainland China.
Macau SAR Kendo Associations Union (澳門特區劍道連盟)
Republic of China (Chinese Taipei) Kendo Federation (中華民國劍道協會)
Kuwait Kendo Dojo
Thailand Kendo Club
Brunei Kendo Alliance
Africa
South Africa and Tunisia The South Africa Kendo Federation (SAKF) and the Tunisian Kendo League (TKL) are the only two federations recognized by the International Kendo Federation (FIK) in Africa, but there is also kendo activity in Mozambique, Madagascar and Malawi.
Malawi Kendo was introduced to Malawi in 1992 when a Japanese volunteer took on a group of local children as his students. The Kendo Association of Malawi was formed in 1999 and has seen significant growth in recent years. The Kendo Association of Malawi works closely with the Embassy of Japan in Malawi to promote kendo as a sport and to encourage cultural exchange and interaction between the peoples of the two nations. The majority of the Kendo Associations activities take place in Blantyre and the Blantyre Youth Center. On 31 March 2024 The Kendo Association established a Kendo club in the capital city, Lilongwe, at Kamuzu Institute for Sports, and now weekly training sessions are held in both cities. Furthermore, at least two local tournaments are organized each year, which are patronized by the general public and the Japanese community in Malawi.
Europe
The European Kendo Federation (EKF), to which 35 countries/regions belong, is a member of the International Kendo Federation (FIK); it also promotes jodo and iaido. European kendo championships have been held since 1974. Championships are held every year that there is no world championship. Some national organisations are affiliated to the EKF, while other organisations are independent of the EKF.
Armenia Kendo is promoted by the National Kendo Federation of Armenia.
Austria The Austrian Kendo Association was founded in 1985.
Belgium Kendo is promoted by All Belgium Kendo Federation (A.B.K.F.).
Bulgaria Kendo is promoted by Bulgarian Kendo Federation (B.K.F).
Croatia Kendo is promoted by Hrvatski Kendo Savez (H.K.S.).
Czech Republic Kendo is promoted by Czech Kendo Federation (C.K.F).
Denmark: Two organisations uphold different approaches to kendo.
Danish Kendo Federation (DKF) which is affiliated with the International Kendo Federation (FIK).
Danish Kendo Society which is independent of European Kendo Federation (EKF) and other international kendo organisations.
Estonia Kendo is promoted by Eesti Kendoliit.
Finland Kendo is promoted Finnish Kendo Association.
France After the end of World War II, many masters of kendo visited France and introduced kendo in the 1950s. The first French kendo championship was held in 1959. Comité National de Kendo
Georgia (country) Kendo is promoted by the Georgian Kendo Association.
Germany Deutscher Kendobund e.V
Greece Kendo is promoted by the Hellenic Kendo Iaido Naginata Federation.
Hungary Kendo is promoted by Hungarian Kendo Federation (HKF).
Italy: Kendo is promoted by two organisations:
Confederazione Italiana Kendo (CIK) which is affiliated with European Kendo Federation (EKF) and, thus, with International Kendo Federation (FIK)
Federazione Italiana Kendo(FIK) which is affiliated with Zen Nihon Sogo-Bugo Renmei
Ireland Kendo is promoted by Kendo Na h-Éireann, Irish Kendo Federation.
Latvia Kendo is promoted by Latvian Kendo Federation (LKF).
Lithuania Kendo is promoted by Lithuanian Kendo Association (LKA).
Luxembourg Shobukai Kendo Luxembourg (SKL)
Malta Kendo is promoted by the Maltese Kendo Federation Members of the European Kendo Federation.
Moldova Kendo is promoted by Moldovan Kendo Federation.
Montenegro Kendo is promoted by Montenegrin Kendo Federation.
Netherlands Kendo is promoted by Dutch Kendo Renmei (NKR).
North Macedonia Kendo is promoted by Macedonian Kendo Iaido Federation (MKIF)
Norway Kendo is promoted by the Norwegian Kendo Federation which will soon be joining The Norwegian Martial arts Federation (Norges Kampsport forbund).
Poland Kendo is promoted by Polish Kendo Federation.
Portugal Kendo is promoted by Associação Portuguesa de Kendo (APK), which is affiliated with the European Kendo Federation and the International Kendo Federation.
Romania Kendo is promoted by the Kendo Department of the Contact Martial Arts Federation in Romania.
Russia Moscow Kendo Association
Serbia Kendo is promoted by the Serbian Kendo Federation in Serbia.
Slovakia Kendo is promoted by the Slovenská kendo federácia in Slovak republic.
Spain Kendo is promoted by the Real Federacion Española de Judo y Deportes Asociados.
Sweden: Two organisations uphold different approaches to kendo.
Swedish Kendo Federation (SKF) which is affiliated with the International Kendo Federation (FIK).
Tokugawa Kendo Federation which is independent of European Kendo Federation (EKF) and other international kendo organisations.
Switzerland The Swiss Kendo & Iaido SJV/ASJ was founded in 1967.
Turkey Kendo is promoted by the Turkish Kendo Association in Turkey, Ankara Kendo Iaido Association in Ankara.
Ukraine Kendo is promoted by the Ukrainian Kendo Federation.
United Kingdom: Several organisations promote Kendo in the UK.
British Kendo Association, which is affiliated with the International Kendo Federation.
British Kendo Renmei which is independent of European Kendo Federation (EKF) and other international kendo organisations.
Oceania
Australian Kendo Renmei (AKR) grew from the beginning of kendo in Australia in the 1960s, is a founding member of the FIK (formerly the IKF) and remains affiliated. Australian Kendo Championships were first held in the 1970s and, with a few gaps in the early years, have been held in Australia annually since.
The AKR also partners with Australian University Sport Inc., to conduct an annual national kendo championship for university students. In 2014, 76 University student kendo players represented nine universities from all over Australia.
The New Zealand Kendo Federation.
Pacific Ocean
Hawaii Kendo Federation (HKF) The Hawaii Budo Kyokai was established in 1947 (even before the All Japan Kendo Federation) and was renamed Hawaii Kendo Federation in 1955. The HKF consists of 16 dojo practicing kendo and iaido on the islands of Oahu, Hawaii, Kauai and Maui. The HKF is an affiliate organisation of the FIK.
North America
Canadian Kendo Federation (CKF) consists of over 55 member clubs. Clubs belong to CKF directly, although they may also belong to a regional federation. Such federations exist in British Columbia and Ontario.
Federación Mexicana de Kendo (FMK) Mexican Kendo Federation, consists of 13 regional associations.
In the United States, several organisations promote kendo:
All United States Kendo Federation (AUSKF) consists of 14 regional members. The regional members comprise a minimum of three kendo clubs, each with a minimum of 50 members. Individual people or clubs cannot be members of the AUSKF.
Many universities also host collegiate clubs that promote kendo among student communities.
The University of California, Los Angeles hosts an annual intercollegiate Yuhihai tournament for undergraduate students to compete.
Hawaii Kendo Federation (HKF) operates separately from the All United States Kendo Federation.
Puerto Rico: Is represented by the Federación Puertorriqueña de Kendo e Iaido. Puerto Rico has sports autonomy so the federation does not fall under the AUSKF.
South America
In South America, the practice of Kendo has existed since the arrival of Japanese immigrants as early as 1908. Since then and with Brazil as its centre, kendo has spread over South America. Now kendo practitioners and kendo federations exist in many countries in South America such as: Brazil, Argentina, Venezuela, Colombia, Ecuador, Peru, Uruguay, Aruba and Chile.
At the December 2006 meeting of the International Kendo Federation (FIK) held in Taiwan, the South American Kendo Confederation (CSK) was discussed and voted upon, as a result the Confederation was admitted as an FIK affiliate.
Argentina, Aruba, Chile, Brazil and Venezuela are affiliated with the FIK.
The next Latin American Kendo Championship was supposed to be held in May 2020 in São Paulo, Brazil, but was suspended until further notice due to the COVID-19 pandemic.
Federación Argentina de Kendo (FAK) Kendo federation associated to the International Kendo Federation in Argentina.
Federación de Kendo de la República Argentina (FKRA)
Brazilian Kendo Federation
Kendo in Chile started in 1990. The Chilean Kendo Federation was founded in 1997 and became a member of the FIK in 2003. It consists of about 250 kenshi, is part of the CSK (South American Kendo Confederation), and holds Kendo championships annually.
Kendo in Ecuador started in 1999 in the facilities of the Japanese School of Quito.
Kendo y Iaido en Uruguay
Uruguayan Association of Kendō and Iaidō (AUKI).
Ken Zen Dojo de Venezuela was founded in 1990 under the auspice of Ken Zen Dojo of New York.
Central America
Kendo in Guatemala started in 1992. The Guatemalan Kendo Association was founded in 1992. It consists of about 150 kenshi, is part of the CLAK (Latin American Kendo Confederation), and holds Kendo championships annually.
References
Kendo organizations
Human geography | Geography of kendo | [
"Environmental_science"
] | 2,722 | [
"Environmental social science",
"Human geography"
] |
14,699,013 | https://en.wikipedia.org/wiki/Dealkalization | Dealkalization is a process of surface modification applicable to glasses containing alkali ions, wherein a thin surface layer is created that has a lower concentration of alkali ions than is present in the underlying, bulk glass. This change in surface composition commonly alters the observed properties of the surface, most notably enhancing corrosion resistance.
Many commercial glass products such as containers are made of soda-lime glass, and therefore have a substantial percentage of sodium ions in their internal structure. Since sodium is an alkali element, its selective removal from the surface results in a dealkalized surface. A classic example of dealkalization is the treatment of glass containers, where a special process is used to create a dealkalized inside surface that is more resistant to interactions with liquid products put inside the container. However, the term dealkalization may also be generally applied to any process where a glass surface forms a thin surface layer that is depleted of alkali ions relative to the bulk. A common example is the initial stages of glass corrosion or weathering, where alkali ions are leached from the surface region by interactions with water, forming a dealkalized surface layer.
A dealkalized surface may have either no alkali remaining or may just have less than the bulk. In silicate glasses, dealkalized surfaces are also often considered "silica-rich" since the selective removal of alkali ions can be thought to leave behind a surface composed primarily of silica (SiO2). To be precise, dealkalization does not generally involve the outright removal of alkali from the glass, but rather its replacement with protons (H+) or hydronium ions (H3O+) in the structure through the process of ion-exchange.
Treatment of glass containers
Motivation
For glass containers, the goal of surface dealkalization is to render the inside surface of the container more resistant to interactions with liquid products later put inside it. Since the treatment is directed primarily at changing the properties of the inside surface in contact with the product, it is also referred to as "internal treatment".
The most common example of its use with containers is on bottles intended to hold alcoholic spirits. The reason for this is that some alcoholic spirits such as vodka and gin have an approximately neutral pH and a high alcohol content, but are not buffered in any way against changes in pH. If alkali is leached from the glass into the product, the pH will begin to rise (i.e. become more alkaline), and can eventually reach a level high enough that the solution begins to attack the glass itself quite effectively. By this mechanism, initially neutral alcohol products can reach a pH at which the glass container itself begins to slowly dissolve, leaving thin, siliceous glass flakes or particles in the fluid. Dealkalization treatment hinders this process by removing alkali from the inside surface. Not only does this mean less extractable alkali in the glass surface directly contacting the product, but it also creates a barrier for the diffusion of alkali from the underlying bulk glass into the product.
The same logic applies in pharmaceutical glass items such as vials that are intended to hold medicinal products. While many of these items are composed of more durable borosilicate glass, they are also at times dealkalized in order to minimize the possibility of alkali leaching from the glass into the product. This action helps to avoid undesired changes in pH or ionic strength of the solution, which not only inhibits eventual attack of the glass as previously described, but can also be important in maintaining the efficacy or stability of sensitive product formulations.
Dealkalization methods
Dealkalizing glass containers is accomplished by exposing the glass surface to reactive sulfur- or fluorine-containing compounds during the manufacturing process. A rapid ion-exchange reaction proceeds that depletes the inside surface of alkali, and is performed when the glass is at high temperature, usually on the order of 500–650 °C or greater.
Historically, sulfur-containing compounds were the first materials used to dealkalize glass containers. Dealkalization proceeds through the interdiffusion/ion-exchange of Na+ out of the glass and H+/H3O+ into the glass, along with the subsequent reaction of the sulfate species with available sodium at the surface to form sodium sulfate (Na2SO4). The latter is left behind as water-soluble crystalline deposits, or bloom, on the glass surface that must be rinsed away prior to filling. On manufacturing lines, one way in which this process was done was by flooding the annealing lehr with sulfur dioxide (SO2) or sulfur trioxide (SO3) gases—especially in the presence of water, which enhances the reaction. However, this practice fell into disfavor due to environmental and health concerns regarding SOx-type gases. An alternative method for sulfate treatment is with solid ammonium sulfate salt or aqueous solutions thereof. These materials are introduced inside the container after forming and decompose into gases in the annealing lehr, where the resulting sulfur-containing gas mixture carries out the dealkalization reaction. This method is purportedly safer than flooding the annealing lehr since the unreacted components in the gas mixture will tend not to escape to the atmosphere, but rather react with each other and recreate the original salt in the container that can later be rinsed away.
Treatment with fluorine-containing compounds is typically accomplished through the injection of a fluorinated gas mixture (e.g. 1,1-difluoroethane mixed with air) into bottles at high temperatures. The gas can be delivered to the container either in the air used in the forming process (i.e. during the final blow of the container into its desired shape), or with a nozzle directing a stream of the gas down into the mouth of the bottle as it passes on a conveyor belt after forming but before annealing. The mixture gently combusts inside the bottle, creating an extremely small dose of hydrofluoric acid that reacts with the glass surface and serves to dealkalize it. The resultant surface is virtually free from any residues of the process. This treatment is also known as the Ball I.T. process (I.T. standing for internal treatment) as Ball Corporation held the patent and developed the first commercially available system implementing this process.
Testing for dealkalization
Routine tests for surface dealkalization in the glass container industry all generally aim to evaluate the amount of alkali extracted from the glass when it is rinsed with or exposed to purified water. For example, dealkalization can be quickly checked by introducing a small volume of distilled water to a freshly made bottle and rolling the bottle gently to pass the water completely over its inside surface. The pH of the rinse water is then measured; untreated containers will tend to yield a slightly alkaline pH in the 8-9 range due to extracted alkali, while dealkalized containers tend to yield a pH that remains approximately neutral.
A much more thorough version of this test is outlined in various international and domestic testing standards for glass containers, all with comparable methodologies. These tests evaluate the hydrolytic stability of the containers under more severe conditions, wherein containers, filled close to capacity with purified water, are covered and then heat-cycled in an autoclave at 121 °C for 1 hour. After cooling to room temperature, the water is titrated with dilute acid to determine the equivalent amount of alkali extracted during the heat cycle. The alkali content of the extract can also be evaluated more directly by chemical analysis, as outlined in more recent versions of the European Pharmacopoeia. According to the Pharmacopoeia standards, internally treated or dealkalized soda-lime glass containers are designated as "Type II" containers, thus setting them apart from their untreated counterparts due to their improved resistance to product interactions (as opposed to "Type III", which is standard, untreated soda-lime glass, or "Type I", which is reserved for highly resistant borosilicate glass).
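As a rough illustration of how such a titration is interpreted, the short sketch below converts a measured titrant volume into an equivalent mass of extracted alkali expressed as Na2O. It assumes titration with 0.01 mol/L hydrochloric acid, a concentration commonly used in hydrolytic-resistance testing; the titrant volumes in the example are hypothetical figures chosen for illustration, not values taken from any standard.

M_NA2O = 61.98        # g/mol, molar mass of Na2O
HCL_MOLARITY = 0.01   # mol/L, assumed titrant concentration

def extracted_na2o_mg(titrant_ml, blank_ml=0.0):
    """Mass of Na2O (in mg) equivalent to the net acid consumed.

    Na2O + 2 HCl -> 2 NaCl + H2O, so each mole of HCl neutralizes
    half a mole of extracted alkali expressed as Na2O.
    """
    net_ml = titrant_ml - blank_ml
    mol_hcl = HCL_MOLARITY * net_ml / 1000.0
    return 0.5 * mol_hcl * M_NA2O * 1000.0

# Hypothetical readings: 0.85 mL of acid for the container extract,
# 0.05 mL for a blank run with the same water.
print(f"{extracted_na2o_mg(0.85, blank_ml=0.05):.3f} mg Na2O extracted")

The result (about 0.248 mg Na2O in this made-up case) would then be compared against the limits the relevant standard specifies for each glass type and filling volume.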
While not routine, dealkalization can also be measured in a variety of other ways. Since dealkalized surfaces are more chemically durable, they are also more resistant to weathering reactions, and appropriate evaluation of this parameter can give indirect evidence of a previously dealkalized surface. It is also possible to evaluate dealkalization through the use of advanced, surface analytical techniques such as SIMS or XPS, which give direct measurements of glass surface composition.
See also
Corrosion of glasses
Glass
Glass container industry
Soda-lime glass
Surface science
Glass disease
References
Glass coating and surface modification
Glass chemistry
Packaging
Containers | Dealkalization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,791 | [
"Glass engineering and science",
"Glass chemistry",
"Coatings",
"Glass coating and surface modification"
] |
14,699,692 | https://en.wikipedia.org/wiki/Courtship%20display | A courtship display is a set of display behaviors in which an animal, usually a male, attempts to attract a mate; the mate exercises choice, so sexual selection acts on the display. These behaviors often include ritualized movement ("dances"), vocalizations, mechanical sound production, or displays of beauty, strength, or agonistic ability.
Male display
In some species, males will perform ritualized movements to attract females. The male six-plumed bird-of-paradise (Parotia lawesii) exemplifies male courtship display with its ritualized "ballerina dance" and unique occipital and breast feathers that serve to stimulate the female visual system. In Drosophila subobscura, male courtship display is seen through the male's intricate wing scissoring patterns and rapid sidestepping. These stimulations, along with many other factors, result in subsequent copulation or rejection.
In other species, males may exhibit courtship displays that serve as both visual and auditory stimulation. For example, the male Anna's hummingbird (Calypte anna) and calliope hummingbird (Stellula calliope) perform two types of courtship displays involving a combination of visual and vocal display—a stationary shuttle display and dive display. When engaging in the stationary shuttle display, the male displays a flared gorget and hovers in front of the female, moving from side to side while rotating his body and tail. The rhythmic movements of the male's wings produce a distinctive buzzing sound. When conducting a dive display, the male typically ascends approximately in the air then abruptly turns and descends in a dive-like fashion. As the male flies over the female, he rotates his body and spreads his tail feathers, which flutter and collide to produce a short, buzzing sound.
In addition, some animals attempt to attract females through the construction and decoration of unique structures. This technique can be seen in the satin bowerbird (Ptilonorhynchus violaceus) of Australia, males of which build and decorate nest-like structures called "bowers". Bowers are decorated with bright and colourful objects (typically blue in colour) to attract and stimulate visiting females. Typically, males who acquire the largest number of decorations tend to have greater success in mating.
In some species, males initiate courtship rituals only after mounting the female. Courtship may even continue after copulation has been completed. In this system, the ability of the female to choose her mate is limited. This process, known as copulatory courtship, is prevalent in many insect species.
In most species, the male sex initiates courtship displays in precopulatory sexual selection. Performing a display allows the male to present his traits or abilities to a female. Mate choice, in this context, is driven by females; direct or indirect benefits to the female often determine which males reproduce and which do not.
Direct benefits may accrue to the female during male courtship displays. Females can raise their own fitness if they respond to courtship behavior that signals benefits to the female rather than the fitness of the male. For example, choosing to mate with males that produce local signals would require less energy for a female as she searches for a mate. Males may compete by imposing lower mating costs on the female or even providing material or offspring contributions to the female.
Indirect benefits are benefits that may not directly affect the parents' fitness but instead increase the fitness of the offspring. Since the offspring of a female will inherit half of the genetic information from the male counterpart, those traits she saw as attractive will be passed on, producing fit offspring. In this case, males may compete during courtship by displaying desirable traits to pass on to offspring.
Female display
Female courtship display is less common in nature, as a female would have to invest a lot of energy both in exaggerated traits and in energetically expensive gametes. However, situations in which males are the sexually selective sex in a species do occur in nature. Male choice in reproduction can arise if males are the sex that is in short supply, for example, if there is a female bias in the operational sex ratio. This could arise in mating systems where reproducing comes at an energy cost to males. Such energy costs can include the effort associated with obtaining nuptial gifts for the female or performing long courtship or copulatory behaviors. An added cost from these time and energy investments may come in the form of increased male mortality rates, putting further strain on males attempting to reproduce.
In pipefish (Syngnathus typhle), females use a temporary ornament, a striped pattern, to both attract males and intimidate rival females. In this case, the female of a species developed a sexually selected signal which serves a dual function of being both attractive to mates and deterring rivals.
Multi-modal signal processing
Many species of animals engage in some type of courtship display to attract a mate, such as dancing, the creation of sounds, and physical displays. However, many species are not limited to only one of these behaviors. The males of a species across many taxa create complex multi-component signals that have an effect on more than one sensory modality, also known as multi-modal signals. There are two leading hypotheses about the adaptive significance of multi-modal signal processing. The multiple message hypothesis states that each signal that a male exhibits will contribute to a possible mate's perception of the male. The redundant signal hypothesis states that the male exhibits multiple signals that portray the same "message" to the female, with each extra signal acting as a fall-back plan for the male should there be a signaling error. The choosy sex may only evaluate one, or a couple, of traits at a given time when interpreting complex signals from the opposite sex. Alternatively, the choosy sex may attempt to process all of the signals at once to facilitate evaluation of the opposite sex.
The process of multi-modal signaling is believed to help facilitate the courtship process in many species. One such species in which multi-modal signaling is seen to improve mating success is the green tree frog (Hyla cinerea). Many anuran amphibians, such as the green tree frog, may use visual cues as well as auditory signals to increase their chances of impressing a mate. When the calls of the tree frogs were held equal, females tended to overlook an auditory-only stimulus in favor of males who combined auditory/visual multi-modal signals. Female green tree frogs preferred males that coupled the visual display with auditory communication, suggesting that males that are visually accessible can increase their probability of mating success.
Peacock spiders (Maratus volans) are exceptionally sexually dimorphic in appearance and signaling behavior. During courtship, male peacock spiders compete using both visual displays and vibratory signals for intersexual communication. Because of the intense sexual selection on male peacock spiders, the reproductive success of an individual relies heavily on a male spider's ability to combine visual and vibratory displays during courtship. The combination of these displays in courtship offers support both to the redundant signal and multiple messages hypotheses for the evolution of multi-modal signaling in species.
Multi-modal signaling is not limited to males. Females in certain species have more than one trait or characteristic that they use in a courtship display to attract mates. In dance flies (Rhamphomyia longicauda), females have two ornaments — inflatable abdominal sacs and pinnate tibial scales — that they use as courtship displays in mating swarms. Intermediate variations of such female-specific ornaments are sexually selected for by male dance flies in wild populations. These ornaments may also be a signal of high fecundity in females.
Mutual display
Often, males and females will perform synchronized or responsive courtship displays in a mutual fashion. With many socially monogamous species such as birds, their duet facilitates pre-copulatory reassurance of pair bonding and strengthens post-copulatory dedication to the development of offspring (e.g., great crested grebe, Podiceps cristatus). For example, male and female crested auklets, Aethia cristatella, will cackle at one another as a vocal form of mutual display that serves to strengthen a bond between the two. In some cases, males may pair up to perform mutual, cooperative displays in order to increase courtship success and attract females. This phenomenon can be seen with long-tailed manakins, Chiroxiphia linearis.
Wild turkeys (Meleagris gallopavo) also engage in co-operative displays in which small groups of males (typically brothers) work together to attract females and deter other competitive males. In many cases, only one male within the group will mate, typically the dominant male. To explain this behaviour, Hamilton's theory of kin selection suggests that subordinate males receive indirect benefits by helping related males copulate successfully.
Sexual ornaments
Sexual ornaments can serve to increase attractiveness and indicate good genes and higher levels of fitness. When exposed to exaggerated male traits, some females may respond by increasing maternal investments. For example, female canaries have been shown to produce larger and denser eggs in response to male supranormal song production.
Sexual conflict
Sexual conflict is the phenomenon in which the reproductive interests of males and females are not the same, and are often quite different:
Males: their interest is to mate with a large number of completely faithful females, thus spreading their genes widely throughout a population.
Females: their interest is to mate with a large number of fit males, thus producing a large quantity of fit and varied offspring.
This has many consequences. Courtship displays give the mate performing the selection a means on which to base the copulatory decision. If a female chooses more than one male, then sperm competition comes into play: competition between the sperm of different males to fertilize an egg, which is intense because only a single sperm will achieve union. In some insects, the male injects a cocktail of chemicals in the seminal fluid together with the sperm. The chemicals kill off older sperm from any previous mates, up-regulate the female's egg-laying rate, and reduce her desire to re-mate with another male. The cocktail also shortens the female's lifespan, further reducing her likelihood of mating with other males. In addition, some females can get rid of the previous male's sperm.
After mating has taken place, males perform various actions to prevent females from mating again. What action is performed depends on the animal. In some species, the male produces a mating plug after insemination. In some hymenoptera, the male provides a huge quantity of sperm, enough to last the female's entire life. In some birds and mammals, the male may participate in agonistic behaviors with other candidate males.
Agonistic behavior and courtship
Although rare, agonistic behavior between males and females during courtship displays is seen in nature. Intraspecific agonistic behavior that results in the death of a combatant is rare because of the associated risk of death or injury. However, agonistic behavior that turns dangerous does occur.
In some species, physical traits that are sexually selected for in male courtship displays may also be used in agonistic behavior between two males for a mate. In fiddler crabs (genus Uca), males have been sexually selected to have one enlarged claw, which can take up anywhere from a third to a half of their total body mass, and one regular claw. Although the enlarged claw is believed to have developed for use in combat for territorial defense, it is not uncommon for males to employ this claw in battle for a mate. Even though this claw developed as a weapon, it is also closely linked with the crabs' courtship display: it is waved in a certain pattern to attract females for mating.
Agonistic behavior in courtship displays is not limited to male-male interactions. In many primate species, males direct agonistic behavior toward females prior to courtship behaviors. Such behavior can include aggressive vocalizations, displays, and physical aggression. In the western gorilla (Gorilla gorilla), dominant males exhibit agonistic behavior toward female gorillas at very high rates, with the majority of those interactions being courtship-related. Most documented cases of male gorilla aggression toward females are courtship-related and are used primarily as a strategy to prevent females from migrating to another male.
In many cases, male courtship displays will cause forms of contest competition to develop. This is often seen within lek mating systems. For example, males will seek to obtain a certain spot or position to perform their courtship display. The best spots are regions of high contention as many males want them for themselves. Because of this direct conflict, agonistic encounters between males are fairly common.
Covert courtship displays have been reported in some species.
Extended courtship period
Mating is preceded by a courtship/pairing period in many animal mating systems. It is during this period that sexually mature animals select their partners for reproduction. This courtship period, which involves displays to attract a mate by a member of a species, is usually short, lasting anywhere from 15 minutes to a few days. However, certain animals may undergo an extended courtship period, lasting as long as two months.
One such exception is the emperor penguin (Aptenodytes forsteri). Emperor penguins engage in an extended courtship period that can last up to two months, the longest of any Antarctic seabird. Their courtship period accounts for 16% of the total time they spend breeding, whereas in their closest relative, the king penguin (Aptenodytes patagonicus), the courtship period takes up just three per cent of the breeding cycle.
Energetic costs
Courtship displays typically involve some sort of metabolic cost to the animal performing it. The energy expended to perform courtship behaviour can vary among species. Some animals engage in displays that expend little energy, as seen in the salamander (Desmognathus ochrophaeus). Under laboratory settings, courtship behaviours in this species, although complex and involving the release of pheromones, represent as little as approximately one per cent of its daily calorie intake.
In contrast, species that engage in prolonged or elaborate displays expend considerable amounts of energy and run the risk of developing fatigue. To prepare and prevent such a risk, some animals may gain weight before a courtship period, only to lose the weight afterward. An example of this can be seen in the greater sage-grouse (Centrocercus urophasianus). During the peak of their breeding season, which lasts up to three months during spring, leks are frequently visited by groups of up to seventy females. In response to such a large presence of females, males engage in a strutting display up to six to ten times per minute for approximately three to four hours per day. This frequent and repetitive behaviour can result in energy expenditures of up to 2524 kJ/day compared to the inactive males that typically expend 1218 kJ/day.
Environmental factors
Various environmental factors, such as temperature, photoperiod, resource and light availability, have an effect on the timing and effectiveness of courtship displays in certain species of animals.
In guppies (Poecilia reticulata), variation in the light environment plays a huge role in their ability to attract mates. Guppy males alter both their 'courtship mode', whether they perform a full courtship display or try to 'engage' in sneak copulations, and distance from females as light intensity changes. Courtship mode also varies with light spectrum and relates to predation risk. On average, male guppies seek out and spend more time in the environment in which their color pattern is the most visible. Males, in the light environment that made them most visible, copulated with the most females.
In emperor penguins (Aptenodytes forsteri), resource availability determines when male emperor penguins will be able to return to their breeding grounds to initiate their courtship rituals. The greater the concentration of resources in their feeding ground, the quicker they will be able to restore their body reserves for winter, and the sooner they will be able to return to their breeding grounds. An early return to their breeding grounds comes with an increased likelihood of finding a mate.
The effectiveness of Hirtodrosophila mycetophaga mating displays is influenced by the color of the bracket fungus that it mates and courts upon; these flies choose brackets that are lighter, making their displays more visible to the opposite sex.
Evolutionary significance
There are multiple hypotheses about how courtship displays may have evolved in animals, including the Fisherian runaway model and the good genes hypothesis.
As explained by the Fisherian runaway model, sexually dimorphic males with exaggerated ornamentation may have been sexually selected for in species with female choice. Fitness of these males would increase, resulting in the proliferation of males with such ornamentation over time. This means that a gene or set of genes will be favoured by female choice over time. This would explain why and how such elaborate traits develop within certain species. However, as time goes on and generations pass, the survival advantage associated with one trait may dissipate due to extreme exaggeration to the point that it decreases fitness.
The "good genes" hypothesis proposes that female selection of a mate is dependent on whether or not the male has genes that would increase the quality of the offspring of the female. In some cases, exaggerated male ornamentation may be indicative to a choosing female that a male who is able to place such a large investment in a trait somewhat counterintuitive to survival would carry good genes. For example, the costs associated with bright and complex plumage can be high. Only males with good genes are able to support a large investment into the development of such traits, which, in turn displays their high fitness.
An alternative is the sensory exploitation hypothesis, which supposes that sexual preferences are the result of preexisting sensory biases, such as that for supernormal stimuli. These could drive the evolution of courtship displays.
See also
References
External links
stanford.edu
featherlightphoto.com
bbc.co.uk
birdsofparadiseproject.org
bbc.co.uk guide
Signalling theory
Mating
Ethology
Animal sexuality
Bird breeding | Courtship display | [
"Biology"
] | 3,753 | [
"Behavior",
"Animals",
"Behavioural sciences",
"Animal sexuality",
"Ethology",
"Sexuality",
"Mating"
] |
14,699,730 | https://en.wikipedia.org/wiki/AMD%20Socket%20G3 | The Socket G3, originally as part of the codenamed Piranha server platform, was supposed to be the intermediate successor to Socket F and Socket F+ to be used in AMD Opteron processor for dual-processor (2P) and above server platforms scheduled to be launched 2009. The Socket G3 would have been accompanied by the Socket G3 Memory Extender (Socket G3MX), for connecting large amounts of memory to a single microprocessor by a G3MX chip placed on the motherboard.
AMD had planned Socket G3 to arrive with the previously planned 8-core MCM chip code-named Montreal. In Q1 2008, the plan for an 8-core MCM server chip based on the 45 nm K10.5 design was scrapped in favor of a 6-core fully integrated MPU design code-named Istanbul, which would use the existing Socket F/F+ platform with chipsets produced by Nvidia and Broadcom, as well as the Fiorano platform to be introduced by AMD in 2009.
However, Socket G3 was officially cancelled as of March 2008. The eventual successor to Socket F was the 1974-pin LGA Socket G34.
See also
AMD 800 chipset series
References
External links
AMD Opteron processors
AMD server sockets | AMD Socket G3 | [
"Technology"
] | 270 | [
"Computing stubs",
"Computer hardware stubs"
] |
14,699,765 | https://en.wikipedia.org/wiki/Argument%20%28complex%20analysis%29 | In mathematics (particularly in complex analysis), the argument of a complex number , denoted , is the angle between the positive real axis and the line joining the origin and , represented as a point in the complex plane, shown as in Figure 1. By convention the positive real axis is drawn pointing rightward, the positive imaginary axis is drawn pointing upward, and complex numbers with positive real part are considered to have an anticlockwise argument with positive sign.
When any real-valued angle is considered, the argument is a multivalued function operating on the nonzero complex numbers. The principal value of this function is single-valued, typically chosen to be the unique value of the argument that lies within the interval (−π, π]. In this article the multi-valued function will be denoted arg(z) and its principal value will be denoted Arg(z), but in some sources the capitalization of these symbols is exchanged.
In some older mathematical texts, the term "amplitude" was used interchangeably with argument to denote the angle of a complex number. This usage is seen in older references such as Lars Ahlfors' Complex Analysis: An introduction to the theory of analytic functions of one complex variable (1979), where amplitude referred to the argument of a complex number. While this term is largely outdated in modern texts, it still appears in some regional educational resources, where it is sometimes used in introductory-level textbooks.
Definition
An argument of the nonzero complex number z = x + iy, denoted arg(z), is defined in two equivalent ways:
Geometrically, in the complex plane, as the 2D polar angle φ from the positive real axis to the vector representing z. The numeric value is given by the angle in radians, and is positive if measured counterclockwise.
Algebraically, as any real quantity φ such that z = r(cos φ + i sin φ) = re^(iφ) for some positive real r (see Euler's formula). The quantity r is the modulus (or absolute value) of z, denoted |z|: r = |z| = √(x² + y²).
The argument of zero is usually left undefined. The names magnitude, for the modulus, and phase, for the argument, are sometimes used equivalently.
Under both definitions, it can be seen that the argument of any non-zero complex number has many possible values: firstly, as a geometrical angle, it is clear that whole circle rotations do not change the point, so angles differing by an integer multiple of 2π radians (a complete circle) are the same, as reflected by figure 2 on the right. Similarly, from the periodicity of sin and cos, the second definition also has this property.
Principal value
Because a complete rotation around the origin leaves a complex number unchanged, there are many choices which could be made for φ by circling the origin any number of times. This is shown in figure 2, a representation of the multi-valued (set-valued) function f(x, y) = arg(x + iy), where a vertical line (not shown in the figure) cuts the surface at heights representing all the possible choices of angle for that point.
When a well-defined function is required, then the usual choice, known as the principal value, is the value in the open-closed interval (−π, π] radians, that is from −π to +π radians, excluding −π radians itself (equiv., from −180 to +180 degrees, excluding −180° itself). This represents an angle of up to half a complete circle from the positive real axis in either direction.
Some authors define the range of the principal value as being in the closed-open interval [0, 2π).
Notation
The principal value sometimes has the initial letter capitalized, as in Arg(z), especially when a general version of the argument is also being considered. Note that notation varies, so arg and Arg may be interchanged in different texts.
The set of all possible values of the argument can be written in terms of Arg as:

arg(z) = {Arg(z) + 2πn : n ∈ ℤ}.
Computing from the real and imaginary part
If a complex number z = x + iy is known in terms of its real and imaginary parts, then the function that calculates the principal value Arg is called the two-argument arctangent function, atan2:

Arg(x + iy) = atan2(y, x).
The atan2 function is available in the math libraries of many programming languages, sometimes under a different name, and usually returns a value in the range (−π, π].
In some sources the argument is defined as arctan(y/x); however, this is correct only when x > 0, where y/x is well-defined and the angle lies between −π/2 and π/2. Extending this definition to cases where x is not positive is relatively involved. Specifically, one may define the principal value of the argument separately on the half-plane x > 0 and the two quadrants with x < 0, and then patch the definitions together:

Arg(x + iy) = arctan(y/x) if x > 0,
Arg(x + iy) = arctan(y/x) + π if x < 0 and y ≥ 0,
Arg(x + iy) = arctan(y/x) − π if x < 0 and y < 0,
Arg(x + iy) = +π/2 if x = 0 and y > 0,
Arg(x + iy) = −π/2 if x = 0 and y < 0,
and Arg(x + iy) is undefined if x = 0 and y = 0.
See atan2 for further detail and alternative implementations.
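As an informal check of the piecewise definition above (a Python sketch, not part of the article's references; the function names are ad hoc), the following computes the principal value both with the library atan2 and case by case, and verifies that the two agree:

import math

def arg_atan2(x, y):
    # Principal argument of x + iy via the two-argument arctangent.
    return math.atan2(y, x)

def arg_piecewise(x, y):
    # Principal argument of x + iy via the case-by-case definition.
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("Arg(0) is undefined")

for x, y in [(1, 1), (-1, 1), (-1, -1), (0, -2), (3, -4)]:
    assert math.isclose(arg_atan2(x, y), arg_piecewise(x, y))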
Realizations of the function in computer languages
Wolfram language (Mathematica)
In Wolfram language, there's Arg[z]:
Arg[x + y I]
or using the language's ArcTan:
ArcTan[x, y]
ArcTan[x, y] is extended to work with infinities. ArcTan[0, 0] is Indeterminate (i.e. it's still defined), while ArcTan[Infinity, -Infinity] doesn't return anything (i.e. it's undefined).
Maple
Maple's argument(z) behaves the same as Arg[z] in Wolfram language, except that argument(z) also returns if z is the special floating-point value −0..
Also, Maple doesn't have .
MATLAB
MATLAB's angle(z) behaves the same as Arg[z] in Wolfram language, except that it is
Unlike in Maple and Wolfram language, MATLAB's atan2(y, x) is equivalent to angle(x + y*1i). That is, atan2(0, 0) is 0.
Identities
One of the main motivations for defining the principal value is to be able to write complex numbers in modulus-argument form. Hence for any complex number z,

z = |z| e^(i Arg(z)).
This is only really valid if z is non-zero, but can be considered valid for z = 0 if Arg(0) is considered as an indeterminate form—rather than as being undefined.
Some further identities follow. If z₁ and z₂ are two non-zero complex numbers, then

arg(z₁z₂) ≡ arg(z₁) + arg(z₂) (mod 2π),
arg(z₁/z₂) ≡ arg(z₁) − arg(z₂) (mod 2π).
If z is non-zero and n is any integer, then

arg(zⁿ) ≡ n arg(z) (mod 2π).
Example
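As a representative calculation (the particular value is chosen purely for illustration), consider z = −1 − i. Here x = −1 < 0 and y = −1 < 0, so by the piecewise formula above

Arg(−1 − i) = arctan((−1)/(−1)) − π = π/4 − π = −3π/4,

and the full set of argument values is arg(−1 − i) = {−3π/4 + 2πn : n ∈ ℤ}.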
Using the complex logarithm
From z = |z| e^(i arg(z)), we get log(z) = log|z| + i arg(z), so arg(z) = Im(log(z)); alternatively, arg(z) = Im(log(z/|z|)). As we are taking the imaginary part, any normalisation by a real scalar will not affect the result. This is useful when one has the complex logarithm available.
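A quick numerical check of this relationship (an informal Python sketch using the standard cmath module; the chosen value of z is arbitrary):

import cmath

z = -3 + 4j
via_phase = cmath.phase(z)                    # library principal argument
via_log = cmath.log(z).imag                   # Im(log z)
via_normalised = cmath.log(z / abs(z)).imag   # Im(log(z/|z|)), same value

assert abs(via_phase - via_log) < 1e-12
assert abs(via_phase - via_normalised) < 1e-12
print(via_phase)  # about 2.2143 radians, the principal argument of -3 + 4i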
Extended argument
The extended argument of a number z is the set of all real numbers congruent to arg(z) modulo 2π.
References
Bibliography
External links
Argument at Encyclopedia of Mathematics.
Trigonometry
Complex analysis
Signal processing | Argument (complex analysis) | [
"Technology",
"Engineering"
] | 1,342 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
14,699,921 | https://en.wikipedia.org/wiki/Socket%20FS1 | The Socket FS1 is for notebooks using AMD APU processors codenamed Llano, Trinity and Richland (Socket FS1r2).
"Llano"-branded products combine K10 with Cedar (VLIW5), UVD 3 video acceleration and AMD Eyefinity-based multi-monitor support of up to three DisplayPort monitors.
"Trinity"- and "Richland"-branded products Piledriver with Northern Islands (VLIW4), UVD 3 and VCE 1 video acceleration and AMD Eyefinity-based multi-monitor support of up to four DisplayPort monitors.
While AMD desktop CPUs are available in the 722-pin Socket AM1 (FS1b) package, it is not clear whether these desktop CPUs will be compatible with Socket FS1 or vice versa.
It is the last pin grid array socket for AMD's mobile processors: all mobile processors based on microarchitectures succeeding Piledriver are available exclusively in BGA packaging; for example, Steamroller-based mobile processors use Socket FP3, which is a μBGA socket. Intel adopted the same practice starting with the Broadwell microarchitecture.
Feature overview for AMD APUs
See also
List of AMD processors with 3D graphics
List of AMD mobile processors
External links
Socket FS1 Design Specification
AMD mobile sockets | Socket FS1 | [
"Technology"
] | 290 | [
"Computing stubs",
"Computer hardware stubs"
] |
14,700,343 | https://en.wikipedia.org/wiki/Nuclear%20Nebraska | Nuclear Nebraska: The Remarkable Story of the Little County That Couldn't Be Bought is a 2007 book by Susan Cragin which follows the controversy about a proposed low level nuclear waste dump, which was planned for Boyd County, Nebraska.
In 1989, two multinational corporations and several government agencies proposed a waste dump and offered payment of $3 million per year for 40 years. The residents of the Boyd County farming community resisted the offer and controversy followed for almost two decades. During this time, the community was transformed "from a small group of isolated farmers to a defiant band of environmentalists". The opposition of the community eventually succeeded, and the license to build the dump was denied.
Several governors became embroiled in the controversy, as well as legislators, bureaucrats and the community. One central figure went to jail and others were dismissed from their jobs. For many years, there was extensive coverage of the event by the news media.
U.S. Senator Ben Nelson wrote the foreword to the book.
See also
List of books about nuclear issues
Central Interstate Low Level Radioactive Waste Compact
References
Environmental non-fiction books
2007 non-fiction books
2007 in the environment
Energy development
Radioactive waste
Anti–nuclear power movement
Boyd County, Nebraska
Books about nuclear issues
Books about multinational companies
Anti-nuclear movement in the United States
Books about Nebraska | Nuclear Nebraska | [
"Chemistry",
"Technology"
] | 265 | [
"Radioactive waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Hazardous waste"
] |
14,700,395 | https://en.wikipedia.org/wiki/EPH%20receptor%20A2 | EPH receptor A2 (ephrin type-A receptor 2) is a protein that in humans is encoded by the EPHA2 gene.
Function
This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into two groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. This gene encodes a protein that binds ephrin-A ligands.
Clinical significance
It may be implicated in BRAF mutated melanomas becoming resistant to BRAF-inhibitors and MEK inhibitors. It is also the receptor by which Kaposi's sarcoma-associated herpesvirus (KSHV) enters host cells; small molecule inhibitors of EphA2 have shown some ability to block KSHV entry into human cells.
Interactions
EPH receptor A2 has been shown to interact with:
Ephrin A1
ACP1
Grb2,
PIK3R1, and
SHC1.
It was also shown that doxazosin is a small molecule agonist of EPH receptor A2.
See also
Vasculogenic mimicry
References
Further reading
External links
Tyrosine kinase receptors | EPH receptor A2 | [
"Chemistry"
] | 316 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,700,578 | https://en.wikipedia.org/wiki/Retinoblastoma-like%20protein%201 | Retinoblastoma-like 1 (p107), also known as RBL1, is a protein that in humans is encoded by the RBL1 gene.
Function
The protein encoded by this gene is similar in sequence and possibly function to the product of the retinoblastoma 1 (RB1) gene. The RB1 gene product is a tumor suppressor protein that appears to be involved in cell cycle regulation, as it is phosphorylated in the S to M phase transition and is dephosphorylated in the G1 phase of the cell cycle. Both the RB1 protein and the product of this gene can form a complex with adenovirus E1A protein and SV40 Large T-antigen, with the SV40 large T-antigen binding only to the unphosphorylated form of each protein. In addition, both proteins can inhibit the transcription of cell cycle genes containing E2F binding sites in their promoters. Due to the sequence and biochemical similarities with the RB1 protein, it is thought that the protein encoded by this gene may also be a tumor suppressor. Two transcript variants encoding different isoforms have been found for this gene.
Interactions
Retinoblastoma-like protein 1 has been shown to interact with:
BEGAIN,
BRCA1,
BRF1,
Cyclin A2,
Cyclin-dependent kinase 2,
E2F1,
HDAC1,
MYBL2
Mothers against decapentaplegic homolog 3,
Prohibitin, and
RBBP8.
See also
Pocket protein family
References
Further reading
External links
Transcription factors | Retinoblastoma-like protein 1 | [
"Chemistry",
"Biology"
] | 331 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,700,754 | https://en.wikipedia.org/wiki/Eltonian%20niche | The Eltonian niche is an ecological niche that emphasizes the functional attributes of animals and their corresponding trophic position. This was the definition Eugene Odum popularized in his analogy of the niche of a species with its profession in the ecosystem as opposed to the habitat being its address. The definition is attributed to Charles Elton in his 1927 now classic book Animal Ecology. Elton used the two African rhinoceros species to exemplify the definition. The white rhinoceros has broad (wide, hence its name) mouthparts, which are efficient in harvesting grass, while the black rhinoceros has narrow pointed lips enabling it to feed selectively on the foliage of thorny bushes.
References
Ecology
1927 in biology | Eltonian niche | [
"Biology"
] | 143 | [
"Ecology"
] |
14,700,832 | https://en.wikipedia.org/wiki/Binary-code%20compatibility | Binary-code compatibility (binary compatible or object-code compatible) is a property of a computer system, meaning that it can run the same executable code, typically machine code for a general-purpose computer central processing unit (CPU), that another computer system can run. Source-code compatibility, on the other hand, means that recompilation or interpretation is necessary before the program can be run on the compatible system.
For a compiled program on a general operating system, binary compatibility often implies that not only the CPUs (instruction sets) of the two computers are binary compatible, but also that interfaces and behaviours of the operating system (OS) and application programming interfaces (APIs), and the application binary interfaces (ABIs) corresponding to those APIs, are sufficiently equal, i.e. "compatible".
A term like backward-compatible usually implies object-code compatibility. This means that newer computer hardware and/or software has (practically) every feature of the old, plus additional capabilities or performance. Older executable code will thus run unchanged on the newer product. For a compiled program running directly on a CPU under an OS, a "binary compatible operating system" primarily means application binary interface (ABI) compatibility with another system. However, it also often implies that APIs that the application depends on, directly or indirectly (such as the Windows API, for example), are sufficiently similar. Hardware (besides the CPU, such as for graphics) and peripherals that an application accesses may also be a factor for full compatibility, although many hardware differences are hidden by modern APIs (often partly supplied by the OS itself and partly by specific device drivers).
In other cases, a general porting of the software must be used to make non-binary-compatible programs work.
Binary compatibility is a major benefit when developing computer programs that are to be run on multiple OSes. Several Unix-based OSes, such as FreeBSD or NetBSD, offer binary compatibility with more popular OSes, such as Linux-derived ones, since binary executables are not commonly distributed natively for those systems.
Most OSes provide binary compatibility, in each version of the OS, for most binaries built to run on earlier versions of the OS. For example, many executables compiled for Windows 3.1, Windows 95 or Windows 2000 can also be run on Windows XP or Windows 7, and many applications for DOS ran on much newer versions of Windows up to Windows 10 for as long as the NTVDM was supported.
Binary compatible hardware
For a digital processor implemented in hardware, binary compatibility means that (a large subset of) machine code produced for another processor can be correctly executed and has (much) the same effect as on the other processor. This is quite common among many processor families, although it is rather uncommon among the ubiquitous small embedded systems built around such processors. Full machine code compatibility would here imply exactly the same layout of interrupt service routines, I/O-ports, hardware registers, counter/timers, external interfaces and so on. For a more complex embedded system using more abstraction layers (sometimes on the border to a general computer, such as a mobile phone), this may be different.
Binary compatible operating systems
Binary compatible operating systems are OSes that aim to implement binary compatibility with another OS, or another variant of the same brand. This means that they are ABI-compatible (for application binary interface). As the job of an OS is to run programs, the instruction set architectures running the OSes have to be the same or compatible. Otherwise, programs can be employed within a CPU emulator or a faster dynamic translation mechanism to make them compatible.
For example, the Linux kernel is not compatible with Windows. This does not mean that Linux cannot be binary compatible with Windows applications. Additional software, Wine, is available that provides this to some degree. The ReactOS development effort seeks to create an open-source, free software OS that is binary compatible with Microsoft's Windows NT family of OSes, using Wine for application compatibility and reimplementing the Windows kernel for additional compatibility, such as for drivers, whereas Linux would use Linux drivers rather than Windows drivers. FreeBSD and other members of the BSD family achieve binary compatibility with the Linux kernel in user mode by translating Linux system calls into BSD ones. This enables application and library code that runs on Linux-based OSes to be run on BSD as well.
Note that a binary compatible OS is different from running an alternative OS through virtualization or emulation, which is done to run software within the alternative OS in the case when the host OS is not compatible. Sometimes virtualization is provided with the host OS (or such software can be obtained), which effectively makes the host OS compatible with programs. For example, Windows XP Mode for Windows 7 allows users running a 64-bit version of Windows 7 to run old software in a 32-bit virtual machine running Windows XP; VMware Workstation/VMware Fusion, Parallels Workstation, and Windows Virtual PC allow other OSes to be run on Windows, Linux, and macOS.
For another example, Mac OS X on the PowerPC had the ability to run Mac OS 9 and earlier application software through Classic—but this did not make Mac OS X a binary compatible OS with Mac OS 9. Instead, the Classic environment was actually running Mac OS 9.1 in a virtual machine, running as a normal process inside of Mac OS X.
See also
Backward compatibility
Application binary interface (ABI)
Computer compatibility
Bug compatibility
Plug compatibility
Video game remake
Multi-architecture binary
References
External links
KDE Techbase Policies – a compendium of C++ development rules of thumb (with some examples) for not breaking binary compatibility between releases of a library.
ABI Analysis Tools a set of open-source tools for analysis of ABI and backward binary compatibility implementing KDE Techbase Policies
Backward compatibility
Computing terminology
"Technology"
] | 1,239 | [
"Computing terminology"
] |
14,700,846 | https://en.wikipedia.org/wiki/Ataxin%203 | Ataxin-3 is a protein that in humans is encoded by the ATXN3 gene.
Clinical significance
Machado–Joseph disease, also known as spinocerebellar ataxia-3, is an autosomal dominant neurologic disorder. The protein encoded by the ATXN3 gene contains CAG repeats in the coding region, and the expansion of these repeats from the normal 13-36 to 68-79 is the cause of Machado–Joseph disease. This disorder is thus a trinucleotide repeat disorder type I known as a polyglutamine (PolyQ) disease. There is an inverse correlation between the age of onset and CAG repeat numbers. Alternatively spliced transcript variants encoding different isoforms have been described for this gene.
Interactions
Ataxin 3 has been shown to interact with:
RAD23A,
RAD23B, and
VCP.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Spinocerebellar Ataxia Type 3
Proteins | Ataxin 3 | [
"Chemistry"
] | 213 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,702,037 | https://en.wikipedia.org/wiki/Cleaning%20agent | Cleaning agents or hard-surface cleaners are substances (usually liquids, powders, sprays, or granules) used to remove dirt, including dust, stains, foul odors, and clutter on surfaces. Purposes of cleaning agents include health, beauty, removing offensive odors, and avoiding the spread of dirt and contaminants to oneself and others. Some cleaning agents can kill bacteria (e.g. door handle bacteria, as well as bacteria on worktops and other metallic surfaces) and clean at the same time. Others, called degreasers, contain organic solvents to help dissolve oils and fats.
Chemical agents
Acidic
Acidic cleaning agents are mainly used for removal of deposits like scaling. The active ingredients are normally strong mineral acids and chelants. Often, surfactants and corrosion inhibitors are added to the acid.
Hydrochloric acid is a common mineral acid typically used for concrete. Vinegar can also be used to clean hard surfaces and remove calcium deposits. Sulphuric acid is used in acidic drain cleaners to unblock clogged pipes by dissolving organic materials, like greases, proteins, and even carbohydrate-containing substances such as toilet tissue.
Alkaline
Alkaline cleaning agents contain strong bases like sodium hydroxide or potassium hydroxide. Bleach (pH 12) and ammonia (pH 11) are common alkaline cleaning agents. Often, dispersants, to prevent redeposition of dissolved dirt, and chelants, to attack rust, are added to the alkaline agent.
Alkaline cleaners can dissolve fats (including grease), oils, and protein-based substances.
Neutral
Neutral washing agents are pH-neutral and based on non-ionic surfactants that disperse different types of dirt.
Scouring agents
Scouring agents are mixtures of the usual cleaning chemicals (surfactants, water softeners) as well as abrasive powders. The abrasive powder must be of a uniform particle size.
Particles are usually smaller than 0.05 mm. Pumice, calcium carbonate (limestone, chalk, dolomite), kaolinite, quartz, soapstone or talc are often used as abrasives, i.e. polishing agents.
Special bleaching powders contain compounds that release sodium hypochlorite, the classical household bleaching agent. These precursor agents include trichloroisocyanuric acid and mixtures of sodium hypochlorite ("chlorinated orthophosphate").
Examples of notable products include Ajax, Bar Keepers Friend, Bon Ami, Comet, Vim, Zud, and others.
Purposes
Oven cleaners
Traditional oven cleaners contain sodium hydroxide (lye), solvents, and other ingredients. They work best when used in a slightly warm (not hot) oven. If used in a self-cleaning oven, the lye can cause permanent damage to the oven.
Some oven cleaners are based on ingredients other than lye. These products must be used in a cold oven. Most new-style oven cleaners can be used in self-cleaning ovens.
One popular oven cleaner brand in the US is "Easy-Off", sold by Reckitt Benckiser. Popular choices in the UK include "Zep Oven Brite" and "Mr Muscle Oven Cleaner".
All-purpose cleaners
All-purpose cleansers contain mixtures of anionic and nonionic surfactants, polymeric phosphates or other sequestering agents, solvents, hydrotropic substances, polymeric compounds, corrosion inhibitors, skin-protective agents, and sometimes perfumes and colorants. Aversive agents, such as denatonium, are occasionally added to cleaning products to discourage animals and small children from consuming them.
Some cleaners contain water-soluble organic solvents like glycol ethers and fatty alcohols, which ease the removal of oil, fat and paint. Disinfectant additives include quaternary ammonium compounds, phenol derivatives, terpene alcohols (pine oil), aldehydes, and aldehyde-amine condensation products.
All-purpose cleaners are usually concentrated solutions of surfactants and water softeners, which enhance the behavior of surfactant in hard water. Typical surfactants are alkylbenzene sulfonates, an anionic detergent, and modified fatty alcohols. A typical water softener is sodium triphosphate.
All-purpose cleansers are effective with most common kinds of dirt. Their dilute solutions are neutral or weakly alkaline, and are safe for use on most surfaces.
Dishwashing agents
Manual dishwashing detergent
One example, DELENEX DTR, is an alkaline detergent for manual dishwashing with controlled foam, formulated to clean dishware and glassware.
Automatic dishwashing detergents (ADDs)
Laundry detergents
Floor cleaners
Carpet cleaners
Toilet cleaners / hygiene / deodorant products
Toilet bowl cleaning often is aimed at removal of calcium carbonate deposits, which are attacked by acids. Powdered cleaners contain acids that come in the form of solid salts, such as sodium hydrogen sulfate. Liquid toilet bowl cleaners contain other acids, typically dilute hydrochloric, phosphoric, or formic acids. These convert the calcium carbonate into salts that are soluble in water or are easily rinsed away.
Drain cleaners
Metal cleaners
Metal cleaners are used for cleaning stainless steel sinks, faucets, metal trim, silverware, etc. These products contain abrasives (e.g., siliceous chalk, diatomaceous earth, alumina) with a particle size < 20 μm. Fatty alcohol or alkylphenol polyglycol ethers with 7-12 ethylene oxide (EO) units are used as surfactants.
For ferrous metals, the cleaners contain chelating agents, abrasives, and surfactants. These agents include citric and phosphoric acids, which are nonaggressive. Surfactants are usually modified fatty alcohols. Silver cleaning is a specialty since silver is noble but tends to tarnish via formation of black silver sulfide, which is removable via silver-specific complexants such as thiourea.
Stainless steel, nickel, and chromium cleaners contain lactic, citric, or phosphoric acid. A solvent (mineral spirits) may be added.
Nonferrous metal cleaners contain ammonia, ammonium soaps (ammonium oleate, stearate) and chelating agents (ammonium citrate, oxalate).
For precious metals, especially those used in luxury watches and high-end jewelry, specialized cleaning agents are usually used to clean and protect them from the elements. Examples of these cleaners include jewelry cleaner from Weiman, watch cleaning solution from HOROCD, and metal cleaning plates from Holland Hallmark.
Glass cleaners
Light duty hard surface cleaners are not intended to handle heavy dirt and grease. Because these products are expected to clean without rinsing and result in a streak-free shine, they contain no salts. Typical window cleaners, such as Windex, consist of alcohols (either ethanol or isopropanol) and surfactants for dissolving grease. Other components include small amounts of ammonia as well as dyes and perfumes.
These are composed of an organic, water-miscible solvent such as isopropyl alcohol and an alkaline detergent. Some glass cleaners also contain a fine, mild abrasive. Most glass cleaners are available as sprays or liquids. They are sprayed directly onto windows, mirrors and other glass surfaces, or applied with a soft cloth and rubbed off using a soft, lint-free duster. A glass cloth is ideal for the purpose, and soft water to which some methylated spirit or vinegar has been added makes an inexpensive glass cleaner.
Silverware can be freed of silver sulfide tarnish with thiourea, and either hydrochloric or sulfuric acid.
Building facade cleaners
For acid-resistant building facades, such as brick, acids are typically used. These include mixtures of phosphoric and hydrofluoric acids as well as surfactants. For acid-sensitive facades such as concrete, strongly alkaline cleaners are used such as sodium hydroxide and thickeners. Both types of cleaners require a rinsing and often special care since the solutions are aggressive toward skin.
Environmental impacts
Common cleaning agents
Acetic acid (vinegar)
Various forms of alcohol including isopropyl alcohol or rubbing alcohol
Ammonia solution
Bleach
Borax
Carbon dioxide
Citric acid
Freon (e.g. dichlorodifluoromethane) (use is often discouraged due to damaging effects on the ozone layer)
Soap or detergent
Sodium carbonate (washing soda)
Sodium bicarbonate (baking soda)
Sodium hydroxide (lye)
Sodium hypochlorite (liquid bleach)
Sodium perborate
Sodium percarbonate
Tetrachloroethylene (dry cleaning)
Trisodium phosphate
Water, the most common cleaning agent, which is a very powerful polar solvent
Xylene (can damage plastics)
See also
Disinfectant
Green cleaning
Laundry detergents
List of cleaning products
References | Cleaning agent | [
"Chemistry"
] | 1,958 | [
"Cleaning products",
"Products of chemical industry"
] |
14,702,965 | https://en.wikipedia.org/wiki/Software%20Magazine | Software Magazine is a software and Information technology magazine. It is owned and published by Rockport Custom Publishing, based in Beverly, Massachusetts, on a monthly basis.
The Software 500 survey can be used to gauge the value of the commercial software industry. The survey consists of data on the top 500 software companies.
References
External links
official website
Computer magazines published in the United States
Monthly magazines published in the United States
Magazines with year of establishment missing
Magazines published in Massachusetts | Software Magazine | [
"Technology"
] | 91 | [
"Computing stubs",
"Computer magazine stubs"
] |
14,703,145 | https://en.wikipedia.org/wiki/Aubin%E2%80%93Lions%20lemma | In mathematics, the Aubin–Lions lemma (or theorem) is the result in the theory of Sobolev spaces of Banach space-valued functions, which provides a compactness criterion that is useful in the study of nonlinear evolutionary partial differential equations. Typically, to prove the existence of solutions one first constructs approximate solutions (for example, by a Galerkin method or by mollification of the equation), then uses the compactness lemma to show that there is a convergent subsequence of approximate solutions whose limit is a solution.
The result is named after the French mathematicians Jean-Pierre Aubin and Jacques-Louis Lions. In the original proof by Aubin, the spaces X0 and X1 in the statement of the lemma were assumed to be reflexive, but this assumption was removed by Simon, so the result is also referred to as the Aubin–Lions–Simon lemma.
Statement of the lemma
Let X0, X and X1 be three Banach spaces with X0 ⊆ X ⊆ X1. Suppose that X0 is compactly embedded in X and that X is continuously embedded in X1. For 1 ≤ p, q ≤ ∞, let W be the space of functions u in L^p([0, T]; X0) whose time derivative u′ lies in L^q([0, T]; X1).

(i) If p < ∞, then the embedding of W into L^p([0, T]; X) is compact.

(ii) If p = ∞ and q > 1, then the embedding of W into C([0, T]; X) is compact.
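In display form (a standard formulation written out here for readability, not quoted from a specific reference), the space W and the norm under which it is a Banach space are

W = \{ u \in L^{p}([0,T]; X_{0}) : u' \in L^{q}([0,T]; X_{1}) \},
\qquad
\| u \|_{W} = \| u \|_{L^{p}([0,T]; X_{0})} + \| u' \|_{L^{q}([0,T]; X_{1})},

where the derivative u' is taken in the sense of distributions with values in X1.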
See also
Lions–Magenes lemma
Notes
References
(Theorem II.5.16)
(Sect.7.3)
(Proposition III.1.3)
Banach spaces
Theorems in functional analysis
Lemmas in analysis
Measure theory | Aubin–Lions lemma | [
"Mathematics"
] | 316 | [
"Lemmas",
"Theorems in mathematical analysis",
"Theorems in functional analysis",
"Lemmas in mathematical analysis"
] |
14,703,193 | https://en.wikipedia.org/wiki/Type%20inhabitation | In type theory, a branch of mathematical logic, in a given typed calculus, the type inhabitation problem for this calculus is the following problem: given a type and a typing environment , does there exist a -term M such that ? With an empty type environment, such an M is said to be an inhabitant of .
Relationship to logic
In the case of simply typed lambda calculus, a type has an inhabitant if and only if its corresponding proposition is a tautology of minimal implicative logic. Similarly, a System F type has an inhabitant if and only if its corresponding proposition is a tautology of intuitionistic second-order logic.
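As a small illustration of this correspondence (an informal sketch, not drawn from the article's sources; the names are ad hoc and Python's type hints merely stand in for simple types), the term λa. λb. a inhabits the type A → (B → A), whose corresponding proposition is a tautology of minimal implicative logic:

from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def k(a: A) -> Callable[[B], A]:
    # The lambda term "λa. λb. a" inhabits the type A -> (B -> A);
    # under the Curry–Howard correspondence it corresponds to a proof
    # of the tautology A -> (B -> A).
    return lambda b: a

print(k(42)("ignored"))  # prints 42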
Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types.
Formal properties
For most typed calculi, the type inhabitation problem is very hard. Richard Statman proved that for simply typed lambda calculus the type inhabitation problem is PSPACE-complete. For other calculi, like System F, the problem is even undecidable.
See also
Curry–Howard isomorphism
References
Lambda calculus
Type theory | Type inhabitation | [
"Mathematics"
] | 240 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
14,703,220 | https://en.wikipedia.org/wiki/Pentosidine | Pentosidine is a biomarker for advanced glycation endproducts, or AGEs. It is a well characterized and easily detected member of this large class of compounds.
Background
AGEs are biochemicals formed continuously under normal circumstances, but more rapidly under a variety of stresses, especially oxidative stress and hyperglycemia. They serve as markers of stress and act as toxins themselves. Pentosidine is typical of the class, except that it fluoresces, which allows it to be seen and measured easily. Because it is well characterized, it is often studied to provide new insight into the biochemistry of AGE compounds in general.
Biochemistry
Derived from ribose, a pentose, pentosidine forms fluorescent cross-links between the arginine and lysine residues in collagen. It is formed in a reaction of the amino acids with the Maillard reaction products of ribose.
Although it is present only in trace concentrations among tissue proteins, it is useful for assessing cumulative damage to proteins—advanced glycation endproducts—by non-enzymatic browning reactions with carbohydrates.
Physiology
In vivo, AGEs form pentosidine through sugar fragmentation. In patients with diabetes mellitus type 2, pentosidine correlates with the presence and severity of diabetic complications.
References
Biomolecules
Guanidines
Imidazopyridines
Biomarkers
Advanced glycation end-products | Pentosidine | [
"Chemistry",
"Biology"
] | 303 | [
"Carbohydrates",
"Biomarkers",
"Natural products",
"Biochemistry",
"Guanidines",
"Functional groups",
"Organic compounds",
"Senescence",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Advanced glycation end-products"
] |
14,703,618 | https://en.wikipedia.org/wiki/Arp%2087 | Arp 87 (also known as NGC 3808) is a pair of interacting galaxies, NGC 3808A and NGC 3808B. They are situated in the Leo constellation. NGC 3808A, the brighter, is a peculiar spiral galaxy, while NGC 3808B is an irregular galaxy.
The two galaxies were discovered on 10 April 1785 by William Herschel. The two are located about 330 million light-years (100 megaparsecs) away from the Earth. Arp 87 was observed by the Hubble Space Telescope in 2007, which revealed massive clouds of gas and dust flowing from one galaxy to another. Additionally, both galaxies appear to have been distorted.
Arp 87 is an isolated member of the Coma Supercluster.
One supernova has been observed in NGC 3808A: SN 2013db (type II-P, mag. 17.1).
See also
Atlas of Peculiar Galaxies by Halton Arp
References
External links
087
NGC objects
06643
Interacting galaxies
Intermediate spiral galaxies
Irregular galaxies
Leo (constellation)
Coma Supercluster | Arp 87 | [
"Astronomy"
] | 219 | [
"Leo (constellation)",
"Constellations"
] |
14,703,713 | https://en.wikipedia.org/wiki/Shock-capturing%20method | In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. The computation of flow containing shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature, density, and velocity across the shock.
Method
In shock-capturing methods, the governing equations of inviscid flows (i.e. Euler equations) are cast in conservation form and any shock waves or discontinuities are computed as part of the solution. Here, no special treatment is employed to take care of the shocks themselves, which is in contrast to the shock-fitting method, where shock waves are explicitly introduced in the solution using appropriate shock relations (Rankine–Hugoniot relations). The shock waves predicted by shock-capturing methods are generally not sharp and may be smeared over several grid elements. Also, classical shock-capturing methods have the disadvantage that unphysical oscillations (Gibbs phenomenon) may develop near strong shocks.
Euler equations
The Euler equations are the governing equations for inviscid flow. To implement shock-capturing methods, the conservation form of the Euler equations is used. For a flow without external heat transfer and work transfer (isoenergetic flow), the conservation form of the Euler equations in a Cartesian coordinate system can be written as
∂Q/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0
where the vectors Q, F, G, and H are given by
Q = [ρ, ρu, ρv, ρw, ρe]ᵀ
F = [ρu, ρu² + p, ρuv, ρuw, (ρe + p)u]ᵀ
G = [ρv, ρuv, ρv² + p, ρvw, (ρe + p)v]ᵀ
H = [ρw, ρuw, ρvw, ρw² + p, (ρe + p)w]ᵀ
where e is the total energy (internal energy + kinetic energy + potential energy) per unit mass. That is
e = e_internal + (u² + v² + w²)/2 + gz.
The Euler equations may be integrated with any of the shock-capturing methods available to obtain the solution.
Classical and modern shock capturing methods
From a historical point of view, shock-capturing methods can be classified into two general categories: classical methods and modern shock capturing methods (also called high-resolution schemes). Modern shock-capturing methods are generally upwind biased in contrast to classical symmetric or central discretizations. Upwind-biased differencing schemes attempt to discretize hyperbolic partial differential equations by using differencing based on the direction of the flow. On the other hand, symmetric or central schemes do not consider any information about the direction of wave propagation.
Regardless of the shock-capturing scheme used, a stable calculation in the presence of shock waves requires a certain amount of numerical dissipation, in order to avoid the formation of unphysical numerical oscillations. In the case of classical shock-capturing methods, numerical dissipation terms are usually linear and the same amount is uniformly applied at all grid points. Classical shock-capturing methods only exhibit accurate results in the case of smooth and weak shock solutions, but when strong shock waves are present in the solution, non-linear instabilities and oscillations may arise across discontinuities. Modern shock-capturing methods usually employ nonlinear numerical dissipation, where a feedback mechanism adjusts the amount of artificial dissipation added in accord with the features in the solution. Ideally, artificial numerical dissipation needs to be added only in the vicinity of shocks or other sharp features, and regions of smooth flow must be left unmodified. These schemes have proven to be stable and accurate even for problems containing strong shock waves.
Some of the well-known classical shock-capturing methods include the MacCormack method (uses a discretization scheme for the numerical solution of hyperbolic partial differential equations), Lax–Wendroff method (based on finite differences, uses a numerical method for the solution of hyperbolic partial differential equations), and Beam–Warming method. Examples of modern shock-capturing schemes include higher-order total variation diminishing (TVD) schemes first proposed by Harten, flux-corrected transport scheme introduced by Boris and Book, Monotonic Upstream-centered Schemes for Conservation Laws (MUSCL) based on Godunov approach and introduced by van Leer, various essentially non-oscillatory schemes (ENO) proposed by Harten et al., and the piecewise parabolic method (PPM) proposed by Colella and Woodward. Another important class of high-resolution schemes belongs to the approximate Riemann solvers proposed by Roe and by Osher. The schemes proposed by Jameson and Baker, where linear numerical dissipation terms depend on nonlinear switch functions, fall in between the classical and modern shock-capturing methods.
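As a rough illustration of the classical approach, the sketch below applies the first-order Lax–Friedrichs scheme to the inviscid Burgers equation, a standard scalar model problem for shock capturing. This is only a hedged, minimal example (the function name and parameters are chosen for this illustration); production CFD codes would use one of the higher-order methods listed above.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.9):
    """First-order Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 on a
    periodic grid; shocks are captured automatically, smeared over a
    few cells by the scheme's built-in numerical dissipation."""
    u = u0.copy()
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
        f = 0.5 * u**2                          # physical flux of Burgers' equation
        up, um = np.roll(u, -1), np.roll(u, 1)  # periodic neighbours
        fp, fm = np.roll(f, -1), np.roll(f, 1)
        # Lax-Friedrichs numerical fluxes at the right and left cell interfaces
        flux_r = 0.5 * (f + fp) - 0.5 * (dx / dt) * (up - u)
        flux_l = 0.5 * (fm + f) - 0.5 * (dx / dt) * (u - um)
        u = u - (dt / dx) * (flux_r - flux_l)
        t += dt
    return u

# Usage: a smooth sine wave steepens into a shock, which the scheme
# captures without any explicit shock fitting.
x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u_final = lax_friedrichs_burgers(np.sin(x), x[1] - x[0], t_end=1.5)
```

A modern scheme would replace the uniform Lax–Friedrichs dissipation with an upwind-biased, limited flux (for example a MUSCL reconstruction with a minmod limiter), so that dissipation is added only near discontinuities.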
References
Books
Anderson, J. D., "Modern Compressible Flow with Historical Perspective", McGraw-Hill (2004).
Hirsch, C., "Numerical Computation of Internal and External Flows", Vol. II, 2nd ed., Butterworth-Heinemann (2007).
Laney, C. B., "Computational Gasdynamics", Cambridge Univ. Press (1998).
LeVeque, R. J., "Numerical Methods for Conservation Laws", Birkhauser-Verlag (1992).
Tannehill, J. C., Anderson, D. A., and Pletcher, R. H., "Computational Fluid Dynamics and Heat Transfer", 2nd ed., Taylor & Francis (1997).
Toro, E. F., "Riemann Solvers and Numerical Methods for Fluid Dynamics", 2nd ed., Springer-Verlag (1999).
Technical papers
Boris, J. P. and Book, D. L., "Flux-Corrected Transport III. Minimal Error FCT Algorithms", J. Comput. Phys., 20, 397–431 (1976).
Colella, P. and Woodward, P., "The Piecewise parabolic Method (PPM) for Gasdynamical Simulations", J. Comput. Phys., 54, 174–201 (1984).
Godunov, S. K., "A Difference Scheme for Numerical Computation of Discontinuous Solution of Hyperbolic Equations", Mat. Sbornik, 47, 271–306 (1959).
Harten, A., "High Resolution Schemes for Hyperbolic Conservation Laws", J. Comput. Phys., 49, 357–393 (1983).
Harten, A., Engquist, B., Osher, S., and Chakravarthy, S. R., "Uniformly High Order Accurate Essentially Non-Oscillatory Schemes III", J. Comput. Phys., 71, 231–303 (1987).
Jameson, A. and Baker, T., "Solution of the Euler Equations for Complex Configurations", AIAA Paper, 83–1929 (1983).
MacCormack, R. W., "The Effect of Viscosity in Hypervelocity Impact Cratering", AIAA Paper, 69–354 (1969).
Roe, P. L., "Approximate Riemann Solvers, Parameter Vectors and Difference Schemes", J. Comput. Phys. 43, 357–372 (1981).
Shu, C.-W., Osher, S., "Efficient Implementation of Essentially Non-Oscillatory Shock Capturing Schemes", J. Comput. Phys., 77, 439–471 (1988).
van Leer, B., "Towards the Ultimate Conservative Difference Scheme. V. A Second-Order Sequel to Godunov's Method", J. Comput. Phys., 32, 101–136 (1979).
Computational fluid dynamics
Numerical differential equations
Aerodynamics | Shock-capturing method | [
"Physics",
"Chemistry",
"Engineering"
] | 1,539 | [
"Computational fluid dynamics",
"Aerodynamics",
"Computational physics",
"Aerospace engineering",
"Fluid dynamics"
] |
14,703,837 | https://en.wikipedia.org/wiki/Great%20White%20Brotherhood | The Great White Brotherhood, in belief systems akin to Theosophy and New Age, are said to be perfected beings of great power who spread spiritual teachings through selected humans. The members of the Brotherhood may be known as the Masters of the Ancient Wisdom, the Ascended Masters, the Church Invisible, or simply as the Hierarchy. The first person to talk about them in the West was Helena Petrovna Blavatsky (Theosophy), after she and other people claimed to have received messages from them. These included Helena Roerich, Alice A. Bailey, Guy Ballard, Geraldine Innocente (The Bridge to Freedom), Elizabeth Clare Prophet, Bob Sanders, and Benjamin Creme.
History
The idea of a secret organization of enlightened mystics, guiding the spiritual development of the human race, was pioneered in the late eighteenth century by Karl von Eckartshausen (1752-1803) in his book The Cloud upon the Sanctuary; Eckartshausen called this body of mystics, who remained active after their physical deaths on earth, the Council of Light. Eckartshausen's proposed communion of living and dead mystics, in turn, drew partially on Christian ideas such as the Communion of the Saints, and partially on previously circulating European ideas about secret societies of enlightened, mystical, or magic adepts typified by the Rosicrucians and the Illuminati.
The Mahatma Letters began publication in 1881 with information purportedly revealed by "Koot Hoomi" to Alfred Percy Sinnett, and were also influential on the early development of the tradition. Koot Hoomi, through Sinnett, revealed that high-ranking members of mystic organizations in India and Tibet were able to maintain regular telepathic contact with one another, and thus were able to communicate to each other, and also to Sinnett, without the need for either written or oral communications, and in a manner similar to the way that spirit mediums claimed to communicate with the spirits of the dead. The letters published by Sinnett, which proposed the controversial doctrine of reincarnation, were said to have been revealed through this means.
Eckartshausen's idea was expanded in the teachings of Helena P. Blavatsky as developed by Charles W. Leadbeater, Alice Bailey and Helena Roerich. Blavatsky, founder of the Theosophical Society, attributed her teachings to just such a body of adepts; in her 1877 book Isis Unveiled, she called the revealers of her teachings the "Masters of the Hidden Brotherhood" or the "Mahatmas". Blavatsky claimed that she had made physical contact with these adepts' earthly representatives in Tibet; but also, that she continued to receive teachings from them through psychic channels, through her abilities of spirit mediumship.
Ideas about this secret council of sages, under several names, were a widely shared feature of late nineteenth-century and early twentieth-century esotericism. Arthur Edward Waite, in his 1898 Book of Black Magic and of Pacts, hinted at the existence of a secret group of initiates who dispense truth and wisdom to the worthy (Symonds, John and Grant, Kenneth, eds.).
The actual phrase "Great White Brotherhood" was used extensively in Leadbeater's 1925 book The Masters and the Path. Alice A. Bailey also claimed to have received numerous revelations from the Great White Brotherhood between 1920 and 1949, which are compiled in her books known collectively by her followers as the Alice A. Bailey Material. Since the introduction of the phrase, the term "Great White Brotherhood" is in some circles used generically to refer to any concept of an enlightened community of adepts, on Earth or in the hereafter, with benevolent aims toward the spiritual development of the human race, and without strict regard to the names used within the tradition. Dion Fortune adopts the name to refer to the community of living and dead adepts.
The ritual magicians of the Western mystery tradition sometimes refer to the Great White Brotherhood as the "Great White Lodge", a name that appears to indicate that they imagine it constitutes an initiatory hierarchy similar to Freemasonry. Gareth Knight describes its members as the "Masters" or "Inner Plane Adepti", who have "gained all the experience, and all the wisdom resulting from experience, necessary for their spiritual evolution in the worlds of form." While some go on to "higher evolution in other spheres", others become teaching Masters who stay behind to help younger initiates in their "cyclic evolution on this planet". Only a few of this community are known to the human race; these initiates are the "teaching Masters". The AMORC Rosicrucian order maintains a difference between the "Great White Brotherhood" and the "Great White Lodge", saying that the Great White Brotherhood is the "school or fraternity" of the Great White Lodge, and that "every true student on the Path" aspires to membership in this Brotherhood.
Bulgarian Gnostic master Peter Deunov referred to his organization of followers as the Universal White Brotherhood, and it is clear that he too was referring to the Western esoteric community-at-large. When excommunicated as a heretic on 7 July 1922, he defended the Brotherhood as follows:
‘Let the Orthodox Church resolve this issue, whether Christ has risen, whether Love is accepted in the Orthodox Church. There is one church in the world. But the Universal White Brotherhood is outside the church - it is higher than the church. But even higher than the Universal White Brotherhood is the Kingdom of Heaven. Hence the Church is the first step, the Universal White Brotherhood is the second step, and the Kingdom of Heaven is the third step - the greatest one that is to be manifested.’ (24 June 1923).
Similarly, Bulgarian teacher Omraam Mikhaël Aïvanhov (Deunov's principal disciple) formally established Fraternité Blanche Universelle as an "exoteric" esoteric organization still operating today in Switzerland, Canada, the USA, the UK and parts of Scandinavia.
The term Great White Brotherhood was further developed and popularized in 1934 with the publication of "Unveiled Mysteries" by Guy Ballard's "I AM" Activity. This Brotherhood of "Immortal Saints and Sages" who have gone through the Initiations of the Transfiguration, Resurrection, and the Ascension was further popularized by Ascended Master Teachings developed by The Bridge to Freedom, The Summit Lighthouse and the Church Universal and Triumphant, and The Temple of The Presence.
Benjamin Creme has published books — he claims the information within them has been telepathically transmitted to him from the Great White Brotherhood.
Founding of the Great White Brotherhood
In 1952, Geraldine Innocente, messenger for The Bridge to Freedom, delivered this address purported to be from Sanat Kumara describing the founding of the "Great White Brotherhood":
" . . . I had nothing to work with but Light and Love, and many centuries passed before even two lifestreams applied for membership - One, later became Buddha (now, Lord of the World, the Planetary Logos Gautama Buddha) and the Other, became the Cosmic Christ (Lord Maitreya, now the Planetary Buddha). The Brotherhood has grown through these ages and centuries until almost all the offices are held now by those belonging to the evolution of Earth and those who have volunteered to remain among her evolution. . .."
Members of The Bridge to Freedom believe that on July 4, 1954 Sanat Kumara stated through Geraldine Innocente:
" . . . Thus We took Our abode upon the sweet Earth. Through the same power of centripetal and centrifugal force of which I spoke (cohesion and expansion of the magnetic power of Divine Love), We then began to magnetize the Flame in the hearts of some of the Guardian Spirits who were not sleeping so soundly and who were not too enthusiastically engaged in using primal life for the satisfaction of the personal self.
"In this way, the Great White Brotherhood began. The Three-fold Flame within the heart of Shamballa, within the Hearts of the Kumaras and Myself, formed the magnetic Heart of the Great White Brotherhood by Whom you have all been blessed and of which Brotherhood you all aspire to become conscious members. . . . "
Great Brotherhood of Light
The Great White Brotherhood, also known as Great Brotherhood of Light or the Spiritual Hierarchy of Earth, is perceived as a spiritual organization composed of those Ascended Masters who have risen from the Earth into immortality, but still maintain an active watch over the world. C.W. Leadbeater said "The Great White Brotherhood also includes members of the Heavenly Host (the Spiritual Hierarchy directly concerned with the evolution of our world), Beneficent Members from other planets that are interested in our welfare, as well as certain unascended chelas".
The Masters of the Ancient Wisdom are believed by Theosophists to be joined together in service to the Earth under the name of the Great White Brotherhood. The use of the term "white" refers to their use of white magic, as opposed to black magic, and is unrelated to race. Blavatsky's later writings described the masters as ethnically Tibetan or Indian (Hindu), not European. Recent skeptical research indicates, however, that this description was used by Blavatsky to hide the real identity of her teachers, some of whom are said to have really been well-known Indian rulers or personalities of her time.
Most occult groups assign a high level of importance to the Great White Brotherhood, but some make interaction with the Ascended Masters of the Brotherhood a major focus of their existence. Of these several, the most prominent are the "I Am" Activity, founded in the 1930s, The Bridge to Freedom, the Church Universal and Triumphant, and The Temple of The Presence. Belief in the Brotherhood and the Masters is an essential part of the syncretistic teachings of various organizations that have continued and expanded the Theosophical philosophical concepts. Information given by the Summit Lighthouse and the I AM movement is suspect, since none of the writers of these groups are Masters of any Brotherhood. Some examples of those believed to be Ascended Masters would be, according to different unconfirmed sources, the following: Master Jesus, Confucius, Gautama Buddha, Mary the Mother of Jesus, Hilarion, Enoch, Paul the Venetian, Kwan Yin, Saint Germain, and Kuthumi. These sources say that all these people put aside any differences they might have had in their Earthly careers, and unite instead to advance the spiritual well-being of humanity.
Agni Yoga
The Great White Brotherhood is the name given in some metaphysical/occult circles to adepts of wisdom in or out of earthly incarnation who have assumed responsibility for the cosmic destiny of the human race, both individually and collectively. Nicholas Roerich and his wife, Helena Roerich, inspired by the Theosophical writings of H.P. Blavatsky, published the "Agni Yoga" series of books. Their contents, claimed to be inspired by the Master Morya, described the work of the White Brotherhood and the Spiritual Hierarchy.
See also
Ascended masters
Bodhisattva
Communion of Saints
Masters of the Ancient Wisdom
Secret Chiefs
Marina Tsvigun (Maria Devi Khristos) of the Ukrainian White Brotherhood
Notes
External links
The Great White Brotherhood - website for books and messages from The Great White Brotherhood
The Stairway To Freedom - website for The Stairway To Freedom book dictated by The Great White Brotherhood
Ascended Master Teachings
New religious movements
Spiritual evolution
Theosophical philosophical concepts | Great White Brotherhood | [
"Biology"
] | 2,372 | [
"Spiritual evolution",
"Non-Darwinian evolution",
"Biology theories"
] |
348,004 | https://en.wikipedia.org/wiki/Arg%20max | In mathematics, the arguments of the maxima (abbreviated arg max or argmax) and arguments of the minima (abbreviated arg min or argmin) are the input points at which a function output value is maximized and minimized, respectively. While the arguments are defined over the domain of a function, the output is part of its codomain.
Definition
Given an arbitrary set X, a totally ordered set Y, and a function f : X → Y, the arg max over some subset S of X is defined by
arg max_{x ∈ S} f(x) := { x ∈ S : f(s) ≤ f(x) for all s ∈ S }.
If S or X is clear from the context, then S is often left out, as in arg max_x f(x). In other words, arg max is the set of points x for which f(x) attains the function's largest value (if it exists). It may be the empty set, a singleton, or contain multiple elements.
In the fields of convex analysis and variational analysis, a slightly different definition is used in the special case where are the extended real numbers. In this case, if is identically equal to on then (that is, ) and otherwise is defined as above, where in this case can also be written as:
where it is emphasized that this equality involving holds when is not identically on
Arg min
The notion of (or ), which stands for argument of the minimum, is defined analogously. For instance,
are points for which attains its smallest value. It is the complementary operator of
In the special case where are the extended real numbers, if is identically equal to on then (that is, ) and otherwise is defined as above and moreover, in this case (of not identically equal to ) it also satisfies:
Examples and properties
For example, if is then attains its maximum value of only at the point Thus
The arg max operator is different from the max operator. The max operator, when given the same function, returns the maximum value of the function instead of the point or points that cause the function to reach that value; in other words, max_{x ∈ S} f(x) is the largest element of the image { f(x) : x ∈ S }.
Like max may be the empty set (in which case the maximum is undefined) or a singleton, but unlike may not contain multiple elements: for example, if is then but because the function attains the same value at every element of
Equivalently, if is the maximum of then the is the level set of the maximum:
We can rearrange to give the simple identity
If the maximum is reached at a single point then this point is often referred to as and is considered a point, not a set of points. So, for example,
(rather than the singleton set ), since the maximum value of is which occurs for However, in case the maximum is reached at many points, needs to be considered a of points.
For example
because the maximum value of is which occurs on this interval for or On the whole real line
so an infinite set.
Functions need not in general attain a maximum value, and hence the is sometimes the empty set; for example, since is unbounded on the real line. As another example, although is bounded by However, by the extreme value theorem, a continuous real-valued function on a closed interval has a maximum, and thus a nonempty
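In numerical code the distinction between max and arg max shows up directly. The following short Python/NumPy sketch (a toy illustration on a finite grid, with names chosen for this example) contrasts the maximum value, the single index returned by numpy.argmax, and the set-valued arg max of the definition above.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 401)   # a finite grid standing in for the domain
f = 1.0 - np.abs(x)               # a function with a unique maximiser at x = 0

max_value = f.max()               # the max: the largest attained value (1.0 here)
first_index = f.argmax()          # NumPy returns only the first maximising index
argmax_point = x[first_index]     # ... which corresponds to x = 0 on this grid

# The set-valued arg max: every grid point where f attains its maximum.
argmax_set = x[np.isclose(f, max_value)]
```

For a function such as cos sampled over several periods, argmax_set would contain several points, while f.argmax() would still report only the first of them.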
See also
Argument of a function
Maxima and minima
Mode (statistics)
Mathematical optimization
Kernel (linear algebra)
Preimage
Notes
References
External links
Elementary mathematics
Inverse functions | Arg max | [
"Mathematics"
] | 657 | [
"Elementary mathematics"
] |
348,029 | https://en.wikipedia.org/wiki/Virasoro%20algebra | In mathematics, the Virasoro algebra is a complex Lie algebra and the unique nontrivial central extension of the Witt algebra. It is widely used in two-dimensional conformal field theory and in string theory.
Structure
The Virasoro algebra is spanned by generators L_n for n ∈ ℤ and the central charge c.
These generators satisfy [c, L_n] = 0 and
[L_m, L_n] = (m − n) L_{m+n} + (c/12)(m³ − m) δ_{m+n, 0}.
The factor of 1/12 is merely a matter of convention. For a derivation of the algebra as the unique central extension of the Witt algebra, see derivation of the Virasoro algebra.
The Virasoro algebra has a presentation in terms of two generators (e.g. L_3 and L_{−2}) and six relations.
The generators are called annihilation modes, while are creation modes. A basis of creation generators of the Virasoro algebra's universal enveloping algebra is the set
For , let , then
.
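The commutation relations above can be checked mechanically, for example by verifying the Jacobi identity on a few triples of generators. The following is a minimal, self-contained Python sketch (the dictionary representation of algebra elements is purely a convenience of this illustration, not a standard library).

```python
from fractions import Fraction

def bracket(a, b):
    """Lie bracket of elements written as {mode: coeff}, where an int key n
    stands for L_n and the key "c" for the central element, using
    [L_m, L_n] = (m - n) L_{m+n} + (c/12)(m^3 - m) delta_{m+n,0}."""
    out = {}
    for m, x in a.items():
        if m == "c":
            continue                      # the central element commutes with everything
        for n, y in b.items():
            if n == "c":
                continue
            coeff = x * y
            out[m + n] = out.get(m + n, 0) + (m - n) * coeff
            if m + n == 0:
                out["c"] = out.get("c", 0) + Fraction(m**3 - m, 12) * coeff
    return {k: v for k, v in out.items() if v != 0}

def L(n):
    return {n: Fraction(1)}

def jacobi_defect(m, n, p):
    """[L_m,[L_n,L_p]] + [L_n,[L_p,L_m]] + [L_p,[L_m,L_n]], which must vanish."""
    total = {}
    for a, b, c in ((m, n, p), (n, p, m), (p, m, n)):
        for key, val in bracket(L(a), bracket(L(b), L(c))).items():
            total[key] = total.get(key, 0) + val
    return {k: v for k, v in total.items() if v != 0}

assert jacobi_defect(3, -2, -1) == {}   # Jacobi identity holds, central term included
assert jacobi_defect(5, 0, -5) == {}
```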
Representation theory
In any indecomposable representation of the Virasoro algebra, the central generator of the algebra takes a constant value, also denoted and called the representation's central charge.
A vector in a representation of the Virasoro algebra has conformal dimension (or conformal weight) if it is an eigenvector of with eigenvalue :
An -eigenvector is called a primary state (of dimension ) if it is annihilated by the annihilation modes,
Highest weight representations
A highest weight representation of the Virasoro algebra is a representation generated by a primary state .
A highest weight representation is spanned by the -eigenstates . The conformal dimension of is , where is called the level of .
Any state whose level is not zero is called a descendant state of .
For any , the Verma module of central charge and conformal dimension is the representation whose basis is , for a primary state of dimension .
The Verma module is
the largest possible highest weight representation.
The Verma module is indecomposable, and for generic values of it is also irreducible. When it is reducible, there exist other highest weight representations with these values of , called degenerate representations, which are quotients of the Verma module. In particular, the unique irreducible highest weight representation with these values of is the quotient of the Verma module by its maximal submodule.
A Verma module is irreducible if and only if it has no singular vectors.
Singular vectors
A singular vector or null vector of a highest weight representation is a state that is both descendant and primary.
A sufficient condition for the Verma module to have a singular vector is for some , where
Then the singular vector has level and conformal dimension
Here are the values of for , together with the corresponding singular vectors, written as for the primary state of :
Singular vectors for arbitrary may be computed using various algorithms, and their explicit expressions are known.
If , then has a singular vector at level if and only if with . If , there can also exist a singular vector at level if with and . This singular vector is now a descendant of another singular vector at level .
The integers that appear in are called Kac indices. It can be useful to use non-integer Kac indices for parametrizing the conformal dimensions of Verma modules that do not have singular vectors, for example in the critical random cluster model.
Shapovalov form
For any , the involution defines an automorphism of the Virasoro algebra and of its universal enveloping algebra.
Then the Shapovalov form is the symmetric bilinear form on the Verma module such that , where the numbers are defined by
and .
The inverse Shapovalov form is relevant to computing Virasoro conformal blocks, and can be determined in terms of singular vectors.
The determinant of the Shapovalov form at a given level is given by the Kac determinant formula,
where is the partition function, and is a positive constant that does not depend on or .
Hermitian form and unitarity
If , a highest weight representation with conformal dimension has a unique Hermitian form such that the Hermitian adjoint of is and the norm of the primary state is one. In the basis , the Hermitian form on the Verma module has the same matrix as the Shapovalov form , now interpreted as a Gram matrix.
The representation is called unitary if that Hermitian form is positive definite.
Since any singular vector has zero norm, all unitary highest weight representations are irreducible.
An irreducible highest weight representation is unitary if and only if
either with ,
or with
Daniel Friedan, Zongan Qiu, and Stephen Shenker showed that these conditions are necessary, and Peter Goddard, Adrian Kent, and David Olive used the coset construction or GKO construction (identifying unitary representations of the Virasoro algebra within tensor products of unitary representations of affine Kac–Moody algebras) to show that they are sufficient.
Characters
The character of a representation of the Virasoro algebra is the function
The character of the Verma module is
where is the Dedekind eta function.
For any and for , the Verma module is reducible due to the existence of a singular vector at level . This singular vector generates a submodule, which is isomorphic to the Verma module . The quotient of by this submodule is irreducible if does not have other singular vectors, and its character is
Let with and coprime, and and . (Then is in the Kac table of the corresponding minimal model). The Verma module has infinitely many singular vectors, and is therefore reducible with infinitely many submodules. This Verma module has an irreducible quotient by its largest nontrivial submodule. (The spectrums of minimal models are built from such irreducible representations.) The character of the irreducible quotient is
This expression is an infinite sum because the submodules and have a nontrivial intersection, which is itself a complicated submodule.
Applications
Conformal field theory
In two dimensions, the algebra of local conformal transformations is made of two copies of the Witt algebra.
It follows that the symmetry algebra of two-dimensional conformal field theory is the Virasoro algebra. Technically, the conformal bootstrap approach to two-dimensional CFT relies on Virasoro conformal blocks, special functions that include and generalize the characters of representations of the Virasoro algebra.
String theory
Since the Virasoro algebra comprises the generators of the conformal group of the worldsheet, the stress tensor in string theory obeys the commutation relations of (two copies of) the Virasoro algebra. This is because the conformal group decomposes into separate diffeomorphisms of the forward and back lightcones. Diffeomorphism invariance of the worldsheet implies additionally that the stress tensor vanishes. This is known as the Virasoro constraint, and in the quantum theory, cannot be applied to all the states in the theory, but rather only on the physical states (compare Gupta–Bleuler formalism).
Generalizations
Super Virasoro algebras
There are two supersymmetric N = 1 extensions of the Virasoro algebra, called the Neveu–Schwarz algebra and the Ramond algebra. Their theory is similar to that of the Virasoro algebra, now involving Grassmann numbers. There are further extensions of these algebras with more supersymmetry, such as the N = 2 superconformal algebra.
W-algebras
W-algebras are associative algebras which contain the Virasoro algebra, and which play an important role in two-dimensional conformal field theory. Among W-algebras, the Virasoro algebra has the particularity of being a Lie algebra.
Affine Lie algebras
The Virasoro algebra is a subalgebra of the universal enveloping algebra of any affine Lie algebra, as shown by the Sugawara construction. In this sense, affine Lie algebras are extensions of the Virasoro algebra.
Meromorphic vector fields on Riemann surfaces
The Virasoro algebra is a central extension of the Lie algebra of meromorphic vector fields with two poles on a genus 0 Riemann surface.
On a higher-genus compact Riemann surface, the Lie algebra of meromorphic vector fields with two poles also has a central extension, which is a generalization of the Virasoro algebra. This can be further generalized to supermanifolds.
Vertex algebras and conformal algebras
The Virasoro algebra also has vertex algebraic and conformal algebraic counterparts, which basically come from arranging all the basis elements into generating series and working with single objects.
History
The Witt algebra (the Virasoro algebra without the central extension) was discovered by É. Cartan (1909). Its analogues over finite fields were studied by E. Witt in about the 1930s.
The central extension of the Witt algebra that gives the Virasoro algebra was first found (in characteristic p > 0) by R. E. Block (1966, page 381) and independently rediscovered (in characteristic 0) by I. M. Gelfand and Dmitry Fuchs (1969).
The physicist Miguel Ángel Virasoro
(1970) wrote down some operators generating the Virasoro algebra (later known as the Virasoro operators) while studying dual resonance models, though he did not find the central extension. The central extension giving the Virasoro algebra was rediscovered in physics shortly after by J. H. Weis, according to Brower and Thorn (1971, footnote on page 167).
See also
Conformal field theory
Goddard–Thorn theorem
Heisenberg algebra
Lie conformal algebra
Pohlmeyer charge
Super Virasoro algebra
W-algebra
Witt algebra
WZW model
References
Further reading
V. G. Kac, A. K. Raina, Bombay lectures on highest weight representations, World Sci. (1987) .
& correction: ibid. 13 (1987) 260.
V. K. Dobrev, "Characters of the irreducible highest weight modules over the Virasoro and super-Virasoro algebras", Suppl. Rendiconti del Circolo Matematico di Palermo, Serie II, Numero 14 (1987) 25-42.
Conformal field theory
Lie algebras
Mathematical physics | Virasoro algebra | [
"Physics",
"Mathematics"
] | 2,142 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
348,044 | https://en.wikipedia.org/wiki/Meta-system | A metasystem or meta-system is a "system about other systems", such as describing, generalizing, modelling, or analyzing the other system(s). It links the concepts of a system and meta.
Control theory | Meta-system | [
"Mathematics"
] | 50 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
348,085 | https://en.wikipedia.org/wiki/Spinor%20bundle | In differential geometry, given a spin structure on an n-dimensional orientable Riemannian manifold (M, g), one defines the spinor bundle to be the complex vector bundle associated to the corresponding principal bundle of spin frames over M and the spin representation of its structure group Spin(n) on the space of spinors Δ_n.
A section of the spinor bundle is called a spinor field.
Formal definition
Let P be a spin structure on a Riemannian manifold (M, g); that is, an equivariant lift of the oriented orthonormal frame bundle with respect to the double covering ρ : Spin(n) → SO(n) of the special orthogonal group by the spin group.
The spinor bundle S is defined to be the complex vector bundle
S = P ×_κ Δ_n
associated to the spin structure P via the spin representation κ : Spin(n) → U(Δ_n), where U(W) denotes the group of unitary operators acting on a Hilbert space W. The spin representation κ is a faithful and unitary representation of the group Spin(n).
See also
Clifford bundle
Clifford module bundle
Orthonormal frame bundle
Spin geometry
Spinor
Spinor representation
Notes
Further reading
Algebraic topology
Riemannian geometry
Structures on manifolds | Spinor bundle | [
"Mathematics"
] | 200 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
348,127 | https://en.wikipedia.org/wiki/Giffen%20good | In microeconomics and consumer theory, a Giffen good is a product that people consume more of as the price rises and vice versa, violating the law of demand.
For ordinary goods, as the price of the good rises, the substitution effect makes consumers purchase less of it, and more of substitute goods; the income effect can either reinforce or weaken this decline in demand, but for an ordinary good never outweighs it. By contrast, a Giffen good is so strongly an inferior good (in higher demand at lower incomes) that the contrary income effect more than offsets the substitution effect, and the net effect of the good's price rise is to increase demand for it. This phenomenon is known as the Giffen paradox.
Background
Giffen goods are named after Scottish economist Sir Robert Giffen, to whom Alfred Marshall attributed this idea in his book Principles of Economics, first published in 1890. Giffen first proposed the paradox from his observations of the purchasing habits of the Victorian era poor.
It has been suggested by Etsusuke Masuda and Peter Newman that Simon Gray described "Gray goods" in his 1815 text entitled The Happiness of States: Or An Inquiry Concerning Population, The Modes of Subsisting and Employing It, and the Effects of All on Human Happiness. The chapter entitled A Rise in the Price of Bread Corn, beyond a certain Pitch, tends to increase the Consumption of it, contains a detailed account of what have come to be called Giffen goods, and which might better be called Gray goods. They also note that George Stigler corrected Marshall's misattribution in a 1947 journal article on the history.
Analysis
For almost all products, the demand curve has a negative slope: as the price increases, quantity demanded for the good decreases. (See Supply and demand for background.)
Giffen goods are the exception to this general rule. Unlike other goods or services, for a Giffen good a rise in price is met with a rise, rather than a fall, in the quantity demanded, so the demand curve slopes upward over the relevant range. To be a true Giffen good, the good's price must be the only thing that changes to produce a change in quantity demanded.
Giffen goods should not be confused with Veblen goods: Veblen goods are products whose demand increases if their price increases because the price is seen as an indicator of quality or status.
The classic example given by Marshall is of inferior quality staple foods, whose demand is driven by poverty that makes their purchasers unable to afford superior foodstuffs. As the price of the cheap staple rises, they can no longer afford to supplement their diet with better foods, and must consume more of the staple food.
There are three necessary conditions for this situation to arise:
the good in question must be an inferior good,
there must be a lack of close substitute goods, and
the goods must constitute a substantial percentage of the buyer's income, but not such a substantial percentage of the buyer's income that none of the associated normal goods are consumed.
If precondition #1 is changed to "The goods in question must be so inferior that the income effect is greater than the substitution effect" then this list defines necessary and sufficient conditions. The last condition is a condition on the buyer rather than the goods itself, and thus the phenomenon is also called a "Giffen behavior".
Examples
Suppose a consumer has a budget of $6 per day that they spend on food. They must eat three meals a day, and there are only two options for them: the inferior good, bread, which costs $1 per meal, and the superior good, cake, which costs $4 per meal. Cake is always preferable to bread. At present, the consumer would purchase 2 loaves of bread and one cake, completely exhausting their budget to fill 3 meals each day. Now, if the price of bread were to rise from $1 to $2, then the consumer would have no choice but to give up cake, and spend their entire budget on 3 loaves of bread, in order to eat three meals a day. In this situation, their consumption of bread would have actually increased as a result of the price increase. Thus bread would be a Giffen good in this example.
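The arithmetic of this example can be written out as a small sketch. The following Python snippet (a toy model whose "prefer cake whenever affordable" rule stands in for the preferences described above; the function name and default values are illustrative) reproduces the switch from 2 loaves and 1 cake to 3 loaves when the bread price doubles.

```python
def chosen_bundle(budget=6.0, meals=3, price_bread=1.0, price_cake=4.0):
    """Pick the affordable bundle with as many cake meals as possible,
    filling the remaining meals with bread."""
    for cakes in range(meals, -1, -1):          # try the most cake first
        breads = meals - cakes
        if cakes * price_cake + breads * price_bread <= budget:
            return {"bread": breads, "cake": cakes}
    return None                                  # the meals cannot be afforded at all

print(chosen_bundle(price_bread=1.0))  # {'bread': 2, 'cake': 1}
print(chosen_bundle(price_bread=2.0))  # {'bread': 3, 'cake': 0}: bread demand rises with its price
```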
Investor Rob Arnott said in 2021 that the stock market is a Giffen good. Widespread interest in the market tends to increase during periods of rising prices for stocks and decrease during market crashes, which is contrary to ideal investing practices.
Empirical evidence
Evidence for the existence of Giffen goods has generally been limited. A 2008 paper by Robert Jensen and Nolan Miller argued that rice and wheat noodles were Giffen goods in parts of China. Another 2008 paper by the same authors experimentally demonstrated the existence of Giffen goods among people at the household level by directly subsidizing purchases of rice and wheat flour for extremely poor families. The field experiment, conducted in 2007, covered the province of Hunan, where rice is a dietary staple, and the province of Gansu, where wheat is a staple. In both provinces, randomly selected households were offered their dietary staple at subsidized rates. After the completion of the project, demand for rice among the Hunan households that had been offered the subsidy was found to have fallen drastically, while the wheat data from Gansu provided only weak evidence of the Giffen paradox.
In 1991, Battalio, Kagel, and Kogut published an article arguing that quinine water is a Giffen good for some lab rats. However, they were only able to show the existence of a Giffen good at an individual level and not the market level.
Giffen goods are difficult to study because the definition requires a number of observable conditions. One reason for the difficulty in studying market demand for Giffen goods is that Giffen originally envisioned a specific situation faced by individuals in poverty. Modern consumer behaviour research methods often deal in aggregates that average out income levels, and are too blunt an instrument to capture these specific situations. Complicating the matter are the requirements that availability of substitutes be limited and that consumers be not so poor that they can only afford the inferior good. For this reason, many text books use the term Giffen paradox rather than Giffen good.
Some types of premium goods (such as expensive French wines, or celebrity-endorsed perfumes) are sometimes called Giffen goods via the claim that lowering the price of these high-status goods decreases demand because they are no longer perceived as exclusive or high-status products. However, to the extent that the perceived nature of such high-status goods actually changes significantly with a substantial price drop, this behavior disqualifies them from being considered Giffen goods, because the Giffen goods analysis assumes that only the consumer's income or the relative price level changes, not the nature of the good itself. If a price change modifies consumers' perception of the good, they should be analysed as Veblen goods. Some economists question the empirical validity of the distinction between Giffen and Veblen goods, arguing that whenever there is a substantial change in the price of a good its perceived nature also changes, since price is a large part of what constitutes a product. However, the theoretical distinction between the two types of analysis remains clear, and which one should apply to any actual case is an empirical matter. Microeconomic consumer theory assumes that a consumer can value a good without knowing its price; in practice, however, consumers constrained by income must evaluate goods at the prices actually on offer, and to some degree a higher price itself signals higher value to the consumer.
Great Famine in Ireland
Potatoes during the Irish Great Famine were once considered to be an example of a Giffen good. Along with the Famine, the price of potatoes and meat increased subsequently. Compared to meat, it is obvious that potatoes could be much cheaper as a staple food. Due to poverty, individuals could not afford meat anymore; therefore, demand for potatoes increased. Under such a situation, the supply curve will increase with the rise in potatoes’ price, which is consistent with the definition of Giffen good. However, Gerald P. Dwyer and Cotton M. Lindsey challenged this idea in their 1984 article Robert Giffen and the Irish Potato, where they showed the contradicting nature of the Giffen "legend" with respect to historical evidence.
The Giffen nature of the Irish potato was also later discredited by Sherwin Rosen of the University of Chicago in his 1999 paper Potato Paradoxes. Rosen showed that the phenomenon could be explained by a normal demand model.
Charles Read has shown with quantitative evidence that bacon pigs showed Giffen-style behaviour during the Irish Famine, but that potatoes did not.
Other proposed examples
Anthony Bopp (1983) proposed that kerosene, a low-quality fuel used in home heating, was a Giffen good. Schmuel Baruch and Yakar Kanai (2001) suggested that shochu, a Japanese distilled beverage, could be a Giffen good. In both cases, the authors offered supporting econometric evidence. However, this evidence is considered incomplete.
A good may be a Giffen good at the individual level but not at the aggregate level (or vice-versa). As shown by Hildenbrand's model, aggregate demand will not necessarily exhibit any Giffen behavior even when we assume the same preferences for each consumer, whose nominal wealth is uniformly distributed on an interval containing zero. This could explain the presence of Giffen behavior for individual consumers but the absence in aggregate data.
See also
Capital good
Consumer choice
Price elasticity of demand
Supply and demand
Ordinary good
Veblen good
Inferior good
Normal good
References
Further reading
External links
Alfred Marshall Principles of Economics Bk.III,Ch.VI in paragraph III.VI.17
The Last Word on Giffen Goods?
Giffen good
What Do Prostitutes and Rice Have in Common?
Goods (economics)
Paradoxes in economics
Consumer theory | Giffen good | [
"Physics"
] | 2,045 | [
"Materials",
"Goods (economics)",
"Matter"
] |
348,147 | https://en.wikipedia.org/wiki/On%20the%20fly | On the fly is a phrase used to describe something that is being changed while the process that the change affects is ongoing. It is used in the automotive, computer, and culinary industries. In cars, on the fly can be used to describe the changing of a car's configuration while it is still being driven. Processes that can occur while the car is still moving include switching between two-wheel drive and four-wheel drive on some cars and opening and closing the roof on some convertible cars. In computing, on-the-fly CD writers can read from one CD and write the data to another without saving it in the computer's memory. Switching programs or applications on the fly in multi-tasking operating systems means the ability to switch between native and/or emulated programs or applications that are running in parallel while performing their tasks or processes, without pausing, freezing, or delaying any of them or causing other unwanted events. Switching computer parts on the fly means computer parts are replaced while the computer is still running. It can also be used in programming to describe changing a program while it is still running. In restaurants and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away.
Colloquial usage
In colloquial use, "on the fly" means something created when needed. The phrase is used to mean:
something that was not planned ahead
changes that are made during the execution of same activity: ex tempore, impromptu.
Automotive usage
In the automotive industry, the term refers to the circumstance of performing certain operations while a vehicle is driven by the engine and moving. In reference to four-wheel drive vehicles, this term describes the ability to change from two to four-wheel drive while the car is in gear and moving. In some convertible models, the roof can be folded electrically on the fly, whereas in other cases the car must be stopped.
In harvesting machines, newer monitoring systems let the driver track the quality of the grain, while enabling them to adjust the rotor speed on the fly as harvesting progresses.
Computer usage
In multitasking computing an operating system can handle several programs, both native applications or emulated software, that are running independent, parallel, together in the same time in the same device, using separated or shared resources and/or data, executing their tasks separately or together, while a user can switch on the fly between them or groups of them to use obtained effects or supervise purposes, without waste of time or waste of performance. In operating systems using GUI very often it is done by switching from an active window (or an object playing similar role) of a particular software piece to another one but of another software.
A computer can compute results on the fly, or retrieve a previously stored result.
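As a small, hypothetical illustration of that distinction in code, the first function below recomputes its result on the fly at every call, while the second computes it once and thereafter retrieves the stored value.

```python
from functools import lru_cache

def on_the_fly(n: int) -> int:
    # Recomputed from scratch every time it is called.
    return sum(i * i for i in range(n))

@lru_cache(maxsize=None)
def stored(n: int) -> int:
    # Computed once per distinct n, then retrieved from the cache.
    return sum(i * i for i in range(n))
```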
It can mean to make a copy of a removable media (CD-ROM, DVD, etc.) directly, without first saving the source on an intermediate medium (a harddisk); for example, copying a CD-ROM from a CD-ROM drive to a CD-Writer drive. The copy process requires each block of data to be retrieved and immediately written to the destination, so that there is room in the working memory to retrieve the next block of data.
When used for encrypted data storage, on the fly the data stream is automatically encrypted as it is written and decrypted when read back again, transparently to software. The acronym OTFE is typically used.
On-the-fly programming is the technique of modifying a program without stopping it.
A similar concept, hot swapping, refers to on-the-fly replacement of computer hardware.
On-the-fly computing
On-the-fly computing (OTF computing) is about automating and customizing software tailored to the needs of a user. According to a requirement specification, this software is composed of basic components, so-called basic services, and a user-specific setting of these basic components is made. Accordingly, the requested services are compiled only at the request of the user and then run in a specially designed data center to make the user the functions of the (on-the-fly) created service accessible.
Restaurant usage
In restaurants, cafes, banquet halls, and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away. This is often because a previously-served dish is inedible, because a waiter has made a mistake or delayed, or because a guest has to leave promptly.
Usage in sports
In ice hockey, it is both legal and common for teams to make line changes (player substitutions) when the puck is in play. Such line changes are referred to as being done "on the fly".
References
Computer jargon
Restaurant terminology
Technical terminology | On the fly | [
"Technology"
] | 973 | [
"Computing terminology",
"Computer jargon",
"Natural language and computing"
] |
348,300 | https://en.wikipedia.org/wiki/Digital%20video%20recorder | A digital video recorder (DVR), also referred to as a personal video recorder (PVR) particularly in Canadian and British English, is an electronic device that records video in a digital format to a disk drive, USB flash drive, SD memory card, SSD or other local or networked mass storage device. The term includes set-top boxes (STB) with direct to disk recording, portable media players and TV gateways with recording capability, and digital camcorders. Personal computers can be connected to video capture devices and used as DVRs; in such cases the application software used to record video is an integral part of the DVR. Many DVRs are classified as consumer electronic devices. Similar small devices with built-in (~5 inch diagonal) displays and SSD support may be used for professional film or video production, as these recorders often do not have the limitations that built-in recorders in cameras have, offering wider codec support, the removal of recording time limitations and higher bitrates.
History
Hard-disk-based digital video recorders
The first working DVR prototype was developed in 1998 at Stanford University Computer Science department. The DVR design was a chapter of Edward Y. Chang's PhD dissertation, supervised by Professors Hector Garcia-Molina and Jennifer Widom. Two design papers were published at the 1998 VLDB conference,
and the 1999 ICDE conference. The prototype was developed in 1998 at Pat Hanrahan's CS488 class: Experiments in Digital Television, and the prototype was demoed to industrial partners including Sony, Intel, and Apple.
Consumer digital video recorders ReplayTV and TiVo were launched at the 1999 Consumer Electronics Show in Las Vegas, Nevada. Microsoft also demonstrated a unit with DVR capability, but this did not become available until the end of 1999 for full DVR features in Dish Network's DISHplayer receivers. TiVo shipped their first units on March 31, 1999. ReplayTV won the "Best of Show" award in the video category with Netscape co-founder Marc Andreessen as an early investor and board member, but TiVo was more successful commercially. Ad Age cited Forrester Research as saying that market penetration by the end of 1999 was "less than 100,000".
Legal action by media companies forced ReplayTV to remove many features such as automatic commercial skip and the sharing of recordings over the Internet, but newer devices have steadily regained these functions while adding complementary abilities, such as recording onto DVDs and programming and remote control facilities using PDAs, networked PCs, and Web browsers.
In contrast to VCRs, hard-disk based digital video recorders make "time shifting" more convenient and also allow for functions such as pausing live TV, instant replay, chasing playback (viewing a recording before it has been completed) and skipping over advertising during playback.
Many DVRs use the MPEG format for compressing the digital video. Video recording capabilities have become an essential part of the modern set-top box, as TV viewers have wanted to take control of their viewing experiences. As consumers have been able to converge increasing amounts of video content on their set-tops, delivered by traditional 'broadcast' cable, satellite and terrestrial as well as IP networks, the ability to capture programming and view it whenever they want has become a must-have function for many consumers.
Digital video recorders tied to a video service
At the 1999 CES, Dish Network demonstrated the hardware that would later have DVR capability with the assistance of Microsoft software, which also included access to the WebTV service. By the end of 1999 the Dishplayer had full DVR capabilities and within a year, over 200,000 units were sold.
In the UK, digital video recorders are often referred to as "plus boxes" (such as BSKYB's Sky+ and Virgin Media's V+, which integrates an HD capability, and the subscription-free Freesat+ and Freeview+). Freeview+ has been around in the UK since the late 2000s, although the platform's first DVR, the Pace Twin, dates to 2002. British Sky Broadcasting marketed a popular combined receiver and DVR as Sky+, now replaced by the Sky Q box. TiVo launched a UK model in 2000, which is no longer supported except through third-party services and the continuation of TiVo through Virgin Media from 2010. The South African-based African satellite TV broadcaster Multichoice launched a DVR, which is available on its DStv platform. In addition to ReplayTV and TiVo, there are a number of other suppliers of digital terrestrial (DTT) DVRs, including Technicolor SA, Topfield, Fusion, Commscope, Humax, VBox Communications, AC Ryan Playon and Advanced Digital Broadcast (ADB).
Many satellite, cable and IPTV companies are incorporating digital video recording functions into their set-top box, such as with DirecTiVo, DISHPlayer/DishDVR, Scientific Atlanta Explorer 8xxx from Time Warner, Total Home DVR from AT&T U-verse, Motorola DCT6412 from Comcast and others, Moxi Media Center by Digeo (available through Charter, Adelphia, Sunflower, Bend Broadband, and soon Comcast and other cable companies), or Sky+. Astro introduced their DVR system, called Astro MAX, which was the first PVR in Malaysia but was phased out two years after its introduction.
In the case of digital television, there is no encoding necessary in the DVR since the signal is already a digitally encoded MPEG stream. The digital video recorder simply stores the digital stream directly to disk. Having the broadcaster involved with, and sometimes subsidizing, the design of the DVR can lead to features such as the ability to use interactive TV on recorded shows, pre-loading of programs, or directly recording encrypted digital streams. It can, however, also force the manufacturer to implement non-skippable advertisements and automatically expiring recordings.
In the United States, the FCC has ruled that starting on July 1, 2007, consumers will be able to purchase a set-top box from a third-party company, rather than being forced to purchase or rent the set-top box from their cable company. This ruling only applies to "navigation devices", otherwise known as a cable television set-top box, and not to the security functions that control the user's access to the content of the cable operator. The overall net effect on digital video recorders and related technology is unlikely to be substantial as standalone DVRs are currently readily available on the open market.
In Europe, free-to-air and pay-TV gateways with multiple tuners have whole-house recording capabilities, allowing recording of TV programs to network-attached storage or attached USB storage; recorded programs are then shared across the home network to tablets, smartphones, PCs, Macs and smart TVs.
Introduction of dual tuners
In 2003 many satellite and cable providers introduced dual-tuner digital video recorders. In the UK, BSkyB introduced its first PVR, Sky+, with dual-tuner support in 2001. These machines have two independent tuners within the same receiver. The main use for this feature is the capability to record a live program while watching another live program simultaneously, or to record two programs at the same time, possibly while watching a previously recorded one. Kogan.com introduced a dual-tuner PVR in the Australian market allowing free-to-air television to be recorded on a removable hard drive. Some dual-tuner DVRs also have the ability to output to two separate television sets at the same time. The PVR manufactured by UEC (Durban, South Africa) and used by Multichoice, and the Scientific Atlanta 8300DVB PVR, have the ability to view two programs while recording a third using a triple tuner.
Where several digital subchannels are transmitted on a single RF channel, some PVRs can record two channels and view a third, so long as all three subchannels are carried on no more than two RF channels.
In the United States, DVRs were used by 32 percent of all TV households in 2009, and 38 percent by 2010, with viewership among 18- to 40-year-olds 40 percent higher in homes that have them.
Types
Integrated television sets
DVRs are integrated into some television sets (TVs). These systems simplify wiring and operation because they use a single power cable, need no interconnection cables (e.g., HDMI), and share a common remote control.
VESA compatibility
VESA-compatible DVRs are designed to attach to the VESA mounting holes (100×100 mm) on the back of an LCD television set (TV), allowing users to combine the TV and DVR into an integrated unit.
Set-top boxes (STB)
Over-the-air DVRs are standalone receivers that record broadcast television programs. Several companies have launched over-the-air DVR products for the consumer market over the past few years.
Some pay-TV operators provide receivers that allow subscribers to attach their own network-attached storage (NAS) hard drives or solid-state or flash memory to record video and other media files (e.g., audio and photos).
PC-based
Software and hardware are available which can turn personal computers running Microsoft Windows, Linux, and Mac OS X into DVRs; this is a popular option for home-theater PC (HTPC) enthusiasts.
Linux
There are many free and open source software DVR applications available for Linux. A TV gateway, for example, interfaces to DVB tuners and provides network tuner and TV server functions, which allows live viewing and recording over IP networks. Other examples include MythTV, Video Disk Recorder (VDR), LinuxMCE, TiVo, VBox Home TV Gateway, and Kodi (formerly XBMC).
macOS
Geniatech makes a series of digital video recording devices called EyeTV. The software supplied with each device is also called EyeTV, and is available separately for use on compatible third-party tuners from manufacturers such as Pinnacle, TerraTec, and Hauppauge.
SageTV provided DVR software for the Mac but no longer sells it. Its previously sold software supported the Hauppauge HVR-950, myTV.PVR and HDHomeRun hardware. SageTV software also included the ability to watch YouTube and other online video with a remote control.
MythTV (see above) also runs under Mac OS X, but most recording devices are currently only supported under Linux. Precompiled binaries are available for the MythTV front-end, allowing a Mac to watch video from (and control) a MythTV server running under Linux.
Apple provides applications in the FireWire software developer kit which allow any Mac with a FireWire port to record the MPEG2 transport stream from a FireWire-equipped cable box (for example: Motorola DCT62xx, including HD streams). Applications can also change channels on the cable box via the FireWire interface. Only broadcast channels can be recorded, as the rest of the channels are encrypted. FireRecord (formerly iRecord) is a free scheduled-recording program derived from this SDK.
Windows
There are several free digital video recording applications available for Microsoft Windows including GB-PVR, MediaPortal, and Orb (web-based remote interface).
There are also several commercial applications available including CyberLink, SageTV (which is no longer available after Google acquired it in June 2011), Beyond TV (which is considered discontinued despite an official announcement from SnapStream since the last update was October 2010 and they are concentrating on their enterprise search products), DVBViewer, Showshifter, InterVideo WinDVR, the R5000-HD and Meedio (now a dead product – Yahoo! bought most of the company's technology and discontinued the Meedio line, and rebranded the software Yahoo! Go – TV, which is now a free product but only works in the U.S.). Most TV tuner cards come bundled with software which allows the PC to record television to hard disk. See TV tuner card. For example, Leadtek's WinFast DTV1000 digital TV card comes bundled with the WinFast PVR2 software, which can also record analog video from the card's composite video input socket.
Windows Media Center is a DVR software by Microsoft which was bundled with the Media Center edition of Windows XP, the Home Premium / Ultimate editions of Windows Vista, as well as most editions of Windows 7. When Windows 8 was released in 2012, Windows Media Center was not included with Windows 8 OEM or Retail installations, and was only available as a $15 add-on pack (including DVD Playback codecs) to Windows 8 Pro users.
The Windows Game Bar specifies all recordings made by it as being titled "Microsoft Game DVR" followed by the game or application's title.
Embeddable
An embeddable DVR is a standalone device that is designed to be easily integrated into more complex systems. It is typically supplied as a compact, bare circuit board that facilitates mounting it as a subsystem component within larger equipment. The control keypad is usually connected with a detachable cable, to allow it to be located on the system's exterior while the DVR circuitry resides inside the equipment.
Source video
Television and video are terms that are sometimes used interchangeably, but differ in their technical meaning. Video is the visual portion of television, whereas television is the combination of video and audio modulated onto a carrier frequency (i.e., a television channel) for delivery. Most DVRs can record both video and audio.
Analog sources
The first digital video recorders were designed to record analog television in NTSC, PAL or SECAM formats.
To record an analog signal a few steps are required. In the case of a television signal, a television tuner must first demodulate the radio frequency signal to produce baseband video. The video is then converted to digital form by a frame grabber, which converts each video image into a collection of numeric values that represent the pixels within the image. At the same time, the audio is also converted to digital form by an analog-to-digital converter running at a constant sampling rate. In many devices, the resulting digital video and audio are compressed before recording to reduce the amount of data that will be recorded, although some DVRs record uncompressed data. When compression is used, video is typically compressed using formats such as H.264 or MPEG-2, and audio is compressed using AAC or MP3.
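As a rough illustration of the quantization step described above, the following Python sketch converts samples of an analog-style waveform into 8-bit values (the sampling rate, bit depth and test tone are assumed example figures, not parameters of any particular DVR):

import math

SAMPLE_RATE = 48000   # assumed audio sampling rate in Hz
BIT_DEPTH = 8         # assumed quantization depth (256 levels)
LEVELS = 2 ** BIT_DEPTH

def quantize(value):
    """Map an analog value in [-1.0, 1.0] to an integer sample code."""
    clipped = max(-1.0, min(1.0, value))
    return int(round((clipped + 1.0) / 2.0 * (LEVELS - 1)))

# Digitize one hundredth of a second of a 1 kHz test tone.
samples = [quantize(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE // 100)]
print(samples[:10])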
Analog broadcast copy protection
Many consumer DVRs implement a copy-protection system called Copy Generation Management System—Analog (CGMS-A), which specifies one of four possible copy permissions by means of two bits encoded in the vertical blanking interval:
Copying is freely allowed
Copying is prohibited
Only one copy of this material may be made
This is a copy of material for which only one copy was allowed to be made, so no further copies are allowed.
CGMS-A information may be present in analog broadcast TV signals, and is preserved when the signal is recorded and played back by analog VCRs. VCRs do not understand the meanings of the bits but preserve them in case there is a subsequent attempt to copy the tape to a DVR.
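A minimal Python sketch of how a recorder might act on the four CGMS-A permissions listed above (the two-bit patterns shown are illustrative assumptions for this example, not the actual bit assignments of the CGMS-A specification):

# Hypothetical mapping of the two CGMS-A bits to copy permissions.
# The bit patterns below are assumptions made for illustration only.
CGMS_A_STATES = {
    (0, 0): "Copying is freely allowed",
    (0, 1): "No further copies allowed (copy of a copy-once original)",
    (1, 0): "Only one copy of this material may be made",
    (1, 1): "Copying is prohibited",
}

def describe_cgms_a(bit1, bit2):
    """Return the copy permission for a pair of CGMS-A bits."""
    return CGMS_A_STATES[(bit1, bit2)]

print(describe_cgms_a(1, 0))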
DVRs such as TiVo also detect and act upon analog protection systems such as Macrovision and DCS Copy Protection, which were originally designed to block copying on analog VCRs.
Digital sources
Recording digital signals is generally a straightforward capture of the binary MPEG data being received. No expensive hardware is required to quantize and compress the signal (as the television broadcaster has already done this in the studio).
DVD-based PVRs available on the market as of 2006 are not capable of capturing the full range of the visual signal available with high-definition television (HDTV). This is largely because HDTV standards were finalized at a later time than the standards for DVDs. However, DVD-based PVRs can still be used (albeit at reduced visual quality) with HDTV since currently available HDTV sets also have standard A/V connections.
ATSC broadcast
ATSC television broadcasting is primarily used in North America. The ATSC data stream can be directly recorded by a digital video recorder, though many DVRs record only a subset of this information (that can later be transferred to DVD). An ATSC DVR will also act as a set-top box, allowing older televisions or monitors to receive digital television.
Copy protection
The U.S. FCC attempted to limit the abilities of DVRs with its "broadcast flag" regulation. Digital video recorders that had not won prior approval from the FCC for implementing "effective" digital rights management would have been banned from interstate commerce from July 2005, but the regulation was struck down on May 6, 2005.
DVB
DVB digital television contains audio/visual signals that are broadcast over the air in a digital rather than analog format. The DVB data stream can be directly recorded by the DVR. Devices that can use external storage devices (such as hard disks, SSDs, or other flash storage) to store and recover data without the aid of another device are sometimes called telememory devices.
Digital cable and satellite television
Recording satellite television or digital cable signals on a digital video recorder can be more complex than recording analog signals or broadcast digital signals. There are several different transmission schemes, and the video streams may be encrypted to restrict access to subscribers only.
A satellite or cable set-top box decrypts the signal (if it is encrypted) and decodes the MPEG stream into an analog signal for viewing on the television. In order to record cable or satellite digital signals, the signal must be captured after it has been decrypted but before it is decoded; this is how DVRs built into set-top boxes work.
Cable and satellite providers often offer their own digital video recorders along with a service plan. These DVRs have access to the encrypted video stream, and generally enforce the provider's restrictions on copying of material even after recording.
DVD
Many DVD-based DVRs have the capability to copy content from a source DVD (ripping). In the United States, this is prohibited under the Digital Millennium Copyright Act if the disc is encrypted. Most such DVRs will therefore not allow recording of video streams from encrypted movie discs.
Digital camcorders
A digital camcorder combines a camera and a digital video recorder.
Some DVD-based DVRs incorporate connectors that can be used to capture digital video from a camcorder. Some editing of the resulting DVD is usually possible, such as adding chapter points.
Some digital video recorders can now record to solid state flash memory cards (called flash camcorders). They generally use Secure Digital cards, can include wireless connections (Bluetooth and Wi-Fi), and can play SWF files. There are some digital video recorders that combine video and graphics in real time to the flash card, called DTE or "direct to edit". These are used to speed-up the editing workflow in video and television production, since linear videotapes do not then need to be transferred to the edit workstation (see Non-linear editing system).
File formats, resolutions and file systems
DVRs can usually record and play H.264, MPEG-4 Part 2, MPEG-2 program stream (.mpg), MPEG-2 transport stream (.ts), VOB and ISO image video, with MP3 and AC3 audio tracks. They can also display images (JPEG and PNG) and play music files (MP3 and Ogg).
Some devices can be updated to play and record in new formats. DVRs usually record in proprietary file systems for copy protection, although some can use FAT file systems. Recordings from standard-definition television are usually 480i/p or 576i/p, while HDTV recordings are usually 720p or 1080i.
Applications
Security
Digital video recorders configured for physical security applications record video signals from closed-circuit television cameras for detection and documentation purposes. Many are designed to record audio as well. DVRs have evolved into feature-rich devices that provide services far beyond the simple recording of video images previously done through VCRs. A DVR CCTV system provides a multitude of advanced functions over VCR technology, including video searches by event, time, date and camera. There is also much more control over quality and frame rate, allowing disk space usage to be optimized, and the DVR can be set to overwrite the oldest security footage should the disk become full. In some DVR security systems, remote access to security footage using a PC can also be achieved by connecting the DVR to a LAN or the Internet.
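The overwrite-the-oldest-footage behaviour mentioned above can be sketched as a simple retention policy; the following Python fragment is purely illustrative (the directory, capacity budget and file layout are assumptions, not features of any particular DVR):

import os

RECORDING_DIR = "/var/security_footage"   # assumed storage location
MAX_BYTES = 500 * 1024 ** 3               # assumed 500 GB capacity budget

def free_space_for(new_clip_size):
    """Delete oldest clips until the new clip fits within the budget."""
    clips = sorted(
        (os.path.join(RECORDING_DIR, f) for f in os.listdir(RECORDING_DIR)),
        key=os.path.getmtime,              # oldest first
    )
    used = sum(os.path.getsize(c) for c in clips)
    while clips and used + new_clip_size > MAX_BYTES:
        oldest = clips.pop(0)
        used -= os.path.getsize(oldest)
        os.remove(oldest)                  # overwrite-oldest policy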
Some of the latest professional digital video recorders include video analytics firmware, to enable functionality such as 'virtual tripwire' or even the detection of abandoned objects on the scene.
Security DVRs may be categorized as being either PC-based or embedded. A PC-based DVR's architecture is a classical personal computer with video capture cards designed to capture video images. An embedded type DVR is specifically designed as a digital video recorder with its operating system and application software contained in firmware or read-only memory.
Hardware features
Hardware features of security DVRs vary between manufacturers and may include but are not necessarily limited to:
Designed for rack mounting or desktop configurations.
Single or multiple video inputs with connector types consistent with the analogue or digital video provided such as coaxial cable, twisted pair or optical fiber cable. The most common number of inputs are 1, 2, 4, 8, 16 and 32. Systems may be configured with a very large number of inputs by networking or bussing individual DVRs together.
Looping video outputs for each input which duplicates the corresponding input video signal and connector type. These output signals are used by other video equipment such as matrix switchers, multiplexers, and video monitors.
Controlled outputs to external video display monitors.
Front panel switches and indicators that allow the various features of the machine to be controlled.
Network connections consistent with the network type and utilized to control features of the recorder and to send and/or receive video signals.
Connections to external control devices such as keyboards.
A connection to external pan-tilt-zoom drives that position cameras.
Internal CD, DVD, VCR devices typically for archiving video.
Connections to external storage media.
Alarm event inputs from external security detection devices, usually one per video input.
Alarm event outputs from internal detection features such as motion detection or loss of video.
Software features
Software features vary between manufacturers and may include but are not necessarily limited to:
User-selectable image capture rates either on an all input basis or input by input basis. The capture rate feature may be programmed to automatically adjust the capture rate on the occurrence of an external alarm or an internal event.
Selectable image resolution either on an all input basis or input by input basis. The image resolution feature may be programmed to automatically adjust the image resolution on the occurrence of an external alarm or an internal event.
Compression methods determine quality of playback. H.264 hardware compression offers fast transfer rates over the Internet with high quality video.
Motion detection: Provided on an input by input basis, this feature detects motion in the total image or a user definable portion of the image and usually provides sensitivity settings. Detection causes an internal event that may be output to external equipment and/or be used to trigger changes in other internal features.
Lack of motion detection. Provided on an input by input basis, this feature detects an object that moves into the field of view and then remains still for a user-definable time. Detection causes an internal event that may be output to external equipment and/or used to trigger changes in other internal features.
Direction of motion detection. Provided on an input by input basis, this feature detects the direction of motion in the image that has been determined by the user as an unacceptable occurrence. Detection causes an internal event that may be output to external equipment and/or be used to trigger changes in other internal features.
Routing of input video to video monitors based on user inputs or automatically on alarms or events.
Input, time and date stamping.
Alarm and event logging on appropriate video inputs.
Alarm and event search.
One or more sound recording channels.
Archival.
Privacy concerns
Some, but certainly not all, digital video recorders that are designed to send information to a service provider over a telephone line or the Internet can gather and send real-time data on users' viewing habits. This problem was noted as early as 2000 and was still considered a problem, specifically with TiVo, in 2015.
Television advertisements
Digital video recorders are also changing the way television programs advertise products. Watching pre-recorded programs allows users to fast-forward through commercials, and some technology allows users to remove commercials entirely. Half of viewers in the United States, for example, use DVRs to skip commercials entirely. This feature has been controversial for the last decade, with major television networks and movie studios claiming it violates copyright and should be banned.
In 1985, an employee of Honeywell's Physical Sciences Center, David Rafner, first described a drive-based DVR designed for home TV recording, time shifting, and commercial skipping. U.S. Patent 4,972,396 focused on a multi-channel design to allow simultaneous independent recording and playback. Broadly anticipating future DVR developments, it describes possible applications such as streaming compression, editing, captioning, multi-channel security monitoring, military sensor platforms, and remotely piloted vehicles.
In 1999, the first DVR which had a built-in commercial skipping feature was introduced by ReplayTV at the Consumer Electronics Show in Las Vegas. In 2002, five owners of the ReplayTV DVR sued the main television networks and movie studios, asking the federal judge to uphold consumers' rights to record TV shows and skip commercials, claiming that features such as commercial skipping help parents protect their kids from excessive consumerism. ReplayTV was purchased by SONICblue in 2001 and in March 2003, SONICblue filed for Chapter 11 bankruptcy after fighting a copyright infringement suit over the ReplayTV's ability to skip commercials. In 2007, DirecTV purchased the remaining assets of ReplayTV.
A third-party add-on for Windows Media Center called "DVRMSToolbox" has the ability to skip commercials.
There is a command-line program called Comskip that detects commercials in an MPEG-2 file and saves their positions to a text file. This file can then be fed to a program like MEncoder to actually remove the commercials.
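As a rough sketch of that workflow, the fragment below reads a simplified cut list of commercial start/end frame pairs and converts it into the program segments to keep (the file format and frame rate are assumptions made for illustration; this is not Comskip's actual output format):

FRAME_RATE = 29.97  # assumed NTSC frame rate

def keep_segments(cutlist_path, total_frames):
    """Turn commercial (start, end) frame pairs into segments to keep."""
    commercials = []
    with open(cutlist_path) as f:
        for line in f:
            if not line.strip():
                continue
            start, end = map(int, line.split())
            commercials.append((start, end))
    segments, position = [], 0
    for start, end in sorted(commercials):
        if start > position:
            segments.append((position / FRAME_RATE, start / FRAME_RATE))
        position = max(position, end)
    if position < total_frames:
        segments.append((position / FRAME_RATE, total_frames / FRAME_RATE))
    return segments  # list of (start_seconds, end_seconds) to pass to an editor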
Many speculate that television advertisements will be eliminated altogether, replaced by advertising in the TV shows themselves. For example, Extreme Makeover: Home Edition advertises Sears, Kenmore, Kohler, and Home Depot by specifically using products from these companies, and some sports events like the Sprint Cup of NASCAR are named after sponsors.
Another type of advertisement shown more and more, mostly used to advertise television shows on the same channel, is one where the ad overlays the bottom of the television screen, blocking out some of the picture. "Banners", or "logo bugs", as they are called, are referred to by media companies as Secondary Events (2E). This is done in much the same way as severe weather warnings are shown. Sometimes these take up only 5–10% of the screen, but in extreme cases they can take up as much as 25% of the viewing area. Some even make noise or move across the screen. One example is the 2E ads for Three Moons Over Milford in the months before its premiere: a video taking up approximately 25% of the bottom-left portion of the screen would show a comet impacting the moon, with an accompanying explosion, during another television program.
Because of this widely used new technology, advertisers are now looking at a new way to market their products on television. An excerpt from the magazine Advertising Age reads: "As advertisers lose the ability to invade the home, and consumer's minds, they will be forced to wait for an invitation. This means that they have to learn what kinds of advertising content customers will actually be willing to seek out and receive."
With ad skipping and the time-sensitive nature of certain ads, advertisers are wary of buying commercial time on shows that are heavily recorded on DVRs. However, technology now makes it possible for networks to insert ads dynamically into videos being played back on DVRs. Advertisers could inject time-relevant ads into recorded programs when the program is viewed; this way the ads could be not just topical but also personalized to viewers' interests. In March 2011, DirecTV signed an arrangement with NDS Group to enable the delivery of such addressable advertising.
It is believed that viewers prefer to fast-forward through ads rather than switch the channel. By switching channels, viewers risk missing the beginning of their program, and may land on a channel that is also showing ads. Having the ability to pause, rewind, and fast-forward live TV gives users a reason to change the channel less often. Fast-forwarding through ads can also have a later effect on the viewer: ads that catch the viewer's attention may prompt them to rewind and watch what was missed.
In January 2012, Dish Network announced Hopper service, costing $10 extra per month, which recorded prime-time programming from the four major broadcast networks. With the Auto Hop feature, viewers can watch the programs they choose without commercials, without making the effort to fast-forward. On May 24, 2012, Dish and the networks filed suit in federal court.
Patent and copyright litigation
On July 14, 2005, Forgent Networks filed suit against various companies alleging infringement of its patent entitled "Computer controlled video system allowing playback during recording". The listed companies included EchoStar, DirecTV, Charter Communications, Cox Communications, Comcast, Time Warner, and Cable One.
Scientific-Atlanta and Motorola, the manufacturers of the equipment sold by the above-mentioned companies, filed a counter-suit against Forgent Networks claiming that their products do not violate the patent, and that the patent is invalid. The two cases were combined into case 6:06-cv-208, filed in the United States District Court for the Eastern District of Texas, Tyler Division.
According to court documents, on June 20, 2006, Motorola requested that the United States Patent and Trademark Office reexamine the patent, which was first filed in 1991 but has been amended several times.
On March 23, 2007, Cablevision Systems Corp lost a legal battle against several Hollywood studios and television networks to introduce a network-based digital video recorder service to its subscribers. However, on August 4, 2008, Cablevision won its appeal. John M. Walker Jr., a Second Circuit judge, declared that the technology "would not directly infringe" on the media companies' rights. An appeal to the Supreme Court was rejected.
In court, the media companies argued that network digital video recorders were tantamount to video-on-demand, and that they should receive license fees for the recording. Cablevision and the appeals court disagreed. The company noted that each user would record programs on his or her own individual server space, making it a DVR that has a "very long cord".
In 2004, TiVo sued EchoStar Corp, a manufacturer of DVR units, for patent infringement. The parties reached a settlement in 2011 wherein EchoStar pays a one-time fee (in three structured payments) that grants EchoStar full rights for life to the disputed TiVo patents upon first payment (as opposed to indefinite and escalating license fees to be constantly renegotiated), and EchoStar granted TiVo full rights for life to certain EchoStar patents and dropped its counter-suit against TiVo.
In January 2012, AT&T settled a similar suit brought by TiVo claiming patent infringement (just as with EchoStar) in exchange for cash payments to TiVo totaling $215 million through June 2018 plus "incremental recurring per subscriber monthly license fees" to TiVo through July 2018, but the settlement grants no full lifetime rights, unlike the EchoStar settlement.
In May 2012, Fox Broadcasting sued Dish Network, arguing that Dish's set-top box with DVR function, which allowed users to automatically record prime-time programs and skip commercials, constituted copyright infringement and breach of contract. In July 2013, the Ninth Circuit rejected Fox's claims.
See also
Set-top box
Home theater PC (Media PC)
Digital media player
Smart TV
Comparison of PVR software packages
10-foot user interface
CRID
Direct-to-disk recording
DTE (Direct To Edit)
DTVPal
Freeview+
Freesat+
Hopper (DVR)
Media server (Consumer)
Time shifting
Space shifting (place shifting)
Remote storage digital video recorder
MythTV
Network video recorder
SubRip
Sky+
Tablo (DVR)
TiVo
TV-Anytime
PVR-resistant advertising
Remote control
USB hard disk
USB On-The-Go
Vu+
Video server (Broadcast)
Kodi (software)
Xbox One
Notes
References
Free-to-Air Television and other PVR Challenges in Europe, technical report of the European Broadcasting Union
Digital media
Set-top box
Television terminology | Digital video recorder | [
"Technology"
] | 6,891 | [
"Multimedia",
"Recording devices",
"Digital video recorders",
"Digital media"
] |
348,302 | https://en.wikipedia.org/wiki/Van%20der%20Pauw%20method | The van der Pauw method is a technique commonly used to measure the resistivity and the Hall coefficient of a sample. Its strength lies in its ability to accurately measure the properties of a sample of any arbitrary shape, as long as the sample is approximately two-dimensional (i.e. it is much thinner than it is wide), solid (no holes), and the electrodes are placed on its perimeter. The van der Pauw method employs a four-point probe placed around the perimeter of the sample, in contrast to the linear four point probe: this allows the van der Pauw method to provide an average resistivity of the sample, whereas a linear array provides the resistivity in the sensing direction. This difference becomes important for anisotropic materials, which can be properly measured using the Montgomery method, an extension of the van der Pauw method.
From the measurements made, the following properties of the material can be calculated:
The resistivity of the material
The doping type (i.e. whether it is a P-type or N-type material)
The sheet carrier density of the majority carrier (the number of majority carriers per unit area). From this the charge density and doping level can be found
The mobility of the majority carrier
The method was first propounded by Leo J. van der Pauw in 1958.
Conditions
There are five conditions that must be satisfied to use this technique:
1. The sample must have a flat shape of uniform thickness
2. The sample must not have any isolated holes
3. The sample must be homogeneous and isotropic
4. All four contacts must be located at the edges of the sample
5. The area of contact of any individual contact should be at least an order of magnitude smaller than the area of the entire sample.
The second condition can be weakened. The van der Pauw technique can also be applied to samples with one hole.
Sample preparation
In order to use the van der Pauw method, the sample thickness must be much less than the width and length of the sample. In order to reduce errors in the calculations, it is preferable that the sample be symmetrical. There must also be no isolated holes within the sample.
The measurements require that four ohmic contacts be placed on the sample. Certain conditions for their placement need to be met:
They must be as small as possible; any errors given by their non-zero size will be of the order D/L, where D is the average diameter of the contact and L is the distance between the contacts.
They must be as close as possible to the boundary of the sample.
In addition to this, any leads from the contacts should be constructed from the same batch of wire to minimise thermoelectric effects. For the same reason, all four contacts should be of the same material.
Measurement definitions
The contacts are numbered from 1 to 4 in a counter-clockwise order, beginning at the top-left contact.
The current I12 is a positive DC current injected into contact 1 and taken out of contact 2, and is measured in amperes (A).
The voltage V34 is a DC voltage measured between contacts 3 and 4 (i.e. V4 - V3) with no externally applied magnetic field, measured in volts (V).
The resistivity ρ is measured in ohms⋅metres (Ω⋅m).
The thickness of the sample t is measured in metres (m).
The sheet resistance RS is measured in ohms per square (Ω/sq).
Resistivity measurements
The average resistivity of a sample is given by ρ = RS⋅t, where the sheet resistance RS is determined as follows. For an anisotropic material, the individual resistivity components, e.g. ρx or ρy, can be calculated using the Montgomery method.
Basic measurements
To make a measurement, a current is caused to flow along one edge of the sample (for instance, I12) and the voltage across the opposite edge (in this case, V34) is measured. From these two values, a resistance (for this example, R12,34) can be found using Ohm's law:
R12,34 = V34 / I12
In his paper, van der Pauw showed that the sheet resistance of samples with arbitrary shapes can be determined from two of these resistances - one measured along a vertical edge, such as R12,34, and a corresponding one measured along a horizontal edge, such as R23,41. The actual sheet resistance is related to these resistances by the van der Pauw formula
exp(−π·R12,34 / RS) + exp(−π·R23,41 / RS) = 1
Reciprocal measurements
The reciprocity theorem tells us that
RAB,CD = RCD,AB
Therefore, it is possible to obtain a more precise value for the resistances R12,34 and R23,41 by making two additional measurements of their reciprocal values R34,12 and R41,23 and averaging the results.
We define
Rvertical = (R12,34 + R34,12) / 2
and
Rhorizontal = (R23,41 + R41,23) / 2
Then, the van der Pauw formula becomes
exp(−π·Rvertical / RS) + exp(−π·Rhorizontal / RS) = 1
Reversed polarity measurements
A further improvement in the accuracy of the resistance values can be obtained by repeating the resistance measurements after switching polarities of both the current source and the voltage meter. Since this is still measuring the same portion of the sample, just in the opposite direction, the values of Rvertical and Rhorizontal can still be calculated as the averages of the standard and reversed polarity measurements. The benefit of doing this is that any offset voltages, such as thermoelectric potentials due to the Seebeck effect, will be cancelled out.
Combining these methods with the reciprocal measurements from above leads to the formulas for the resistances being
Rvertical = (R12,34 + R34,12 + R21,43 + R43,21) / 4
and
Rhorizontal = (R23,41 + R41,23 + R32,14 + R14,32) / 4
The van der Pauw formula takes the same form as in the previous section.
Measurement accuracy
Both of the above procedures check the repeatability of the measurements. If any of the reversed polarity measurements don't agree to a sufficient degree of accuracy (usually within 3%) with the corresponding standard polarity measurement, then there is probably a source of error somewhere in the setup, which should be investigated before continuing. The same principle applies to the reciprocal measurements – they should agree to a sufficient degree before they are used in any calculations.
Calculating sheet resistance
In general, the van der Pauw formula cannot be rearranged to give the sheet resistance RS in terms of known functions. The most notable exception to this is when Rvertical = R = Rhorizontal; in this scenario the sheet resistance is given by
RS = π·R / ln 2
The quotient π / ln 2 is known as the van der Pauw constant and has approximate value 4.53236. In most other scenarios, an iterative method is used to solve the van der Pauw formula numerically for RS. Typically the formula fails the preconditions for the Banach fixed point theorem, so methods based on it do not work. Instead, nested intervals converge slowly but steadily. Recently, however, it has been shown that an appropriate reformulation of the van der Pauw problem (e.g., by introducing a second van der Pauw formula) makes it fully solvable by the Banach fixed point method.
Alternatively, a Newton-Raphson method converges relatively quickly. To reduce the complexity of the notation, define
f(RS) = exp(−π·Rvertical / RS) + exp(−π·Rhorizontal / RS) − 1
Then the next approximation is calculated by
RS,n+1 = RS,n − f(RS,n) / f′(RS,n)
where f′(RS) = (π / RS²)·(Rvertical·exp(−π·Rvertical / RS) + Rhorizontal·exp(−π·Rhorizontal / RS)).
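A minimal numerical sketch of this iteration in Python (the measured resistance values in the example are assumptions chosen for illustration):

import math

def sheet_resistance(r_vertical, r_horizontal, tol=1e-12, max_iter=100):
    """Solve exp(-pi*Rv/Rs) + exp(-pi*Rh/Rs) = 1 for Rs by Newton-Raphson."""
    # Start from the symmetric-case estimate Rs = pi*R/ln(2) with R the mean.
    rs = math.pi * (r_vertical + r_horizontal) / (2.0 * math.log(2.0))
    for _ in range(max_iter):
        ev = math.exp(-math.pi * r_vertical / rs)
        eh = math.exp(-math.pi * r_horizontal / rs)
        f = ev + eh - 1.0
        # Derivative of f with respect to Rs.
        df = (math.pi / rs**2) * (r_vertical * ev + r_horizontal * eh)
        step = f / df
        rs -= step
        if abs(step) < tol * rs:
            break
    return rs

# Example with assumed measured resistances (ohms).
print(sheet_resistance(120.0, 80.0))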
Hall measurements
Background
When a charged particle—such as an electron—is placed in a magnetic field, it experiences a Lorentz force proportional to the strength of the field and the velocity at which it is traveling through it. This force is strongest when the direction of motion is perpendicular to the direction of the magnetic field; in this case the force
F = qvB
where q is the charge on the particle in coulombs, v the velocity it is traveling at (centimeters per second), and B the strength of the magnetic field (Wb/cm²). Note that centimeters are often used to measure length in the semiconductor industry, which is why they are used here instead of the SI units of meters.
When a current is applied to a piece of semiconducting material, this results in a steady flow of electrons through the material (as shown in parts (a) and (b) of the accompanying figure). The velocity the electrons are traveling at is (see electric current):
v = I / (nAq)
where n is the electron density, A is the cross-sectional area of the material and q the elementary charge (1.602×10⁻¹⁹ coulombs).
If an external magnetic field is then applied perpendicular to the direction of current flow, then the resulting Lorentz force will cause the electrons to accumulate at one edge of the sample (see part (c) of the figure). Combining the above two equations, and noting that q is the charge on an electron, results in a formula for the Lorentz force experienced by the electrons:
F = IB / (nA)
This accumulation will create an electric field across the material due to the uneven distribution of charge, as shown in part (d) of the figure. This in turn leads to a potential difference across the material, known as the Hall voltage VH. The current, however, continues to only flow along the material, which indicates that the force on the electrons due to the electric field balances the Lorentz force. Since the force on an electron from an electric field E is F = qE, we can say that the strength of the electric field is therefore
E = IB / (qnA)
Finally, the magnitude of the Hall voltage is simply the strength of the electric field multiplied by the width of the material; that is,
VH = Ew = IB / (qnt)
where t is the thickness of the material (and w its width, so that the cross-sectional area A = wt). Since the sheet density ns is defined as the density of electrons multiplied by the thickness of the material (ns = nt), we can define the Hall voltage in terms of the sheet density:
VH = IB / (q·ns)
Making the measurements
Two sets of measurements need to be made: one with a magnetic field in the positive z-direction as shown above, and one with it in the negative z-direction. From here on in, the voltages recorded with a positive field will have a subscript P (for example, V13, P = V3, P - V1, P) and those recorded with a negative field will have a subscript N (such as V13, N = V3, N - V1, N). For all of the measurements, the magnitude of the injected current should be kept the same; the magnitude of the magnetic field needs to be the same in both directions also.
First of all with a positive magnetic field, the current I24 is applied to the sample and the voltage V13, P is recorded; note that the voltages can be positive or negative. This is then repeated for I13 and V42, P.
As before, we can take advantage of the reciprocity theorem to provide a check on the accuracy of these measurements. If we reverse the direction of the currents (i.e. apply the current I42 and measure V31, P, and repeat for I31 and V24, P), then V13, P should be the same as V31, P to within a suitably small degree of error. Similarly, V42, P and V24, P should agree.
Having completed the measurements, a negative magnetic field is applied in place of the positive one, and the above procedure is repeated to obtain the voltage measurements V13, N, V42, N, V31, N and V24, N.
Calculations
Initially, the difference of the voltages for positive and negative magnetic fields is calculated:
V13 = V13, P − V13, N
V24 = V24, P − V24, N
V31 = V31, P − V31, N
V42 = V42, P − V42, N
The overall Hall voltage is then
VH = (V13 + V24 + V31 + V42) / 8.
The polarity of this Hall voltage indicates the type of material the sample is made of; if it is positive, the material is P-type, and if it is negative, the material is N-type.
The formula given in the background can then be rearranged to show that the sheet density
ns = IB / (q·|VH|)
Note that the strength of the magnetic field B needs to be in units of Wb/cm² if ns is in cm⁻². For instance, if the strength is given in the commonly used units of teslas, it can be converted by multiplying it by 10⁻⁴.
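A short Python sketch of these calculations, following the averaging described above (the voltages, current and field below are assumed example values; B must already be expressed in Wb/cm²):

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_sheet_density(v13, v24, v31, v42, current, b_field):
    """Sheet density and carrier type from field-reversed Hall voltages.

    Each voltage argument is the difference between the positive-field
    and negative-field readings; current in amperes, b_field in Wb/cm2.
    Returns (sheet_density_per_cm2, carrier_type).
    """
    v_hall = (v13 + v24 + v31 + v42) / 8.0
    carrier_type = "P-type" if v_hall > 0 else "N-type"
    n_s = current * b_field / (ELEMENTARY_CHARGE * abs(v_hall))
    return n_s, carrier_type

# Assumed example: 1 mA drive current, 0.5 T field (= 0.5e-4 Wb/cm2).
print(hall_sheet_density(0.61, 0.63, 0.60, 0.64, 1e-3, 0.5e-4))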
Other calculations
Mobility
The resistivity of a semiconductor material can be shown to be
ρ = 1 / (q·(n·μn + p·μp))
where n and p are the concentration of electrons and holes in the material respectively, and μn and μp are the mobility of the electrons and holes respectively.
Generally, the material is sufficiently doped so that there is a difference of many orders-of-magnitude between the two concentrations, allowing this equation to be simplified to
ρ = 1 / (q·nm·μm)
where nm and μm are the doping level and mobility of the majority carrier respectively.
If we then note that the sheet resistance RS is the resistivity divided by the thickness of the sample, and that the sheet density nS is the doping level multiplied by the thickness, we can divide the equation through by the thickness to get
RS = 1 / (q·nS·μm)
This can then be rearranged to give the majority carrier mobility in terms of the previously calculated sheet resistance and sheet density:
μm = 1 / (q·nS·RS)
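A short continuation of the earlier sketch, computing the majority carrier mobility from the sheet resistance and sheet density (the example values are assumptions):

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_mobility(sheet_resistance, sheet_density):
    """Majority-carrier mobility in cm^2/(V*s), given Rs (ohm/sq) and ns (cm^-2)."""
    return 1.0 / (ELEMENTARY_CHARGE * sheet_density * sheet_resistance)

# Assumed example values: Rs = 1000 ohm/sq, ns = 5e12 cm^-2.
print(hall_mobility(1000.0, 5e12))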
Footnotes
References
Measuring Electrical Conductivity and Resistivity with the van der Pauw Technique
Electrical engineering
Hall effect | Van der Pauw method | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,634 | [
"Physical phenomena",
"Hall effect",
"Electric and magnetic fields in matter",
"Electrical phenomena",
"Electrical engineering",
"Solid state engineering"
] |
348,304 | https://en.wikipedia.org/wiki/Desktop%20metaphor | In computing, the desktop metaphor is an interface metaphor which is a set of unifying concepts used by graphical user interfaces to help users interact more easily with the computer. The desktop metaphor treats the computer monitor as if it is the top of the user's desk, upon which objects such as documents and folders of documents can be placed. A document can be opened into a window, which represents a paper copy of the document placed on the desktop. Small applications called desk accessories are also available, such as a desk calculator or notepad, etc.
The desktop metaphor itself has been extended and stretched with various implementations of desktop environments, since access to features and usability of the computer are usually more important than maintaining the 'purity' of the metaphor. Hence one can find trash cans on the desktop, as well as disks and network volumes (which can be thought of as filing cabinets—not something normally found on a desktop). Other features such as menu bars or taskbars have no direct counterpart on a real-world desktop, though this may vary by environment and the function provided; for instance, a familiar wall calendar can sometimes be displayed or otherwise accessed via a taskbar or menu bar belonging to the desktop.
History
The desktop metaphor was first introduced by Alan Kay, David C. Smith, and others at Xerox PARC in 1970 and elaborated in a series of innovative software applications developed by PARC scientists throughout the ensuing decade. The first computer to use an early version of the desktop metaphor was the experimental Xerox Alto, and the first commercial computer that adopted this kind of interface was the Xerox Star. The use of window controls to contain related information predates the desktop metaphor, with a primitive version appearing in Douglas Engelbart's "Mother of All Demos",
though it was incorporated by PARC in the environment of the Smalltalk language.
One of the first desktop-like interfaces on the market was a program called Magic Desk I. Built as a cartridge for the Commodore 64 home computer in 1983, it presented a very primitive GUI: a low-resolution sketch of a desktop, complete with telephone, drawers, calculator, etc. The user made choices by moving a sprite of a pointing hand with the same joystick the user may have used for video gaming, and selected onscreen options by pushing the fire button. The Magic Desk I program featured a graphically emulated typewriter, complete with audio effects. Other applications included a calculator, a rolodex organiser, and a terminal emulator. Files could be archived into the drawers of the desktop. A trashcan was also present.
The first computer to popularise the desktop metaphor, using it as a standard feature over the earlier command-line interface, was the Apple Macintosh in 1984. The desktop metaphor is ubiquitous in modern-day personal computing; it is found in most desktop environments of modern operating systems: Windows as well as macOS, Linux, and other Unix-like systems.
BeOS observed the desktop metaphor more strictly than many other systems. For example, external hard drives appeared on the 'desktop', while internal ones were accessed clicking on an icon representing the computer itself. By comparison, the Mac OS places all drives on the desktop itself by default, while in Windows the user can access the drives through an icon labelled "Computer".
Amiga terminology for its desktop metaphor was taken directly from workshop jargon. The desktop was called Workbench, programs were called tools, small applications (applets) were utilities, directories were drawers, etc.
Icons of objects were animated and the directories are shown as drawers which were represented as either open or closed.
As in the classic Mac OS and macOS desktop, an icon for a floppy disk or CD-ROM would appear on the desktop when the disk was inserted into the drive, as it was a virtual counterpart of a physical floppy disk or CD-ROM on the surface of a workbench.
Paper paradigm
The paper paradigm refers to the paradigm used by most modern computers and operating systems. The paper paradigm consists of, usually, black text on a white background, files within folders, and a "desktop". The paper paradigm was created by many individuals and organisations, such as Douglas Engelbart, Xerox PARC, and Apple Computer, and was an attempt to make computers more user-friendly by making them resemble the common workplace of the time (with papers, folders, and a desktop). It was first presented to the public by Engelbart in 1968, in what is now referred to as "The Mother of All Demos".
From John Siracusa:
Back in 1984, explanations of the original Mac interface to users who had never seen a GUI before inevitably included an explanation of icons that went something like this: "This icon represents your file on disk." But to the surprise of many, users very quickly discarded any semblance of indirection. This icon is my file. My file is this icon. One is not a "representation of" or an "interface to" the other. Such relationships were foreign to most people, and constituted unnecessary mental baggage when there was a much more simple and direct connection to what they knew of reality.
Since then, many aspects of computers have wandered away from the paper paradigm by implementing features such as "shortcuts" to files, hypertext, and non-spatial file browsing. A shortcut (a link to a file that acts as a redirecting proxy, not the actual file) and hypertext have no real-world equivalent. Non-spatial file browsing, as well, may confuse novice users, as they can often have more than one window representing the same folder open at the same time, something that is impossible in reality. These and other departures from real-world equivalents are violations of the pure paper paradigm.
See also
Desktop environment
File browser
History of the GUI
Interface metaphor
Operating system
Skeuomorph
Tiling window manager
Virtual desktop
WIMP (computing)
Notes and references
External links
ArsTechnica article on the spatial Mac OS Finder
User interface techniques
User interfaces
Graphical user interfaces
Software architecture
Metaphors by type
fr:Environnement de bureau#Métaphore du bureau | Desktop metaphor | [
"Technology"
] | 1,268 | [
"User interfaces",
"Interfaces"
] |
348,470 | https://en.wikipedia.org/wiki/List%20of%20geometric%20topology%20topics | This is a list of geometric topology topics.
Low-dimensional topology
Knot theory
Knot (mathematics)
Link (knot theory)
Wild knots
Examples of knots
Unknot
Trefoil knot
Figure-eight knot (mathematics)
Borromean rings
Types of knots
Torus knot
Prime knot
Alternating knot
Hyperbolic link
Knot invariants
Crossing number
Linking number
Skein relation
Knot polynomials
Alexander polynomial
Jones polynomial
Knot group
Writhe
Quandle
Seifert surface
Braids
Braid theory
Braid group
Kirby calculus
Surfaces
Genus (mathematics)
Examples
Positive Euler characteristic
2-disk
Sphere
Real projective plane
Zero Euler characteristic
Annulus
Möbius strip
Torus
Klein bottle
Negative Euler characteristic
The boundary of the pretzel is a genus three surface
Embedded/Immersed in Euclidean space
Cross-cap
Boy's surface
Roman surface
Steiner surface
Alexander horned sphere
Klein bottle
Mapping class group
Dehn twist
Nielsen–Thurston classification
Three-manifolds
Moise's Theorem (see also Hauptvermutung)
Poincaré conjecture
Thurston elliptization conjecture
Thurston's geometrization conjecture
Hyperbolic 3-manifolds
Spherical 3-manifolds
Euclidean 3-manifolds, Bieberbach Theorem, Flat manifolds, Crystallographic groups
Seifert fiber space
Heegaard splitting
Waldhausen conjecture
Compression body
Handlebody
Incompressible surface
Dehn's lemma
Loop theorem (aka the Disk theorem)
Sphere theorem
Haken manifold
JSJ decomposition
Branched surface
Lamination
Examples
3-sphere
Torus bundles
Surface bundles over the circle
Graph manifolds
Knot complements
Whitehead manifold
Invariants
Fundamental group
Heegaard genus
tri-genus
Analytic torsion
Manifolds in general
Orientable manifold
Connected sum
Jordan-Schönflies theorem
Signature (topology)
Handle decomposition
Handlebody
h-cobordism theorem
s-cobordism theorem
Manifold decomposition
Hilbert-Smith conjecture
Mapping class group
Orbifolds
Examples
Exotic sphere
Homology sphere
Lens space
I-bundle
See also
topology glossary
List of topology topics
List of general topology topics
List of algebraic topology topics
Publications in topology
Mathematics-related lists
Outlines of mathematics and logic
Outlines | List of geometric topology topics | [
"Mathematics"
] | 420 | [
"Topology",
"nan",
"Geometric topology"
] |
348,485 | https://en.wikipedia.org/wiki/Du%C5%A1an%20Repov%C5%A1 | Dušan D. Repovš (born November 30, 1954) is a Slovenian mathematician from Ljubljana, Slovenia.
Education and academic career
He graduated in 1977 from the University of Ljubljana. He obtained his PhD in 1983 from Florida State University with thesis Generalized Three-Manifolds with Zero-Dimensional Singular Set written under the direction of Robert Christopher Lacher. He held a fellowship from the Research Council of Slovenia and a Fulbright scholarship.
In 1993 he was promoted to Professor of Geometry and Topology at the University of Ljubljana, where he is employed at the Faculty of Mathematics and Physics and at the Faculty of Education, as the Head of the Chair for Geometry and Topology. Since 1983 he has been the leader of the Slovenian Nonlinear Analysis, Topology and Geometry Group at the Institute of Mathematics, Physics and Mechanics in Ljubljana, and has directed numerous national and international research grants (with the United States, Japan, Russian Federation, China, France, Italy, Spain, Israel, United Kingdom, Poland, Hungary, Romania, Slovakia, and others). The Slovenian Research Agency has selected this group among the best research program groups in Slovenia.
Repovš is the leading Slovenian expert on nonlinear analysis and topology and is one of the best known Slovenian mathematicians. He has published over 450 research papers and has given numerous invited talks at various international conferences and universities around the world.
His research interests are in nonlinear analysis and its applications, topology, and algebra. He first became known in the 1980s for his results in geometric topology, notably the solution of the classical recognition problem for 3-manifolds, the proof of the 4-dimensional Cellularity Criterion, and the proof of the Lipschitz case of the classical Hilbert–Smith conjecture. He later extended his research to several other areas and is currently most active in nonlinear analysis, in particular problems of partial differential equations. He covers a very broad spectrum: problems with nonstandard growth (variable exponents, anisotropic problems, double-phase problems), qualitative analysis of solutions of semilinear and quasilinear PDEs (Dirichlet, Neumann, Robin boundary conditions), singular and degenerate problems (blow-up boundary, singular reactions), and inequality problems (variational, hemivariational, either stationary or evolutionary). His analysis of these problems combines fine methods at the interplay between nonlinear functional analysis, critical point theory, variational, topological and analytic methods, mathematical physics, and others.
Bibliography
He has published a monograph on nonlinear analysis, a monograph on partial differential equations with variable exponents, a monograph on continuous selections of multivalued mappings, and a monograph on higher-dimensional generalized manifolds, as well as also a university textbook on topology. He is serving on the editorial boards of the Journal of Mathematical Analysis and Applications, Advances in Nonlinear Analysis, Boundary Value Problems, Complex Variables and Elliptic Equations, and others.
Memberships
He is a member of the European Academy of Sciences and Arts, the New York Academy of Sciences, the American Mathematical Society, the European Mathematical Society, the London Mathematical Society, the Mathematical Society of Japan, the Moscow Mathematical Society, the French Mathematical Society, the Swiss Mathematical Society, and others. He is also a founding member of the Slovenian Engineering Academy.
Awards
For his outstanding research he was awarded an honorary doctorate by the University of Craiova in 2014, the Bogolyubov Memorial Medal by the Ukrainian Mathematical Congress in Kyiv in 2009, and the Prize of the Republic of Slovenia for Research (now called the Zois Prize) in 1997. For his promotion of Slovenian science abroad he received the honorary title of Ambassador for Science of the Republic of Slovenia in 1995.
Notes
21st-century Slovenian mathematicians
Topologists
Florida State University alumni
1954 births
Living people
University of Ljubljana alumni
Academic staff of the University of Ljubljana
Scientists from Ljubljana
Members of the European Academy of Sciences and Arts
PDE theorists
20th-century Slovenian mathematicians | Dušan Repovš | [
"Mathematics"
] | 796 | [
"Topologists",
"Topology"
] |
348,509 | https://en.wikipedia.org/wiki/Lippmann%20plate | Lippmann process photography is an early color photography method and type of alternative process photography. It was invented by French scientist Gabriel Lippmann in 1891 and consists of first focusing an image onto a light-sensitive plate, placing the emulsion in contact with a mirror (originally liquid mercury) during the exposure to introduce interference, chemically developing the plate, inverting the plate and painting the glass black, and finally affixing a prism to the emulsion surface. The image is then viewed by illuminating the plate with light. This type of photography became known as interferential photography or interferometric colour photography and the results it produces are sometimes called direct photochromes, interference photochromes, or Lippmann photochromes (distinguished from the earlier so-called "photochromes" which were merely black-and-white photographs painted with color by hand). In French, the method is known as photographie interférentielle and the resulting images were originally exhibited as des vues lippmaniennes. Lippmann won the Nobel Prize in Physics in 1908 "for his method of reproducing colours photographically based on the phenomenon of interference".
Images made with this method are created on a Lippmann plate: a clear glass plate (having no anti-halation backing), coated with an almost transparent (very low silver halide content) emulsion of extremely fine grains, typically 0.01 to 0.04 micrometres in diameter.
Consequently, Lippmann plates have an extremely high resolving power exceeding 400 lines/mm.
Method
In Lippmann's method, a glass plate is coated with an ultra fine grain light-sensitive film (originally using the albumen process containing potassium bromide; later and primarily using silver halide gelatin), then dried, sensitized in the silver bath, washed, irrigated with cyanine solution, and dried again. The back of the film is then brought into optical contact with a reflective surface. This originally was done by mounting the plate in a specialized holder with pure mercury behind the film. When it is exposed in the camera through the glass side of the plate, the light rays which strike the transparent light-sensitive film are reflected back on themselves and, by interference, create standing waves. The standing waves cause exposure of the emulsion in diffraction patterns. The developed and fixed diffraction patterns constitute a Bragg condition in which diffuse white light is scattered in a specular fashion and undergoes constructive interference in accordance with Bragg's law. The result is an image with colours very similar to those of the original, produced using a black-and-white photographic process.
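As a sketch of the interference condition involved (standard standing-wave and Bragg optics, not taken from Lippmann's original paper): inside an emulsion of refractive index n, light of vacuum wavelength λ at normal incidence produces silver-rich layers with spacing d, and on illumination those layers reflect most strongly the wavelengths satisfying Bragg's law,

d = \frac{\lambda}{2n}
m\lambda = 2 n d \cos\theta , \quad m = 1, 2, 3, \ldots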
For this method Lippmann won the Nobel Prize in Physics in 1908.
The colour image can only be viewed in the reflection of a diffuse light source from the plate, making the field of view limited, and therefore not easily copied with conventional techniques. The method was very insensitive with the emulsions of the time and it never came into general use. Another reason Lippmann's process of colour photography did not succeed can be found in the invention of the autochrome plates by the Lumière brothers. A technique derived from the Lippmann technique has been proposed as a method of producing images which can easily be viewed, but not copied, for security purposes.
Gallery
Other sources of Lippmann plates
The Kodak Spectroscopic Plate Type 649-F is specified with a resolving power of 2000 lines/mm.
A diffusion method for making silver bromide based holographic recording material was published.
Durable data storage utility
Because the photographs are so durable, researchers have reworked Lippmann plates for use in archival data storage to replace hard drives. Work began on the project after they were made aware that data storage on the International Space Station requires daily maintenance, because it can be damaged by cosmic rays, and recalled that silver halide would not be significantly affected by astroparticles (or even by electromagnetic pulses from nuclear explosions). 150 standing-wave storage samples placed on the ISS during 2019 showed no signs of data degradation after exposure to cosmic rays for nine months.
See also
Photochromy
Holography
References
External links
Exhibition of Gabriel Lippmann photochromes, with associated press materials, at the Photo Élysée museum with technical assistance from EPFL university
Wiki about Lippmann Process by Holography Forum
Forum about Lippmann Process by Holography Forum
Université de Lille video of photochrome examples
Université de Lille article depicting Lippmann plates
One set of instructions for making a Lippmann plate
Photographic processes
Photographic processes dating from the 19th century
Audiovisual introductions in 1891
French inventions
Luxembourgish inventions
Alternative photographic processes
19th century in art
19th-century photography
Computer data storage
Computer storage devices
Archival technology | Lippmann plate | [
"Technology"
] | 965 | [
"Computer storage devices",
"Recording devices"
] |
348,535 | https://en.wikipedia.org/wiki/Tricorder | A tricorder is a fictional handheld sensor that exists in the Star Trek universe. The tricorder is a multifunctional hand-held device that can perform environmental scans, data recording, and data analysis; hence the word "tricorder" to refer to the three functions of sensing, recording, and computing. In Star Trek stories the devices are issued by the fictional Starfleet organization.
The original physical prop for the tricorder was designed by Wah Chang and appeared in "The Man Trap" in 1966, the first Star Trek episode to air.
Types
The tricorder of the 23rd century, as seen in Star Trek: The Original Series, is a black, rectangular device with a top-mounted rotating hood, two opening compartments, and a shoulder strap. The top pivots open, exposing a small screen and control buttons. When closed it resembles a portable tape recorder.
Three main variants appear in shows. The standard tricorder is a general-purpose device used primarily to scout unfamiliar areas, make detailed examination of living things, and record and review technical data. The medical tricorder is used by doctors to help diagnose diseases and collect bodily information about a patient; the key difference between this and a standard tricorder is a detachable hand-held high-resolution scanner stored in a compartment of the tricorder when not in use. The engineering tricorder is fine-tuned for starship engineering purposes. There are also many other lesser-used varieties of special-use tricorders.
The ship's medical variant employs a detachable "sensor probe" stored in the bottom compartment when not in use. The probe, although originally thought to have been fashioned from a spare salt shaker, was actually scratch-built for the show; the conical Danish salt shakers were set dressing, used as laser scalpels. The 24th-century version introduced in Star Trek: The Next Generation is a small, gray, hand-held model with a flip-out panel to allow for a larger screen. This design was later refined with a slightly more angular appearance that was seen in most Next Generation–era movies as well as later seasons of Star Trek: Deep Space Nine and Voyager. In the post-Next Generation-era (Star Trek: Nemesis and Star Trek: Elite Force II ), a newer tricorder was introduced. It is flatter, with a small flap that opens on top and a large touchscreen interface.
Production
The tricorder prop for the original Star Trek series was designed and built by Wah Ming Chang, who created several futuristic props under contract. Some of his designs are considered to have been influential on later, real-world consumer electronics devices. For instance, his communicator inspired cell phone inventor Martin Cooper's desire to create his own form of mobile communication device. Many other companies followed this example and life-sized replicas remain popular collectibles today.
The tricorder in The Next Generation was initially inspired by the HP-41C scientific calculator.
"Real" tricorders
Software exists to make hand-held devices simulate a tricorder. Examples include Jeff Jetton's Tricorder for the PalmPilot; the Web application for the Pocket PC, iPhone, and iPod Touch; and an Android version.
Vital Technologies Corporation sold a portable device dubbed the "Official Star-Trek Tricorder Mark 1" (formally, the TR-107 Tricorder Mark 1) in 1996. Its features were an "Electromagnetic Field (EMF) Meter", "Two-Mode Weather Station" (thermometer and barometer), "Colorimeter" (no wavelength given), "Light meter", and "Stardate Clock and Timer" (a clock and timer). Spokespersons claimed the device was a "serious scientific instrument". Vital Technologies marketed the TR-107 as a limited run of 10,000 units before going out of business, although far fewer than 10,000 were likely ever built. The company was permitted to call this device a "tricorder" because Gene Roddenberry's contract included a clause allowing any company able to create functioning technology to use the name.
In February 2007, researchers from Purdue University publicly announced their portable (briefcase-sized) DESI-based mass spectrometer, the Mini-10, which can be used to analyze compounds in ambient conditions without prior sample preparation. This was also announced as a "tricorder".
In March 2008, British biotech company QuantuMDx was founded to develop the world's first handheld DNA lab, a molecular diagnostic point-of-care device which will provide disease diagnosis in under 15 minutes. In March 2014, the company launched a crowdfunding campaign to support clinical trials of the device and to name it. Contributors and members of the public called for the device to be officially named a "Tricorder".
In May 2008, researchers from Georgia Tech publicly announced their portable hand-held multi-spectral imaging device, which aids in the assessment of the severity of an injury under the skin, including the detection of pressure ulcers, regardless of lighting conditions or skin pigmentation. The day after the announcement, technology websites, including Inside Tech and The Future of Things, began comparing this device to the Star Trek tricorder.
On May 10, 2011, the X Prize Foundation partnered with Qualcomm Incorporated to announce the Tricorder X Prize, a $10 million incentive to develop a mobile device that can diagnose patients as well as or better than a panel of board-certified physicians. The contest was officially opened at the 2012 Consumer Electronics Show in Las Vegas. Early entrants to the competition included two Silicon Valley startups, Scanadu and Senstore, which began work on the medical tricorder in early 2011. Entries included ones from CloudDx and DMI, among others. The winner of the competition was Final Frontier Medical Devices, which is now known as Basil Leaf Technologies.
On August 23, 2011, moonblink's tricorder app for Android was served with a copyright infringement notice by lawyers for CBS and it was deleted from the Android Market by Google. On January 5, 2012, it was put back again as a new app in the Android market, although it is no longer available.
In 2012, cognitive science researcher Dr. Peter Jansen announced having developed a handheld mobile computing device modeled after the design of the tricorder. A version that expanded the original capabilities to include visible spectroscopy, radiation sensing, thermal imaging, and environmental sensing was released as open source hardware in 2014.
On December 7, 2020, genomics researcher Dr. Michael Schatz and his intern Aspyn Palatnick published a paper on the first mobile app to allow anybody to study DNA virtually anywhere using an iPhone or iPad, which many media outlets dubbed a "DNA tricorder."
On February 19, 2022, NASA sent the rHEALTH ONE, a universal biomedical analyzer, regarded as a comprehensive device capable of measuring most common lab tests for spaceflight medical conditions, to the International Space Station. This was successfully tested by ESA astronaut Samantha Cristoforetti on May 13 and May 16, 2022. She successfully demonstrated the device's ability to take a small drop of sample (< 10 uL) and perform measurements on samples prepared by NASA for determining the performance of the device on-orbit. Over 100 million raw data points were recorded on five different detector channels using two lasers with a readout of minutes, making the rHEALTH ONE the most powerful biomedical analyzer ever tested in space.
Toys and replicas
The first mass-produced tricorder replica was featured in AMT's Star Trek Exploration Set in 1974, followed shortly thereafter by a palm-sized version in Remco's 1975 Star Trek Utility Belt which was sized and marketed to young children in hopes of taking advantage of Star Trek: The Animated Series that was on TV at that time.
The first life-sized tricorder was produced by Mego Corporation in 1976 and was actually a cassette tape player made to look like a tricorder. In the 1990s, Star Trek replicas were mass-produced by Playmates, Playing Mantis, and Master Replicas, making commercially produced replicas affordable to the average fan for the first time. In the 2000s, Art Asylum and later Diamond Select produced prop replicas of the original tricorder.
References
External links
Life imitates Star Trek
Star Trek medical device uses ultrasound to seal punctured lungs
http://www.tricorderproject.org/index.html
Star Trek devices
Fictional computers
Mobile phones | Tricorder | [
"Technology"
] | 1,770 | [
"Fictional computers",
"Computers"
] |
348,560 | https://en.wikipedia.org/wiki/Morse%20theory | In mathematics, specifically in differential topology, Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse, a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology.
Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography. Morse originally applied his theory to geodesics (critical points of the energy functional on the space of paths). These techniques were used in Raoul Bott's proof of his periodicity theorem.
The analogue of Morse theory for complex manifolds is Picard–Lefschetz theory.
Basic concepts
To illustrate, consider a mountainous landscape surface (more generally, a manifold). If is the function giving the elevation of each point, then the inverse image of a point in is a contour line (more generally, a level set). Each connected component of a contour line is either a point, a simple closed curve, or a closed curve with a double point. Contour lines may also have points of higher order (triple points, etc.), but these are unstable and may be removed by a slight deformation of the landscape. Double points in contour lines occur at saddle points, or passes, where the surrounding landscape curves up in one direction and down in the other.
Imagine flooding this landscape with water. When the water reaches elevation , the underwater surface is , the points with elevation or below. Consider how the topology of this surface changes as the water rises. It appears unchanged except when passes the height of a critical point, where the gradient of is (more generally, the Jacobian matrix acting as a linear map between tangent spaces does not have maximal rank). In other words, the topology of does not change except when the water either (1) starts filling a basin, (2) covers a saddle (a mountain pass), or (3) submerges a peak.
To these three types of critical points (basins, passes, and peaks, i.e. minima, saddles, and maxima) one associates a number called the index, the number of independent directions in which f decreases from the point. More precisely, the index of a non-degenerate critical point b of f is the dimension of the largest subspace of the tangent space to M at b on which the Hessian of f is negative definite. The indices of basins, passes, and peaks are 0, 1, and 2, respectively.
Considering a more general surface, let be a torus oriented as in the picture, with again taking a point to its height above the plane. One can again analyze how the topology of the underwater surface changes as the water level rises.
Starting from the bottom of the torus, let and be the four critical points of index and corresponding to the basin, two saddles, and peak, respectively. When is less than then is the empty set. After passes the level of when then is a disk, which is homotopy equivalent to a point (a 0-cell) which has been "attached" to the empty set. Next, when exceeds the level of and then is a cylinder, and is homotopy equivalent to a disk with a 1-cell attached (image at left). Once passes the level of and then is a torus with a disk removed, which is homotopy equivalent to a cylinder with a 1-cell attached (image at right). Finally, when is greater than the critical level of is a torus, i.e. a torus with a disk (a 2-cell) removed and re-attached.
This illustrates the following rule: the topology of does not change except when passes the height of a critical point; at this point, a -cell is attached to , where is the index of the point. This does not address what happens when two critical points are at the same height, which can be resolved by a slight perturbation of In the case of a landscape or a manifold embedded in Euclidean space, this perturbation might simply be tilting slightly, rotating the coordinate system.
One must take care to make the critical points non-degenerate. To see what can pose a problem, let M = R and let f(x) = x^3. Then 0 is a critical point of f, but the topology of the sublevel set f^(-1)((-∞, a]) does not change when a passes 0. The problem is that the second derivative of f at 0 is also 0, that is, the Hessian of f vanishes and the critical point is degenerate. This situation is unstable, since by slightly deforming f to f(x) = x^3 + εx, the degenerate critical point is either removed (ε > 0) or breaks up into two non-degenerate critical points (ε < 0).
Formal development
For a real-valued smooth function f : M → R on a differentiable manifold M, the points where the differential of f vanishes are called critical points of f and their images under f are called critical values. If at a critical point b the matrix of second partial derivatives (the Hessian matrix) is non-singular, then b is called a non-degenerate critical point; if the Hessian is singular then b is a degenerate critical point.
For the function f(x) = a + bx + cx^2 + dx^3 + ... from R to R, f has a critical point at the origin if b = 0, which is non-degenerate if c ≠ 0 (that is, f is of the form a + cx^2 + ...) and degenerate if c = 0 (that is, f is of the form a + dx^3 + ...). A less trivial example of a degenerate critical point is the origin of the monkey saddle.
The index of a non-degenerate critical point of is the dimension of the largest subspace of the tangent space to at on which the Hessian is negative definite. This corresponds to the intuitive notion that the index is the number of directions in which decreases. The degeneracy and index of a critical point are independent of the choice of the local coordinate system used, as shown by Sylvester's Law.
Morse lemma
Let b be a non-degenerate critical point of f. Then there exists a chart (x_1, x_2, ..., x_n) in a neighborhood U of b such that x_i(b) = 0 for all i and
f(x) = f(b) − x_1^2 − ... − x_α^2 + x_(α+1)^2 + ... + x_n^2
throughout U. Here α is equal to the index of f at b. As a corollary of the Morse lemma, one sees that non-degenerate critical points are isolated. (Regarding an extension to the complex domain see Complex Morse Lemma. For a generalization, see Morse–Palais lemma).
Fundamental theorems
A smooth real-valued function on a manifold is a Morse function if it has no degenerate critical points. A basic result of Morse theory says that almost all functions are Morse functions. Technically, the Morse functions form an open, dense subset of all smooth functions in the C^2 topology. This is sometimes expressed as "a typical function is Morse" or "a generic function is Morse".
As indicated before, we are interested in the question of when the topology of changes as varies. Half of the answer to this question is given by the following theorem.
Theorem. Suppose f is a smooth real-valued function on M, a < b, f^(-1)([a, b]) is compact, and there are no critical values between a and b. Then the sublevel set M^a = f^(-1)((-∞, a]) is diffeomorphic to M^b, and M^b deformation retracts onto M^a.
It is also of interest to know how the topology of changes when passes a critical point. The following theorem answers that question.
Theorem. Suppose f is a smooth real-valued function on M, p is a non-degenerate critical point of f of index γ, and f(p) = q. Suppose f^(-1)([q − ε, q + ε]) is compact and contains no critical points besides p. Then M^(q+ε) is homotopy equivalent to M^(q−ε) with a γ-cell attached.
These results generalize and formalize the 'rule' stated in the previous section.
Using the two previous results and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with an n-cell for each critical point of index n. To do this, one needs the technical fact that one can arrange to have a single critical point on each critical level, which is usually proven by using gradient-like vector fields to rearrange the critical points.
Morse inequalities
Morse theory can be used to prove some strong results on the homology of manifolds. The number of critical points of index γ of f is equal to the number of γ-cells in the CW structure on M obtained from "climbing" f. Using the fact that the alternating sum of the ranks of the homology groups of a topological space is equal to the alternating sum of the ranks of the chain groups from which the homology is computed, then by using the cellular chain groups (see cellular homology) it is clear that the Euler characteristic χ(M) is equal to the sum
χ(M) = Σ_γ (−1)^γ C^γ,
where C^γ is the number of critical points of index γ. Also by cellular homology, the rank of the γth homology group of a CW complex M is less than or equal to the number of γ-cells in M. Therefore, the rank of the γth homology group, that is, the Betti number b_γ(M), is less than or equal to the number of critical points of index γ of a Morse function on M. These facts can be strengthened to obtain the Morse inequalities:
C^γ − C^(γ−1) + ... ± C^0 ≥ b_γ(M) − b_(γ−1)(M) + ... ± b_0(M).
In particular, for any γ ∈ {0, ..., n = dim M},
one has C^γ ≥ b_γ(M).
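As a worked illustration (the standard height-function computation on the torus from the earlier example, stated here for concreteness), the counts of critical points and the Betti numbers agree, so the inequalities are in fact equalities:
C^0 = 1,  C^1 = 2,  C^2 = 1
b_0(T^2) = 1,  b_1(T^2) = 2,  b_2(T^2) = 1
χ(T^2) = C^0 − C^1 + C^2 = 1 − 2 + 1 = 0.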
This gives a powerful tool to study manifold topology. Suppose on a closed manifold M there exists a Morse function f with precisely k critical points. In what way does the existence of the function restrict M? The case k = 2 was studied by Georges Reeb in 1952; the Reeb sphere theorem states that M is homeomorphic to a sphere. The case k = 3 is possible only in a small number of low dimensions, and M is homeomorphic to an Eells–Kuiper manifold.
In 1982 Edward Witten developed an analytic approach to the Morse inequalities by considering the de Rham complex for the perturbed operator d_t = e^(−tf) d e^(tf).
Application to classification of closed 2-manifolds
Morse theory has been used to classify closed 2-manifolds up to diffeomorphism. If is oriented, then is classified by its genus and is diffeomorphic to a sphere with handles: thus if is diffeomorphic to the 2-sphere; and if is diffeomorphic to the connected sum of 2-tori. If is unorientable, it is classified by a number and is diffeomorphic to the connected sum of real projective spaces In particular two closed 2-manifolds are homeomorphic if and only if they are diffeomorphic.
Morse homology
Morse homology is a particularly easy way to understand the homology of smooth manifolds. It is defined using a generic choice of Morse function and Riemannian metric. The basic theorem is that the resulting homology is an invariant of the manifold (that is, independent of the function and metric) and isomorphic to the singular homology of the manifold; this implies that the Morse and singular Betti numbers agree and gives an immediate proof of the Morse inequalities. An infinite dimensional analog of Morse homology in symplectic geometry is known as Floer homology.
Morse–Bott theory
The notion of a Morse function can be generalized to consider functions that have nondegenerate manifolds of critical points. A is a smooth function on a manifold whose critical set is a closed submanifold and whose Hessian is non-degenerate in the normal direction. (Equivalently, the kernel of the Hessian at a critical point equals the tangent space to the critical submanifold.) A Morse function is the special case where the critical manifolds are zero-dimensional (so the Hessian at critical points is non-degenerate in every direction, that is, has no kernel).
The index is most naturally thought of as a pair
where is the dimension of the unstable manifold at a given point of the critical manifold, and is equal to plus the dimension of the critical manifold. If the Morse–Bott function is perturbed by a small function on the critical locus, the index of all critical points of the perturbed function on a critical manifold of the unperturbed function will lie between and
Morse–Bott functions are useful because generic Morse functions are difficult to work with; the functions one can visualize, and with which one can easily calculate, typically have symmetries. They often lead to positive-dimensional critical manifolds. Raoul Bott used Morse–Bott theory in his original proof of the Bott periodicity theorem.
Round functions are examples of Morse–Bott functions, where the critical sets are (disjoint unions of) circles.
Morse homology can also be formulated for Morse–Bott functions; the differential in Morse–Bott homology is computed by a spectral sequence. Frederic Bourgeois sketched an approach in the course of his work on a Morse–Bott version of symplectic field theory, but this work was never published due to substantial analytic difficulties.
See also
References
Further reading
A classic advanced reference in mathematics and mathematical physics.
Lemmas
Smooth functions | Morse theory | [
"Mathematics"
] | 2,569 | [
"Mathematical theorems",
"Mathematical problems",
"Lemmas"
] |
348,624 | https://en.wikipedia.org/wiki/List%20of%20order%20theory%20topics | Order theory is a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is "less than" or "precedes" another.
An alphabetical list of many notions of order theory can be found in the order theory glossary. See also inequality, extreme value and mathematical optimization.
Overview
Partially ordered set
Preorder
Totally ordered set
Total preorder
Chain
Trichotomy
Extended real number line
Antichain
Strict order
Hasse diagram
Directed acyclic graph
Duality (order theory)
Product order
Distinguished elements of partial orders
Greatest element (maximum, top, unit), Least element (minimum, bottom, zero)
Maximal element, minimal element
Upper bound
Least upper bound (supremum, join)
Greatest lower bound (infimum, meet)
Limit superior and limit inferior
Irreducible element
Prime element
Compact element
Subsets of partial orders
Cofinal and coinitial set, sometimes also called dense
Meet-dense set and join-dense set
Linked set (upwards and downwards)
Directed set (upwards and downwards)
centered and σ-centered set
Net (mathematics)
Upper set and lower set
Ideal and filter
Ultrafilter
Special types of partial orders
Completeness (order theory)
Dense order
Distributivity (order theory)
modular lattice
distributive lattice
completely distributive lattice
Ascending chain condition
Infinite descending chain
Countable chain condition, often abbreviated as ccc
Knaster's condition, sometimes denoted property (K)
Well-orders
Well-founded relation
Ordinal number
Well-quasi-ordering
Completeness properties
Semilattice
Lattice
(Directed) complete partial order, (d)cpo
Bounded complete
Complete lattice
Knaster–Tarski theorem
Infinite divisibility
Orders with further algebraic operations
Heyting algebra
Relatively complemented lattice
Complete Heyting algebra
Pointless topology
MV-algebra
Ockham algebras:
Stone algebra
De Morgan algebra
Kleene algebra (with involution)
Łukasiewicz–Moisil algebra
Boolean algebra (structure)
Boolean ring
Complete Boolean algebra
Orthocomplemented lattice
Quantale
Orders in algebra
Partially ordered monoid
Ordered group
Archimedean property
Ordered ring
Ordered field
Artinian ring
Noetherian
Linearly ordered group
Monomial order
Weak order of permutations
Bruhat order on a Coxeter group
Incidence algebra
Functions between partial orders
Monotonic
Pointwise order of functions
Galois connection
Order embedding
Order isomorphism
Closure operator
Functions that preserve suprema/infima
Completions and free constructions
Dedekind completion
Ideal completion
Domain theory
Way-below relation
Continuous poset
Continuous lattice
Algebraic poset
Scott domain
Algebraic lattice
Scott information system
Powerdomain
Scott topology
Scott continuity
Orders in mathematical logic
Lindenbaum algebra
Zorn's lemma
Hausdorff maximality theorem
Boolean prime ideal theorem
Ultrafilter
Ultrafilter lemma
Tree (set theory)
Tree (descriptive set theory)
Suslin's problem
Absorption law
Prewellordering
Orders in topology
Stone duality
Stone's representation theorem for Boolean algebras
Specialization (pre)order
Order topology of a total order (open interval topology)
Alexandrov topology
Upper topology
Scott topology
Scott continuity
Lawson topology
Finer topology
Order
Order theory
"Mathematics"
] | 665 | [
"Order theory",
"nan"
] |
348,641 | https://en.wikipedia.org/wiki/Vandermonde%20matrix | In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an m × n matrix
with entries V(i, j) = x_i^j, the jth power of the number x_i, for all zero-based indices i and j. Some authors define the Vandermonde matrix as the transpose of the above matrix.
The determinant of a square Vandermonde matrix (when m = n) is called a Vandermonde determinant or Vandermonde polynomial. Its value is:
det(V) = ∏ (x_j − x_i), the product taken over all index pairs with 0 ≤ i < j ≤ n − 1.
This is non-zero if and only if all x_i are distinct (no two are equal), making the Vandermonde matrix invertible.
Applications
The polynomial interpolation problem is to find a polynomial p(x) = a_0 + a_1 x + ... + a_(n−1) x^(n−1) which satisfies p(x_i) = y_i for given data points (x_0, y_0), ..., (x_(n−1), y_(n−1)). This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. V computes the values of p at the points x_0, ..., x_(n−1) via the matrix multiplication V a = y, where a = (a_0, ..., a_(n−1)) is the vector of coefficients and y = (y_0, ..., y_(n−1)) is the vector of values (both written as column vectors).
If m = n and the points x_i are distinct, then V is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given V and y, one can find the required p(x) by solving for its coefficients in the equation V a = y: a = V^(−1) y. That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
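As an illustration (a minimal MATLAB/Octave sketch with made-up sample points, not data from the article), the system V a = y can be assembled and solved directly:
% Interpolate a cubic through four points by solving V*a = y.
x = [0; 1; 2; 3];                          % sample points (arbitrary example values)
y = [1; 2; 0; 5];                          % values to interpolate
n = numel(x);
V = bsxfun(@power, x, 0:n-1);              % Vandermonde matrix, V(i,j) = x_i^(j-1)
a = V \ y;                                 % coefficients a_0 ... a_{n-1}, lowest degree first
disp(polyval(flipud(a), x))                % evaluate the polynomial; reproduces y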
In statistics, the equation means that the Vandermonde matrix is the design matrix of polynomial regression.
In numerical analysis, solving the equation naïvely by Gaussian elimination results in an algorithm with time complexity O(n3). Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method (or the Lagrange interpolation formula) to solve the equation in O(n2) time, which also gives the UL factorization of . The resulting algorithm produces extremely accurate solutions, even if is ill-conditioned. (See polynomial interpolation.)
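A sketch of the O(n^2) route via Newton's divided differences (a textbook formulation with arbitrary example data, not code from the article):
% Solve the interpolation problem in O(n^2) with Newton's divided differences.
x = [0; 1; 2; 3];
y = [1; 2; 0; 5];
n = numel(x);
c = y;                                     % divided-difference coefficients, computed in place
for j = 2:n
    for i = n:-1:j
        c(i) = (c(i) - c(i-1)) / (x(i) - x(i-j+1));
    end
end
% Evaluate the Newton-form interpolant at a point t with a Horner-like recurrence.
t = 1.5;
p = c(n);
for i = n-1:-1:1
    p = p * (t - x(i)) + c(i);
end
disp(p)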
The Vandermonde determinant is used in the representation theory of the symmetric group.
When the values belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes.
The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the points x_i are chosen to be nth roots of unity. The Fast Fourier transform computes the product of this matrix with a vector in O(n log n) time.
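As a small numerical check (illustrative only, not from the article), the DFT matrix can be built as a Vandermonde matrix in the nth roots of unity and compared against the matrix implicitly applied by MATLAB's fft:
n = 8;
w = exp(-2i*pi*(0:n-1)'/n);                % the n-th roots of unity (DFT sign convention)
F = bsxfun(@power, w, 0:n-1);              % Vandermonde matrix F(j,k) = w_j^(k-1)
disp(norm(F - fft(eye(n))))                % ~1e-15: F is the DFT matrix up to round-off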
In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect.
In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary -faces of cyclic polytopes. Specifically, if is a -face of the cyclic polytope corresponding to , then
Determinant
The determinant of a square Vandermonde matrix is called a Vandermonde polynomial or Vandermonde determinant. Its value is the polynomial
det(V) = ∏ (x_j − x_i), with the product taken over all pairs 0 ≤ i < j ≤ n − 1,
which is non-zero if and only if all x_i are distinct.
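A quick numerical sanity check of this product formula (illustrative node values only):
x = [1; 2; 4; 7];
n = numel(x);
V = bsxfun(@power, x, 0:n-1);
p = 1;
for i = 1:n
    for j = i+1:n
        p = p * (x(j) - x(i));             % product of (x_j - x_i) over all i < j
    end
end
disp([det(V), p])                          % the two values agree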
The Vandermonde determinant was formerly sometimes called the discriminant, but in current terminology the discriminant of a polynomial is the square of the Vandermonde determinant of the roots x_i. The Vandermonde determinant is an alternating form in the x_i, meaning that exchanging two of the x_i changes the sign, and the determinant thus depends on the order of the x_i. By contrast, the discriminant does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of the polynomial.
The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations.
First proof: Polynomial properties
The first proof relies on properties of polynomials.
By the Leibniz formula, is a polynomial in the , with integer coefficients. All entries of the -th column have total degree . Thus, again by the Leibniz formula, all terms of the determinant have total degree
(that is, the determinant is a homogeneous polynomial of this degree).
If, for , one substitutes for , one gets a matrix with two equal rows, which has thus a zero determinant. Thus, considering the determinant as univariate in the factor theorem implies that is a divisor of It thus follows that for all and , is a divisor of
This will now be strengthened to show that the product of all those divisors of is a divisor of Indeed, let be a polynomial with as a factor, then for some polynomial If is another factor of then becomes zero after the substitution of for If the factor becomes zero after this substitution, since the factor remains nonzero. So, by the factor theorem, divides and divides
Iterating this process by starting from one gets that is divisible by the product of all with that is
where is a polynomial. As the product of all and have the same degree , the polynomial is, in fact, a constant. This constant is one, because the product of the diagonal entries of is , which is also the monomial that is obtained by taking the first term of all factors in This proves that and finishes the proof.
Second proof: linear maps
Let be a field containing all and the vector space of the polynomials of degree less than or equal to with coefficients in . Let
be the linear map defined by
.
The Vandermonde matrix is the matrix of with respect to the canonical bases of and
Changing the basis of amounts to multiplying the Vandermonde matrix by a change-of-basis matrix (from the right). This does not change the determinant, if the determinant of is .
The polynomials , , , …, are monic of respective degrees 0, 1, …, . Their matrix on the monomial basis is an upper-triangular matrix (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of on this new basis is
.
Thus Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries.
This proves the desired equality. Moreover, one gets the LU decomposition of as .
Third proof: row and column operations
The third proof is based on the fact that if one adds to a column of a matrix the product by a scalar of another column then the determinant remains unchanged.
So, by subtracting to each column – except the first one – the preceding column multiplied by , the determinant is not changed. (These subtractions must be done by starting from last columns, for subtracting a column that has not yet been changed). This gives the matrix
Applying the Laplace expansion formula along the first row, we obtain , with
As all the entries in the -th row of have a factor of , one can take these factors out and obtain
,
where is a Vandermonde matrix in . Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of as the product of all such that .
Rank of the Vandermonde matrix
An rectangular Vandermonde matrix such that has rank if and only if all are distinct.
An rectangular Vandermonde matrix such that has rank if and only if there are of the that are distinct.
A square Vandermonde matrix is invertible if and only if the are distinct. An explicit formula for the inverse is known (see below).
Inverse Vandermonde matrix
As explained above in Applications, the polynomial interpolation problem for p(x) satisfying p(x_i) = y_i is equivalent to the matrix equation V a = y, which has the unique solution a = V^(−1) y. There are other known formulas which solve the interpolation problem, which must be equivalent to the unique a, so they must give explicit formulas for the inverse matrix V^(−1). In particular, Lagrange interpolation shows that the columns of the inverse matrix
are the coefficients of the Lagrange polynomials L_0(x), ..., L_(n−1)(x), where L_j(x) = ∏_(i ≠ j) (x − x_i)/(x_j − x_i). This is easily demonstrated: the polynomials clearly satisfy L_j(x_i) = 0 for i ≠ j while L_j(x_j) = 1, so we may compute the product V V^(−1) = I, the identity matrix.
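A short MATLAB/Octave sketch (with arbitrary nodes) that builds the inverse column by column from the Lagrange polynomials and checks it against the definition:
x = [2; 3; 5; 7];
n = numel(x);
V = bsxfun(@power, x, 0:n-1);              % V(i,j) = x_i^(j-1)
W = zeros(n);
for k = 1:n
    c = poly(x([1:k-1, k+1:n]));           % monic polynomial with roots x_i, i ~= k
    c = c / polyval(c, x(k));              % scale so that L_k(x_k) = 1
    W(:, k) = flipud(c(:));                % column k = coefficients of L_k, lowest degree first
end
disp(norm(V * W - eye(n)))                 % ~0: W equals inv(V)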
Confluent Vandermonde matrices
As described before, a Vandermonde matrix describes the linear algebra interpolation problem of finding the coefficients of a polynomial of degree based on the values , where are distinct points. If are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem
where , has a unique solution for all with . In general, suppose that are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent:
where and are distinct. Then the corresponding interpolation problem is
The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows. If , then for a unique (denoting ). We let
This generalization of the Vandermonde matrix makes it non-singular, so that there exists a unique solution to the system of equations, and it possesses most of the other properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows. There exists an algorithm for the inverse of the confluent Vandermonde matrix that works in quadratic time for every input allowed by the definition.
Another way to derive the above formula is by taking a limit of the Vandermonde matrix as the 's approach each other. For example, to get the case of , take subtract the first row from second in the original Vandermonde matrix, and let : this yields the corresponding row in the confluent Vandermonde matrix. This derives the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: giving is similar to giving for small . Geometers have studied the problem of tracking confluent points along their tangent lines, known as compacitification of configuration space.
See also
Schur polynomial – a generalization
Alternant matrix
Lagrange polynomial
Wronskian
List of matrices
Moore determinant over a finite field
Vieta's formulas
References
Further reading
.
Matrices
Determinants
Numerical linear algebra | Vandermonde matrix | [
"Mathematics"
] | 2,318 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
348,692 | https://en.wikipedia.org/wiki/Eigenface | An eigenface ( ) is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.
History
The eigenface approach began with a search for a low-dimensional representation of face images. Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features. These basis images, known as eigenpictures, could be linearly combined to reconstruct images in the original training set. If the training set consists of M images, principal component analysis could form a basis set of N images, where N < M. The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen less than M. For example, if you need to generate a number of N eigenfaces for a training set of M face images, you can say that each face image can be made up of "proportions" of all the K "features" or eigenfaces: Face image1 = (23% of E1) + (2% of E2) + (51% of E3) + ... + (1% En).
In 1991 M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition. In addition to designing a system for automated face recognition using eigenfaces, they showed a way of calculating the eigenvectors of a covariance matrix such that computers of the time could perform eigen-decomposition on a large number of face images. Face images usually occupy a high-dimensional space and conventional principal component analysis was intractable on such data sets. Turk and Pentland's paper demonstrated ways to extract the eigenvectors based on matrices sized by the number of images rather than the number of pixels.
Once established, the eigenface method was expanded to include methods of preprocessing to improve accuracy. Multiple manifold approaches were also used to build sets of eigenfaces for different subjects and different features, such as the eyes.
Generation
A set of eigenfaces can be generated by performing a mathematical process called principal component analysis (PCA) on a large set of images depicting different human faces. Informally, eigenfaces can be considered a set of "standardized face ingredients", derived from statistical analysis of many pictures of faces. Any human face can be considered to be a combination of these standard faces. For example, one's face might be composed of the average face plus 10% from eigenface 1, 55% from eigenface 2, and even −3% from eigenface 3. Remarkably, it does not take many eigenfaces combined together to achieve a fair approximation of most faces. Also, because a person's face is not recorded by a digital photograph, but instead as just a list of values (one value for each eigenface in the database used), much less space is taken for each person's face.
The eigenfaces that are created will appear as light and dark areas that are arranged in a specific pattern. This pattern is how different features of a face are singled out to be evaluated and scored. There will be a pattern to evaluate symmetry, whether there is any style of facial hair, where the hairline is, or an evaluation of the size of the nose or mouth. Other eigenfaces have patterns that are less simple to identify, and the image of the eigenface may look very little like a face.
The technique used in creating eigenfaces and using them for recognition is also used outside of face recognition: handwriting recognition, lip reading, voice recognition, sign language/hand gestures interpretation and medical imaging analysis. Therefore, some do not use the term eigenface, but prefer to use 'eigenimage'.
Practical implementation
To create a set of eigenfaces, one must:
Prepare a training set of face images. The pictures constituting the training set should have been taken under the same lighting conditions, and must be normalized to have the eyes and mouths aligned across all images. They must also be all resampled to a common pixel resolution (r × c). Each image is treated as one vector, simply by concatenating the rows of pixels in the original image, resulting in a single column with r × c elements. For this implementation, it is assumed that all images of the training set are stored in a single matrix T, where each column of the matrix is an image.
Subtract the mean. The average image a has to be calculated and then subtracted from each original image in T.
Calculate the eigenvectors and eigenvalues of the covariance matrix S. Each eigenvector has the same dimensionality (number of components) as the original images, and thus can itself be seen as an image. The eigenvectors of this covariance matrix are therefore called eigenfaces. They are the directions in which the images differ from the mean image. Usually this will be a computationally expensive step (if at all possible), but the practical applicability of eigenfaces stems from the possibility to compute the eigenvectors of S efficiently, without ever computing S explicitly, as detailed below.
Choose the principal components. Sort the eigenvalues in descending order and arrange eigenvectors accordingly. The number of principal components k is determined arbitrarily by setting a threshold ε on the total variance. Total variance v = λ_1 + λ_2 + ... + λ_n, where n is the number of components and λ_i is the ith eigenvalue.
k is the smallest number that satisfies (λ_1 + λ_2 + ... + λ_k) / v > ε.
These eigenfaces can now be used to represent both existing and new faces: we can project a new (mean-subtracted) image on the eigenfaces and thereby record how that new face differs from the mean face. The eigenvalues associated with each eigenface represent how much the images in the training set vary from the mean image in that direction. Information is lost by projecting the image on a subset of the eigenvectors, but losses are minimized by keeping those eigenfaces with the largest eigenvalues. For instance, working with a 100 × 100 image will produce 10,000 eigenvectors. In practical applications, most faces can typically be identified using a projection on between 100 and 150 eigenfaces, so that most of the 10,000 eigenvectors can be discarded.
Matlab example code
Here is an example of calculating eigenfaces with the Extended Yale Face Database B. To avoid a computational and storage bottleneck, the face images are downsampled by a factor of 4×4 = 16.
clear all;
close all;
load yalefaces
[h, w, n] = size(yalefaces);
d = h * w;
% vectorize images
x = reshape(yalefaces, [d n]);
x = double(x);
% subtract mean
mean_matrix = mean(x, 2);
x = bsxfun(@minus, x, mean_matrix);
% calculate covariance
s = cov(x');
% obtain eigenvalue & eigenvector
[V, D] = eig(s);
eigval = diag(D);
% sort eigenvalues in descending order
eigval = eigval(end:-1:1);
V = fliplr(V);
% show mean and 1st through 15th principal eigenvectors
figure, subplot(4, 4, 1)
imagesc(reshape(mean_matrix, [h, w]))
colormap gray
for i = 1:15
subplot(4, 4, i + 1)
imagesc(reshape(V(:, i), h, w))
end
Note that although the covariance matrix S generates many eigenfaces, only a fraction of those are needed to represent the majority of the faces. For example, to represent 95% of the total variation of all face images, only the first 43 eigenfaces are needed. To calculate this result, implement the following code:
% evaluate the number of principal components needed to represent 95% Total variance.
eigsum = sum(eigval);
csum = 0;
for i = 1:d
csum = csum + eigval(i);
tv = csum / eigsum;
if tv > 0.95
k95 = i;
break
end;
end;
Computing the eigenvectors
Performing PCA directly on the covariance matrix of the images is often computationally infeasible. If small images are used, say 100 × 100 pixels, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10^8 elements. However the rank of the covariance matrix is limited by the number of training examples: if there are N training examples, there will be at most N − 1 eigenvectors with non-zero eigenvalues. If the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily as follows.
Let T be the matrix of preprocessed training examples, where each column contains one mean-subtracted image. The covariance matrix can then be computed as S = T T^T and the eigenvector decomposition of S is given by S v_i = T T^T v_i = λ_i v_i.
However T T^T is a large matrix, and if instead we take the eigenvalue decomposition of T^T T u_i = λ_i u_i,
then we notice that by pre-multiplying both sides of the equation with T, we obtain T T^T (T u_i) = λ_i (T u_i).
Meaning that, if u_i is an eigenvector of T^T T, then v_i = T u_i is an eigenvector of S. If we have a training set of 300 images of 100 × 100 pixels, the matrix T^T T is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix. Notice however that the resulting vectors v_i are not normalised; if normalisation is required it should be applied as an extra step.
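A minimal MATLAB/Octave sketch of this shortcut (the random matrix is only a stand-in for the real d-by-N matrix of mean-subtracted images; variable names are illustrative):
T = randn(10000, 300);                     % stand-in for the d-by-N mean-subtracted images
[U, D] = eig(T' * T);                      % small N-by-N eigenproblem for T'*T
[lambda, order] = sort(diag(D), 'descend');
U = U(:, order);
V = T * U;                                 % columns are eigenvectors of T*T' (unnormalised eigenfaces)
V = bsxfun(@rdivide, V, sqrt(sum(V.^2, 1)));   % normalise each column to unit length
% In practice, keep only the columns whose eigenvalue lambda is (numerically) nonzero.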
Connection with SVD
Let X denote the data matrix with column x_i as the ith image vector with mean subtracted. Then the (unnormalised) covariance matrix is S = X X^T.
Let the singular value decomposition (SVD) of X be: X = U Σ V^T.
Then the eigenvalue decomposition of X X^T is:
X X^T = U Σ Σ^T U^T = U Λ U^T, where Λ = diag(eigenvalues of X X^T).
Thus we can see easily that:
The eigenfaces = the first k (k ≤ n) columns of U associated with the nonzero singular values.
The ith eigenvalue of X X^T = (ith singular value of X)^2.
Using the SVD on the data matrix X, it is unnecessary to calculate the actual covariance matrix to get the eigenfaces.
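A sketch of the same computation via the economy-size SVD in MATLAB/Octave (the random matrix stands in for real image data):
X = randn(10000, 300);                     % stand-in for the mean-subtracted data matrix
[U, S, ~] = svd(X, 'econ');                % economy SVD: U is 10000-by-300
eigenfaces = U;                            % columns associated with nonzero singular values
eigvals = diag(S).^2;                      % eigenvalues of X*X' are the squared singular values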
Use in facial recognition
Facial recognition was the motivation for the creation of eigenfaces. For this use, eigenfaces have advantages over other techniques available, such as the system's speed and efficiency. As eigenface is primarily a dimension reduction method, a system can represent many subjects with a relatively small set of data. As a face-recognition system it is also fairly invariant to large reductions in image sizing; however, it begins to fail considerably when the variation between the seen images and probe image is large.
To recognise faces, gallery images – those seen by the system – are saved as collections of weights describing the contribution each eigenface has to that image. When a new face is presented to the system for classification, its own weights are found by projecting the image onto the collection of eigenfaces. This provides a set of weights describing the probe face. These weights are then classified against all weights in the gallery set to find the closest match. A nearest-neighbour method is a simple approach for finding the Euclidean distance between two vectors, where the minimum can be classified as the closest subject.
Intuitively, the recognition process with the eigenface method is to project query images into the face-space spanned by eigenfaces calculated, and to find the closest match to a face class in that face-space.
Pseudo code
Given an input image vector x and the mean image vector from the database a, calculate the weight of the kth eigenface as: w_k = v_k^T (x − a).
Then form a weight vector W = [w_1, w_2, ..., w_K]^T.
Compare W with the weight vectors W_m of images in the database. Find the Euclidean distance d = ||W − W_m||.
If d < ε_1, then the mth entry in the database is a candidate of recognition.
If ε_1 < d < ε_2, then x may be an unknown face and can be added to the database.
If d > ε_2, then x is not a face image.
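A MATLAB/Octave sketch of this matching procedure (all quantities below are synthetic stand-ins, and the thresholds eps1 and eps2 are illustrative placeholders rather than values from the method):
d = 2500; K = 20; M = 50;
V = orth(randn(d, K));                     % stand-in eigenfaces (orthonormal columns)
a = randn(d, 1);                           % stand-in mean image
G = randn(K, M);                           % stand-in gallery weight vectors (one column per image)
x = randn(d, 1);                           % probe image, vectorised
W = V' * (x - a);                          % weights of the probe: w_k = v_k' * (x - a)
dists = sqrt(sum(bsxfun(@minus, G, W).^2, 1));   % Euclidean distance to every gallery entry
[dmin, m] = min(dists);
eps1 = 10; eps2 = 50;                      % illustrative thresholds
if dmin < eps1
    fprintf('Closest match: gallery entry %d\n', m);
elseif dmin < eps2
    disp('Possibly an unknown face; could be added to the database.');
else
    disp('Probe not recognised as a face.');
end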
The weights of each gallery image only convey information describing that image, not that subject. An image of one subject under frontal lighting may have very different weights to those of the same subject under strong left lighting. This limits the application of such a system. Experiments in the original Eigenface paper presented the following results: an average of 96% with light variation, 85% with orientation variation, and 64% with size variation.
Various extensions have been made to the eigenface method. The eigenfeatures method combines facial metrics (measuring distance between facial features) with the eigenface representation. Fisherface uses linear discriminant analysis and is less sensitive to variation in lighting and pose of the face. Fisherface uses labelled data to retain more of the class-specific information during the dimension reduction stage.
A further alternative to eigenfaces and Fisherfaces is the active appearance model. This approach uses an active shape model to describe the outline of a face. By collecting many face outlines, principal component analysis can be used to form a basis set of models that encapsulate the variation of different faces.
Many modern approaches still use principal component analysis as a means of dimension reduction or to form basis images for different modes of variation.
Review
Eigenface provides an easy and cheap way to realize face recognition in that:
Its training process is completely automatic and easy to code.
Eigenface adequately reduces statistical complexity in face image representation.
Once eigenfaces of a database are calculated, face recognition can be achieved in real time.
Eigenface can handle large databases.
However, the deficiencies of the eigenface method are also obvious:
It is very sensitive to lighting, scale and translation, and requires a highly controlled environment.
Eigenface has difficulty capturing expression changes.
The most significant eigenfaces are mainly about illumination encoding and do not provide useful information regarding the actual face.
To cope with illumination distraction in practice, the eigenface method usually discards the first three eigenfaces from the dataset. Since illumination is usually the cause behind the largest variations in face images, the first three eigenfaces will mainly capture the information of 3-dimensional lighting changes, which has little contribution to face recognition. By discarding those three eigenfaces, there will be a decent amount of boost in accuracy of face recognition, but other methods such as fisherface and linear space still have the advantage.
See also
Craniofacial anthropometry
Human appearance
Pattern recognition
References
Further reading
A. Pentland, B. Moghaddam, T. Starner, O. Oliyide, and M. Turk. (1993). "View-based and modular Eigenspaces for face recognition". Technical Report 245, M.I.T Media Lab.
T. Heseltine, N. Pears, J. Austin, Z. Chen (2003). "Face Recognition: A Comparison of Appearance-Based Approaches". Proc. VIIth Digital Image Computing: Techniques and Applications, vol 1. 59–68.
Delac, K., Grgic, M., Liatsis, P. (2005). "Appearance-based Statistical Methods for Face Recognition". Proceedings of the 47th International Symposium ELMAR-2005 focused on Multimedia Systems and Applications, Zadar, Croatia, 08-10 June 2005, pp. 151–158
External links
Face Recognition Homepage
PCA on the FERET Dataset
Developing Intelligence Eigenfaces and the Fusiform Face Area
A Tutorial on Face Recognition Using Eigenfaces and Distance Classifiers
Matlab example code for eigenfaces
OpenCV + C++Builder6 implementation of PCA
Java applet demonstration of eigenfaces
Introduction to eigenfaces
Face Recognition Function in OpenCV
Eigenface-based Facial Expression Recognition in Matlab
Facial recognition
Articles with example MATLAB/Octave code
Computer vision | Eigenface | [
"Engineering"
] | 3,515 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
348,729 | https://en.wikipedia.org/wiki/Juvenile%20delinquency | Juvenile delinquency, also known as juvenile offending, is the act of participating in unlawful behavior as a minor or individual younger than the statutory age of majority. These acts would be considered crimes if the individuals committing them were older. The term delinquent usually refers to juvenile delinquency, and is also generalised to refer to a young person who behaves in an unacceptable way.
In the United States, a juvenile delinquent is a person who commits a crime and is under a specific age. Most states specify a juvenile delinquent, or young offender, as an individual under 18 years of age while a few states have set the maximum age slightly different. The term "juvenile delinquent" originated from the late 18th and early 19th centuries when treatment of juvenile and adult criminals was similar and punishment was over the seriousness of an offense. Before the 18th century, juveniles over age 7 were tried in the same criminal court as adults and, if convicted, could get the death penalty. Illinois established the first juvenile court. This juvenile court focused on treatment objectives instead of punishment, determined appropriate terminology associated with juvenile offenders, and made juvenile records confidential. In 2021, Michigan, New York, and Vermont raised the maximum age to under 19, and Vermont law was updated again in 2022 to include individuals under the age of 20. Only three states, Georgia, Texas, and Wisconsin, still appropriate the age of a juvenile delinquent as someone under the age of 17. While the maximum age in some US states has increased, Japan has lowered the juvenile delinquent age from under 20 to under 18. This change occurred on 1 April 2022 when the Japanese Diet activated a law lowering the age of minor status in the country. Just as there are differences in the maximum age of a juvenile delinquent, the minimum age for a child to be considered capable of delinquency or the age of criminal responsibility varies considerably between the states. Some states that impose a minimum age have made recent amendments to raise the minimum age, but most states remain ambiguous on the minimum age for a child to be determined a juvenile delinquent. In 2021, North Carolina changed the minimum age from 6 years old to 10 years old while Connecticut moved from 7 to 10 and New York made an adjustment from 7 to 12. In some states the minimum age depends on the seriousness of the crime committed. Juvenile delinquents or juvenile offenders commit crimes ranging from status offenses such as, truancy, violating a curfew or underage drinking and smoking to more serious offenses categorized as property crimes, violent crimes, sexual offenses, and cybercrimes.
Some scholars have found an increase in arrests for youth and have concluded that this may reflect more aggressive criminal justice and zero-tolerance policies rather than changes in youth behavior. Youth violence rates in the United States have dropped to approximately 12% of peak rates in 1993 according to official US government statistics, suggesting that most juvenile offending is non-violent. Many delinquent acts can be attributed to the environmental factors such as family behavior or peer influence. One contributing factor that has gained attention in recent years is the school to prison pipeline. According to Diverse Education, nearly 75% of states have built more jails and prisons than colleges. CNN also provides a diagram that shows that cost per inmate is significantly higher in most states than cost per student. This shows that tax payers' dollars are going toward providing for prisoners rather than providing for the educational system and promoting the advancement of education. For every school that is built, the focus on punitive punishment has been seen to correlate with juvenile delinquency rates. Some have suggested shifting from zero tolerance policies to restorative justice approaches.
Juvenile detention centers, juvenile courts and electronic monitoring are common structures of the juvenile legal system. Juvenile courts are in place to address offenses for minors as civil rather than criminal cases in most instances. The frequency of use and structure of these courts in the United States varies by state. Depending on the type and severity of the offense committed, it is possible for people under 18 to be charged and treated as adults.
Overview
Juvenile delinquency, or offending, is often separated into three categories:
delinquency, crimes committed by minors, which are dealt with by the juvenile courts and justice system;
criminal behavior, crimes dealt with by the criminal justice system;
status offenses, offenses that are only classified as such because only a minor can commit them. One example of this is possession of alcohol by a minor. These offenses are also dealt with by the juvenile courts.
Currently, no agency has jurisdiction over tracking worldwide juvenile delinquency, but UNICEF estimates that over one million children are in some type of detention globally. Many countries do not keep records of the number of delinquent or detained minors, but of those that do, the United States has the highest number of juvenile delinquency cases. In the United States, the Office of Juvenile Justice and Delinquency Prevention compiles data concerning trends in juvenile delinquency. According to their most recent publication, 7 in 1000 juveniles in the US committed a serious crime in 2016. A serious crime is defined by the US Department of Justice as one of the following eight offenses: murder and non-negligent homicide, rape (legacy & revised), robbery, aggravated assault, burglary, motor vehicle theft, larceny-theft, and arson. According to research compiled by James Howell in 2009, the arrest rate for juveniles has been dropping consistently since its peak in 1994. Of the cases for juvenile delinquency that make it through the court system, probation is the most common consequence and males account for over 70% of the caseloads.
According to developmental research by Moffitt (2006), there are two different types of offenders that emerge in adolescence. The first is an age specific offender, referred to as the adolescence-limited offender, for whom juvenile offending or delinquency begins and ends during their period of adolescence. Moffitt argues that most teenagers tend to show some form of antisocial or delinquent behavior during adolescence; it is therefore important to account for these behaviors in childhood in order to determine whether they will be adolescence-limited offenders or something more long term. The other type of offender is the repeat offender, referred to as the life-course-persistent offender, who begins offending or showing antisocial/aggressive behavior in adolescence (or even in childhood) and continues into adulthood.
Situational factors
Most of the factors influencing juvenile delinquency involve a mix of both genetic and environmental causes. According to Laurence Steinberg's book Adolescence, the two largest predictors of juvenile delinquency are parenting style and peer group association. Additional factors that may lead a teenager into juvenile delinquency include poor or low socioeconomic status, poor school readiness/performance and/or failure, and peer rejection. Delinquent activity, especially the involvement in youth gangs, may also be caused by a desire for protection against violence or financial hardship. Juvenile offenders can view delinquent activity as a means of gaining access to resources to protect against such threats. Research by Carrie Dabb indicates that even changes in the weather can increase the likelihood of children exhibiting deviant behavior.
Family environment
Family factors that may have an influence on offending include: the level of parental supervision, the way parents discipline a child, parental conflict or separation, criminal activity by parents or siblings, parental abuse or neglect, and the quality of the parent-child relationship. As mentioned above, parenting style is one of the largest predictors of juvenile delinquency. There are 4 categories of parenting styles which describe the attitudes and behaviors that parents express while raising their children.
Authoritative parenting is characterized by warmth and support in addition to discipline.
Indulgent parenting is characterized by warmth and regard towards their children but lack structure and discipline.
Authoritarian parenting is characterized by high discipline without warmth, often leading to a hostile demeanor and harsh correction.
Neglectful parenting is both non-responsive and non-demanding. The child is engaged by the parent neither affectionately nor through discipline.
According to research done by Laura E. Berk, the style of parenting that would be most beneficial for a child, based on studies conducted by Diana Baumrind (1971), is the authoritative child-rearing style, because it combines acceptance with discipline to render healthy development for the child.
As concluded in Steinberg's Adolescence, children brought up by single parents are more likely to live in poverty and engage in delinquent behavior than those who live with both parents. However, according to research done by Graham and Bowling, once the attachment a child feels towards their parent(s) and the level of parental supervision are taken into account, children in single parent families are no more likely to offend than others. It was seen that when a child has low parental supervision they are much more likely to offend. Negative peer group association is more likely when adolescents are left unsupervised. A lack of supervision is also connected to poor relationships between children and parents. Children who are often in conflict with their parents may be less willing to discuss their activities with them. Conflict between a child's parents is also much more closely linked to offending than being raised by a lone parent.
Adolescents with siblings who have committed crimes are more likely to be influenced by their siblings and become delinquent if the sibling is older, of the same sex/gender, and maintains a good relationship with the child. Cases where a younger criminal sibling influences an older one are rare. An aggressive, more hostile sibling is less likely to influence a younger sibling in the direction of delinquency; if anything, the more strained the relationship between the siblings, the less they will influence each other.
Children resulting from unintended pregnancies are more likely to exhibit delinquent behavior. They also have lower mother-child relationship quality.
Peer influence
Peer rejection in childhood is also a large predictor of juvenile delinquency. This rejection can affect the child's ability to be socialized properly and often leads them to gravitate towards anti-social peer groups. Association with anti-social groups often leads to the promotion of violent, aggressive and deviant behavior. Robert Vargas's "Being in 'Bad' Company," explains that adolescents who can choose between groups of friends are less susceptible to peer influence that could lead them to commit illegal acts. Aggressive adolescents who have been rejected by peers are also more likely to have a "hostile attribution bias", which leads people to interpret the actions of others (whether they be hostile or not) as purposefully hostile and aggressive towards them. This often leads to an impulsive and aggressive reaction.
Conformity plays a significant role in the vast impact that peer group influence has on an individual. Aronson, Wilson, & Akert (2013) point to the research experiment conducted by Solomon Asch (1956), to ascertain whether a group could influence an individual's behavior. The experiment was executed by asking a participant to determine which line in the set of 3 lines matched the length of an original line. Confederates knew the purpose of the experiment and were directed to answer the questions incorrectly during certain phases of the experiment. These confederates answered the question before the participant. The confederates answered the first few questions correctly, as did the participant. Eventually, all of the confederates started to answer incorrectly. The purpose of the experiment was to see if the group would influence the participant to answer incorrectly. Asch found that seventy-six percent of the participants conformed and answered incorrectly when influenced by the group. According to these findings, it was concluded that a peer group that is involved in deviant behavior can influence an adolescent to engage in similar activities. Once the adolescent becomes part of the group, they will be susceptible to groupthink.
School to prison pipeline
A common contributor to juvenile delinquency rates is a phenomenon referred to as the school to prison pipeline. In recent years, school disciplinary measures have become increasingly policed. According to one study, 67% of high school students attend schools with police officers. This rise in police presence is often attributed to the implementation of zero tolerance policies. Based on the "broken windows" theory of criminology and the Gun-Free Schools Act, zero tolerance policies stress the use of specific, consistent, and harsh punishment to deal with in-school infractions. Often measures such as suspension or expulsion are assigned to students who deviate, regardless of the reason or past disciplinary history. This use of punishment has often been linked with increasing high school dropout rates and future arrests. It was found in a 2018 study that students who received a suspension were less likely to graduate and more likely to be arrested or on probation. As stated in research by Matthew Theriot, the increased police presence in school and use of tougher punishment methods leads student actions to be criminalized and in turn referred to juvenile justice systems.
The Center on Youth Justice at the Vera Institute of Justice found that "for similar students attending similar schools, a single suspension or expulsion doubles the risk that a student will repeat a grade." Being retained a grade, especially while in middle or high school, is one of the strongest predictors of dropping out. In a national longitudinal study, it was reported that youth with a prior suspension were 68% more likely to drop out of school.
The school to prison pipeline disproportionately affects minority students. According to data compiled by the United States Government Accountability Office, 39% of students who received a suspension in the 2013–14 school year were Black, even though Black students accounted for only about 15% of public school students. This over-representation applied to both boys and girls of African descent. Compared to White students, Black students were expelled or suspended 3 times as frequently.
Personality factors
Juvenile delinquency refers to unlawful activities committed by minors in their teen or pre-teen years. It is influenced by four main risk factors, namely personality, background, state of mind, and drugs.
Gender
Gender is another risk factor in regards to influencing delinquent behavior. The predictors of different types of delinquency vary across females and males for various reasons, but a common underlying reason for this is socialization. Different predictors of delinquency emerge when analyzing distinct offending types across gender, but overall it is evident that males commit more crimes than females. Across all offenses, females are less likely to be involved in delinquent acts than males. Females not only commit fewer offenses, but they also commit less serious offenses.
Socialization plays a key role in the gender gap in delinquency because male and female juveniles are often socialized differently. Girls' and boys' experiences are heavily mediated by gender, which alters their interactions in society. Males and females are differently controlled and bonded, suggesting that they will not make the same choices and may follow different paths of delinquency. Social bonds are important for both males and females, but different aspects of the bond are relevant for each gender. The degree of involvement in social settings is a significant predictor of male's violent delinquency, but is not significant for females. Males tend to be more connected with their peer relationships which in effect has a stronger influence on their behavior. Association with delinquent peers is one of the strongest correlates of juvenile delinquency, and much of the gender gap can be accounted for by the fact that males are more likely to have friends that support delinquent behavior. Delinquent peers are positively and significantly related to delinquency in males but delinquent peers are negatively and insignificantly related to delinquency for females. As for females, familial functioning relationships have shown to be more important. Female juveniles tend to be more strongly connected with their families, the disconnect or the lack of socialization between their family members can significantly predict their likelihood of committing crimes as juveniles and even as adults. When the family is disrupted, females are more likely to engage in delinquent behavior than males. Boys, however, tend to be less connected to their family and are not as affected by these relationships. When it comes to minor offenses such as fighting, vandalism, shoplifting, and the carrying of weapons, differences in gender are limited because they are most common among both males as well as females. Elements of the social bond, social disorganization, routine activities, opportunity, and attitudes towards violence are also related to delinquent behavior among both males and females.
Neurological
Individual psychological or behavioral risk factors that may make offending more likely include low intelligence, impulsiveness or the inability to delay gratification, aggression, lack of empathy, and restlessness. Other risk factors that may be evident during childhood and adolescence include aggressive or troublesome behavior, language delays or impairments, lack of emotional control (difficulty controlling one's anger), and cruelty to animals.
Children with low intelligence are more likely to do badly in school. This may increase the chances of offending because low educational attainment, a low attachment to school, and low educational aspirations are all risk factors for offending in themselves. Children who perform poorly at school are also more likely to be truant, and the status offense of truancy is linked to further offending.
Impulsiveness is seen by some as the key aspect of a child's personality that predicts offending. However, it is not clear whether these aspects of personality are a result of "deficits in the executive functions of the brain" or a result of parental influences or other social factors. In any event, studies of adolescent development show that teenagers are more prone to risk-taking, which may explain the disproportionately high rate of offending among adolescents.
Psychological
Juvenile delinquents are often diagnosed with different disorders. Around six to sixteen percent of male teens and two to nine percent of female teens have a conduct disorder. These can vary from oppositional-defiant disorder, which is not necessarily aggressive, to antisocial personality disorder, often diagnosed among psychopaths. A conduct disorder can develop during childhood and then manifest itself during adolescence.
Juvenile delinquents who have recurring encounters with the criminal justice system, or in other words those who are life-course-persistent offenders, are sometimes diagnosed with conduct disorders because they show a continuous disregard for their own and others' safety and/or property. Once the juvenile continues to exhibit the same behavioral patterns and turns eighteen, he is then at risk of being diagnosed with antisocial personality disorder and much more prone to become a serious criminal offender. One of the main components used in diagnosing an adult with antisocial personality disorder consists of presenting a documented history of conduct disorder before the age of 15. These two personality disorders are analogous in their erratic and aggressive behavior. This is why habitual juvenile offenders diagnosed with conduct disorder are likely to exhibit signs of antisocial personality disorder early in life and then as they mature. Sometimes these juveniles reach maturation and develop into career criminals, or life-course-persistent offenders. "Career criminals begin committing antisocial behavior before entering grade school and are versatile in that they engage in an array of destructive behaviors, offend at exceedingly high rates, and are less likely to quit committing crime as they age."
Quantitative research was completed on 9,945 juvenile male offenders between the ages of 10 and 18 in Philadelphia, Pennsylvania in the 1970s. The longitudinal birth cohort was used to examine a trend among a small percentage of career criminals who accounted for the largest percentage of crime activity. The trend exhibited a new phenomenon among habitual offenders. The phenomenon indicated that only 6% of the youth qualified under their definition of a habitual offender (known today as life-course persistent offenders, or career criminals) and yet were responsible for 52% of the delinquency within the entire study. The same 6% of chronic offenders accounted for 71% of the murders and 69% of the aggravated assaults. This phenomenon was later researched among an adult population in 1977 and resulted in similar findings. S. A. Mednick did a birth cohort of 30,000 males and found that 1% of the males were responsible for more than half of the criminal activity. The habitual crime behavior found among juveniles is similar to that of adults. As stated before most life-course persistent offenders begin exhibiting antisocial, violent, and/or delinquent behavior, prior to adolescence. Therefore, while there is a high rate of juvenile delinquency, it is the small percentage of life-course persistent, career criminals that are responsible for most of the violent crimes.
Criminology
There are a multitude of different theories on the causes of crime (criminology) most, if not all, of which are applicable to the causes of juvenile delinquency.
Rational choice
Classical criminology stresses that the causes of crime lie within individual offenders, rather than in their external environment. For classicists, offenders are motivated by rational self-interest, and the importance of free will and personal responsibility is emphasized. Rational choice theory is the clearest example of that idea. Delinquency is one of the major factors motivated by rational choice.
Social disorganization
Current positivist approaches generally focus on culture. Social disorganization theory is a type of criminological theory attributing variation in crime and delinquency over time and among territories to the absence or breakdown of communal institutions (such as family, school, church, and social groups) and communal relationships that traditionally encouraged cooperative relationships among people.
Strain
Strain theory is associated mainly with the work of Robert K. Merton, who felt that there are institutionalized paths to success in society. Strain theory holds that crime is caused by the difficulty those in poverty have in achieving socially valued goals by legitimate means. Since those with, for instance, poor educational attainment have difficulty achieving wealth and status by securing well-paid employment, they are more likely to use criminal means to obtain those goals.
Merton suggests five adaptations to this dilemma:
Innovation: individuals who accept socially-approved goals but not necessarily the socially-approved means.
Retreatism: those who reject socially-approved goals and the means for acquiring them. Merton placed drug users in this category.
Ritualism: those who buy into a system of socially-approved means but lose sight of the goals.
Conformity: those who conform to the system's means and goals.
Rebellion: people who negate socially-approved goals and means by creating a new system of acceptable goals and means.
A difficulty with strain theory is that it does not explore why children of low-income families have poor educational attainment in the first place. More importantly, much youth crime does not have an economic motivation. Strain theory fails to explain violent crime, the type of youth crime that causes most anxiety to the public.
Differential association
Differential association is another theory that deals with young people in a group context and looks at how peer pressure and the existence of gangs could lead them into crime. It suggests young people are motivated to commit crimes by delinquent peers and learn criminal skills from them. The diminished influence of peers after men marry has also been cited as a factor in desisting from offending. There is strong evidence that young people with criminal friends are more likely to commit crimes themselves. However, offenders may prefer to associate with one another, rather than delinquent peers causing someone to start offending. Furthermore, there is the question of how the delinquent peer group initially became delinquent.
Labeling
Labeling theory is a concept in criminology that aims to explain deviant behavior from the social context, rather than from the individual themselves. It is part of interactionism criminology, which states that once young people have been labeled as criminal, they are more likely to offend. The idea is that once labelled as deviant, a young person may accept that role and be more likely to associate with others who have been similarly labeled. Labelling theorists say that male children from poor families are more likely to be labelled deviant, which may partially explain the existence of more working-class young male offenders.
Social control
Social control theory proposes that exploiting the process of socialization and social learning builds self-control and can reduce the inclination to indulge in behavior that is recognized as antisocial. These four types of control can help prevent juvenile delinquency:
Direct by which punishment is threatened or applied for wrongful behavior, and compliance is rewarded by parents, family, and authority figures.
Internal by which a youth refrains from delinquency through the conscience or superego.
Indirect, by identification with those who influence behavior, such as parents and others in close relationships, because the delinquent act might cause them pain and disappointment.
Control through needs satisfaction: if all an individual's needs are met, there is no point in criminal activity.
Punishment
In 2020, a ruling abolished the death penalty for juveniles in Saudi Arabia. Despite this, Mustafa Hashem al-Darwish was executed in June 2021. He was alleged to have taken part in anti-government demonstrations at the age of 17. Al-Darwish had been detained in May 2015 and placed in solitary confinement for years. He claimed that he faced brutal torture and beatings and was forced to sign confessions.
One criminal justice approach to juvenile delinquency is through the juvenile court systems. These courts are specifically for minors to be tried in. Sometimes, juvenile offenders are sent to adult prisons. In the United States, children as young as 8 can be tried and convicted as adults. Additionally, the United States was the only recorded country to sentence children as young as 13 to life sentences without parole also known as death in prison sentences. As of 2012, the Supreme Court has declared death in prison sentences unconstitutional for the vast majority of cases involving children. According to the US Department of Justice, about 3,600 children are housed in adult jails.
According to a report released by the Prison Policy Initiative, over 48,000 children are held in juvenile detention centers or prisons in America. The worldwide number is unknown but UNICEF estimates that over 1 million children experience confinement in various countries. Juveniles in youth detention centers are sometimes subject to many of the same punishments as adults, such as solitary confinement, despite a younger age or the presence of disabilities. Due to the influx of minors in detention facilities due to the school to prison pipeline, education is increasingly becoming a concern. Children in juvenile detention have compromised or nonexistent schooling, which leads to a higher number of dropouts and failures to complete secondary education.
Prevention
Delinquency prevention is the broad term for all efforts aimed at preventing youth from becoming involved in criminal, or other antisocial, activity. Prevention services may include activities such as substance abuse education and treatment, family counseling, youth mentoring, parenting education, educational support, and youth sheltering. Increasing availability and use of family planning services, including education and contraceptives helps to reduce unintended pregnancy and unwanted births, which are risk factors for delinquency. It has been noted that often interventions such as peer groups may leave at-risk children worse off than if there had never been an intervention.
Policies
Education promotes economic growth, national productivity and innovation, and values of democracy and social cohesion. Prevention through education has been seen to discourage delinquency among minors and help them strengthen connections and understanding between peers.
A well-known intervention treatment is the Scared Straight Treatment. According to research done by Scott Lilienfeld, this type of intervention is often harmful because of juvenile offenders' vicarious exposure to criminal role models and the possibility of increased resentment in reaction to the confrontational interactions. It has been reasoned that the most efficient interventions are those that not only separate at-risk teens from anti-social peers, and place them instead with pro-social ones, but also simultaneously improve their home environment by training parents with appropriate parenting styles.
In response to the data correlated with the school to prison pipeline, some institutions have implemented restorative justice policies. The restorative justice approach emphasizes conflict resolution and non-punitive intervention. Interventions such as hiring more counselors as opposed to security professionals or focusing on talking through problems would be included in a restorative justice approach.
It is also important to note certain works of legislation that have already been published in the United States in response to general prisoner re-entry, extending to juveniles, such as the Second Chance Act (2007) and most recently, the Second Chance Reauthorization Act (2018).
Juvenile reform
Juvenile reform deals with the vocational programs and educational approach to reducing recidivism rates of juvenile offenders. Most countries in the world legislate processes for juvenile reform and re-entry, some more elaborate and formal than others. In theory, juvenile re-entry is sensitive to the fact that juveniles are young and assumes they are capable of change; it approaches a juvenile offender's situation and history holistically, evaluating the earlier factors that could lead a juvenile to commit crimes. In practice, this is complicated since juvenile delinquents return home to varying and unpredictable circumstances, including poverty, substance abuse, domestic violence, etc..
In the United States, juvenile reform is split into four main phases:
The Entry Phase: The youth enters residential placement
The Placement Phase: Amount of time youth is in the placement facility (whatever that may be)
The Transitional Phase (re-entry): Act of leaving facility and entering community (from right after exit of facility to right before entering community)
The Community-based Aftercare Phase: Period of time after youth returns to the community (usually 120-day period right after transitional phase)
An understanding of the factors involved in each of these steps is crucial to creating an effective juvenile reform program. One non-profit identifies the following approaches to juvenile reform:
Early Intervention: preventing juvenile youth from ever encountering the justice system by implementation of conflict-resolution practices or administrative strategies that aim to teach the child healthy actions to take in difficult situations. It is implemented before any offense is committed and often involves a thorough discussion of what individual issues a child is dealing with.
Diversion: the placement of youth in programs that redirect youth away from juvenile justice system processing, or programs that divert youth from secure detention in a juvenile justice facility. These programs are most often in attempt to protect juveniles from getting a charge on their record after they have already committed a crime. This can be led through school administration intervention or by law enforcement officers that have been trained in dealing with at-risk youth. These programs are often given to children who have unstable life circumstances and are thus extended aid that will attack the "root problems" rather than further isolate them in society.
Alternatives to Secure Confinement: a juvenile justice approach that does not require the juvenile's entry in a "jail-like" facility. Often involves the juvenile's continued participation in society, but in a modified manner. Such alternatives include home confinement, supervision of a probation officer, community service requirements, and community-based facilities, among others.
Evidence-Based Practices: the emphasis on encouraging youth participation in programs that have evidence of working. The evaluation of "success" for a program is dependent on multiple factors, such as reduction of recidivism rates, cost-effectiveness, and addressing health problems.
Diverting Youth Who Commit Status Offenses: programs that address the "root" problems causing a juvenile's behavior and actions. Such programs are often part of a tiered approach to juvenile justice and reform.
Funding Community-Based Alternatives on a Large Scale: the supporting of all initiatives in a community that have been proven to help with juvenile betterment and reform. This allows the community to help its own and does not rely on the decisions of the state regarding the needs of juveniles.
While juvenile reform has proved to be an effective and humanizing response to juvenile delinquency, it is a very complex area that still has many ongoing debates. For example, many countries around the world are debating the appropriate age of a juvenile, as well as trying to understand whether there are some crimes so heinous that they should be exempt from this rehabilitative approach. Based on these discussions, legislation needs to be consistently updated and considered as social, cultural, and political landscapes change.
Juvenile sex crimes
Juveniles who commit sexual crimes refer to individuals adjudicated in a criminal court for a sexual crime. Sex crimes are defined as sexually abusive behavior committed by a person under the age of 18 that is perpetrated "against the victim's will, without consent, and in an aggressive, exploitative, manipulative, and/or threatening manner". It is important to utilize appropriate terminology for juvenile sex offenders. Harsh and inappropriate expressions include terms such as "pedophile, child molester, predator, perpetrator, and mini-perp". These terms have often been associated with this group, regardless of the youth's age, diagnosis, cognitive abilities, or developmental stage. Using appropriate expressions can facilitate a more accurate depiction of juvenile sex offenders and may decrease the subsequent aversive psychological effects from using such labels. In the Arab Gulf states, homosexual acts are classified as an offense, and constitute one of the primary crimes for which juvenile males are charged.
Prevalence data
Examining prevalence data and the characteristics of juvenile sex offenders is a fundamental component to obtain a precise understanding of this heterogeneous group. With mandatory reporting laws in place, it became a necessity for providers to report any incidents of disclosed sexual abuse. Longo and Prescott indicate that juveniles commit approximately 30-60% of all child sexual abuse. The Federal Bureau of Investigation Uniform Crime Reports indicate that in 2008 youth under the age of 18 accounted for 16.7% of forcible rapes and 20.61% of other sexual offenses. Center for Sex Offender Management indicates that approximately one-fifth of all rapes and one-half of all sexual child molestation can be accounted for by juveniles.
Official record data
The Office of Juvenile Justice and Delinquency Prevention indicates that juveniles accounted for 15% of forcible rape arrests in 2006 and 12% of clearances (offenses resolved by an arrest). The total number of juvenile arrests in 2006 for forcible rape was 3,610, with 2% being female and 36% being under the age of 15 years. This trend has declined throughout the years, with forcible rape from 1997–2006 being −30% and from 2005 to 2006 being −10%. The OJJDP reports that the juvenile arrest rate for forcible rape increased from the early 1980s through the 1990s and at that time it fell again. Violent crime rates in the U.S. have been on a steady decline since the 1990s. The OJJDP also reported that the total number of juvenile arrests in 2006 for sex offenses (other than forcible rape) was 15,900, with 10% being female and 47% being under the age of 15. There was again a decrease with the trend throughout the years, with sex offenses from 1997 to 2006 being −16% and from 2005 to 2006 being −9%.
Males who commit sexual crimes
Barbaree and Marshall indicate that juvenile males contribute to the majority of sex crimes, with 2–4% of adolescent males having reported committing sexually assaultive behavior, and 20% of all rapes and 30–50% of all child molestation are perpetrated by adolescent males. It is clear that males are over-represented in this population. This is consistent with Ryan and Lane's research indicating that males account for 91–93% of the reported juvenile sex offenses. Righthand and Welch reported that females account for an estimated 2–11% of incidents of sexual offending. In addition, it is reported by The Office of Juvenile Justice and Delinquency Prevention that in the juvenile arrests during 2006, African American male youth were disproportionately arrested (34%) for forcible rape. In one case in a foster home, a 13-year-old boy raped a 9-year-old boy by having forced anal sex with him. In a court hearing, the 9-year-old boy said the older boy had done this multiple times. The 13-year-old boy was charged with sexual assault.
Juvenile sex crimes internationally
Sexual crimes committed by juveniles are not just an issue in the United States. Studies from the Netherlands show that out of 3,200 sex offenders recorded by police in 2009, 672 of those were juveniles, approximately 21 percent of sexual offenders. The study also points out the male to female ratio of sexual predators.
In 2009, a U.S. congressman proposed legislation that would create an International Sex Offender Registry. The bill was introduced due to the fact that because laws differ in different countries, someone who is on the sex offender registry in the U.S. who may be barred from living certain places and doing certain activities has free range in other less developed countries. This can lead to child sex tourism, when a sexual predator will go to less developed countries and prey on young boys and girls. Karne Newburn in his article, The Prospect of an International Sex Offender Registry, pointed out some serious flaws in the proposed bill, such as creating safety issues within the communities for the sex offenders placed on the registry. Newburn suggested instead of creating an International Sex Offender Registry from the U.S. model the U.S. join other countries in a dialogue on creating an effective model. As of now no registry exists. Despite this there is still interest in creating some sort of international registry.
By country
United Kingdom
The United Kingdom has three separate and distinct criminal justice systems: England and Wales, Northern Ireland, and Scotland. Young offenders are often dealt with by the Youth Offending Team. There is concern young adult offenders are not getting the support they need to help them avoid reoffending.
In England and Wales the age of criminal responsibility is set at 10. Young offenders aged 10 to 17 (i.e. up to their 18th birthday) are classed as a juvenile offender. Between the ages of 18 and 20 (i.e. up to their 21st birthday) they are classed as young offenders. Offenders aged 21 and over are known as adult offenders.
In Scotland the age of criminal responsibility was formerly set at 8, one of the lowest ages of criminal responsibility in Europe. It has since been raised to 12 by the Criminal Justice and Licensing (Scotland) Act 2010, which received Royal Assent on 6 August 2010.
In Northern Ireland, the age of criminal responsibility is 10.
Canada
In Canada, the YCJA protects the rights of young offenders. It has four main goals: to ensure the youth is subject to meaningful consequences that promote the long-term protection of society, to rehabilitate and reintegrate the youth into society seamlessly, and to prevent crime by examining the underlying causes. The YCJA was introduced in 2003, succeeding the Young Offenders Act.
Northern Europe
In Sweden, the age of criminal responsibility is set at 15 since 1902.
United States
In the United States, the age of criminal responsibility for federal crimes is set at 11. While this has been set at the federal level, each state is responsible for setting their own age of criminal responsibility. Thirty-one states have no minimum age for criminal responsibility, while the remaining 19 do. North Carolina has the lowest responsibility age of 6 years old and Massachusetts has the highest of 12 years old.
There are 1.5 million cases per year in the US that handle status offenses or criminal offenses by young offenders. However, only 52 juveniles were fully sentenced to prison-time between 2010–2015. Recidivism is common among young offenders, with 67% becoming repeat offenders.
Brazil
In Brazil, the age of criminal responsibility is set at the age of 18. Anyone that is found guilty of committing crimes prior to the age of 18 is treated to other options rather than jail. These include, for children under 12, foster care options in order to get them a safer family, and, for young offenders over 12, being sentenced to complying with a range of socio-educative measures that can go from a warning to community work and even to internment in specialized facilities, which include basic schooling and occupational training courses that aim at preventing the offenders from resorting to crime to support themselves, although conditions in such facilities are often subpar. With a spike in crime rates among young offenders occurring in 2015, along with an almost 40% increase in internments of young offenders, there was a push to lower the age of criminal responsibility to 16, which ultimately failed.
China
Juvenile crime has risen in China with an average increase of 5% per year. In 2021, China lowered the age of criminal responsibility from 14 to 12 in an amendment to its criminal law, and it mandated that such prosecution must be approved by the Supreme People's Procuratorate.
See also
Age of onset (criminology)
Anti-social behaviour order
Defense of infancy
Deviance (sociology)
Her Majesty's Young Offender Institution
Juvenile court
Juvenile delinquency in the United States
Kazan phenomenon
Minor (law)
Office of Juvenile Justice and Delinquency Prevention
Person in need of supervision
David Morgan (psychologist)
Sex offender registries in the United States
Solitary confinement of juvenile offenders
Status offense
Teen courts
Timeline of children's rights in the United Kingdom
Truancy
Victimology
Banchō (position)
Sukeban
Public criminology
Youth court
Youth Offending Team
Youth Inclusion Support Panel
References
Further reading
E. Mulvey, MW Arthur, ND Reppucci, "The prevention and treatment of juvenile delinquency: A review of the research", Clinical Psychology Review, 1993.
Edward P. Mulvey, Michael W. Arthur, & N. Dickon Reppucci, "Prevention of Juvenile Delinquency: A Review of the Research", The Prevention Researcher, Volume 4, Number 2, 1997, Pages 1-4.
Regoli, Robert M. and Hewitt, John D. Delinquency in Society, 6th ed., 2006.
Siegel, J Larry. Juvenile Delinquency with Infotrac: theory, practices and law, 2002.
United Nations, Research Report on Juvenile Delinquency (pdf).
Gang Cop: The Words and Ways of Officer Paco Domingo (2004) by Malcolm W.Klein
The American Street Gang: Its Nature, Prevalence, and Control (1995), by Malcolm W. Klein
American Youth Violence (1998) by Franklin Zimring
Street Wars: Gangs and the Future of Violence (2004) by Tom Hayden
Fist, Stick, Knife, Gun (1995) by Geoffrey Canada
Violence: Reflections on a National Epidemic (1996) by James Gilligan
Lost Boys: Why Our Sons Turn Violent and How We Can Save Them (1999) by James Gabarino
Last Chance in Texas: The Redemption of Criminal Youth (2005) by John Hubner
Breaking Rank: A Top Cop's Expose of the Dark Side of American Policing (2005) by Norm Stamper
Peetz P., "Youth, Crime, and the Responses of the State: Discourses on Violence in Costa Rica, El Salvador, and Nicaragua", GIGA Working Papers, Number 80, 2008.
Harnsberger, R. Scott. A Guide to Sources of Texas Criminal Justice Statistics [North Texas Crime and Criminal Justice Series, no. 6]. Denton: University of North Texas Press, 2011.
Morgan, David and Ruszczynski, Stan. Lectures on Violence, Perversion and Delinquency. The Portman Papers Series. (2007)
External links
Delinquency Prevention - Clearinghouse of juvenile delinquency prevention information
Edinburgh Study of Youth Transitions and Crime - major study at Edinburgh Law School
"State Responses to Serious and Violent Juvenile Crime." - Office of Juvenile Justice and Delinquency Prevention.
A Voyage into the Mind of Delinquent and Destitute Adolescents
Guide to Juvenile Justice in New York City
Juvenile Sex Offenders and Juvenile Sex Crimes in California - Overview of juvenile sex crimes and juvenile sex offender registration in California.
Youth Justice Board (England & Wales)
Young People and Youth Justice Research by the Scottish Centre for Crime and Justice Research
The Centre for Youth & Criminal Justice (Scotland)
Delinquency
Criminology
Crime
Anti-social behaviour
Criminal records | Juvenile delinquency | [
"Biology"
] | 9,053 | [
"Anti-social behaviour",
"Behavior",
"Human behavior"
] |
348,780 | https://en.wikipedia.org/wiki/Proof%20that%20e%20is%20irrational | The number e was introduced by Jacob Bernoulli in 1683. More than half a century later, Euler, who had been a student of Jacob's younger brother Johann, proved that e is irrational; that is, that it cannot be expressed as the quotient of two integers.
Euler's proof
Euler wrote the first proof of the fact that e is irrational in 1737 (but the text was only published seven years later). He computed the representation of e as a simple continued fraction, which is
$e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots, 1, 1, 2n, \ldots].$
Since this continued fraction is infinite and every rational number has a terminating continued fraction, e is irrational. A short proof of the previous equality is known. Since the simple continued fraction of e is not periodic, this also proves that e is not a root of a quadratic polynomial with rational coefficients; in particular, e^2 is irrational.
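The leading pattern of this expansion can be illustrated numerically. The Python sketch below is only an illustration, not part of Euler's argument; the function name, the 60-term series approximation of e, and the number of printed quotients are arbitrary choices.

```python
from fractions import Fraction
from math import factorial

# Rational approximation of e: a long partial sum of sum(1/n!).
# Sixty terms are far more than enough for the first fifteen quotients.
e_approx = sum(Fraction(1, factorial(n)) for n in range(60))

def continued_fraction(x, terms=15):
    """First `terms` partial quotients of the simple continued fraction of x."""
    quotients = []
    for _ in range(terms):
        a = x.numerator // x.denominator  # integer part of the Fraction x
        quotients.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return quotients

print(continued_fraction(e_approx))
# Expected leading pattern: [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10]
```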
Fourier's proof
The most well-known proof is Joseph Fourier's proof by contradiction, which is based upon the equality
$e = \sum_{n=0}^{\infty} \frac{1}{n!}.$
Initially e is assumed to be a rational number of the form a/b. The idea is to then analyze the scaled-up difference (here denoted x) between the series representation of e and its strictly smaller partial sum, which approximates the limiting value e. By choosing the scale factor to be the factorial of b, the fraction a/b and the partial sum are turned into integers, hence x must be a positive integer. However, the fast convergence of the series representation implies that x is still strictly smaller than 1. From this contradiction we deduce that e is irrational.
Now for the details. If e is a rational number, there exist positive integers a and b such that e = a/b. Define the number
$x = b!\left(e - \sum_{n=0}^{b} \frac{1}{n!}\right).$
Use the assumption that e = a/b to obtain
$x = b!\left(\frac{a}{b} - \sum_{n=0}^{b} \frac{1}{n!}\right) = a(b-1)! - \sum_{n=0}^{b} \frac{b!}{n!}.$
The first term is an integer, and every fraction in the sum is actually an integer because n ≤ b for each term. Therefore, under the assumption that e is rational, x is an integer.
We now prove that 0 < x < 1. First, to prove that x is strictly positive, we insert the above series representation of e into the definition of x and obtain
$x = b!\left(\sum_{n=0}^{\infty} \frac{1}{n!} - \sum_{n=0}^{b} \frac{1}{n!}\right) = \sum_{n=b+1}^{\infty} \frac{b!}{n!} > 0,$
because all the terms are strictly positive.
We now prove that x < 1. For all terms with n ≥ b + 1 we have the upper estimate
$\frac{b!}{n!} = \frac{1}{(b+1)(b+2)\cdots n} \le \frac{1}{(b+1)^{\,n-b}}.$
This inequality is strict for every n ≥ b + 2. Changing the index of summation to k = n − b and using the formula for the infinite geometric series, we obtain
$x = \sum_{n=b+1}^{\infty} \frac{b!}{n!} < \sum_{k=1}^{\infty} \frac{1}{(b+1)^{k}} = \frac{1}{b+1}\cdot\frac{1}{1 - \frac{1}{b+1}} = \frac{1}{b}.$
And therefore
$x < \frac{1}{b} \le 1.$
Since there is no integer strictly between 0 and 1, we have reached a contradiction, and so e is irrational, Q.E.D.
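As an informal numerical sanity check of the key inequality (not a substitute for the proof), the quantity $x = b!\left(e - \sum_{n=0}^{b} \frac{1}{n!}\right)$ can be evaluated with exact rational arithmetic for sample values of b; the helper name and the truncation at 60 extra series terms below are arbitrary choices.

```python
from fractions import Fraction
from math import factorial

def scaled_tail(b, extra_terms=60):
    """Exact value of b! * sum_{n=b+1}^{b+extra_terms} 1/n!, a rational
    lower approximation of x = b!(e - sum_{n<=b} 1/n!)."""
    tail = sum(Fraction(1, factorial(n)) for n in range(b + 1, b + 1 + extra_terms))
    return factorial(b) * tail

for b in (1, 2, 5, 10):
    x = scaled_tail(b)
    # The proof shows 0 < x < 1/b <= 1; truncating the series only makes x smaller.
    print(b, float(x), 0 < x < Fraction(1, b))
```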
Alternate proofs
Another proof can be obtained from the previous one by noting that
$(b+1)x = 1 + \frac{1}{b+2} + \frac{1}{(b+2)(b+3)} + \cdots < 1 + \frac{1}{b+1} + \frac{1}{(b+1)(b+2)} + \cdots = 1 + x,$
and this inequality is equivalent to the assertion that bx < 1. This is impossible, of course, since b and x are positive integers.
Still another proof can be obtained from the fact that
$e^{-1} = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!}.$
Define $s_{n}$ as follows:
$s_{n} = \sum_{k=0}^{n} \frac{(-1)^{k}}{k!}.$
Then
$\left|e^{-1} - s_{n}\right| = \left|\sum_{k=n+1}^{\infty} \frac{(-1)^{k}}{k!}\right| < \frac{1}{(n+1)!},$
which implies
$0 < n!\left|e^{-1} - s_{n}\right| < \frac{1}{n+1} \le \frac{1}{2}$
for any positive integer n.
Note that $n!\,s_{n}$ is always an integer. Assume that $e^{-1}$ is rational, so $e^{-1} = p/q$ where p and q are co-prime integers and q ≠ 0. It is possible to appropriately choose n so that $n!\,e^{-1}$ is an integer, i.e. by taking n ≥ q. Hence, for this choice, the difference between $n!\,e^{-1}$ and $n!\,s_{n}$ would be an integer. But from the above inequality, that is not possible. So, $e^{-1}$ is irrational. This means that e is irrational.
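The inequality used in this argument can likewise be checked numerically (again only an illustration; the function name and the 60-term approximation of $e^{-1}$ are arbitrary choices):

```python
from fractions import Fraction
from math import factorial

def s(n):
    """Partial sum s_n of the alternating series for 1/e."""
    return sum(Fraction((-1) ** k, factorial(k)) for k in range(n + 1))

inv_e = s(60)  # 1/e to very high precision (error far below 1/61!)

for n in (1, 3, 6, 10):
    gap = factorial(n) * abs(inv_e - s(n))
    # The argument needs 0 < n!*|1/e - s_n| < 1/(n+1).
    print(n, float(gap), 0 < gap < Fraction(1, n + 1))
```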
Generalizations
In 1840, Liouville published a proof of the fact that e^2 is irrational followed by a proof that e^2 is not a root of a second-degree polynomial with rational coefficients. This last fact implies that e^4 is irrational. His proofs are similar to Fourier's proof of the irrationality of e. In 1891, Hurwitz explained how it is possible to prove along the same line of ideas that e is not a root of a third-degree polynomial with rational coefficients, which implies that e^3 is irrational. More generally, e^q is irrational for any non-zero rational q.
Charles Hermite further proved that e is a transcendental number, in 1873, which means that e is not a root of any polynomial with rational coefficients, as is e^α for any non-zero algebraic α.
See also
Characterizations of the exponential function
Transcendental number, including a proof that e is transcendental
Lindemann–Weierstrass theorem
Proof that π is irrational
References
Diophantine approximation
Exponentials
Article proofs
E (mathematical constant)
Irrational numbers | Proof that e is irrational | [
"Mathematics"
] | 850 | [
"Irrational numbers",
"Mathematical objects",
"E (mathematical constant)",
"Article proofs",
"Numbers",
"Mathematical relations",
"Exponentials",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
348,853 | https://en.wikipedia.org/wiki/Tyranny%20of%20the%20majority | Tyranny of the majority refers to a situation in majority rule where the preferences and interests of the majority dominate the political landscape, potentially sidelining or repressing minority groups and using majority rule to take non-democratic actions. This idea has been discussed by various thinkers, including John Stuart Mill in On Liberty and Alexis de Tocqueville in Democracy in America.
A tyranny of the majority can ensue when democracy is distorted either by an excess of centralization or when the people abandon a wider perspective to "rule upon numbers, not upon rightness or excellence".
To reduce the risk of majority tyranny, modern democracies frequently have countermajoritarian institutions that restrict the ability of majorities to repress minorities and stymie political competition. In the context of a nation, constitutional limits on the powers of a legislative body, such as a bill of rights or a supermajority clause, have been used to counter the problem. A separation of powers (for example, legislative and executive majority actions subject to review by the judiciary) may also be implemented to prevent the problem from happening internally in a government.
In social choice, a tyranny-of-the-majority scenario can be formally defined as a situation where the candidate or decision preferred by a majority is greatly inferior (hence "tyranny") to the socially optimal candidate or decision according to some measure of excellence such as total utilitarianism or the egalitarian rule.
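As a toy illustration of this social-choice definition (the voters, options, and utility numbers below are invented for the example and are not from any study), a majority-preferred option can score worse than its rival under both total utilitarianism and the egalitarian, worst-off-voter rule:

```python
# Toy illustration with made-up utilities: option A is preferred by a
# 3-voter majority, but option B is better under both the total
# (utilitarian) and the egalitarian (max-min) measures of welfare.
utilities = {
    "A": [10, 10, 10, 0, 0],   # a majority slightly prefers A...
    "B": [9, 9, 9, 9, 9],      # ...but B is far better for the minority
}

def majority_winner(utilities):
    # Each voter votes for the option giving them the higher utility.
    votes = {option: 0 for option in utilities}
    n_voters = len(next(iter(utilities.values())))
    for i in range(n_voters):
        best = max(utilities, key=lambda option: utilities[option][i])
        votes[best] += 1
    return max(votes, key=votes.get)

print("majority winner:", majority_winner(utilities))                   # A
print("total utility:  ", {o: sum(u) for o, u in utilities.items()})    # B is higher
print("worst-off voter:", {o: min(u) for o, u in utilities.items()})    # B is higher
```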
Origin of the term
The origin of the term "tyranny of the majority" is commonly attributed to Alexis de Tocqueville, who used it in his book Democracy in America. It appears in Part 2 of the book in the title of Chapter 8, "What Moderates the Tyranny of the Majority in the United States' Absence of Administrative Centralization", and in the previous chapter in the names of sections such as "The Tyranny of the Majority" and "Effects of the Tyranny of the Majority on American National Character; the Courtier Spirit in the United States".
While the specific phrase "tyranny of the majority" is frequently attributed to various Founding Fathers of the United States, only John Adams is known to have used it, arguing against government by a single unicameral elected body. Writing in defense of the Constitution in March 1788, Adams referred to "a single sovereign assembly, each member…only accountable to his constituents; and the majority of members who have been of one party" as a "tyranny of the majority", attempting to highlight the need instead for "a mixed government, consisting of three branches". Constitutional author James Madison presented a similar idea in Federalist 10, citing the destabilizing effect of "the superior force of an interested and overbearing majority" on a government, though the essay as a whole focuses on the Constitution's efforts to mitigate factionalism generally.
Later users include Edmund Burke, who wrote in a 1790 letter that "The tyranny of a multitude is a multiplied tyranny." It was further popularised by John Stuart Mill, influenced by Tocqueville, in On Liberty (1859). Friedrich Nietzsche used the phrase in the first sequel to Human, All Too Human (1879). Ayn Rand wrote that individual rights are not subject to a public vote, and that the political function of rights is precisely to protect minorities from oppression by majorities and "the smallest minority on earth is the individual". In Herbert Marcuse's 1965 essay Repressive Tolerance, he said "tolerance is extended to policies, conditions, and modes of behavior which should not be tolerated because they are impeding, if not destroying, the chances of creating an existence without fear and misery" and that "this sort of tolerance strengthens the tyranny of the majority against which authentic liberals protested". In 1994, legal scholar Lani Guinier used the phrase as the title for a collection of law review articles.
A term used in Classical and Hellenistic Greece for oppressive popular rule was ochlocracy ("mob rule"); tyranny meant rule by one man—whether undesirable or not.
Examples
Herbert Spencer, in "The Right to Ignore the State" (1851), pointed out the problem with the following example:
Concurrent majority
Secession of the Confederate States of America from the United States was anchored by a version of subsidiarity, found within the doctrines of John C. Calhoun. Antebellum South Carolina utilized Calhoun's doctrines in the Old South as public policy, adopted from his theory of concurrent majority. This "localism" strategy was presented as a mechanism to circumvent Calhoun's perceived tyranny of the majority in the United States. Each state presumptively held the Sovereign power to block federal laws that infringed upon states' rights, autonomously. Calhoun's policies directly influenced Southern public policy regarding slavery, and undermined the Supremacy Clause power granted to the federal government. The subsequent creation of the Confederate States of America catalyzed the American Civil War.
19th century concurrent majority theories held logical counterbalances to standard tyranny of the majority harms originating from Antiquity and onward. Essentially, illegitimate or temporary coalitions that held majority volume could disproportionately outweigh and hurt any significant minority, by nature and sheer volume. Calhoun's contemporary doctrine was presented as one of limitation within American democracy to prevent traditional tyranny, whether actual or imagined.
Viewpoints
James Madison
Federalist No. 10 "The Same Subject Continued: The Union as a Safeguard Against Domestic Faction and Insurrection" (November 23, 1787):
The inference to which we are brought is, that the CAUSES of faction cannot be removed, and that relief is only to be sought in the means of controlling its EFFECTS. If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote. It may clog the administration, it may convulse the society; but it will be unable to execute and mask its violence under the forms of the Constitution. When a majority is included in a faction, the form of popular government, on the other hand, enables it to sacrifice to its ruling passion or interest both the public good and the rights of other citizens. To secure the public good and private rights against the danger of such a faction, and at the same time to preserve the spirit and the form of popular government, is then the great object to which our inquiries are directed...By what means is this object attainable? Evidently by one of two only. Either the existence of the same passion or interest in a majority at the same time must be prevented, or the majority, having such coexistent passion or interest, must be rendered, by their number and local situation, unable to concert and carry into effect schemes of oppression.
Alexis de Tocqueville
With respect to American democracy, Tocqueville, in his book Democracy in America, says:
Critique by Robert A. Dahl
Robert A. Dahl argues that the tyranny of the majority is a spurious dilemma (p. 171):
Trampling the rights of minorities
Regarding recent American politics (specifically initiatives), Donovan et al. argue that:
Public choice theory
The notion that, in a democracy, the greatest concern is that the majority will tyrannise and exploit diverse smaller interests, has been criticised by Mancur Olson in The Logic of Collective Action, who argues instead that narrow and well organised minorities are more likely to assert their interests over those of the majority. Olson argues that when the benefits of political action (e.g., lobbying) are spread over fewer agents, there is a stronger individual incentive to contribute to that political activity. Narrow groups, especially those who can reward active participation to their group goals, might therefore be able to dominate or distort political process, a process studied in public choice theory.
Class studies
Tyranny of the majority has also been prevalent in some class studies. Rahim Baizidi uses the concept of "democratic suppression" to analyze the tyranny of the majority in economic classes. According to this, the majority of the upper and middle classes, together with a small portion of the lower class, form the majority coalition of conservative forces in the society.
Vote trading
Anti-federalists of public choice theory point out that vote trading can protect minority interests from majorities in representative democratic bodies such as legislatures. They continue that direct democracy, such as statewide propositions on ballots, does not offer such protections.
See also
References
Further reading
Nyirkos, Tamas (2018). The Tyranny of the Majority: History, Concepts, and Challenges. New York: Routledge.
Volk, Kyle G. (2014). Moral Minorities and the Making of American Democracy. New York: Oxford University Press.
Curriculum on Alexis de Tocqueville on Tyranny of the Majority from EDSITEment from the National Endowment for the Humanities
Authoritarianism
Democracy
Libertarian terms
Political terminology
Political theories
Abuse
Majority
Majority–minority relations
Concepts in political philosophy | Tyranny of the majority | [
"Biology"
] | 1,865 | [
"Abuse",
"Behavior",
"Aggression",
"Human behavior"
] |
348,860 | https://en.wikipedia.org/wiki/Projective%20representation | In the field of representation theory in mathematics, a projective representation of a group G on a vector space V over a field F is a group homomorphism from G to the projective linear group

PGL(V) = GL(V) / F∗,

where GL(V) is the general linear group of invertible linear transformations of V over F, and F∗ is the normal subgroup consisting of nonzero scalar multiples of the identity transformation (see Scalar transformation).

In more concrete terms, a projective representation of G is a collection of operators ρ(g) ∈ GL(V), g ∈ G, satisfying the homomorphism property up to a constant:

ρ(g) ρ(h) = c(g, h) ρ(gh),

for some constant c(g, h) ∈ F∗. Equivalently, a projective representation of G is a collection of operators ρ̃(g) ⊂ GL(V), g ∈ G, such that ρ̃(g) ρ̃(h) = ρ̃(gh). Note that, in this notation, ρ̃(g) is a set of linear operators related by multiplication with some nonzero scalar.

If it is possible to choose a particular representative ρ(g) ∈ ρ̃(g) in each family of operators in such a way that the homomorphism property is satisfied on the nose, rather than just up to a constant, then we say that ρ̃ can be "de-projectivized", or that ρ̃ can be "lifted to an ordinary representation". More concretely, we thus say that ρ̃ can be de-projectivized if there are ρ(g) ∈ ρ̃(g) for each g ∈ G such that ρ(g) ρ(h) = ρ(gh). This possibility is discussed further below.
Linear representations and projective representations
One way in which a projective representation can arise is by taking a linear group representation of G on V and applying the quotient map

GL(V) → PGL(V) = GL(V) / F∗,

which is the quotient by the subgroup F∗ of scalar transformations (diagonal matrices with all diagonal entries equal). The interest for algebra is in the process in the other direction: given a projective representation, try to 'lift' it to an ordinary linear representation. A general projective representation ρ: G → PGL(V) cannot be lifted to a linear representation G → GL(V), and the obstruction to this lifting can be understood via group cohomology, as described below.
However, one can lift a projective representation ρ of G to a linear representation of a different group H, which will be a central extension of G. The group H is the subgroup of G × GL(V) defined as follows:

H = { (g, A) ∈ G × GL(V) : π(A) = ρ(g) },

where π is the quotient map of GL(V) onto PGL(V). Since ρ is a homomorphism, it is easy to check that H is, indeed, a subgroup of G × GL(V). If the original projective representation ρ is faithful, then H is isomorphic to the preimage in GL(V) of ρ(G).
We can define a homomorphism φ: H → G by setting φ(g, A) = g. The kernel of φ is:

{ (e, cI) : c ∈ F∗ },

which is contained in the center of H. It is clear also that φ is surjective, so that H is a central extension of G. We can also define an ordinary representation σ of H by setting σ(g, A) = A. The ordinary representation σ of H is a lift of the projective representation ρ of G in the sense that:

π(σ(g, A)) = ρ(g) = ρ(φ(g, A)).
If is a perfect group there is a single universal perfect central extension of that can be used.
Group cohomology
The analysis of the lifting question involves group cohomology. Indeed, if one fixes for each g in G a lifted element L(g) in GL(V), lifting from PGL(V) back to GL(V), the lifts then satisfy

L(g) L(h) = c(g, h) L(gh)

for some scalar c(g, h) in F∗. It follows that the 2-cocycle or Schur multiplier c satisfies the cocycle equation

c(h, k) c(g, hk) = c(g, h) c(gh, k)

for all g, h, k in G. This c depends on the choice of the lift L; a different choice of lift L′(g) = d(g) L(g) will result in a different cocycle

c′(g, h) = d(gh)^(-1) d(g) d(h) c(g, h)

cohomologous to c. Thus L defines a unique class in H²(G, F∗). This class might not be trivial. For example, in the case of the symmetric group and alternating group, Schur established that there is exactly one non-trivial class of Schur multiplier, and completely determined all the corresponding irreducible representations.
In general, a nontrivial class leads to an extension problem for . If is correctly extended we obtain a linear representation of the extended group, which induces the original projective representation when pushed back down to . The solution is always a central extension. From Schur's lemma, it follows that the irreducible representations of central extensions of , and the irreducible projective representations of , are essentially the same objects.
First example: discrete Fourier transform
Consider the field Z_p of integers mod p, where p is prime, and let V be the p-dimensional space of complex-valued functions on Z_p. For each a in Z_p, define two operators, T_a and S_a, on V as follows:

(T_a f)(x) = f(x + a),   (S_a f)(x) = e^(2πi a x / p) f(x).

We write the formula for T_a as if a and x were integers, but it is easily seen that the result only depends on the value of a and x mod p. The operator T_a is a translation, while S_a is a shift in frequency space (that is, it has the effect of translating the discrete Fourier transform of f).

One may easily verify that for any a and b in Z_p, the operators T_a and S_b commute up to multiplication by a constant:

T_a S_b = e^(2πi a b / p) S_b T_a.

We may therefore define a projective representation ρ of Z_p × Z_p as follows:

ρ(a, b) = [T_a S_b],

where [A] denotes the image of an operator A in the quotient group PGL(V). Since T_a and S_b commute up to a constant, ρ is easily seen to be a projective representation. On the other hand, since T_a and S_b do not actually commute—and no nonzero multiples of them will commute—ρ cannot be lifted to an ordinary (linear) representation of Z_p × Z_p.
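The commutation relation is easy to check numerically. Below is a small Python sketch, not part of the article, that builds T_a and S_b as p × p matrices with the sign conventions written above (one common choice; conventions vary) and verifies that they commute only up to a p-th root of unity; the function names and the choice p = 5 are illustrative.

```python
import numpy as np

p = 5  # a prime; V = C^p, the space of complex-valued functions on Z/p

def T(a):
    """Translation operator: (T_a f)(x) = f(x + a)."""
    M = np.zeros((p, p), dtype=complex)
    for x in range(p):
        M[x, (x + a) % p] = 1.0
    return M

def S(b):
    """Modulation operator: (S_b f)(x) = exp(2*pi*i*b*x/p) * f(x)."""
    return np.diag(np.exp(2j * np.pi * b * np.arange(p) / p))

# Check the commutation relation T_a S_b = exp(2*pi*i*a*b/p) S_b T_a
for a in range(p):
    for b in range(p):
        lhs = T(a) @ S(b)
        rhs = np.exp(2j * np.pi * a * b / p) * S(b) @ T(a)
        assert np.allclose(lhs, rhs)

# The operators commute only up to a phase, so (a, b) -> [T_a S_b] is a
# projective representation of Z_p x Z_p that cannot be lifted to an
# ordinary (linear) representation.
print("commutation up to a p-th root of unity verified for p =", p)
```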
Since the projective representation is faithful, the central extension of obtained by the construction in the previous section is just the preimage in of the image of . Explicitly, this means that is the group of all operators of the form
for . This group is a discrete version of the Heisenberg group and is isomorphic to the group of matrices of the form
with .
Projective representations of Lie groups
Studying projective representations of Lie groups leads one to consider true representations of their central extensions (see ). In many cases of interest it suffices to consider representations of covering groups. Specifically, suppose is a connected cover of a connected Lie group , so that for a discrete central subgroup of . (Note that is a special sort of central extension of .) Suppose also that is an irreducible unitary representation of (possibly infinite dimensional). Then by Schur's lemma, the central subgroup will act by scalar multiples of the identity. Thus, at the projective level, will descend to . That is to say, for each , we can choose a preimage of in , and define a projective representation of by setting
,
where denotes the image in of an operator . Since is contained in the center of and the center of acts as scalars, the value of does not depend on the choice of .
The preceding construction is an important source of examples of projective representations. Bargmann's theorem (discussed below) gives a criterion under which every irreducible projective unitary representation of arises in this way.
Projective representations of SO(3)
A physically important example of the above construction comes from the case of the rotation group SO(3), whose universal cover is SU(2). According to the representation theory of SU(2), there is exactly one irreducible representation of SU(2) in each dimension. When the dimension is odd (the "integer spin" case), the representation descends to an ordinary representation of SO(3). When the dimension is even (the "fractional spin" case), the representation does not descend to an ordinary representation of SO(3) but does (by the result discussed above) descend to a projective representation of SO(3). Such projective representations of SO(3) (the ones that do not come from ordinary representations) are referred to as "spinorial representations", whose elements (vectors) are called spinors.
By an argument discussed below, every finite-dimensional, irreducible projective representation of SO(3) comes from a finite-dimensional, irreducible ordinary representation of SU(2).
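As a concrete illustration of the fractional-spin case, the sketch below (not from the article; the conventions and names are assumptions) uses the two-dimensional, spin-1/2 representation of SU(2) and checks numerically that a rotation by 2π about the z-axis is represented by minus the identity, so the representation descends to SO(3) only projectively.

```python
import numpy as np

# Pauli matrix; in the spin-1/2 representation the z angular momentum is sigma_z / 2
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(theta):
    """SU(2) element covering a rotation by angle theta about the z-axis."""
    # exp(-i*theta*sigma_z/2) = cos(theta/2) I - i sin(theta/2) sigma_z
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_z

# A rotation by 2*pi is the identity in SO(3), but spin-1/2 maps it to -I:
assert np.allclose(U(2 * np.pi), -I2)
# Only after a 4*pi rotation does the operator return to +I:
assert np.allclose(U(4 * np.pi), I2)

# At the projective level, where operators differing by a phase are identified,
# U(2*pi) and U(0) are the same element, so the spin-1/2 representation defines
# a projective (spinorial) representation of SO(3) but not an ordinary one.
print(np.round(U(2 * np.pi), 6))
```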
Examples of covers, leading to projective representations
Notable cases of covering groups giving interesting projective representations:
The special orthogonal group SO(n, F) is doubly covered by the Spin group Spin(n, F).
In particular, the group SO(3) (the rotation group in 3 dimensions) is doubly covered by SU(2). This has important applications in quantum mechanics, as the study of representations of SU(2) leads to a nonrelativistic (low-velocity) theory of spin.
The group SO+(3;1), isomorphic to the Möbius group, is likewise doubly covered by SL2(C). Both are supergroups of aforementioned SO(3) and SU(2) respectively and form a relativistic spin theory.
The universal cover of the Poincaré group is a double cover (the semidirect product of SL2(C) with R4). The irreducible unitary representations of this cover give rise to projective representations of the Poincaré group, as in Wigner's classification. Passing to the cover is essential, in order to include the fractional spin case.
The orthogonal group O(n) is double covered by the Pin group Pin±(n).
The symplectic group Sp(2n)=Sp(2n, R) (not to be confused with the compact real form of the symplectic group, sometimes also denoted by Sp(m)) is double covered by the metaplectic group Mp(2n). An important projective representation of Sp(2n) comes from the metaplectic representation of Mp(2n).
Finite-dimensional projective unitary representations
In quantum physics, symmetry of a physical system is typically implemented by means of a projective unitary representation of a Lie group on the quantum Hilbert space, that is, a continuous homomorphism
where is the quotient of the unitary group by the operators of the form . The reason for taking the quotient is that physically, two vectors in the Hilbert space that are proportional represent the same physical state. [That is to say, the space of (pure) states is the set of equivalence classes of unit vectors, where two unit vectors are considered equivalent if they are proportional.] Thus, a unitary operator that is a multiple of the identity actually acts as the identity on the level of physical states.
A finite-dimensional projective representation of then gives rise to a projective unitary representation of the Lie algebra of . In the finite-dimensional case, it is always possible to "de-projectivize" the Lie-algebra representation simply by choosing a representative for each having trace zero. In light of the homomorphisms theorem, it is then possible to de-projectivize itself, but at the expense of passing to the universal cover of . That is to say, every finite-dimensional projective unitary representation of arises from an ordinary unitary representation of by the procedure mentioned at the beginning of this section.
Specifically, since the Lie-algebra representation was de-projectivized by choosing a trace-zero representative, every finite-dimensional projective unitary representation of arises from a determinant-one ordinary unitary representation of (i.e., one in which each element of acts as an operator with determinant one). If is semisimple, then every element of is a linear combination of commutators, in which case every representation of is by operators with trace zero. In the semisimple case, then, the associated linear representation of is unique.
Conversely, if is an irreducible unitary representation of the universal cover of , then by Schur's lemma, the center of acts as scalar multiples of the identity. Thus, at the projective level, descends to a projective representation of the original group . Thus, there is a natural one-to-one correspondence between the irreducible projective representations of and the irreducible, determinant-one ordinary representations of . (In the semisimple case, the qualifier "determinant-one" may be omitted, because in that case, every representation of is automatically determinant one.)
An important example is the case of SO(3), whose universal cover is SU(2). Now, the Lie algebra is semisimple. Furthermore, since SU(2) is a compact group, every finite-dimensional representation of it admits an inner product with respect to which the representation is unitary. Thus, the irreducible projective representations of SO(3) are in one-to-one correspondence with the irreducible ordinary representations of SU(2).
Infinite-dimensional projective unitary representations: the Heisenberg case
The results of the previous subsection do not hold in the infinite-dimensional case, simply because the trace of is typically not well defined. Indeed, the result fails: Consider, for example, the translations in position space and in momentum space for a quantum particle moving in , acting on the Hilbert space . These operators are defined as follows:
for all . These operators are simply continuous versions of the operators and described in the "First example" section above. As in that section, we can then define a projective unitary representation of :
because the operators commute up to a phase factor. But no choice of the phase factors will lead to an ordinary unitary representation, since translations in position do not commute with translations in momentum (and multiplying by a nonzero constant will not change this). These operators do, however, come from an ordinary unitary representation of the Heisenberg group, which is a one-dimensional central extension of . (See also the Stone–von Neumann theorem.)
Infinite-dimensional projective unitary representations: Bargmann's theorem
On the other hand, Bargmann's theorem states that if the second Lie algebra cohomology group of is trivial, then every projective unitary representation of can be de-projectivized after passing to the universal cover. More precisely, suppose we begin with a projective unitary representation of a Lie group . Then the theorem states that can be lifted to an ordinary unitary representation of the universal cover of . This means that maps each element of the kernel of the covering map to a scalar multiple of the identity—so that at the projective level, descends to —and that the associated projective representation of is equal to .
The theorem does not apply to the group —as the previous example shows—because the second cohomology group of the associated commutative Lie algebra is nontrivial. Examples where the result does apply include semisimple groups (e.g., SL(2,R)) and the Poincaré group. This last result is important for Wigner's classification of the projective unitary representations of the Poincaré group.
The proof of Bargmann's theorem goes by considering a central extension of , constructed similarly to the section above on linear representations and projective representations, as a subgroup of the direct product group , where is the Hilbert space on which acts and is the group of unitary operators on . The group is defined as
As in the earlier section, the map given by is a surjective homomorphism whose kernel is so that is a central extension of . Again as in the earlier section, we can then define a linear representation of by setting . Then is a lift of in the sense that , where is the quotient map from to .
A key technical point is to show that is a Lie group. (This claim is not so obvious, because if is infinite dimensional, the group is an infinite-dimensional topological group.) Once this result is established, we see that is a one-dimensional Lie group central extension of , so that the Lie algebra of is also a one-dimensional central extension of (note here that the adjective "one-dimensional" does not refer to and , but rather to the kernel of the projection map from those objects onto and respectively). But the cohomology group may be identified with the space of one-dimensional (again, in the aforementioned sense) central extensions of ; if is trivial then every one-dimensional central extension of is trivial. In that case, is just the direct sum of with a copy of the real line. It follows that the universal cover of must be just a direct product of the universal cover of with a copy of the real line. We can then lift from to (by composing with the covering map) and finally restrict this lift to the universal cover of .
See also
Affine representation
Group action
Central extension
Particle physics and representation theory
Spin-½
Spinor
Symmetry in quantum mechanics
Heisenberg group
Notes
References
Homological algebra
Group theory
Representation theory
Representation theory of groups | Projective representation | [
"Mathematics"
] | 3,284 | [
"Mathematical structures",
"Group theory",
"Fields of abstract algebra",
"Category theory",
"Representation theory",
"Homological algebra"
] |
348,869 | https://en.wikipedia.org/wiki/North%20Atlantic%20oscillation | The North Atlantic Oscillation (NAO) is a weather phenomenon over the North Atlantic Ocean of fluctuations in the difference of atmospheric pressure at sea level (SLP) between the Icelandic Low and the Azores High. Through fluctuations in the strength of the Icelandic Low and the Azores High, it controls the strength and direction of westerly winds and location of storm tracks across the North Atlantic.
The NAO was discovered through several studies in the late 19th and early 20th centuries. Unlike the El Niño–Southern Oscillation phenomenon in the Pacific Ocean, the NAO is a largely atmospheric mode. It is one of the most important manifestations of climate fluctuations in the North Atlantic and surrounding humid climates.
The North Atlantic Oscillation is closely related to the Arctic oscillation (AO) (or Northern Annular Mode (NAM)), but should not be confused with the Atlantic multidecadal oscillation (AMO).
Definition
The NAO has multiple possible definitions. The easiest to understand are those based on measuring the seasonal average air pressure difference between stations, such as:
Lisbon and Stykkishólmur/Reykjavík
Ponta Delgada, Azores and Stykkishólmur/Reykjavík
Azores (1865–2002), Gibraltar (1821–2007), and Reykjavík
These definitions all have in common the same northern point (because this is the only station in the region with a long record) in Iceland; and various southern points. All are attempting to capture the same pattern of variation, by choosing stations in the "eye" of the two stable pressure areas, the Azores High and the Icelandic Low (shown in the graphic).
A more complex definition, only possible with more complete modern records generated by numerical weather prediction, is based on the principal empirical orthogonal function (EOF) of surface pressure. This definition has a high degree of correlation with the station-based definition. This then leads onto a debate as to whether the NAO is distinct from the AO/NAM, and if not, which of the two is to be considered the most physically based expression of atmospheric structure (as opposed to the one that most clearly falls out of mathematical expression).
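A minimal sketch of the station-based definition follows; it assumes two seasonal-mean sea-level-pressure series (hypothetical values here) and standardises each before differencing, which is one common convention. Operational indices differ in the stations, base periods and normalisation used.

```python
import numpy as np

def nao_station_index(slp_south, slp_north):
    """Station-based NAO index from two sea-level-pressure series (hPa).

    slp_south: seasonal-mean SLP at a southern station (e.g. Azores/Lisbon/Gibraltar).
    slp_north: seasonal-mean SLP at Stykkisholmur/Reykjavik for the same seasons.
    Each series is standardised (zero mean, unit variance) before differencing.
    """
    south = np.asarray(slp_south, dtype=float)
    north = np.asarray(slp_north, dtype=float)
    south_std = (south - south.mean()) / south.std()
    north_std = (north - north.mean()) / north.std()
    # Positive values -> stronger Azores High / deeper Icelandic Low (NAO+)
    return south_std - north_std

# Illustrative (made-up) winter-mean pressures for five winters:
azores = [1024.1, 1019.5, 1022.8, 1017.9, 1023.4]
iceland = [995.2, 1004.8, 998.1, 1006.3, 996.7]
print(np.round(nao_station_index(azores, iceland), 2))
```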
Description
Westerly winds blowing across the Atlantic bring moist air into Europe. In years when westerlies are strong, summers are cool, winters are mild and rain is frequent. If westerlies are suppressed, the temperature is more extreme in summer and winter leading to heat waves, deep freezes and reduced rainfall.
A permanent low-pressure system over Iceland (the Icelandic Low) and a permanent high-pressure system over the Azores (the Azores High) control
the direction and strength of westerly winds into Europe. The relative strengths and positions of these systems vary from year to year and this variation is known as the NAO. A large difference in the pressure at the two stations (a high index year, denoted NAO+) leads to increased westerlies and, consequently, cool summers and mild and wet winters in Central Europe and its Atlantic facade. In contrast, if the index is low (NAO-), westerlies are suppressed, northern European areas suffer cold dry winters and storms track southwards toward the Mediterranean Sea. This brings increased storm activity and rainfall to southern Europe and North Africa.
Especially during the months of November to April, the NAO is responsible for much of the variability of weather in the North Atlantic region, affecting wind speed and wind direction changes, changes in temperature and moisture distribution and the intensity, number and track of storms. Research now suggests that the NAO may be more predictable than previously assumed and skillful winter forecasts may be possible for the NAO.
There is some debate as to how much the NAO affects short-term weather over North America. While most agree that its impact is much smaller over the United States than over Western Europe, the NAO is also believed to affect the weather over much of the upper central and eastern areas of North America. During the winter, when the index is high (NAO+), the Azores High draws a stronger south-westerly circulation over the eastern half of the North American continent, which prevents Arctic air from plunging southward (into the United States south of 40° latitude). In combination with El Niño, this effect can produce significantly warmer winters over the upper Midwest and New England, but the impact farther south is debatable. Conversely, when the NAO index is low (NAO-), the upper central and northeastern portions of the United States can experience more frequent winter cold outbreaks than normal, with associated heavy snowstorms. In summer, a strong NAO- has been thought to contribute to a weakened jet stream that normally pulls zonal systems into the Atlantic Basin, and hence to excessively long-lasting heat waves over Europe; however, recent studies have not found evidence for these associations.
More recent studies have shown that the components of the NAO (the strength and locations of its pressure centers) are more useful for investigating relationships with seasonal and sub-seasonal climate variability over Europe, North America and the Mediterranean region.
Effects on North Atlantic sea level
Under a positive NAO index (NAO+), regional reduction in atmospheric pressure results in a regional rise in sea level due to the 'inverse barometer effect'. This effect is important to both the interpretation of historic sea level records and predictions of future sea level trends, as mean pressure fluctuations of the order of millibars can lead to sea level fluctuations of the order of centimeters.
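A rough worked example of the inverse barometer relation, eta ≈ -ΔP/(ρg), makes the millibar-to-centimetre scaling explicit; the density and gravity values below are standard round numbers, not figures taken from the article.

```python
# Inverse barometer effect: a local sea-level pressure anomaly dP (in Pa)
# produces a static sea-level response of roughly eta = -dP / (rho * g).
rho = 1025.0   # typical seawater density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2

def inverse_barometer(dp_hpa):
    """Sea-level change (cm) for a pressure anomaly given in hPa (= millibars)."""
    dp_pa = dp_hpa * 100.0
    return -dp_pa / (rho * g) * 100.0  # metres -> centimetres

print(inverse_barometer(1.0))   # about -1 cm of sea level per +1 hPa
print(inverse_barometer(-5.0))  # a 5 hPa pressure drop raises sea level by ~5 cm
```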
North Atlantic hurricanes
By controlling the position of the Azores High, the NAO also influences the direction of general storm paths for major North Atlantic tropical cyclones: a position of the Azores High farther to the south tends to force storms into the Gulf of Mexico, whereas a northern position allows them to track up the North American Atlantic Coast.
As paleotempestological research has shown, few major hurricanes struck the Gulf coast during 3000–1400 BC and again during the most recent millennium. These quiescent intervals were separated by a hyperactive period during 1400 BC – 1000 AD, when the Gulf coast was struck frequently by catastrophic hurricanes and their landfall probabilities increased by 3–5 times.
Ecological effects
Until recently, the NAO had been in an overall more positive regime since the late 1970s, bringing colder conditions to the North-West Atlantic, which has been linked with the thriving populations of Labrador Sea snow crabs, which have a low temperature optimum.
The NAO+ warming of the North Sea reduces survival of cod larvae which are at the upper limits of their temperature tolerance, as does the cooling in the Labrador Sea, where the cod larvae are at their lower temperature limits. Though not the critical factor, the NAO+ peak in the early 1990s may have contributed to the collapse of the Newfoundland cod fishery.
In southwestern Europe, NAO- events are associated with increased aeolian activity.
On the East Coast of the United States an NAO+ causes warmer temperatures and increased rainfall, and thus warmer, less saline surface water. This suppresses nutrient-rich upwelling, which has reduced productivity; Georges Bank and the Gulf of Maine have been affected by the resulting reduced cod catch.
The strength of the NAO is also a determinant in the population fluctuations of the intensively studied Soay sheep.
Strangely enough, Jonas and Joern (2007) found a strong signal between NAO and grasshopper species composition in the tall grass prairies of the midwestern United States. They found that, even though NAO does not significantly affect the weather in the midwest, there was a significant increase in abundance of common grasshopper species (i.e. Hypochlora alba, Hesperotettix spp., Phoetaliotes nebrascensis, M. scudderi, M. keeleri, and Pseudopomala brachyptera) following winters during the positive phase of NAO and a significant increase in the abundance of less common species (i.e. Campylacantha olivacea, Melanoplus sanguinipes, Mermiria picta, Melanoplus packardii, and Boopedon gracile) following winters during a negative phase of the NAO. This is thought to be the first study showing a link between NAO and terrestrial insects in North America.
The NAO's ecological effects extend as far as the Tibetan Plateau, where increases in aridity resulting in significant forest mortality and intensification of dust storms have been linked to NAO- events.
Winter of 2009–10 in Europe
The winter of 2009–10 in Europe was unusually cold. It is hypothesized that this may be due to a combination of low solar activity, a warm phase of the El Niño–Southern Oscillation and a strong easterly phase of the Quasi-Biennial Oscillation all occurring simultaneously. The Met Office reported that the UK, for example, had experienced its coldest winter for 30 years. This coincided with an exceptionally negative phase of the NAO. Analysis published in mid-2010 confirmed that the concurrent 'El Niño' event and the rare occurrence of an extremely negative NAO were involved.
However, during the winter of 2010–11 in Northern and Western Europe, the Icelandic Low, typically positioned west of Iceland and east of Greenland, appeared regularly to the east of Iceland and so allowed exceptionally cold air into Europe from the Arctic. A strong area of high pressure was initially situated over Greenland, reversing the normal wind pattern in the northwestern Atlantic, creating a blocking pattern driving warm air into northeastern Canada and cold air into Western Europe, as was the case during the previous winter. This occurred during a La Niña season, and is connected to the rare Arctic dipole anomaly.
In the northwestern part of the Atlantic, both of these winters were mild, especially 2009–2010, which was the warmest recorded in Canada. Temperatures in the winter of 2010–2011 were particularly above normal in the northern Arctic regions of that country.
The probability of cold winters with much snow in Central Europe rises when the Arctic is covered by less sea ice in summer. Scientists of the Potsdam Research Unit of the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association have decrypted a mechanism in which a shrinking summertime sea ice cover changes the air pressure zones in the Arctic atmosphere and effects on European winter weather.
If there is a particularly large-scale melt of Arctic sea ice in summer, as observed in recent years, two important effects are intensified. Firstly, the retreat of the light ice surface reveals the darker ocean, causing it to warm more in summer from solar radiation (ice–albedo feedback mechanism). Secondly, the diminished ice cover can no longer prevent the heat stored in the ocean from being released into the atmosphere (lid effect). As a result of the decreased sea ice cover, the air is warmed more than it used to be, particularly in autumn and winter, because during this period the ocean is warmer than the atmosphere.
The warming of the air near the ground leads to rising motions and the atmosphere becomes less stable, altering large-scale air pressure patterns. One of these patterns is the air pressure difference between the Arctic and mid-latitudes: the Arctic oscillation, with the Azores High and Icelandic Low familiar from weather reports. If this difference is high, a strong westerly wind results which in winter carries warm and humid Atlantic air masses right down to Europe. In the negative phase, when pressure differences are low, cold Arctic air can more easily penetrate southward through Europe without being interrupted by the usual westerlies. Model calculations show that the air pressure difference is weakened in the winter following a summer of decreased Arctic sea ice cover, enabling Arctic cold to push down to mid-latitudes.
Winter of 2015–16 in Europe
Despite one of the strongest El Niño events recorded in the Pacific Ocean, a largely positive North Atlantic Oscillation prevailed over Europe during the winter of 2015–2016. For example, Cumbria in England registered one of the wettest months on record. The Maltese Islands in the Mediterranean registered one of the driest years ever recorded up to the beginning of March, with a national average of only 235 mm and some areas registering less than 200 mm.
See also
Arctic oscillation
Antarctic oscillation
Anticyclone
Atlantic Ocean
Azores High
El Niño–Southern Oscillation
Global warming
Icelandic Low
Latitude of the Gulf Stream and the Gulf Stream north wall index
North Atlantic Current
North Atlantic Gyre
Pacific decadal oscillation
Pacific–North American teleconnection pattern
Quasi-biennial oscillation
References
External links
Current NAO observations and forecasts
UK's climatic research unit information sheet on the NAO
Overview paper on the NAO from the USA's National Center for Atmospheric Research Hurrell at al, ~2002, 35pp
The North Atlantic Oscillation by Martin Visbeck
North Atlantic Oscillation (NAO) Index 1850 - 2013 by Jianping Li
Daily North Atlantic Oscillation (NAO) Index 1948 - 2013 by Jianping Li
Overview of Climate Indices
Atlantic Ocean
Regional climate effects
Physical oceanography
Climate oscillations | North Atlantic oscillation | [
"Physics"
] | 2,657 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
348,898 | https://en.wikipedia.org/wiki/Fatigue%20%28material%29 | In materials science, fatigue is the initiation and propagation of cracks in a material due to cyclic loading. Once a fatigue crack has initiated, it grows a small amount with each loading cycle, typically producing striations on some parts of the fracture surface. The crack will continue to grow until it reaches a critical size, which occurs when the stress intensity factor of the crack exceeds the fracture toughness of the material, producing rapid propagation and typically complete fracture of the structure.
Fatigue has traditionally been associated with the failure of metal components which led to the term metal fatigue. In the nineteenth century, the sudden failing of metal railway axles was thought to be caused by the metal crystallising because of the brittle appearance of the fracture surface, but this has since been disproved. Most materials, such as composites, plastics and ceramics, seem to experience some sort of fatigue-related failure.
To aid in predicting the fatigue life of a component, fatigue tests are carried out using coupons to measure the rate of crack growth by applying constant amplitude cyclic loading and averaging the measured growth of a crack over thousands of cycles. However, there are also a number of special cases that need to be considered where the rate of crack growth is significantly different compared to that obtained from constant amplitude testing, such as the reduced rate of growth that occurs for small loads near the threshold or after the application of an overload, and the increased rate of crack growth associated with short cracks or after the application of an underload.
If the loads are above a certain threshold, microscopic cracks will begin to initiate at stress concentrations such as holes, persistent slip bands (PSBs), composite interfaces or grain boundaries in metals. The stress values that cause fatigue damage are typically much less than the yield strength of the material.
Stages of fatigue
Historically, fatigue has been separated into regions of high cycle fatigue, which requires more than 10^4 cycles to failure and in which stress is low and deformation primarily elastic, and low cycle fatigue, in which there is significant plasticity. Experiments have shown that low cycle fatigue is also governed by crack growth.
Fatigue failures, both for high and low cycles, all follow the same basic steps: crack initiation, crack growth stages I and II, and finally ultimate failure. To begin the process, cracks must nucleate within a material. This process can occur either at stress risers in metallic samples or at areas with a high void density in polymer samples. These cracks propagate slowly at first during stage I crack growth along crystallographic planes, where shear stresses are highest. Once the cracks reach a critical size they propagate quickly during stage II crack growth in a direction perpendicular to the applied force. These cracks can eventually lead to the ultimate failure of the material, often in a brittle catastrophic fashion.
Crack initiation
The formation of initial cracks preceding fatigue failure is a separate process consisting of four discrete steps in metallic samples. The material will develop cell structures and harden in response to the applied load. This causes the amplitude of the applied stress to increase given the new restraints on strain. These newly formed cell structures will eventually break down with the formation of persistent slip bands (PSBs). Slip in the material is localized at these PSBs, and the exaggerated slip can now serve as a stress concentrator for a crack to form. Nucleation and growth of a crack to a detectable size accounts for most of the cracking process. It is for this reason that cyclic fatigue failures seem to occur so suddenly where the bulk of the changes in the material are not visible without destructive testing. Even in normally ductile materials, fatigue failures will resemble sudden brittle failures.
PSB-induced slip planes result in intrusions and extrusions along the surface of a material, often occurring in pairs. This slip is not a microstructural change within the material, but rather a propagation of dislocations within the material. Instead of a smooth interface, the intrusions and extrusions will cause the surface of the material to resemble the edge of a deck of cards, where not all cards are perfectly aligned. Slip-induced intrusions and extrusions create extremely fine surface structures on the material. With surface structure size inversely related to stress concentration factors, PSB-induced surface slip can cause fractures to initiate.
These steps can also be bypassed entirely if the cracks form at a pre-existing stress concentrator such as from an inclusion in the material or from a geometric stress concentrator caused by a sharp internal corner or fillet.
Crack growth
Most of the fatigue life is generally consumed in the crack growth phase. The rate of growth is primarily driven by the range of cyclic loading although additional factors such as mean stress, environment, overloads and underloads can also affect the rate of growth. Crack growth may stop if the loads are small enough to fall below a critical threshold.
Fatigue cracks can grow from material or manufacturing defects from as small as 10 μm.
When the rate of growth becomes large enough, fatigue striations can be seen on the fracture surface. Striations mark the position of the crack tip and the width of each striation represents the growth from one loading cycle. Striations are a result of plasticity at the crack tip.
When the stress intensity exceeds a critical value known as the fracture toughness, unsustainable fast fracture will occur, usually by a process of microvoid coalescence. Prior to final fracture, the fracture surface may contain a mixture of areas of fatigue and fast fracture.
Acceleration and retardation
The following effects change the rate of growth:
Mean stress effect: Higher mean stress increases the rate of crack growth.
Environment: Increased moisture increases the rate of crack growth. In the case of aluminium, cracks generally grow from the surface, where water vapour from the atmosphere is able to reach the tip of the crack and dissociate into atomic hydrogen which causes hydrogen embrittlement. Cracks growing internally are isolated from the atmosphere and grow in a vacuum where the rate of growth is typically an order of magnitude slower than a surface crack.
Short crack effect: In 1975, Pearson observed that short cracks grow faster than expected. Possible reasons for the short crack effect include the presence of the T-stress, the tri-axial stress state at the crack tip, the lack of crack closure associated with short cracks and the large plastic zone in comparison to the crack length. In addition, long cracks typically experience a threshold which short cracks do not have. There are a number of criteria for short cracks:
cracks are typically smaller than 1 mm,
cracks are smaller than the material microstructure size such as the grain size, or
crack length is small compared to the plastic zone.
Underloads: Small numbers of underloads increase the rate of growth and may counteract the effect of overloads.
Overloads: Initially overloads (> 1.5 the maximum load in a sequence) lead to a small increase in the rate of growth followed by a long reduction in the rate of growth.
Characteristics of fatigue
In metal alloys, and for the simplifying case when there are no macroscopic or microscopic discontinuities, the process starts with dislocation movements at the microscopic level, which eventually form persistent slip bands that become the nucleus of short cracks.
Macroscopic and microscopic discontinuities (at the crystalline grain scale) as well as component design features which cause stress concentrations (holes, keyways, sharp changes of load direction etc.) are common locations at which the fatigue process begins.
Fatigue is a process that has a degree of randomness (stochastic), often showing considerable scatter even in seemingly identical samples in well controlled environments.
Fatigue is usually associated with tensile stresses but fatigue cracks have been reported due to compressive loads.
The greater the applied stress range, the shorter the life.
Fatigue life scatter tends to increase for longer fatigue lives.
Damage is irreversible. Materials do not recover when rested.
Fatigue life is influenced by a variety of factors, such as temperature, surface finish, metallurgical microstructure, presence of oxidizing or inert chemicals, residual stresses, scuffing contact (fretting), etc.
Some materials (e.g., some steel and titanium alloys) exhibit a theoretical fatigue limit below which continued loading does not lead to fatigue failure.
High cycle fatigue strength (about 10^4 to 10^8 cycles) can be described by stress-based parameters. A load-controlled servo-hydraulic test rig is commonly used in these tests, with frequencies of around 20–50 Hz. Other sorts of machines—like resonant magnetic machines—can also be used, to achieve frequencies up to 250 Hz.
Low-cycle fatigue (loading that typically causes failure in less than 10^4 cycles) is associated with localized plastic behavior in metals; thus, a strain-based parameter should be used for fatigue life prediction in metals. Testing is conducted with constant strain amplitudes typically at 0.01–5 Hz.
Timeline of research history
1837: Wilhelm Albert publishes the first article on fatigue. He devised a test machine for conveyor chains used in the Clausthal mines.
1839: Jean-Victor Poncelet describes metals as being 'tired' in his lectures at the military school at Metz.
1842: William John Macquorn Rankine recognises the importance of stress concentrations in his investigation of railroad axle failures. The Versailles train wreck was caused by fatigue failure of a locomotive axle.
1843: Joseph Glynn reports on the fatigue of an axle on a locomotive tender. He identifies the keyway as the crack origin.
1848: The Railway Inspectorate reports one of the first tyre failures, probably from a rivet hole in tread of railway carriage wheel. It was likely a fatigue failure.
1849: Eaton Hodgkinson is granted a "small sum of money" to report to the UK Parliament on his work in "ascertaining by direct experiment, the effects of continued changes of load upon iron structures and to what extent they could be loaded without danger to their ultimate security".
1854: F. Braithwaite reports on common service fatigue failures and coins the term fatigue.
1860: Systematic fatigue testing undertaken by Sir William Fairbairn and August Wöhler.
1870: A. Wöhler summarises his work on railroad axles. He concludes that cyclic stress range is more important than peak stress and introduces the concept of endurance limit.
1903: Sir James Alfred Ewing demonstrates the origin of fatigue failure in microscopic cracks.
1910: O. H. Basquin proposes a log-log relationship for S-N curves, using Wöhler's test data.
1940: Sidney M. Cadwell publishes first rigorous study of fatigue in rubber.
1945: A. M. Miner popularises Palmgren's (1924) linear damage hypothesis as a practical design tool.
1952: W. Weibull proposes an S-N curve model.
1954: The world's first commercial jetliner, the de Havilland Comet, suffers disaster as three planes break up in mid-air, causing de Havilland and all other manufacturers to redesign high altitude aircraft and in particular replace square apertures like windows with oval ones.
1954: L. F. Coffin and S. S. Manson explain fatigue crack-growth in terms of plastic strain in the tip of cracks.
1961: P. C. Paris proposes methods for predicting the rate of growth of individual fatigue cracks in the face of initial scepticism and popular defence of Miner's phenomenological approach.
1968: Tatsuo Endo and M. Matsuishi devise the rainflow-counting algorithm and enable the reliable application of Miner's rule to random loadings.
1970: Smith, Watson, and Topper developed a mean stress correction model, where the fatigue damage in a cycle is determined by the product of the maximum stress and strain amplitude.
1970: W. Elber elucidates the mechanisms and importance of crack closure in slowing the growth of a fatigue crack due to the wedging effect of plastic deformation left behind the tip of the crack.
1973: M. W. Brown and K. J. Miller observe that fatigue life under multiaxial conditions is governed by the experience of the plane receiving the most damage, and that both tension and shear loads on the critical plane must be considered.
Predicting fatigue life
The American Society for Testing and Materials defines fatigue life, Nf, as the number of stress cycles of a specified character that a specimen sustains before failure of a specified nature occurs. For some materials, notably steel and titanium, there is a theoretical value for stress amplitude below which the material will not fail for any number of cycles, called a fatigue limit or endurance limit. However, in practice, several bodies of work done at greater numbers of cycles suggest that fatigue limits do not exist for any metals.
Engineers have used a number of methods to determine the fatigue life of a material:
the stress-life method,
the strain-life method,
the crack growth method and
probabilistic methods, which can be based on either life or crack growth methods.
Whether using stress/strain-life approach or using crack growth approach, complex or variable amplitude loading is reduced to a series of fatigue equivalent simple cyclic loadings using a technique such as the rainflow-counting algorithm.
Stress-life and strain-life methods
A mechanical part is often exposed to a complex, often random, sequence of loads, large and small. In order to assess the safe life of such a part using the fatigue damage or stress/strain-life methods the following series of steps is usually performed:
Complex loading is reduced to a series of simple cyclic loadings using a technique such as rainflow analysis;
A histogram of cyclic stress is created from the rainflow analysis to form a fatigue damage spectrum;
For each stress level, the degree of cumulative damage is calculated from the S-N curve; and
The effect of the individual contributions are combined using an algorithm such as Miner's rule.
Since S-N curves are typically generated for uniaxial loading, some equivalence rule is needed whenever the loading is multiaxial. For simple, proportional loading histories (lateral load in a constant ratio with the axial), Sines rule may be applied. For more complex situations, such as non-proportional loading, critical plane analysis must be applied.
Miner's rule
In 1945, Milton A. Miner popularised a rule that had first been proposed by Arvid Palmgren in 1924. The rule, variously called Miner's rule or the Palmgren–Miner linear damage hypothesis, states that where there are k different stress magnitudes in a spectrum, Si (1 ≤ i ≤ k), each contributing ni(Si) cycles, then if Ni(Si) is the number of cycles to failure of a constant stress reversal Si (determined by uni-axial fatigue tests), failure occurs when:

∑ (from i = 1 to k) ni / Ni = C.
Usually, for design purposes, C is assumed to be 1. This can be thought of as assessing what proportion of life is consumed by a linear combination of stress reversals at varying magnitudes.
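A minimal sketch of the damage sum is shown below; the loading spectrum and S-N lives are hypothetical, and C is taken as 1.

```python
def miner_damage(applied_cycles, cycles_to_failure):
    """Palmgren-Miner cumulative damage sum.

    applied_cycles: dict mapping stress level -> applied cycles n_i at that level
    cycles_to_failure: dict mapping stress level -> N_i from the S-N curve
    Failure is predicted when the sum reaches C (often taken as 1).
    """
    return sum(n / cycles_to_failure[s] for s, n in applied_cycles.items())

# Hypothetical loading spectrum (stress amplitude in MPa -> applied cycles)
applied = {300: 2.0e4, 200: 2.0e5, 100: 2.0e6}
# Hypothetical S-N data (stress amplitude in MPa -> cycles to failure)
sn_life = {300: 1.0e5, 200: 1.0e6, 100: 1.0e7}

damage = miner_damage(applied, sn_life)
print(f"damage sum = {damage:.2f}")  # 0.2 + 0.2 + 0.2 = 0.60, i.e. 60% of life consumed
```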
Although Miner's rule may be a useful approximation in many circumstances, it has several major limitations:
It fails to recognize the probabilistic nature of fatigue and there is no simple way to relate life predicted by the rule with the characteristics of a probability distribution. Industry analysts often use design curves, adjusted to account for scatter, to calculate Ni(Si).
The sequence in which high vs. low stress cycles are applied to a sample in fact affect the fatigue life, for which Miner's Rule does not account. In some circumstances, cycles of low stress followed by high stress cause more damage than would be predicted by the rule. It does not consider the effect of an overload or high stress which may result in a compressive residual stress that may retard crack growth. High stress followed by low stress may have less damage due to the presence of compressive residual stress (or localized plastic damages around crack tip).
Stress-life (S-N) method
Materials fatigue performance is commonly characterized by an S-N curve, also known as a Wöhler curve. This is often plotted with the cyclic stress (S) against the cycles to failure (N) on a logarithmic scale. S-N curves are derived from tests on samples of the material to be characterized (often called coupons or specimens) where a regular sinusoidal stress is applied by a testing machine which also counts the number of cycles to failure. This process is sometimes known as coupon testing. For greater accuracy but lower generality component testing is used. Each coupon or component test generates a point on the plot though in some cases there is a runout where the time to failure exceeds that available for the test (see censoring). Analysis of fatigue data requires techniques from statistics, especially survival analysis and linear regression.
The progression of the S-N curve can be influenced by many factors such as stress ratio (mean stress), loading frequency, temperature, corrosion, residual stresses, and the presence of notches. A constant fatigue life (CFL) diagram is useful for studying the effect of stress ratio. The Goodman line is a method used to estimate the influence of the mean stress on the fatigue strength.

In the presence of a steady stress superimposed on the cyclic loading, the Goodman relation can be used to estimate a failure condition. It plots stress amplitude against mean stress, with the fatigue limit and the ultimate tensile strength of the material as the two extremes. Alternative failure criteria include Soderberg and Gerber.
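As a sketch of how the Goodman relation is applied in practice, the snippet below converts a stress amplitude at a tensile mean stress into an equivalent fully reversed amplitude; the numbers are hypothetical, and the Soderberg or Gerber criteria would use a different denominator.

```python
def goodman_equivalent_amplitude(sigma_a, sigma_m, sigma_u):
    """Equivalent fully reversed stress amplitude per the Goodman relation.

    sigma_a: applied stress amplitude
    sigma_m: mean stress (tensile, positive)
    sigma_u: ultimate tensile strength
    The Goodman line assumes sigma_a/sigma_ar + sigma_m/sigma_u = 1,
    where sigma_ar is the equivalent fully reversed amplitude.
    """
    if sigma_m >= sigma_u:
        raise ValueError("mean stress must be below the ultimate strength")
    return sigma_a / (1.0 - sigma_m / sigma_u)

# Hypothetical case: 150 MPa amplitude about a 100 MPa tensile mean,
# for a material with 500 MPa ultimate tensile strength.
print(goodman_equivalent_amplitude(150.0, 100.0, 500.0))  # 187.5 MPa
```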
As coupons sampled from a homogeneous frame will display a variation in their number of cycles to failure, the S-N curve should more properly be a Stress-Cycle-Probability (S-N-P) curve to capture the probability of failure after a given number of cycles of a certain stress.
With body-centered cubic materials (bcc), the Wöhler curve often becomes a horizontal line with decreasing stress amplitude, i.e. a fatigue limit can be assigned to these materials. With face-centered cubic metals (fcc), the Wöhler curve generally drops continuously, so that only a fatigue strength at a given number of cycles can be assigned to these materials.
Strain-life (ε-N) method
When strains are no longer elastic, such as in the presence of stress concentrations, the total strain can be used instead of stress as a similitude parameter. This is known as the strain-life method. The total strain amplitude is the sum of the elastic strain amplitude and the plastic strain amplitude and is given by

Δε/2 = Δε_elastic/2 + Δε_plastic/2.

Basquin's equation for the elastic strain amplitude is

Δε_elastic/2 = σ_a/E = (σ_f'/E)(2N_f)^b,

where E is Young's modulus.

The relation for high cycle fatigue can be expressed using the elastic strain amplitude

Δε_elastic/2 = (σ_f'/E)(2N_f)^b,

where σ_f' is a parameter that scales with tensile strength obtained by fitting experimental data, N_f is the number of cycles to failure and b is the slope of the log-log curve, again determined by curve fitting.

In 1954, Coffin and Manson proposed that the fatigue life of a component was related to the plastic strain amplitude using

Δε_plastic/2 = ε_f'(2N_f)^c.

Combining the elastic and plastic portions gives the total strain amplitude accounting for both low and high cycle fatigue

Δε/2 = (σ_f'/E)(2N_f)^b + ε_f'(2N_f)^c,

where σ_f' is the fatigue strength coefficient, b is the fatigue strength exponent, ε_f' is the fatigue ductility coefficient, c is the fatigue ductility exponent, and N_f is the number of cycles to failure (2N_f being the number of reversals to failure).
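Since the combined strain-life equation cannot be inverted for N_f in closed form, a numerical solve is typical. The sketch below does this by bisection in log space; the material constants are hypothetical values of roughly the right order for a medium-strength steel, not data from the text.

```python
def total_strain_amplitude(two_Nf, sigma_f, E, b, eps_f, c):
    """Coffin-Manson-Basquin total strain amplitude at 2*N_f reversals."""
    return (sigma_f / E) * two_Nf ** b + eps_f * two_Nf ** c

def cycles_to_failure(strain_amp, sigma_f, E, b, eps_f, c):
    """Invert the strain-life curve for N_f by bisection in log space.

    The total strain amplitude decreases monotonically with life (b and c are
    negative), so a simple bisection between 1 and 1e12 reversals suffices.
    """
    lo, hi = 1.0, 1.0e12
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if total_strain_amplitude(mid, sigma_f, E, b, eps_f, c) > strain_amp:
            lo = mid   # strain at mid is too high -> life is longer than mid
        else:
            hi = mid
    return mid / 2.0   # convert reversals 2*N_f to cycles N_f

# Hypothetical constants (stress in MPa): sigma_f', E, b, eps_f', c
props = dict(sigma_f=900.0, E=200_000.0, b=-0.10, eps_f=0.60, c=-0.55)
print(f"{cycles_to_failure(0.004, **props):.3g} cycles at 0.4% strain amplitude")
```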
Crack growth methods
An estimate of the fatigue life of a component can be made using a crack growth equation by summing up the width of each increment of crack growth for each loading cycle. Safety or scatter factors are applied to the calculated life to account for any uncertainty and variability associated with fatigue. The rate of growth used in crack growth predictions is typically measured by applying thousands of constant amplitude cycles to a coupon and measuring the rate of growth from the change in compliance of the coupon or by measuring the growth of the crack on the surface of the coupon. Standard methods for measuring the rate of growth have been developed by ASTM International.
Crack growth equations such as the Paris–Erdoğan equation are used to predict the life of a component. They can be used to predict the growth of a crack from 10 μm to failure. For normal manufacturing finishes this may cover most of the fatigue life of a component where growth can start from the first cycle. The conditions at the crack tip of a component are usually related to the conditions of the test coupon using a characterising parameter such as the stress intensity, J-integral or crack tip opening displacement. All these techniques aim to match the crack tip conditions on the component to that of test coupons which give the rate of crack growth.
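A minimal sketch of a crack-growth life prediction is shown below: it integrates the Paris–Erdoğan relation da/dN = C(ΔK)^m with ΔK = Y Δσ √(πa) from an initial flaw size to a chosen critical size. The constants, geometry factor Y and fixed-step scheme are illustrative assumptions, not values from the text.

```python
import math

def paris_life(a0, a_crit, delta_sigma, C, m, Y=1.0, steps=100_000):
    """Cycles to grow a crack from a0 to a_crit under the Paris-Erdogan law.

    da/dN = C * (dK)^m with dK = Y * delta_sigma * sqrt(pi * a).
    a0, a_crit in metres; delta_sigma in MPa; C in (m/cycle)/(MPa*sqrt(m))^m.
    Integrated over crack length with a simple midpoint, fixed-step scheme.
    """
    da = (a_crit - a0) / steps
    cycles = 0.0
    a = a0
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * (a + 0.5 * da))
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Hypothetical values of the right order for a structural steel detail:
N = paris_life(a0=0.5e-3, a_crit=25e-3, delta_sigma=100.0, C=1.0e-11, m=3.0)
print(f"predicted life ~ {N:.3g} cycles")
```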
Additional models may be necessary to include retardation and acceleration effects associated with overloads or underloads in the loading sequence. In addition, small crack growth data may be needed to match the increased rate of growth seen with small cracks.
Typically, a cycle counting technique such as rainflow-cycle counting is used to extract the cycles from a complex sequence. This technique, along with others, has been shown to work with crack growth methods.
Crack growth methods have the advantage that they can predict the intermediate size of cracks. This information can be used to schedule inspections on a structure to ensure safety whereas strain/life methods only give a life until failure.
Dealing with fatigue
Design
Dependable design against fatigue-failure requires thorough education and supervised experience in structural engineering, mechanical engineering, or materials science. There are at least five principal approaches to life assurance for mechanical parts that display increasing degrees of sophistication:
Design to keep stress below threshold of fatigue limit (infinite lifetime concept);
Fail-safe, graceful degradation, and fault-tolerant design: Instruct the user to replace parts when they fail. Design in such a way that there is no single point of failure, and so that when any one part completely fails, it does not lead to catastrophic failure of the entire system.
Safe-life design: Design (conservatively) for a fixed life after which the user is instructed to replace the part with a new one (a so-called lifed part, finite lifetime concept, or "safe-life" design practice); planned obsolescence and disposable product are variants that design for a fixed life after which the user is instructed to replace the entire device;
Damage tolerance: An approach that ensures aircraft safety by assuming the presence of cracks or defects even in new aircraft. Crack growth calculations, periodic inspections and component repair or replacement can be used to ensure critical components that may contain cracks remain safe. Inspections usually use nondestructive testing to limit or monitor the size of possible cracks and require an accurate prediction of the rate of crack-growth between inspections. The designer sets an aircraft maintenance check schedule frequent enough that parts are replaced while the crack is still in the "slow growth" phase. This is often referred to as damage tolerant design or "retirement-for-cause".
Risk Management: Ensures the probability of failure remains below an acceptable level. This approach is typically used for aircraft where acceptable levels may be based on probability of failure during a single flight or taken over the lifetime of an aircraft. A component is assumed to have a crack with a probability distribution of crack sizes. This approach can consider variability in values such as crack growth rates, usage and critical crack size. It is also useful for considering damage at multiple locations that may interact to produce multi-site or widespread fatigue damage. Probability distributions that are common in data analysis and in design against fatigue include the log-normal distribution, extreme value distribution, Birnbaum–Saunders distribution, and Weibull distribution.
Testing
Fatigue testing can be used for components such as a coupon or a full-scale test article to determine:
the rate of crack growth and fatigue life of components such as a coupon or a full-scale test article.
location of critical regions
degree of fail-safety when part of the structure fails
the origin and cause of the crack initiating defect from fractographic examination of the crack.
These tests may form part of the certification process such as for airworthiness certification.
Repair
Stop drill. Fatigue cracks that have begun to propagate can sometimes be stopped by drilling holes, called drill stops, at the tip of the crack. The possibility remains of a new crack starting in the side of the hole.
Blend. Small cracks can be blended away and the surface cold worked or shot peened.
Oversize holes. Holes with cracks growing from them can be drilled out to a larger hole to remove cracking and bushed to restore the original hole. Bushes can be cold shrink Interference fit bushes to induce beneficial compressive residual stresses. The oversized hole can also be cold worked by drawing an oversized mandrel through the hole.
Patch. Cracks may be repaired by installing a patch or repair fitting. Composite patches have been used to restore the strength of aircraft wings after cracks have been detected or to lower the stress prior to cracking in order to improve the fatigue life. Patches may restrict the ability to monitor fatigue cracks and may need to be removed and replaced for inspections.
Life improvement
Change material. Changes in the materials used in parts can also improve fatigue life. For example, parts can be made from better fatigue rated metals. Complete replacement and redesign of parts can also reduce if not eliminate fatigue problems. Thus helicopter rotor blades and propellers in metal are being replaced by composite equivalents. They are not only lighter, but also much more resistant to fatigue. They are more expensive, but the extra cost is amply repaid by their greater integrity, since loss of a rotor blade usually leads to total loss of the aircraft. A similar argument has been made for replacement of metal fuselages, wings and tails of aircraft.
Induce residual stresses. Peening a surface can reduce such tensile stresses and create compressive residual stress, which prevents crack initiation. Forms of peening include: shot peening, using high-speed projectiles; high-frequency impact treatment (also called high-frequency mechanical impact), using a mechanical hammer; and laser peening, which uses high-energy laser pulses. Low plasticity burnishing can also be used to induce compressive stress in fillets, and cold work mandrels can be used for holes. Increases in fatigue life and strength are proportionally related to the depth of the compressive residual stresses imparted. Shot peening imparts compressive residual stresses approximately 0.005 inches (0.1 mm) deep, while laser peening can go 0.040 to 0.100 inches (1 to 2.5 mm) deep, or deeper.
Deep cryogenic treatment. The use of deep cryogenic treatment has been shown to increase resistance to fatigue failure. Springs used in industry, auto racing and firearms have been shown to last up to six times longer when treated. Heat checking, which is a form of thermal cyclic fatigue, has been greatly delayed.
Re-profiling. Changing the shape of a stress concentration such as a hole or cutout may be used to extend the life of a component. Shape optimisation using numerical optimisation algorithms has been used to lower the stress concentration in wings and increase their life.
Fatigue of composites
Composite materials can offer excellent resistance to fatigue loading. In general, composites exhibit good fracture toughness and, unlike metals, increase fracture toughness with increasing strength. The critical damage size in composites is also greater than that for metals.
The primary mode of damage in a metal structure is cracking. For metal, cracks propagate in a relatively well-defined manner with respect to the applied stress, and the critical crack size and rate of crack propagation can be related to specimen data through analytical fracture mechanics. However, with composite structures, there is no single damage mode which dominates. Matrix cracking, delamination, debonding, voids, fiber fracture, and composite cracking can all occur separately and in combination, and the predominance of one or more is highly dependent on the laminate orientations and loading conditions. In addition, the unique joints and attachments used for composite structures often introduce modes of failure different from those typified by the laminate itself.
The composite damage propagates in a less regular manner, and damage modes can change. Experience with composites indicates that the rate of damage propagation does not exhibit the two distinct regions of initiation and propagation seen in metals: in metals the initiation and propagation stages differ markedly in rate, whereas with composites the difference is much less apparent. Fatigue cracks of composites may form in the matrix and propagate slowly, since the matrix carries only a small fraction of the applied stress, while the fibers in the wake of the crack experience fatigue damage. In many cases, the damage rate is accelerated by deleterious interactions with the environment, such as oxidation or corrosion of fibers.
Notable fatigue failures
Versailles train crash
Following King Louis-Philippe I's celebrations at the Palace of Versailles, a train returning to Paris crashed in May 1842 at Meudon after the leading locomotive broke an axle. The carriages behind piled into the wrecked engines and caught fire. At least 55 passengers were killed, trapped in the locked carriages, including the explorer Jules Dumont d'Urville. This accident is known in France as the . The accident was witnessed by the British locomotive engineer Joseph Locke and widely reported in Britain. It was discussed extensively by engineers, who sought an explanation.
The derailment had been the result of a broken locomotive axle. Rankine's investigation of broken axles in Britain highlighted the importance of stress concentration, and the mechanism of crack growth with repeated loading. His and other papers suggesting a crack growth mechanism through repeated stressing, however, were ignored, and fatigue failures occurred at an ever-increasing rate on the expanding railway system. Other spurious theories seemed to be more acceptable, such as the idea that the metal had somehow "crystallized". The notion was based on the crystalline appearance of the fast fracture region of the crack surface, but ignored the fact that the metal was already highly crystalline.
de Havilland Comet
Two de Havilland Comet passenger jets broke up in mid-air and crashed within a few months of each other in 1954. As a result, systematic tests were conducted on a fuselage immersed and pressurised in a water tank. After the equivalent of 3,000 flights, investigators at the Royal Aircraft Establishment (RAE) were able to conclude that the crash had been due to failure of the pressure cabin at the forward Automatic Direction Finder window in the roof. This 'window' was in fact one of two apertures for the aerials of an electronic navigation system in which opaque fibreglass panels took the place of the window 'glass'. The failure was a result of metal fatigue caused by the repeated pressurisation and de-pressurisation of the aircraft cabin. Also, the supports around the windows were riveted, not bonded, as the original specifications for the aircraft had called for. The problem was exacerbated by the punch rivet construction technique employed. Unlike drill riveting, punch riveting produced imperfect holes whose manufacturing defect cracks may have initiated the fatigue cracks around the rivets.
The Comet's pressure cabin had been designed to a safety factor comfortably in excess of that required by British Civil Airworthiness Requirements (2.5 times the cabin proof test pressure as opposed to the requirement of 1.33 times and an ultimate load of 2.0 times the cabin pressure) and the accident caused a revision in the estimates of the safe loading strength requirements of airliner pressure cabins.
In addition, it was discovered that the stresses around pressure cabin apertures were considerably higher than had been anticipated, especially around sharp-cornered cut-outs, such as windows. As a result, all future jet airliners would feature windows with rounded corners, greatly reducing the stress concentration. This was a noticeable distinguishing feature of all later models of the Comet. Investigators from the RAE told a public inquiry that the sharp corners near the Comets' window openings acted as initiation sites for cracks. The skin of the aircraft was also too thin, and cracks from manufacturing stresses were present at the corners.
Alexander L. Kielland oil platform capsizing
Alexander L. Kielland was a Norwegian semi-submersible drilling rig that capsized whilst working in the Ekofisk oil field in March 1980, killing 123 people. The capsizing was the worst disaster in Norwegian waters since World War II. The rig, located approximately 320 km east of Dundee, Scotland, was owned by the Stavanger Drilling Company of Norway and was on hire to the United States company Phillips Petroleum at the time of the disaster. In driving rain and mist, early in the evening of 27 March 1980 more than 200 men were off duty in the accommodation on Alexander L. Kielland. The wind was gusting to 40 knots with waves up to 12 m high. The rig had just been winched away from the Edda production platform. Minutes before 18:30 those on board felt a 'sharp crack' followed by 'some kind of trembling'. Suddenly the rig heeled over 30° and then stabilised. Five of the six anchor cables had broken, with one remaining cable preventing the rig from capsizing. The list continued to increase and at 18:53 the remaining anchor cable snapped and the rig turned upside down.
A year later in March 1981, the investigative report concluded that the rig collapsed owing to a fatigue crack in one of its six bracings (bracing D-6), which connected the collapsed D-leg to the rest of the rig. This was traced to a small 6 mm fillet weld which joined a non-load-bearing flange plate to this D-6 bracing. This flange plate held a sonar device used during drilling operations. The poor profile of the fillet weld contributed to a reduction in its fatigue strength. Further, the investigation found considerable amounts of lamellar tearing in the flange plate and cold cracks in the butt weld. Cold cracks in the welds, increased stress concentrations due to the weakened flange plate, the poor weld profile, and cyclical stresses (which would be common in the North Sea), seemed to collectively play a role in the rig's collapse.
Others
The 1862 Hartley Colliery Disaster was caused by the fracture of a steam engine beam and killed 204 people.
The 1919 Boston Great Molasses Flood has been attributed to a fatigue failure.
The 1948 Northwest Airlines Flight 421 crash due to fatigue failure in a wing spar root
The 1957 "Mt. Pinatubo", presidential plane of Philippine President Ramon Magsaysay, crashed due to engine failure caused by metal fatigue.
The 1965 capsize of the UK's first offshore oil platform, the Sea Gem, was due to fatigue in part of the suspension system linking the hull to the legs.
The 1968 Los Angeles Airways Flight 417 lost one of its main rotor blades due to fatigue failure.
The 1968 MacRobertson Miller Airlines Flight 1750 lost a wing due to improper maintenance leading to fatigue failure.
The 1969 F-111A crash due to a fatigue failure of the wing pivot fitting from a material defect resulted in the development of the damage-tolerant approach for fatigue design.
The 1977 Dan-Air Boeing 707 crash caused by fatigue failure resulting in the loss of the right horizontal stabilizer.
The 1979 American Airlines Flight 191 crashed after engine separation attributed to fatigue damage in the pylon structure holding the engine to the wing, caused by improper maintenance procedures.
The 1980 LOT Flight 7 crashed due to fatigue in an engine turbine shaft resulting in engine disintegration leading to loss of control.
The 1985 Japan Airlines Flight 123 crashed after the aircraft lost its vertical stabilizer due to faulty repairs on the rear bulkhead.
The 1988 Aloha Airlines Flight 243 suffered an explosive decompression after a fatigue failure.
The 1989 United Airlines Flight 232 lost its tail engine due to fatigue failure in a fan disk hub.
The 1992 El Al Flight 1862 lost both engines on its right-wing due to fatigue failure in the pylon mounting of the #3 Engine.
The 1998 Eschede train disaster was caused by fatigue failure of a single composite wheel.
The 2000 Hatfield rail crash was likely caused by rolling contact fatigue.
The 2000 recall of 6.5 million Firestone tires on Ford Explorers originated from fatigue crack growth leading to separation of the tread from the tire.
The 2002 China Airlines Flight 611 disintegrated in-flight due to fatigue failure.
The 2005 Chalk's Ocean Airways Flight 101 lost its right wing due to fatigue failure brought about by inadequate maintenance practices.
The 2009 Viareggio train derailment due to fatigue failure.
The 2009 Sayano–Shushenskaya power station accident due to metal fatigue of turbine mountings.
The 2017 Air France Flight 66 had in-flight engine failure due to cold dwell fatigue fracture in the fan hub.
The 2023 Titan submersible implosion is thought to have occurred due to fatigue delamination of the carbon-fiber material used for the hull.
See also
Basquin's Law of Fatigue
, a diagram by British mechanical engineer
International Journal of Fatigue
References
Further reading
External links
Fatigue Shawn M. Kelly
Application note on fatigue crack propagation in UHMWPE
fatigue test video Karlsruhe University of Applied Sciences
Strain life method G. Glinka
Fatigue from variable amplitude loading A. Fatemi
Fracture mechanics
Materials degradation
Mechanical failure modes
Solid mechanics
Structural analysis | Fatigue (material) | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 7,627 | [
"Structural engineering",
"Solid mechanics",
"Mechanical failure modes",
"Fracture mechanics",
"Structural analysis",
"Technological failures",
"Materials science",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering",
"Materials degradation",
"Mechanical failure"
] |
348,969 | https://en.wikipedia.org/wiki/Diagonal%20lemma | In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem.
It is named in reference to Cantor's diagonal argument in set and number theory.
Background
Let be the set of natural numbers. A first-order theory in the language of arithmetic represents the computable function if there exists a "graph" formula in the language of — that is, a formula such that for each
.
Here is the numeral corresponding to the natural number , which is defined to be the th successor of presumed first numeral in .
The diagonal lemma also requires a systematic way of assigning to every formula a natural number (also written as ) called its Gödel number. Formulas can then be represented within by the numerals corresponding to their Gödel numbers. For example, is represented by
The diagonal lemma applies to theories capable of representing all primitive recursive functions. Such theories include first-order Peano arithmetic and the weaker Robinson arithmetic, and even a much weaker theory known as R. A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all computable functions, but all the theories mentioned have that capacity, as well.
Statement of the lemma
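One standard symbolic rendering of the lemma (notation varies between sources; the corner quotes below stand for the numeral of the Gödel number) is the following:

    \documentclass{article}
    \usepackage{amsmath,amssymb,amsthm}
    \newtheorem{lemma}{Lemma}
    \begin{document}
    % T is a first-order theory of arithmetic representing all computable functions;
    % \ulcorner\psi\urcorner denotes the numeral for the Goedel number of \psi.
    \begin{lemma}[Diagonal lemma]
    For every formula $F(y)$ of the language of $T$ with exactly one free variable $y$,
    there is a sentence $\psi$ such that
    $T \vdash \psi \leftrightarrow F(\ulcorner \psi \urcorner)$.
    \end{lemma}
    \end{document}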
Intuitively, is a self-referential sentence: says that has the property . The sentence can also be viewed as a fixed point of the operation that assigns, to the equivalence class of a given sentence , the equivalence class of the sentence (a sentence's equivalence class is the set of all sentences to which it is provably equivalent in the theory ). The sentence constructed in the proof is not literally the same as , but is provably equivalent to it in the theory .
Proof
Let be the function defined by:
for each formula with only one free variable in theory , and otherwise. Here denotes the Gödel number of formula . The function is computable (which is ultimately an assumption about the Gödel numbering scheme), so there is a formula representing in . Namely
which is to say
Now, given an arbitrary formula with one free variable , define the formula as:
Then, for all formulas with one free variable:
which is to say
Now substitute with , and define the sentence as:
Then the previous line can be rewritten as
which is the desired result.
(The same argument in different terms is given in [Raatikainen (2015a)].)
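A compact symbolic recap of the construction just given, with illustrative notation (the diagonal function f, its representing formula, and the auxiliary formula B are the objects described in words above):

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % f maps the Goedel number of a one-variable formula \theta to the Goedel
    % number of \theta(\ulcorner\theta\urcorner), and everything else to 0;
    % \mathcal{G}_f(x,y) is a formula representing f in the theory T.
    \[
      \mathcal{B}(z) := \exists y\,\bigl(\mathcal{G}_f(z,y) \wedge F(y)\bigr),
      \qquad
      \psi := \mathcal{B}(\ulcorner \mathcal{B} \urcorner),
    \]
    \[
      T \vdash \psi \leftrightarrow F\bigl(\ulcorner \mathcal{B}(\ulcorner \mathcal{B} \urcorner)\urcorner\bigr),
      \qquad\text{that is,}\qquad
      T \vdash \psi \leftrightarrow F(\ulcorner \psi \urcorner).
    \]
    \end{document}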
History
The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument. The terms "diagonal lemma" or "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article.
Rudolf Carnap (1934) was the first to prove the general self-referential lemma, which says that for any formula F in a theory T satisfying certain conditions, there exists a formula ψ such that ψ ↔ F(°#(ψ)) is provable in T. Carnap's work was phrased in alternate language, as the concept of computable functions was not yet developed in 1934. Mendelson (1997, p. 204) believes that Carnap was the first to state that something like the diagonal lemma was implicit in Gödel's reasoning. Gödel was aware of Carnap's work by 1937.
The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar.
See also
Indirect self-reference
List of fixed point theorems
Primitive recursive arithmetic
Self-reference
Self-referential paradoxes
Notes
References
George Boolos and Richard Jeffrey, 1989. Computability and Logic, 3rd ed. Cambridge University Press.
Rudolf Carnap, 1934. Logische Syntax der Sprache. (English translation: 2003. The Logical Syntax of Language. Open Court Publishing.)
Haim Gaifman, 2006. 'Naming and Diagonalization: From Cantor to Gödel to Kleene'. Logic Journal of the IGPL, 14: 709–728.
Hinman, Peter, 2005. Fundamentals of Mathematical Logic. A K Peters.
Mendelson, Elliott, 1997. Introduction to Mathematical Logic, 4th ed. Chapman & Hall.
Panu Raatikainen, 2015a. The Diagonalization Lemma. In Stanford Encyclopedia of Philosophy, ed. Zalta. Supplement to Raatikainen (2015b).
Panu Raatikainen, 2015b. Gödel's Incompleteness Theorems. In Stanford Encyclopedia of Philosophy, ed. Zalta.
Raymond Smullyan, 1991. Gödel's Incompleteness Theorems. Oxford Univ. Press.
Raymond Smullyan, 1994. Diagonalization and Self-Reference. Oxford Univ. Press.
Alfred Tarski, tr. J. H. Woodger, 1983. "The Concept of Truth in Formalized Languages". English translation of Tarski's 1936 article. In A. Tarski, ed. J. Corcoran, 1983, Logic, Semantics, Metamathematics, Hackett.
Mathematical logic
Lemmas
Articles containing proofs | Diagonal lemma | [
"Mathematics"
] | 1,144 | [
"Mathematical theorems",
"Mathematical logic",
"Articles containing proofs",
"Mathematical problems",
"Lemmas"
] |
348,970 | https://en.wikipedia.org/wiki/Tabernanthe%20iboga | Tabernanthe iboga (iboga) is an evergreen rainforest shrub native to Central Africa. A member of the Apocynaceae family indigenous to Gabon, the Democratic Republic of Congo, and the Republic of Congo, it is cultivated across Central Africa for its medicinal and other effects.
In African traditional medicine and rituals, the yellowish root or bark is used to produce hallucinations and near-death outcomes, with some fatalities occurring. In high doses, ibogaine is considered to be toxic, and has caused serious comorbidities when used with opioids or prescription drugs. The United States Drug Enforcement Administration (DEA) lists ibogaine as a controlled substance under the Controlled Substances Act.
Description
T. iboga is native to tropical forests, preferring moist soil in partial shade. It bears dark green, narrow leaves and clusters of tubular flowers on an erect and branching stem, with yellow-orange fruits resembling chili pepper.
Normally growing to a height of 2 m, T. iboga may eventually grow into a small tree up to 10 m tall, given the right conditions. The flowers are yellowish-white or pink and followed by a fruit, orange at maturity, that may be either globose or fusiform. Its yellow-fleshed roots contain a number of indole alkaloids, most notably ibogaine, which is found in the highest concentration in the bark of the roots. The root material, bitter in taste, causes a degree of anaesthesia in the mouth as well as systemic numbness of the skin.
Taxonomy
Publication of binomial
Tabernanthe iboga was described by Henri Ernest Baillon and published in Bulletin Mensuel de la Société Linnéenne de Paris 1: 783 in the year 1889.
Etymology
The genus name Tabernanthe is a compound of the Latin taberna, "tavern"/"hut"/"(market) stall" and Greek ἄνθος (anthos), "flower" – giving a literal meaning of "tavern flower". On the other hand, it may equally well have been intended (by way of a type of botanical shorthand) to mean "having a flower resembling that of plants belonging to the genus Tabernaemontana " (q.v.). If the first conjecture is the correct one, the name could also have been intended to suggest that the plant is cultivated near huts, sold at market stalls or even that – like the beverages sold at a tavern – the plant is intoxicating, all of which alternatives would constitute apt descriptions of an oft-cultivated and popular psychoactive plant.
The specific name iboga comes from the Myene name for the plant, which was also borrowed into a number of other regional languages with mild variation.
History
The first (probable...and confused) reference to Iboga is that of Bowdich in chapter 13 of his "Mission from Cape Coast Castle to Ashantee..." of 1819: "The Eroga, a favourite but violent medicine, is no doubt a fungus, for they describe it as growing on a tree called the Ocamboo, when decaying; they burn it first, and take as much as would lay on a shilling."
If this is indeed a reference to the drug derived from Tabernanthe iboga (Eroga appears to be a variant form of the names iboga and eboka) it is, of course, grossly in error in its assumption that iboga is not a plant but a fungus. Notable however is the observation of the potency of the drug – effective in small quantities. The description of the plant as growing on a tree is puzzling: Tabernanthe iboga does not usually grow as an epiphyte – if at all.
The ritual use of iboga in Africa was first reported by French and Belgian explorers in the 19th century, beginning with the work of French naval surgeon and explorer of Gabon Griffon du Bellay, who identified it correctly as a shrub belonging to the Apocynaceae – as recorded in a short essay by Charles Eugène Aubry-Lecomte on the plant poisons of West Africa, published in the year 1864.
Parmi les plantes rares ou nouvelles rapportées par le docteur Griffon du Bellay, la famille des apocynées contient encore deux poisons; l'un, nommé iboga, n'est toxique qu'à hautes doses et a l'état frais. Pris en petit quantité, il est aphrodisiaque et stimulante du systeme nerveux; les guerriers et chasseurs en font grand usage pour se tenir éveillés dans les affûts de nuit; de même que pour le M'boundou, le principe actif réside dans la racine qu'on mâche comme la coca.
[ Translation: Among the rare or new plants brought back by Dr. Griffon du Bellay, the plant family Apocynaceae contains two further poisons; the first of these, called Iboga, is only toxic in high doses and in the fresh state. Taken in small quantities, it is an aphrodisiac and stimulant of the (central) nervous system; warriors and hunters make considerable use of it in order to stay awake during their night vigils; as with the (plant) M'boundou, the active principle (of Iboga) resides in the root which is chewed like coca (leaf) ].
Chemistry
Indole alkaloids make up about 6% of the root chemical composition of iboga. Alkaloids that are present in more than 1% in root bark are: (in descending order)
Ibogaine
Iboxygaine
Ibogaline
Alloibogaine
Catharanthine
Ibogamine
Noribogaine
Voacangine
Yohimbine
Hydroxyibogamine
18-Methoxycoronaridine, a synthetic derivative of ibogaine, also occurs naturally in this plant.
Traditional use
The Iboga tree is central to the Bwiti spiritual practices in West-Central Africa, mainly Gabon, Cameroon, and the Republic of the Congo, where the alkaloid-containing roots or bark are used in various ceremonies, sometimes to create a near-death experience. Iboga is taken in massive doses by initiates of this spiritual practice, and on a more regular basis is eaten in smaller doses in connection with rituals and tribal dances performed at night.
While in lower doses iboga has a stimulant effect and is used to maintain alertness while hunting, in moderate or high doses, iboga induces dream-like states with vivid visions and hallucinations.
Addiction treatment
Anecdotal reports of self-treated opioid addicts indicated a reduced desire to sustain opiate abuse following iboga ingestion. Since 1970, iboga has been legally prohibited in the United States following several fatalities. Iboga extracts, as well as the purified alkaloid ibogaine, have attracted attention because of their purported ability to reverse addiction to drugs such as alcohol and opiates. Due to the cardiac safety risks of iboga, research is also considering iboga analogues.
Ibogaine is classified as a schedule 1 controlled substance in the United States, and is not approved there for addiction treatment (or any other therapeutic use) because of its hallucinogenic and cardiovascular side effects, as well as the absence of safety and efficacy data in human subjects. In most other countries, it remains unregulated and unlicensed.
Independent ibogaine treatment clinics have emerged in Mexico, Canada, the Netherlands, South Africa, and New Zealand, all operating in what has been described as a "legal gray area". Covert, illegal neighborhood clinics are also known to exist in the United States, despite active DEA surveillance. Addiction specialists warn that the treatment of drug dependence with ibogaine in non-medical settings, without expert supervision and unaccompanied by appropriate psychosocial care, can be dangerous – and, in approximately one case in 300, potentially fatal.
Adverse effects
Ibogaine may induce nausea, vomiting, tremors, and headaches. When ibogaine is used chronically, manic episodes lasting for several days may occur, accompanied by insomnia, irritability, delusions, aggressive behavior, and thoughts of suicide, among other effects.
Legal status
Iboga is outlawed or restricted in Belgium, Poland, Denmark, Croatia, France, Sweden, and Switzerland. In the United States, ibogaine is classified by the Controlled Substances Act on the list of schedule I drugs, although the plant itself remains unscheduled.
Non-profit organization Föreningen för hollistisk missbruksvård is trying to convince the Swedish government to start up clinical investigations of its anti-addictive properties, loosen up the prohibition law against ibogaine, and allow the creation of treatment facilities in Sweden.
Exportation of iboga from Gabon is illegal since the passage of a 1994 cultural protection law.
Conservation status
While little data is available on the exploitation and existing habitat of the iboga plant, the destructive effects of harvesting and slow growth could have already severely damaged the wild iboga population.
Documentary films about iboga
Iboga, les hommes du bois sacré (2002)
In this French-language film, Gilbert Kelner documents modern Bwiti practices and Babongo perspectives on iboga. Odisea broadcast a Spanish-dubbed version titled Los Hombres de la Madera Sagrada ("The Men of the Sacred Wood").
Ibogaine: Rite of Passage (2004)
Directed by Ben Deloenen. A 34-year-old heroin addict undergoes ibogaine treatment with Dr Martin Polanco at the Ibogaine Association, a clinic in Rosarito, Mexico. Deloenen interviews people formerly addicted to heroin, cocaine, and methamphetamine, who share their perspectives about ibogaine treatment. In Gabon, a Babongo woman receives iboga root for her depressive malaise. Deloenen visually contrasts this Western, clinical use of ibogaine with the Bwiti use of iboga root bark, but emphasizes the Western context.
"Babongo" (2005)
In this episode (series 1, episode 4) of the English documentary series Tribe, presenter Bruce Parry ingests iboga during his time with the Babongo. BBC 2 aired the episode on January 25, 2005.
"Dosed" (2019) This documentary depicts the battle of an opioid addict against her addiction through psychedelic and iboga treatments.
"Synthetic Ibogaine – Natural Tramadol" (2021)
In this episode (series 3, episode 4) of the American documentary series Hamilton's Pharmacopeia, presenter Hamilton Morris joins an Iboga ceremony in Gabon and later interviews Chris Jenks who shows a method to produce ibogaine from Voacanga africana.
Gallery
See also
Entheogen
Psychoactive plant
References
External links
Holy War- A Tale of Bwiti Initiation, Part 1 by Jim Dziura (Psychedelic Times, 2018)
Holy War- A Tale of Bwiti Initiation, Part 2 by Jim Dziura (Psychedelic Times, 2018)
Biopiracy
Entheogens
Herbal and fungal hallucinogens
Oneirogens
Rauvolfioideae
Shrubs
Taxa named by Henri Ernest Baillon | Tabernanthe iboga | [
"Biology"
] | 2,358 | [
"Biodiversity",
"Biopiracy"
] |
348,973 | https://en.wikipedia.org/wiki/Antichain | In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two distinct elements in the subset are incomparable.
The size of the largest antichain in a partially ordered set is known as its width. By Dilworth's theorem, this also equals the minimum number of chains (totally ordered subsets) into which the set can be partitioned. Dually, the height of the partially ordered set (the length of its longest chain) equals by Mirsky's theorem the minimum number of antichains into which the set can be partitioned.
The family of all antichains in a finite partially ordered set can be given join and meet operations, making them into a distributive lattice. For the partially ordered system of all subsets of a finite set, ordered by set inclusion, the antichains are called Sperner families
and their lattice is a free distributive lattice, with a Dedekind number of elements. More generally, counting the number of antichains of a finite partially ordered set is #P-complete.
Definitions
Let be a partially ordered set. Two elements and of a partially ordered set are called comparable if If two elements are not comparable, they are called incomparable; that is, and are incomparable if neither
A chain in is a subset in which each pair of elements is comparable; that is, is totally ordered. An antichain in is a subset of in which each pair of different elements is incomparable; that is, there is no order relation between any two different elements in
(However, some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than two distinct elements of the antichain.)
Height and width
A maximal antichain is an antichain that is not a proper subset of any other antichain. A maximum antichain is an antichain that has cardinality at least as large as every other antichain. The width of a partially ordered set is the cardinality of a maximum antichain. Any antichain can intersect any chain in at most one element, so, if we can partition the elements of an order into k chains, then the width of the order must be at most k (if an antichain had more than k elements, then by the pigeonhole principle two of its elements would belong to the same chain, a contradiction). Dilworth's theorem states that this bound can always be reached: there always exists an antichain, and a partition of the elements into chains, such that the number of chains equals the number of elements in the antichain, which must therefore also equal the width. Similarly, one can define the height of a partial order to be the maximum cardinality of a chain. Mirsky's theorem states that in any partial order of finite height, the height equals the smallest number of antichains into which the order may be partitioned.
Sperner families
An antichain in the inclusion ordering of subsets of an n-element set is known as a Sperner family. The number of different Sperner families is counted by the Dedekind numbers, the first few of which are
2, 3, 6, 20, 168, 7581, 7828354, 2414682040998, 56130437228687557907788.
Even the empty set has two antichains in its power set: one containing a single set (the empty set itself) and one containing no sets.
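As a concrete check of these counts, the brute-force sketch below enumerates every family of subsets of a 3-element set and keeps those that are antichains under inclusion; the helper names are ad hoc, and the expected total is the Dedekind number 20.

    from itertools import combinations

    def powerset(s):
        # All subsets of s, represented as frozensets.
        return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    def is_antichain(family):
        # No member of the family may be a proper subset of another.
        return all(not (a < b or b < a) for a, b in combinations(family, 2))

    subsets = powerset({1, 2, 3})
    count = sum(1
                for r in range(len(subsets) + 1)
                for family in combinations(subsets, r)
                if is_antichain(family))
    print(count)  # expected: 20, the Dedekind number for a 3-element set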
Join and meet operations
Any antichain corresponds to a lower set
In a finite partial order (or more generally a partial order satisfying the ascending chain condition) all lower sets have this form. The union of any two lower sets is another lower set, and the union operation corresponds in this way to a join operation
on antichains:
Similarly, we can define a meet operation on antichains, corresponding to the intersection of lower sets:
The join and meet operations on all finite antichains of finite subsets of a set define a distributive lattice, the free distributive lattice generated by Birkhoff's representation theorem for distributive lattices states that every finite distributive lattice can be represented via join and meet operations on antichains of a finite partial order, or equivalently as union and intersection operations on the lower sets of the partial order.
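The sketch below carries out the two operations in the subset lattice of a 3-element set by passing through the corresponding lower sets; the toy poset and the function names are illustrative assumptions rather than notation taken from the text above.

    from itertools import combinations

    def lower_set(antichain, elements):
        # All poset elements lying below (or equal to) some member of the antichain.
        return {x for x in elements if any(x <= a for a in antichain)}

    def maximal(part):
        # Maximal elements of a subset of the poset: the corresponding antichain.
        return {x for x in part if not any(x < y for y in part)}

    def join(a, b, elements):
        return maximal(lower_set(a, elements) | lower_set(b, elements))

    def meet(a, b, elements):
        return maximal(lower_set(a, elements) & lower_set(b, elements))

    # Toy poset: all subsets of {1, 2, 3} ordered by inclusion.
    elements = [frozenset(c) for r in range(4) for c in combinations({1, 2, 3}, r)]
    A = {frozenset({1, 2})}
    B = {frozenset({2, 3})}
    print(join(A, B, elements))  # expected: {frozenset({1, 2}), frozenset({2, 3})}
    print(meet(A, B, elements))  # expected: {frozenset({2})}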
Computational complexity
A maximum antichain (and its size, the width of a given partially ordered set) can be found in polynomial time.
Counting the number of antichains in a given partially ordered set is #P-complete.
References
External links
Order theory | Antichain | [
"Mathematics"
] | 992 | [
"Order theory"
] |
348,976 | https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass%20theorem | In transcendental number theory, the Lindemann–Weierstrass theorem is a result that is very useful in establishing the transcendence of numbers. It states the following:
In other words, the extension field has transcendence degree over .
An equivalent formulation from , is the following: This equivalence transforms a linear relation over the algebraic numbers into an algebraic relation over by using the fact that a symmetric polynomial whose arguments are all conjugates of one another gives a rational number.
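A commonly quoted symbolic form of the two formulations (a standard rendering; the symbols here are chosen for illustration) is the following:

    \documentclass{article}
    \usepackage{amsmath,amssymb,amsthm}
    \newtheorem{theorem}{Theorem}
    \begin{document}
    \begin{theorem}[Lindemann--Weierstrass]
    If $\alpha_1,\dots,\alpha_n$ are algebraic numbers that are linearly independent
    over $\mathbb{Q}$, then $e^{\alpha_1},\dots,e^{\alpha_n}$ are algebraically
    independent over $\mathbb{Q}$.
    \end{theorem}
    % Baker's equivalent formulation:
    \begin{theorem}[Equivalent formulation]
    If $\alpha_1,\dots,\alpha_n$ are distinct algebraic numbers, then
    $a_1 e^{\alpha_1}+\dots+a_n e^{\alpha_n} \neq 0$ for any algebraic numbers
    $a_1,\dots,a_n$ that are not all zero.
    \end{theorem}
    \end{document}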
The theorem is named for Ferdinand von Lindemann and Karl Weierstrass. Lindemann proved in 1882 that e^α is transcendental for every non-zero algebraic number α, thereby establishing that π is transcendental (see below). Weierstrass proved the above more general statement in 1885.
The theorem, along with the Gelfond–Schneider theorem, is extended by Baker's theorem, and all of these would be further generalized by Schanuel's conjecture.
Naming convention
The theorem is also known variously as the Hermite–Lindemann theorem and the Hermite–Lindemann–Weierstrass theorem. Charles Hermite first proved the simpler theorem where the exponents are required to be rational integers and linear independence is only assured over the rational integers, a result sometimes referred to as Hermite's theorem. Although that appears to be a special case of the above theorem, the general result can be reduced to this simpler case. Lindemann was the first to allow algebraic numbers into Hermite's work in 1882. Shortly afterwards Weierstrass obtained the full result, and further simplifications have been made by several mathematicians, most notably by David Hilbert and Paul Gordan.
Transcendence of e and π
The transcendence of e and π are direct corollaries of this theorem.
Suppose is a non-zero algebraic number; then is a linearly independent set over the rationals, and therefore by the first formulation of the theorem is an algebraically independent set; or in other words is transcendental. In particular, is transcendental. (A more elementary proof that is transcendental is outlined in the article on transcendental numbers.)
Alternatively, by the second formulation of the theorem, if is a non-zero algebraic number, then is a set of distinct algebraic numbers, and so the set is linearly independent over the algebraic numbers and in particular cannot be algebraic and so it is transcendental.
To prove that π is transcendental, we prove that it is not algebraic. If π were algebraic, πi would be algebraic as well, and then by the Lindemann–Weierstrass theorem e^(πi) = −1 (see Euler's identity) would be transcendental, a contradiction. Therefore π is not algebraic, which means that it is transcendental.
A slight variant on the same proof will show that if is a non-zero algebraic number then and their hyperbolic counterparts are also transcendental.
p-adic conjecture
Modular conjecture
An analogue of the theorem involving the modular function was conjectured by Daniel Bertrand in 1997, and remains an open problem. Writing for the square of the nome and the conjecture is as follows.
Lindemann–Weierstrass theorem
Proof
The proof relies on two preliminary lemmas. Notice that Lemma B itself is already sufficient to deduce the original statement of Lindemann–Weierstrass theorem.
Preliminary lemmas
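For orientation, Lemma B is usually stated in roughly the following form; Lemma A is a more technical variant in which the exponents run over complete sets of conjugate algebraic numbers (the roots of non-zero integer polynomials), with one integer coefficient per conjugate set.

    \documentclass{article}
    \usepackage{amsmath,amsthm}
    \newtheorem{lemma}{Lemma}
    \begin{document}
    \begin{lemma}[Lemma B, standard form]
    If $b(1),\dots,b(n)$ are integers and $\gamma(1),\dots,\gamma(n)$ are distinct
    algebraic numbers, then
    \[
      b(1)e^{\gamma(1)} + \dots + b(n)e^{\gamma(n)} = 0
    \]
    implies $b(i) = 0$ for all $i$.
    \end{lemma}
    \end{document}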
Proof of Lemma A. To simplify the notation set:
Then the statement becomes
Let be a prime number and define the following polynomials:
where is a non-zero integer such that are all algebraic integers. Define
Using integration by parts we arrive at
where is the degree of , and is the j-th derivative of . This also holds for s complex (in this case the integral has to be intended as a contour integral, for example along the straight segment from 0 to s) because
is a primitive of .
Consider the following sum:
In the last line we assumed that the conclusion of the Lemma is false. In order to complete the proof we need to reach a contradiction. We will do so by estimating in two different ways.
First is an algebraic integer which is divisible by p! for and vanishes for unless and , in which case it equals
This is not divisible by p when p is large enough because otherwise, putting
(which is a non-zero algebraic integer) and calling the product of its conjugates (which is still non-zero), we would get that p divides , which is false.
So is a non-zero algebraic integer divisible by (p − 1)!. Now
Since each is obtained by dividing a fixed polynomial with integer coefficients by , it is of the form
where is a polynomial (with integer coefficients) independent of i. The same holds for the derivatives .
Hence, by the fundamental theorem of symmetric polynomials,
is a fixed polynomial with rational coefficients evaluated in (this is seen by grouping the same powers of appearing in the expansion and using the fact that these algebraic numbers are a complete set of conjugates). So the same is true of , i.e. it equals , where G is a polynomial with rational coefficients independent of i.
Finally is rational (again by the fundamental theorem of symmetric polynomials) and is a non-zero algebraic integer divisible by (since the 's are algebraic integers divisible by ). Therefore
However one clearly has:
where is the polynomial whose coefficients are the absolute values of those of fi (this follows directly from the definition of ). Thus
and so by the construction of the 's we have for a sufficiently large C independent of p, which contradicts the previous inequality. This proves Lemma A. ∎
Proof of Lemma B: Assuming
we will derive a contradiction, thus proving Lemma B.
Let us choose a polynomial with integer coefficients which vanishes on all the 's and let be all its distinct roots. Let b(n + 1) = ... = b(N) = 0.
The polynomial
vanishes at by assumption. Since the product is symmetric, for any the monomials and have the same coefficient in the expansion of P.
Thus, expanding accordingly and grouping the terms with the same exponent, we see that the resulting exponents form a complete set of conjugates and, if two terms have conjugate exponents, they are multiplied by the same coefficient.
So we are in the situation of Lemma A. To reach a contradiction it suffices to see that at least one of the coefficients is non-zero. This is seen by equipping with the lexicographic order and by choosing for each factor in the product the term with non-zero coefficient which has maximum exponent according to this ordering: the product of these terms has non-zero coefficient in the expansion and does not get simplified by any other term. This proves Lemma B. ∎
Final step
We turn now to prove the theorem: Let a(1), ..., a(n) be non-zero algebraic numbers, and α(1), ..., α(n) distinct algebraic numbers. Then let us assume that:
We will show that this leads to contradiction and thus prove the theorem. The proof is very similar to that of Lemma B, except that this time the choices are made over the a(i)'s:
For every i ∈ {1, ..., n}, a(i) is algebraic, so it is a root of an irreducible polynomial with integer coefficients of degree d(i). Let us denote the distinct roots of this polynomial a(i)1, ..., a(i)d(i), with a(i)1 = a(i).
Let S be the functions σ which choose one element from each of the sequences (1, ..., d(1)), (1, ..., d(2)), ..., (1, ..., d(n)), so that for every 1 ≤ i ≤ n, σ(i) is an integer between 1 and d(i). We form the polynomial in the variables
Since the product is over all the possible choice functions σ, Q is symmetric in for every i. Therefore Q is a polynomial with integer coefficients in elementary symmetric polynomials of the above variables, for every i, and in the variables yi. Each of the latter symmetric polynomials is a rational number when evaluated in .
The evaluated polynomial vanishes because one of the choices is just σ(i) = 1 for all i, for which the corresponding factor vanishes according to our assumption above. Thus, the evaluated polynomial is a sum of the form
where we already grouped the terms with the same exponent. So in the left-hand side we have distinct values β(1), ..., β(N), each of which is still algebraic (being a sum of algebraic numbers) and coefficients .
The sum is nontrivial: if is maximal in the lexicographic order, the coefficient of is just a product of a(i)j's (with possible repetitions), which is non-zero.
By multiplying the equation with an appropriate integer factor, we get an identical equation except that now b(1), ..., b(N) are all integers. Therefore, according to Lemma B, the equality cannot hold, and we are led to a contradiction which completes the proof. ∎
Note that Lemma A is sufficient to prove that e is irrational, since otherwise we may write e = p / q, where both p and q are non-zero integers, but by Lemma A we would have qe − p ≠ 0, which is a contradiction. Lemma A also suffices to prove that π is irrational, since otherwise we may write π = k / n, where both k and n are integers, and then ±iπ are the roots of n^2x^2 + k^2 = 0; thus 2 − 1 − 1 = 2e^0 + e^(iπ) + e^(−iπ) ≠ 0; but this is false.
Similarly, Lemma B is sufficient to prove that e is transcendental, since Lemma B says that if a0, ..., an are integers not all of which are zero, then
Lemma B also suffices to prove that π is transcendental, since otherwise we would have 1 + e^(iπ) ≠ 0.
Equivalence of the two statements
Baker's formulation of the theorem clearly implies the first formulation. Indeed, if are algebraic numbers that are linearly independent over , and
is a polynomial with rational coefficients, then we have
and since are algebraic numbers which are linearly independent over the rationals, the numbers are algebraic and they are distinct for distinct n-tuples . So from Baker's formulation of the theorem we get for all n-tuples .
Now assume that the first formulation of the theorem holds. For Baker's formulation is trivial, so let us assume that , and let be non-zero algebraic numbers, and distinct algebraic numbers such that:
As seen in the previous section, and with the same notation used there, the value of the polynomial
at
has an expression of the form
where we have grouped the exponentials having the same exponent. Here, as proved above, are rational numbers, not all equal to zero, and each exponent is a linear combination of with integer coefficients. Then, since and are pairwise distinct, the -vector subspace of generated by is not trivial and we can pick to form a basis for For each , we have
For each let be the least common multiple of all the for , and put . Then are algebraic numbers, they form a basis of , and each is a linear combination of the with integer coefficients. By multiplying the relation
by , where is a large enough positive integer, we get a non-trivial algebraic relation with rational coefficients connecting , against the first formulation of the theorem.
See also
Gelfond–Schneider theorem
Baker's theorem; an extension of Gelfond–Schneider theorem
Schanuel's conjecture; if proven, it would imply both the Gelfond–Schneider theorem and the Lindemann–Weierstrass theorem
Notes
References
Further reading
External links
Articles containing proofs
E (mathematical constant)
Exponentials
Pi
Theorems in number theory
Transcendental numbers | Lindemann–Weierstrass theorem | [
"Mathematics"
] | 2,547 | [
"Mathematical theorems",
"E (mathematical constant)",
"Theorems in number theory",
"Exponentials",
"Articles containing proofs",
"Mathematical problems",
"Pi",
"Number theory"
] |
349,014 | https://en.wikipedia.org/wiki/Linearly%20ordered%20group | In mathematics, specifically abstract algebra, a linearly ordered or totally ordered group is a group G equipped with a total order "≤" that is translation-invariant. This may have different meanings. We say that (G, ≤) is a:
left-ordered group if ≤ is left-invariant, that is a ≤ b implies ca ≤ cb for all a, b, c in G,
right-ordered group if ≤ is right-invariant, that is a ≤ b implies ac ≤ bc for all a, b, c in G,
bi-ordered group if ≤ is bi-invariant, that is it is both left- and right-invariant.
A group G is said to be left-orderable (or right-orderable, or bi-orderable) if there exists a left- (or right-, or bi-) invariant order on G. A simple necessary condition for a group to be left-orderable is to have no elements of finite order; however this is not a sufficient condition. It is equivalent for a group to be left- or right-orderable; however there exist left-orderable groups which are not bi-orderable.
Further definitions
In this section is a left-invariant order on a group with identity element . All that is said applies to right-invariant orders with the obvious modifications. Note that being left-invariant is equivalent to the order defined by if and only if being right-invariant. In particular a group being left-orderable is the same as it being right-orderable.
In analogy with ordinary numbers we call an element of an ordered group positive if . The set of positive elements in an ordered group is called the positive cone, it is often denoted with ; the slightly different notation is used for the positive cone together with the identity element.
The positive cone characterises the order ; indeed, by left-invariance we see that if and only if . In fact a left-ordered group can be defined as a group together with a subset satisfying the two conditions that:
for we have also ;
let , then is the disjoint union of and .
The order associated with is defined by ; the first condition amounts to left-invariance and the second to the order being well-defined and total. The positive cone of is .
The left-invariant order is bi-invariant if and only if it is conjugacy invariant, that is if then for any we have as well. This is equivalent to the positive cone being stable under inner automorphisms.
If , then the absolute value of , denoted by , is defined to be:
If in addition the group is abelian, then for any a triangle inequality is satisfied: .
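Spelled out in multiplicative notation (with e the identity element), the usual definition and the abelian triangle inequality read as follows; the second display is written additively, as is customary for abelian groups.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Absolute value in a linearly ordered group, multiplicative notation:
    \[
      |a| :=
      \begin{cases}
        a & \text{if } a \ge e,\\
        a^{-1} & \text{otherwise,}
      \end{cases}
      \qquad\text{equivalently}\qquad |a| = \max\{a,\ a^{-1}\}.
    \]
    % Triangle inequality when the group is abelian (written additively):
    \[
      |a + b| \le |a| + |b|.
    \]
    \end{document}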
Examples
Any left- or right-orderable group is torsion-free, that is it contains no elements of finite order besides the identity. Conversely, F. W. Levi showed that a torsion-free abelian group is bi-orderable; this is still true for nilpotent groups but there exist torsion-free, finitely presented groups which are not left-orderable.
Archimedean ordered groups
Otto Hölder showed that every Archimedean group (a bi-ordered group satisfying an Archimedean property) is isomorphic to a subgroup of the additive group of real numbers, .
If we write the Archimedean l.o. group multiplicatively, this may be shown by considering the Dedekind completion of the closure of an l.o. group under nth roots. We endow this space with the usual topology of a linear order, and then it can be shown that for each the exponential maps are well-defined, order-preserving/reversing topological group isomorphisms. Completing an l.o. group can be difficult in the non-Archimedean case. In these cases, one may classify a group by its rank, which is related to the order type of the largest sequence of convex subgroups.
Other examples
Free groups are left-orderable. More generally this is also the case for right-angled Artin groups. Braid groups are also left-orderable.
The group given by the presentation is torsion-free but not left-orderable; note that it is a 3-dimensional crystallographic group (it can be realised as the group generated by two glided half-turns with orthogonal axes and the same translation length), and it is the same group that was proven to be a counterexample to the unit conjecture. More generally the topic of orderability of 3-manifold groups is interesting for its relation with various topological invariants. There exists a 3-manifold group which is left-orderable but not bi-orderable (in fact it does not satisfy the weaker property of being locally indicable).
Left-orderable groups have also attracted interest from the perspective of dynamical systems as it is known that a countable group is left-orderable if and only if it acts on the real line by homeomorphisms. Non-examples related to this paradigm are lattices in higher rank Lie groups; it is known that (for example) finite-index subgroups in are not left-orderable; a wide generalisation of this has been recently announced.
See also
Cyclically ordered group
Hahn embedding theorem
Partially ordered group
Notes
References
Ordered groups | Linearly ordered group | [
"Mathematics"
] | 1,099 | [
"Ordered groups",
"Order theory"
] |
349,049 | https://en.wikipedia.org/wiki/List%20of%20mountains%20on%20Mars | This is a list of all named mountains on Mars.
Naming
Most Martian mountains have a name including one of the following astrogeological terms:
Mons — large, isolated, mountain; may or may not be of volcanic origin.
plural montes — mountain range.
Tholus — small dome-shaped mountain or hill.
plural tholi — group of (usually not contiguous) small mountains.
Dorsum — long low range. Name type not present on Mars.
plural dorsa
Patera — dish-shaped depressions on volcano peaks; not very high compared to diameter.
plural paterae
Caveats
Listed are the elevations of the peaks (the vertical position relative to the areoid, which is the Martian vertical datum — the surface defined as zero elevation by average martian atmospheric pressure and planet radius), which is not the height above the surrounding terrain (topographic prominence). Listed mons elevation is the highest point (at 16 pixels/degree) within the feature. Listed patera elevation is the average elevation of the shallow dish-shaped depression (the actual 'patera') at the summit.
List
Gallery
See also
Notes
References
United States Geological Survey data files megt90n000eb.img and megt90n000eb.lbl
External links
Olympus Mons, Arsia Mons, Alba Patera: Viking Orbiter Views of Mars by the Viking Orbiter Imaging Team.
Ascraeus Mons: Malin Space Science Systems Release No. MOC2-950 via the Mars Global Surveyor.
Pavonis Mons: Malin Space Science Systems Release No. MOC2-481 via the Mars Global Surveyor.
Elysium Mons: Malin Space Science Systems via the Mars Global Surveyor.
Mars features database distributed with xephem v3.3 (Warning, it uses West coordinates, and table should be in East coordinates)
IAU, USGS: Martian system nomenclature
IAU, USGS: Mars nomenclature: mountains (planetocentric east longitude)
IAU, USGS: Mars nomenclature: tholus (planetocentric east longitude)
Peter Grego, Mars and how to Observe it (List of elevations of named Martian mountains)
highest things
Mars
surface features of Mars | List of mountains on Mars | [
"Astronomy"
] | 448 | [
"Lists of extraterrestrial mountains",
"Astronomy-related lists"
] |
349,136 | https://en.wikipedia.org/wiki/Gordon%20Bell | Chester Gordon Bell (August 19, 1934 – May 17, 2024) was an American electrical engineer and manager. An early employee of Digital Equipment Corporation (DEC), from 1960–1966, Bell designed several of their PDP machines and later served as the company's Vice President of Engineering from 1972–1983, overseeing development of the VAX computer systems. Bell's later career included roles as an entrepreneur, investor, founding Assistant Director of NSF's Computing and Information Science and Engineering Directorate from 1986–1987, and researcher emeritus at Microsoft Research from 1995–2015.
Early life and education
Gordon Bell was born in Kirksville, Missouri. He grew up helping with the family business, Bell Electric, repairing appliances and wiring homes.
Bell received a BS (1956), and MS (1957) in electrical engineering from MIT. He then went to the New South Wales University of Technology (now UNSW) in Australia on a Fulbright Scholarship in 1957–58, where he taught classes on computer design, programmed one of the first computers to arrive in Australia (called UTECOM, an English Electric DEUCE), and published his first academic paper. Returning to the US, he worked in the MIT Speech Computation Laboratory under Professor Ken Stevens, where he wrote the first analysis by synthesis program.
Career
Digital Equipment Corporation
The DEC founders Ken Olsen and Harlan Anderson recruited him for their new company in 1960, where he designed the I/O subsystem of the PDP-1, including the first UART. Bell was the architect of the PDP-4, and PDP-6. Other architectural contributions were to the PDP-5 and PDP-11 Unibus and General Registers architecture.
After DEC, Bell went to Carnegie Mellon University in 1966 to teach computer science. He returned to DEC in 1972 as vice-president of engineering, where he was in charge of the successful VAX computer.
Entrepreneur and policy advisor
Bell reportedly later came to find work at DEC stressful, and suffered a heart attack in March 1983. After he recovered and shortly after he returned to work, he resigned from the company in the summer. Afterwards, he founded Encore Computer, one of the first shared memory, multiple-microprocessor computers to use the snooping cache structure.
During the 1980s he became involved with public policy, becoming the first and founding Assistant Director of the CISE Directorate of the NSF, and led the cross-agency group that specified the NREN.
Bell also established the ACM Gordon Bell Prize (administered by the ACM and IEEE) in 1987 to encourage development in parallel processing. The first Gordon Bell Prize was won by researchers at the Parallel Processing Division of Sandia National Laboratory for work done on the 1000-processor nCUBE 10 hypercube.
He was a founding member of Ardent Computer in 1986, becoming VP of R&D in 1988, and remained until it merged with Stellar in 1989, to become Stardent Computer.
Microsoft Research
Between 1991 and 1995, Bell advised Microsoft in its efforts to start a research group, then joined it full-time in August 1995, studying telepresence and related ideas. He was the experiment subject for the MyLifeBits project, an experiment in life-logging (not the same as life-blogging). This was an attempt to fulfill Vannevar Bush's vision of an automated store of the documents, pictures (including those taken automatically), and sounds an individual has experienced in his lifetime, to be accessed with speed and ease. For this, Bell digitized all documents he has read or produced, CDs, emails, and so on.
Death
Bell died of aspiration pneumonia at his home in Coronado, California, on May 17, 2024. He was 89.
Bell's law of computer classes
Bell's law of computer classes was first described in 1972 with the emergence of a new, lower-priced microcomputer class based on the microprocessor. Established market class computers are introduced at a constant price with increasing functionality and performance. Technology advances in semiconductors, storage, interfaces and networks enable a new computer class (platform) to form about every decade to serve a new need. Each new, usually lower-priced, class is maintained as a quasi-independent industry (market). Classes include: mainframes (1960s), minicomputers (1970s), networked workstations and personal computers (1980s), browser-web-server structure (1990s), palmtop computing (1995), web services (2000s), convergence of cell phones and computers (2003), and Wireless Sensor Networks aka motes (2004). Bell predicted that home and body area networks would form by 2010.
Legacy and honors
Bell has been described as "a giant in the computer industry", "an architect of our digital age", and "father of the minicomputer".
Bell was elected a member of the National Academy of Engineering in 1977 for contributions to the architecture of minicomputers. He is also a Fellow of the American Academy of Arts and Sciences (1994), American Association for the Advancement of Science (1983), Association for Computing Machinery (1994), IEEE (1974), and member of the National Academy of Sciences (2007), and Fellow of the Australian Academy of Technological Sciences and Engineering (2009).
He is also a member of the advisory board of TTI/Vanguard and a former member of the Sector Advisory Committee of Australia's Information and Communication Technology Division of the Commonwealth Scientific and Industrial Research Organisation.
Bell was the first recipient of the IEEE John von Neumann Medal, in 1992. His other awards include Fellow of the Computer History Museum, the AeA Inventor Award, the Vladimir Karapetoff Outstanding Technical Achievement Award of Eta Kappa Nu, and the 1991 National Medal of Technology by President George H. W. Bush. He was also named an Eta Kappa Nu Eminent Member in 2007.
In 1993, Worcester Polytechnic Institute awarded Bell an Honorary Doctor of Engineering, and in 2010, Bell received an honorary Doctor of Science and Technology degree from Carnegie Mellon University. The latter award referred to him as "the father of the minicomputer".
Bell co-founded The Computer Museum in Boston, Massachusetts, with his wife Gwen Bell in 1979. He was a founding board member of its successor, the Computer History Museum located in Mountain View, California. In 2003, he was made a Fellow of the Museum "for his key role in the minicomputer revolution, and for contributions as a computer architect and entrepreneur". The story of the museum's evolution beginning in the early 1970s with Ken Olsen at Digital Equipment Corporation is described in the Microsoft Technical Report MSR-TR-2011-44, "Out of a Closet: The Early Years of The Computer [x]* Museum". A timeline of computing historical machines, events, and people is given on his website. It covers from prehistoric times to the present.
Books
(with Allen Newell) Computer Structures: Readings and Examples (1971)
(with C. Mudge and J. McNamara) Computer Engineering (1978)
(with Dan Siewiorek and Allen Newell) Computer Structures: Principles and Examples (1982)
(with J. McNamara) High Tech Ventures: The Guide for Entrepreneurial Success (1991)
(with Jim Gemmell) Total Recall: How the E-Memory Revolution will Change Everything (2009)
(with Jim Gemmell) Your Life Uploaded: The Digital Way to Better Memory, Health, and Productivity (2010)
See also
MyLifeBits
Microsoft SenseCam
Lifelog
References
Further reading
Wilkinson, Alec, "Remember This?" The New Yorker, 28 May 2007, pp. 38–44.
External links
CBS Evening News video interview on the MyLifeBits Project, 2007.
1934 births
2024 deaths
American computer scientists
Computer designers
Computer hardware engineers
Carnegie Mellon University faculty
Digital Equipment Corporation people
Fellows of the IEEE
Fellows of the American Academy of Arts and Sciences
MIT School of Engineering alumni
Microsoft employees
Microsoft Research people
National Medal of Technology recipients
People from Kirksville, Missouri
Members of the United States National Academy of Engineering
Fellows of the Australian Academy of Technological Sciences and Engineering
Members of the United States National Academy of Sciences
20th-century American engineers
21st-century American scientists
Silicon Valley people
1994 fellows of the Association for Computing Machinery | Gordon Bell | [
"Technology"
] | 1,691 | [
"Lifelogging",
"Computing and society"
] |
349,143 | https://en.wikipedia.org/wiki/List%20of%20web%20directories | A Web directory is a listing of Websites organized in a hierarchy or interconnected list of categories.
The following is a list of notable Web directory services.
General
DOAJ.org – Directory of Open Access Journals
DMOZ (also known as Open Directory Project) – was at one point the largest directory of the Web. Its open content was mirrored at many sites. Offline since March 2017. Continued since August 2018 as Curlie.org.
Jasmine Directory – lists websites by topic and by region, specializing in business websites.
Sources – general subject web portal for journalists, freelance writers, editors, authors and researchers; in addition to a search engine it includes a subject-based directory.
World Wide Web Virtual Library (VLIB) – oldest directory of the Web.
Business directories
Business.com – Integrated directory of knowledge resources and companies, that charges a fee for listing review and operates as a pay per click search engine.
Yell – is a digital marketing and online directory business in the United Kingdom
Niche
Business.com – Integrated directory of knowledge resources and companies, that charges a fee for listing review and operates as a pay per click search engine.
Library and Archival Exhibitions on the Web – international database of online exhibitions which is a service of the Smithsonian Institution Libraries.
ProgrammableWeb – resource on APIs that provides a directory of APIs.
Virtual Library museums pages – directory of museum websites around the world.
Regional
2345.com – Chinese web directory founded in 2005. The website is the second most used web directory in China.
Alleba – Filipino search engine website, with directory.
Dalilmasr – Egyptian online directory
Timway – web portal and directory primarily serving Hong Kong.
Defunct directories
AboutUs.org – directory from 2005 to 2013.
Anime Web Turnpike – was a web directory founded in August 1995 by Jay Fubler Harvey. It served as a large database of links to various anime and manga websites.
Biographicon – directory of biographical entries.
Google Directory – copy of DMOZ directory, with sites listed in PageRank order within each category. Closed in July 2011.
Internet Public Library – librarian-edited directory, product of a merger with the Librarians' Internet Index (LII) in 2010. Closed in June 2015.
Intute – directory of websites for study and research. Maintenance stopped in July 2011, archives remain available.
LookSmart – operated several vertical directories from 1995 to 2006.
Lycos' TOP 5% – from 1995 until 2000 it aimed to list the Web's top 5% of Websites.
Yahoo! Directory – first service that Yahoo! offered. Closed in December 2014.
Yahoo! Kids – oldest online search directory for children, until its discontinuation as of April 30, 2013.
Zeal – volunteer-built Web directory; it was introduced in 1999, acquired by LookSmart in 2000, and shut down in 2006.
See also
List of search engines
Shopping directory
Web directories | List of web directories | [
"Technology"
] | 597 | [
"Computing-related lists",
"Mobile content",
"Internet-related lists",
"Social software"
] |
349,218 | https://en.wikipedia.org/wiki/Third-party%20software%20component | In computer programming, a third-party software component is a reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform. The third-party software component market thrives because many programmers believe that component-oriented development improves the efficiency and the quality of developing custom applications. Common third-party software includes macros, bots, and software/scripts to be run as add-ons for popular developing software. In the case of operating systems such as Windows XP, Vista or Seven, there are applications installed by default, such as Windows Media Player or Internet Explorer.
See also
Middleware
Enterprise Java Beans
VCL / CLX
KParts (KDE)
Video-game third-party developers
Third-party source
References
Component-based software engineering
Computer programming | Third-party software component | [
"Technology",
"Engineering"
] | 166 | [
"Computer programming",
"Software engineering",
"Component-based software engineering",
"Computers",
"Components"
] |
349,223 | https://en.wikipedia.org/wiki/Group%20ring | In algebra, a group ring is a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is the set of elements of the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring.
If the ring is commutative then the group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra.
The apparatus of group rings is especially useful in the theory of group representations.
Definition
Let G be a group, written multiplicatively, and let R be a ring. The group ring of G over R, which we will denote by R[G], or simply RG, is the set of mappings f : G → R of finite support (f(g) is nonzero for only finitely many elements g), where the module scalar product αf of a scalar α in R and a mapping f is defined as the mapping g ↦ α·f(g), and the module group sum of two mappings f and h is defined as the mapping g ↦ f(g) + h(g). To turn the additive group R[G] into a ring, we define the product of f and h to be the mapping whose value at an element g of G is the sum of the products f(u)·h(v) over all pairs (u, v) of elements of G with uv = g.
The summation is legitimate because f and h are of finite support, and the ring axioms are readily verified.
Some variations in the notation and terminology are in use. In particular, the mappings such as are sometimes written as what are called "formal linear combinations of elements of with coefficients in
":
or simply
Note that if the ring R is in fact a field K, then the module structure of the group ring RG is in fact a vector space over K.
Examples
1. Let , the cyclic group of order 3, with generator and identity element 1G. An element r of C[G] can be written as
where z0, z1 and z2 are in C, the complex numbers. This is the same thing as a polynomial ring in variable such that i.e. C[G] is isomorphic to the ring C[]/.
Writing a different element s as , their sum is
and their product is
Notice that the identity element 1G of G induces a canonical embedding of the coefficient ring (in this case C) into C[G]; however strictly speaking the multiplicative identity element of C[G] is 1⋅1G where the first 1 comes from C and the second from G. The additive identity element is zero.
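This example can also be checked computationally. The following short Python sketch (an illustrative aside; the dictionary representation and the function name are ad hoc, not taken from the article) stores an element of C[G] as a mapping from exponents of the generator to coefficients, and multiplies two elements by convolving coefficients with the exponents reduced modulo 3:

# An element of C[C3] is stored as a dict {k: z_k}, meaning the sum of z_k * g**k.
def group_ring_product(r, s, order=3):
    # Multiply two elements of C[C_order]; the group law gives g**i * g**j = g**((i + j) % order).
    out = {}
    for i, a in r.items():
        for j, b in s.items():
            k = (i + j) % order
            out[k] = out.get(k, 0) + a * b
    return out

r = {0: 1, 1: 2}                  # 1 + 2g
s = {1: 1, 2: 3}                  # g + 3g**2
print(group_ring_product(r, s))   # {1: 1, 2: 5, 0: 6}, i.e. 6 + g + 5g**2

Because the exponents are reduced modulo 3, this is exactly multiplication in the quotient polynomial ring mentioned above.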
When G is a non-commutative group, one must be careful to preserve the order of the group elements (and not accidentally commute them) when multiplying the terms.
2. The ring of Laurent polynomials over a ring R is the group ring of the infinite cyclic group Z over R.
3. Let Q be the quaternion group with elements . Consider the group ring RQ, where R is the set of real numbers. An arbitrary element of this group ring is of the form
where is a real number.
Multiplication, as in any other group ring, is defined based on the group operation. For example,
Note that RQ is not the same as the skew field of quaternions over R. This is because the skew field of quaternions satisfies additional relations in the ring, such as , whereas in the group ring RQ, is not equal to . To be more specific, the group ring RQ has dimension 8 as a real vector space, while the skew field of quaternions has dimension 4 as a real vector space.
4. Another example of a non-abelian group ring is where is the symmetric group on 3 letters. This is not an integral domain since we have where the element is the transposition that swaps 1 and 2. Therefore the group ring need not be an integral domain even when the underlying ring is an integral domain.
Some basic properties
Using 1 to denote the multiplicative identity of the ring R, and denoting the group unit by 1G, the ring R[G] contains a subring isomorphic to R, and its group of invertible elements contains a subgroup isomorphic to G. For considering the indicator function of {1G}, which is the vector f defined by
the set of all scalar multiples of f is a subring of R[G] isomorphic to R. And if we map each element s of G to the indicator function of {s}, which is the vector f defined by
the resulting mapping is an injective group homomorphism (with respect to multiplication, not addition, in R[G]).
If R and G are both commutative (i.e., R is commutative and G is an abelian group), R[G] is commutative.
If H is a subgroup of G, then R[H] is a subring of R[G]. Similarly, if S is a subring of R, S[G] is a subring of R[G].
If G is a finite group of order greater than 1, then R[G] always has zero divisors. For example, consider an element g of G of order m > 1. Then 1 − g is a zero divisor:
(1 − g)(1 + g + ⋯ + g^(m−1)) = 1 − g^m = 0.
For example, consider the group ring Z[S3] and the element of order 3 g = (123). In this case,
(1 − (123))(1 + (123) + (132)) = 1 − (123)^3 = 1 − 1 = 0.
A related result: If the group ring is prime, then G has no nonidentity finite normal subgroup (in particular, G must be infinite).
Proof: Considering the contrapositive, suppose is a nonidentity finite normal subgroup of . Take . Since for any , we know , therefore . Taking , we have . By normality of , commutes with a basis of , and therefore
.
And we see that are not zero, which shows is not prime. This shows the original statement.
Group algebra over a finite group
Group algebras occur naturally in the theory of group representations of finite groups. The group algebra K[G] over a field K is essentially the group ring, with the field K taking the place of the ring. As a set and vector space, it is the free vector space on G over the field K. That is, for x in K[G],
The algebra structure on the vector space is defined using the multiplication in the group:
where on the left, g and h indicate elements of the group algebra, while the multiplication on the right is the group operation (denoted by juxtaposition).
Because the above multiplication can be confusing, one can also write the basis vectors of K[G] as eg (instead of g), in which case the multiplication is written as:
Interpretation as functions
Thinking of the free vector space as K-valued functions on G, the algebra multiplication is convolution of functions.
While the group algebra of a finite group can be identified with the space of functions on the group, for an infinite group these are different. The group algebra, consisting of finite sums, corresponds to functions on the group that vanish for cofinitely many points; topologically (using the discrete topology), these correspond to functions with compact support.
However, the group algebra K[G] and the space of functions are dual: given an element of the group algebra
and a function on the group these pair to give an element of K via
which is a well-defined sum because it is finite.
Representations of a group algebra
Taking K[G] to be an abstract algebra, one may ask for representations of the algebra acting on a K-vector space V of dimension d. Such a representation
is an algebra homomorphism from the group algebra to the algebra of endomorphisms of V, which is isomorphic to the ring of d × d matrices: . Equivalently, this is a left K[G]-module over the abelian group V.
Correspondingly, a group representation
is a group homomorphism from G to the group of linear automorphisms of V, which is isomorphic to the general linear group of invertible matrices: . Any such representation induces an algebra representation
simply by letting and extending linearly. Thus, representations of the group correspond exactly to representations of the algebra, and the two theories are essentially equivalent.
Regular representation
The group algebra is an algebra over itself; under the correspondence of representations over R and R[G] modules, it is the regular representation of the group.
Written as a representation, it is the representation g ↦ ρg with the action given by ρ(g)·eh = egh on basis vectors, or, on a general element r of R[G], ρ(g)·r = g·r.
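As a concrete illustration (an added sketch; the function names are ad hoc), the regular representation of the cyclic group of order 3 can be written down as 3 × 3 permutation matrices in Python, and the homomorphism property checked directly:

# Regular representation of C3: basis vectors e_0, e_1, e_2 are indexed by g**0, g**1, g**2.
# rho(a) is the matrix of left multiplication by g**a: it sends e_h to e_((a + h) % 3).
def rho(a, n=3):
    mat = [[0] * n for _ in range(n)]
    for h in range(n):
        mat[(a + h) % n][h] = 1   # column h has its single 1 in row (a + h) % n
    return mat

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Homomorphism check: rho(1) * rho(2) equals rho(0), since g * g**2 is the identity.
assert matmul(rho(1), rho(2)) == rho(0)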
Semisimple decomposition
The dimension of the vector space K[G] is just equal to the number of elements in the group. The field K is commonly taken to be the complex numbers C or the reals R, so that one discusses the group algebras C[G] or R[G].
The group algebra C[G] of a finite group over the complex numbers is a semisimple ring. This result, Maschke's theorem, allows us to understand C[G] as a finite product of matrix rings with entries in C. Indeed, if we list the complex irreducible representations of G as Vk for k = 1, . . . , m, these correspond to group homomorphisms and hence to algebra homomorphisms . Assembling these mappings gives an algebra isomorphism
where dk is the dimension of Vk. The subalgebra of C[G] corresponding to End(Vk) is the two-sided ideal generated by the idempotent
where is the character of Vk. These form a complete system of orthogonal idempotents, so that , for j ≠ k, and . The isomorphism is closely related to Fourier transform on finite groups.
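In standard notation (a reconstruction added here, not a quotation of the article's own displayed formulas), the isomorphism and the idempotents read
\[
\mathbb{C}[G] \;\cong\; \bigoplus_{k=1}^{m} M_{d_k}(\mathbb{C}),
\qquad
e_k \;=\; \frac{d_k}{|G|} \sum_{g \in G} \chi_k(g^{-1})\, g .
\]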
For a more general field K, whenever the characteristic of K does not divide the order of the group G, then K[G] is semisimple. When G is a finite abelian group, the group ring K[G] is commutative, and its structure is easy to express in terms of roots of unity.
When K is a field of characteristic p which divides the order of G, the group ring is not semisimple: it has a non-zero Jacobson radical, and this gives the corresponding subject of modular representation theory its own, deeper character.
Center of a group algebra
The center of the group algebra is the set of elements that commute with all elements of the group algebra:
The center is equal to the set of class functions, that is the set of elements that are constant on each conjugacy class
If , the set of irreducible characters of G forms an orthonormal basis of Z(K[G]) with respect to the inner product
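The missing hypothesis and inner product are presumably the standard ones for K = ℂ; under that assumption, the irreducible characters are orthonormal with respect to
\[
\langle f, h \rangle \;=\; \frac{1}{|G|} \sum_{g \in G} f(g)\, \overline{h(g)} .
\]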
Group rings over an infinite group
Much less is known in the case where G is countably infinite, or uncountable, and this is an area of active research. The case where R is the field of complex numbers is probably the one best studied. In this case, Irving Kaplansky proved that if a and b are elements of C[G] with , then . Whether this is true if R is a field of positive characteristic remains unknown.
A long-standing conjecture of Kaplansky (~1940) says that if G is a torsion-free group, and K is a field, then the group ring K[G] has no non-trivial zero divisors. This conjecture is equivalent to K[G] having no non-trivial nilpotents under the same hypotheses for K and G.
In fact, the condition that K is a field can be relaxed to any ring that can be embedded into an integral domain.
The conjecture remains open in full generality, however some special cases of torsion-free groups have been shown to satisfy the zero divisor conjecture. These include:
Unique product groups (e.g. orderable groups, in particular free groups)
Elementary amenable groups (e.g. virtually abelian groups)
Diffuse groups – in particular, groups that act freely isometrically on R-trees, and the fundamental groups of surface groups except for the fundamental groups of direct sums of one, two or three copies of the projective plane.
The case where G is a topological group is discussed in greater detail in the article Group algebra of a locally compact group.
Category theory
Adjoint
Categorically, the group ring construction is left adjoint to "group of units"; the following functors are an adjoint pair:
where takes a group to its group ring over R, and takes an R-algebra to its group of units.
When , this gives an adjunction between the category of groups and the category of rings, and the unit of the adjunction takes a group G to a group that contains trivial units: In general, group rings contain nontrivial units. If G contains elements a and b such that and b does not normalize then the square of
is zero, hence . The element is a unit of infinite order.
Universal property
The above adjunction expresses a universal property of group rings. Let be a (commutative) ring, let be a group, and let be an -algebra. For any group homomorphism , there exists a unique -algebra homomorphism such that where is the inclusion
In other words, is the unique homomorphism making the following diagram commute:
Any other ring satisfying this property is canonically isomorphic to the group ring.
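In standard notation (a reconstruction, assuming the stripped maps are the usual ones): for every group homomorphism f : G → A^× into the group of units of A, there exists a unique R-algebra homomorphism
\[
\overline{f} : R[G] \to A
\qquad \text{such that} \qquad
\overline{f} \circ i = f ,
\]
where i : G → R[G] is the inclusion sending a group element g to 1·g.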
Hopf algebra
The group algebra K[G] has a natural structure of a Hopf algebra. The comultiplication is defined by Δ(g) = g ⊗ g, extended linearly, and the antipode sends each group element g to its inverse, again extended linearly.
Generalizations
The group algebra generalizes to the monoid ring and thence to the category algebra, of which another example is the incidence algebra.
Filtration
If a group has a length function – for example, if there is a choice of generators and one takes the word metric, as in Coxeter groups – then the group ring becomes a filtered algebra.
See also
Group algebra of a locally compact group
Monoid ring
Kaplansky's conjectures
Representation theory
Group representation
Regular representation
Category theory
Categorical algebra
Group of units
Incidence algebra
Quiver algebra
Notes
References
Milies, César Polcino; Sehgal, Sudarshan K. An introduction to group rings. Algebras and applications, Volume 1. Springer, 2002.
Charles W. Curtis, Irving Reiner. Representation theory of finite groups and associative algebras, Interscience (1962)
D.S. Passman, The algebraic structure of group rings, Wiley (1977)
Ring theory
Representation theory of groups
Harmonic analysis
de:Monoidring | Group ring | [
"Mathematics"
] | 3,072 | [
"Fields of abstract algebra",
"Ring theory"
] |
349,251 | https://en.wikipedia.org/wiki/Diagonal | In geometry, a diagonal is a line segment joining two vertices of a polygon or polyhedron, when those vertices are not on the same edge. Informally, any sloping line is called diagonal. The word diagonal derives from the ancient Greek διαγώνιος diagonios, "from corner to corner" (from διά- dia-, "through", "across" and γωνία gonia, "corner", related to gony "knee"); it was used by both Strabo and Euclid to refer to a line connecting two vertices of a rhombus or cuboid, and later adopted into Latin as diagonus ("slanting line").
Polygons
As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices. Therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon, but for re-entrant polygons, some diagonals are outside of the polygon.
Any n-sided polygon (n ≥ 3), convex or concave, has n(n − 3)/2 total diagonals, as each vertex has diagonals to all other vertices except itself and the two adjacent vertices, or n − 3 diagonals, and each diagonal is shared by two vertices.
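As a quick worked check (added for illustration), a hexagon gives
\[
\frac{n(n-3)}{2}\,\bigg|_{n=6} = \frac{6 \cdot 3}{2} = 9 ,
\]
matching the nine diagonals of the regular hexagon listed among the special cases below.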
In general, a regular n-sided polygon has ⌊n/2⌋ − 1 diagonals of distinct length, which follows the pattern 1, 1, 2, 2, 3, 3, ... starting from a square.
Regions formed by diagonals
In a convex polygon, if no three diagonals are concurrent at a single point in the interior, the number of regions that the diagonals divide the interior into is given by C(n, 4) + C(n − 1, 2), where C(n, k) denotes the binomial coefficient "n choose k".
For n-gons with n=3, 4, ... the number of regions is
1, 4, 11, 25, 50, 91, 154, 246...
This is OEIS sequence A006522.
Intersections of diagonals
If no three diagonals of a convex polygon are concurrent at a point in the interior, the number of interior intersections of diagonals is given by C(n, 4). This holds, for example, for any regular polygon with an odd number of sides. The formula follows from the fact that each intersection is uniquely determined by the four endpoints of the two intersecting diagonals: the number of intersections is thus the number of combinations of the n vertices four at a time.
Regular polygons
Although the number of distinct diagonals in a polygon increases as its number of sides increases, the length of any diagonal can be calculated.
In a regular n-gon with side length a, the length of the xth shortest distinct diagonal is a·sin((x + 1)π/n) / sin(π/n).
This formula shows that as the number of sides approaches infinity, the xth shortest diagonal approaches the length (x + 1)a. Additionally, the formula for the shortest diagonal simplifies in the case of x = 1 to 2a·cos(π/n).
If the number of sides is even, the longest diagonal will be equivalent to the diameter of the polygon's circumcircle because the long diagonals all intersect each other at the polygon's center.
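As a worked check of the length formula (added for illustration), take n = 6 and side length a, and write d_x for the xth shortest diagonal:
\[
d_1 = a\,\frac{\sin(2\pi/6)}{\sin(\pi/6)} = 2a\cos(\pi/6) = a\sqrt{3},
\qquad
d_2 = a\,\frac{\sin(3\pi/6)}{\sin(\pi/6)} = 2a ,
\]
in agreement with the hexagon ratios √3 and 2 given below.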
Special cases include:
A square has two diagonals of equal length, which intersect at the center of the square. The ratio of a diagonal to a side is √2.
A regular pentagon has five diagonals all of the same length. The ratio of a diagonal to a side is the golden ratio, (1 + √5)/2.
A regular hexagon has nine diagonals: the six shorter ones are equal to each other in length; the three longer ones are equal to each other in length and intersect each other at the center of the hexagon. The ratio of a long diagonal to a side is 2, and the ratio of a short diagonal to a side is √3.
A regular heptagon has 14 diagonals. The seven shorter ones equal each other, and the seven longer ones equal each other. The reciprocal of the side equals the sum of the reciprocals of a short and a long diagonal.
Polyhedrons
A polyhedron (a solid object in three-dimensional space, bounded by two-dimensional faces) may have two different types of diagonals: face diagonals on the various faces, connecting non-adjacent vertices on the same face; and space diagonals, entirely in the interior of the polyhedron (except for the endpoints on the vertices).
Higher dimensions
N-Cube
The lengths of an n-dimensional hypercube's diagonals can be calculated by mathematical induction. The longest diagonal of an n-cube is √n times its side length. Additionally, there are 2^(n−1)·C(n, x + 1) of the xth shortest diagonal. As an example, a 5-cube has diagonals of lengths √2, √3, 2 and √5 times its side length, occurring 160, 160, 80 and 16 times respectively.
Its total number of diagonals is 416. In general, an n-cube has a total of 2^(n−1)·(2^n − n − 1) diagonals. This follows from the more general form v(v − 1)/2 − e, which describes the total number of face and space diagonals in convex polytopes. Here, v represents the number of vertices and e represents the number of edges.
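As a worked verification of the 5-cube figure (added for illustration): diagonals joining vertices that differ in exactly k coordinates have length √k times the side length, and there are 2^(n−1)·C(n, k) of them, so
\[
\sum_{k=2}^{5} 2^{4}\binom{5}{k} \;=\; 16\,(10 + 10 + 5 + 1) \;=\; 416 \;=\; 2^{n-1}\left(2^{n} - n - 1\right)\Big|_{n=5} .
\]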
Geometry
By analogy, the subset of the Cartesian product X×X of any set X with itself, consisting of all pairs (x,x), is called the diagonal, and is the graph of the equality relation on X or equivalently the graph of the identity function from X to X. This plays an important part in geometry; for example, the fixed points of a mapping F from X to itself may be obtained by intersecting the graph of F with the diagonal.
In geometric studies, the idea of intersecting the diagonal with itself is common, not directly, but by perturbing it within an equivalence class. This is related at a deep level with the Euler characteristic and the zeros of vector fields. For example, the circle S1 has Betti numbers 1, 1, 0, 0, 0, and therefore Euler characteristic 0. A geometric way of expressing this is to look at the diagonal on the two-torus S1xS1 and observe that it can move off itself by the small motion (θ, θ) to (θ, θ + ε). In general, the intersection number of the graph of a function with the diagonal may be computed using homology via the Lefschetz fixed-point theorem; the self-intersection of the diagonal is the special case of the identity function.
Notes
External links
Diagonals of a polygon with interactive animation
Polygon diagonal from MathWorld.
Elementary geometry | Diagonal | [
"Mathematics"
] | 1,319 | [
"Elementary mathematics",
"Elementary geometry"
] |
349,459 | https://en.wikipedia.org/wiki/Brazilian%20Silicon%20Valley | Brazilian Silicon Valley is a term commonly applied to the region of Campinas and in southern region this term is applied for Florianópolis city, Brazil because of its similarity to the 'original' Silicon Valley, located in California in the USA.
Characteristics
Campinas has gained this distinction because it has several comparable features, such as:
It is a modern city, located near a giant metropolis, São Paulo
It has a vibrant, high-tech university and research environment, composed of the University of Campinas (UNICAMP), the Pontifical Catholic University of Campinas (PUCCAMP), the FACAMP, the UNISAL (Centro Universitário Salesiano de São Paulo), the Center for Research and Development in Telecommunications (CPqD), the National Laboratory of Synchrotron Light, the Renato Archer Research Institute (CenPRA), the Brazilian Company of Agricultural Research (EMBRAPA), the Agronomical Institute of Campinas, the Biological Institute, the Food Technology Institute, the Eldorado Institute, the Wernher von Braun Institute and several others. Campinas boasts a researcher/population ratio equal to those of the most advanced technology centers.
A number of high-tech, non-pollutant electrical and electronics industries have settled around Campinas, such as IBM, Lucent, Samsung, Nortel, Compaq, Freescale Semiconductor, Motorola, Dell, Fairchild Semiconductor, Huawei, 3M, Texas Instruments, Celestica, Solectron, and Bosch.
Several industrial parks and incubators for high tech companies in the fields of microelectronics, computers, software, telecommunications, etc. have developed there.
History
Until the 1970s, the Campinas region had few industries and an economy based on agriculture and on the services and commerce sectors. With the foundation of UNICAMP and the ready availability of high-quality researchers, engineers and students focused on physics, electrical engineering, computer sciences, mathematics, mechanical engineering, etc., a number of high-tech companies started to establish their industrial plants and R&D labs nearby, such as IBM. The municipality of Campinas and those surrounding it began to actively foster the growth of this new area, and the CIATEC I and II (Companhia de Desenvolvimento do Pólo de Alta Tecnologia de Campinas) industrial zones were established around the university campus, in the subdistrict of Barão Geraldo. The Center for Research and Development (CPqD) set up by Telebras, a state holding for the telecommunications industry in Brazil, which had grown enormously under the military regime umbrella, was the second boost to Campinas Silicon Valley. A law was passed by the Federal government, protecting Brazilian-made technology against imports, and this resulted in further growth. Together with UNICAMP researchers, a number of pioneering developments occurred in the brand-new areas of lasers, fiber optics, digital telephony, computer technology, software development, and so on. In addition, the Petrobras state-owned oil giant was starting to develop a long-range oil exploration program with the aim of making Brazil independent of oil imports, a policy also started by the military for strategic and economic reasons (the oil shock had deeply affected the country), and UNICAMP was one of the leading research universities to participate. In this respect, UNICAMP's open philosophy of collaboration with the private sector (unheard of in Brazil until that time), established by its visionary founder and first rector, Dr. Zeferino Vaz, prepared the way for a unique synergy between industry and university.
Other areas
Other areas in Brazil are also claiming a similar status to Campinas Silicon Valley, although they are much less organized and have smaller companies. They are:
Araraquara and São Carlos, State of São Paulo, with high-technology industries and the University of São Paulo - USP, Federal University of São Carlos - UFSCar and Universidade Estadual Paulista Júlio de Mesquita Filho - UNESP.
Recife, state of Pernambuco, with a budding Digital Port and many collaborative ties with the Universidade Federal de Pernambuco
The Vale do Sapucaí in Minas Gerais, with several cities (such as Santa Rita do Sapucai) where collaboration between high-tech industries and universities (such as INATEL) is generally recognized as one of the best examples of a nascent Silicon Valley.
Belo Horizonte, state of Minas Gerais, not properly a Silicon Valley because it has mostly a software industry, but the upcoming BHTec along with possible semiconductor industry developments in its metropolitan area (specifically, in Confins) could change this situation.
Florianópolis, state of Santa Catarina, also has mostly a software industry.
The cities of Rio de Janeiro, Porto Alegre, Curitiba, Blumenau and Londrina, all in the Southeast and South, have also a strongly developed digital economy.
Bibliography
Lahorgue, M. A Science and Technology Park as a Tool for the Consolidation of Life Sciences Cluster: The Case of Porto Alegre Technopole, Center for Economic Development, Bulgaria.
See also
List of places with 'Silicon' names
External links
CampinasValley.org, CampinasValley.org, map of startups environment of Campinas
SERPRO, Serviço Federal de Processamento de Dados, Brazil
Instituto Atlântico
Campinas Silicon Valley, Wired Magazine, July 2000.
Silicon Envy. By Brad Wieners. Wired Magazine, Issue 6.09, September 1998.
Brazilian Answer to Silicon Valley. By Marion Kaplan, The Tribune, June 24, 2002.
Brazil: Recife — the New Silicon Valley. By Paulo Reblo, Wired.com, CorpWatch, January 18, 2002.
Silicon Valley South. By Ted Goertzel. Brazzil Technology, December 2002
Brazil: IT Geographics. American University.
CIATEC Campinas Portal
SOFTEX Campinas Nucleus
InCamp Business Incubator, UNICAMP, Campinas.
TechTown High-Tech District, Hortolândia
TechnoPark High-Tech District, Campinas
CPqD Polis, Campinas
High-technology business districts
Information technology in Brazil
Economy of Campinas
São Carlos
Information technology places | Brazilian Silicon Valley | [
"Technology"
] | 1,291 | [
"Information technology",
"Information technology places"
] |
349,504 | https://en.wikipedia.org/wiki/Environment%20variable | An environment variable is a user-definable value that can affect the way running processes will behave on a computer. Environment variables are part of the environment in which a process runs. For example, a running process can query the value of the TEMP environment variable to discover a suitable location to store temporary files, or the HOME or USERPROFILE variable to find the directory structure owned by the user running the process.
They were introduced in their modern form in 1979 with Version 7 Unix, so are included in all Unix operating system flavors and variants from that point onward including Linux and macOS. From PC DOS 2.0 in 1982, all succeeding Microsoft operating systems, including Microsoft Windows, and OS/2 also have included them as a feature, although with somewhat different syntax, usage and standard variable names.
Design
In all Unix and Unix-like systems, as well as on Windows, each process has its own separate set of environment variables. By default, when a process is created, it inherits a duplicate run-time environment of its parent process, except for explicit changes made by the parent when it creates the child. At the API level, these changes must be done between running fork and exec. Alternatively, from command shells such as bash, a user can change environment variables for a particular command invocation by indirectly invoking it via env or using the ENVIRONMENT_VARIABLE=VALUE <command> notation. A running program can access the values of environment variables for configuration purposes.
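To make the inheritance behavior concrete, here is a small Python sketch (an illustrative aside; it assumes a python3 interpreter is available on the PATH, and the variable name GREETING is arbitrary). The child process sees exactly the mapping handed to it, while the parent's environment stays unchanged unless it is modified explicitly:

import os
import subprocess

# Copy the current environment and add one variable for the child only.
child_env = dict(os.environ)
child_env["GREETING"] = "hello from the parent"

# The child receives exactly the mapping passed via env=...
subprocess.run(
    ["python3", "-c", "import os; print(os.environ.get('GREETING'))"],
    env=child_env,
)

# The parent's own environment is untouched.
print("GREETING" in os.environ)   # False, unless GREETING was already set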
Shell scripts and batch files use environment variables to communicate data and preferences to child processes. They can also be used to store temporary values for reference later in a shell script. However, in Unix, non-exported variables are preferred for this as they do not leak outside the process.
In Unix, an environment variable that is changed in a script or compiled program will only affect that process and possibly child processes. The parent process and any unrelated processes will not be affected. Similarly, changing or removing a variable's value inside a DOS or Windows batch file will change the variable for the duration of COMMAND.COM or CMD.EXE's existence, respectively.
In Unix, the environment variables are normally initialized during system startup by the system init startup scripts, and hence inherited by all other processes in the system. Users can, and often do, augment them in the profile script for the command shell they are using. In Microsoft Windows, each environment variable's default value is stored in the Windows Registry or set in the AUTOEXEC.BAT file.
On Unix, a setuid program is given an environment chosen by its caller, but it runs with different authority from its caller. The dynamic linker will usually load code from locations specified by the environment variables $LD_LIBRARY_PATH and $LD_PRELOAD and run it with the process's authority. If a setuid program did this, it would be insecure, because its caller could get it to run arbitrary code and hence misuse its authority. For this reason, libc unsets these environment variables at startup in a setuid process. setuid programs usually unset unknown environment variables and check others or set them to reasonable values.
In general, the collection of environment variables functions as an associative array where both the keys and values are strings. The interpretation of characters in either string differs among systems. When data structures such as lists need to be represented, it is common to use a colon-delimited (common on Unix and Unix-like systems) or semicolon-delimited (common on Windows and DOS) list.
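For instance (an added sketch), Python exposes the platform's list separator as os.pathsep, so the same code can split PATH-style values under either convention:

import os

# Split a PATH-like variable using the platform separator:
# ':' on Unix-like systems, ';' on DOS, OS/2 and Windows.
for directory in os.environ.get("PATH", "").split(os.pathsep):
    print(directory)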
Syntax
The variables can be used both in scripts and on the command line. They are usually referenced by putting special symbols in front of or around the variable name.
It is conventional for environment-variable names to be chosen to be in all upper case. In programming code generally, this helps to distinguish environment variables from other kinds of names in the code. Environment-variable names are case sensitive on Unix-like operating systems but not on DOS, OS/2, and Windows.
Unix
In most Unix and Unix-like command-line shells, an environment variable's value is retrieved by placing a $ sign before the variable's name. If necessary, the name can also be surrounded by braces.
To display the user home directory, the user may type:
echo $HOME
In Unix and Unix-like systems, the names of environment variables are case-sensitive.
The command env displays all environment variables and their values. The command printenv can also be used to print a single variable by giving that variable name as the sole argument to the command.
DOS, OS/2 and Windows
In DOS, OS/2 and Windows command-line interpreters such as COMMAND.COM and CMD.EXE, an environment variable is retrieved by placing a % sign before and after it.
In DOS, OS/2 and Windows command-line interpreters as well as their API, upper or lower case is not distinguished for environment variable names.
The environment variable named HOMEDRIVE contains the drive letter (plus its trailing : colon) of the user's home directory, whilst HOMEPATH contains the full path of the user's home directory within that drive.
So to see the home drive and path, the user may type this:
ECHO %HOMEDRIVE%%HOMEPATH%
The command SET (with no arguments) displays all environment variables and their values. In Windows NT and later set can also be used to print all variables whose name begins with a given prefix by giving the prefix as the sole argument to the command.
In Windows PowerShell, the user may type any of the following:
echo $env:homedrive$env:homepath
Write-Output $env:homedrive$env:homepath
"$env:homedrive$env:homepath"
In PowerShell, upper or lower case is not distinguished for environment variable names.
The following command displays all environment variables and their values:
get-childitem env:
Assignment: Unix
The commands env and set can be used to set environment variables and are often incorporated directly into the shell.
The following commands can also be used, but are often dependent on a certain shell.
VARIABLE=value # (there must be no spaces around the equals sign)
export VARIABLE # for Bourne and related shells
export VARIABLE=value # for ksh, bash, and related shells
setenv VARIABLE value # for csh and related shells
A few simple principles govern how environment variables achieve their effect.
Environment variables are local to the process in which they were set. If two shell processes are spawned and the value of an environment variable is changed in one, that change will not be seen by the other.
When a child process is created, it inherits all the environment variables and their values from the parent process. Usually, when a program calls another program, it first creates a child process by forking, then the child adjusts the environment as needed and lastly the child replaces itself with the program to be called. This procedure gives the calling program control over the environment of the called program.
In Unix shells, variables may be assigned without the export keyword. Variables defined in this way are displayed by the set command, but are not true environment variables, as they are stored only by the shell and are unknown to all other processes. The printenv command will not display them, and child processes do not inherit them.
VARIABLE=value
The prefix syntax exports a "true" environment variable to a child process without affecting the current process:
VARIABLE=value program_name [arguments]
The persistence of an environment variable can be session-wide or system-wide.
unset is a builtin command implemented by both the Bourne shell family (sh, ksh, bash, etc.) and the C shell family (csh, tcsh, etc.) of Unix command line shells. It unsets a shell variable, removing it from memory and the shell's exported environment. It is implemented as a shell builtin, because it directly manipulates the internals of the shell. Read-only shell variables cannot be unset. If one tries to unset a read-only variable, the unset command will print an error message and return a non-zero exit code.
Assignment: DOS, OS/2 and Windows
In DOS, OS/2 and Windows command-line interpreters such as COMMAND.COM and CMD.EXE, the SET command is used to assign environment variables and values using the following arguments:
SET VARIABLE=value
An environment variable is removed via:
SET VARIABLE=
The SET command without any arguments displays all environment variables along with their values; SET " ", zero or more spaces, will include internal variables too. In CMD.EXE, it is possible to assign local variables that will not be global using the SETLOCAL command and ENDLOCAL to restore the environment.
Use the switch /? to display the internal documentation, or use the viewer help:
SET /?
HELP SET
SETLOCAL /?
HELP SETLOCAL
In PowerShell, the assignment follows a syntax similar to Unix:
$env:VARIABLE = "VALUE"
Examples
Examples of environment variables include:
PATH: a list of directory paths. When the user types a command without providing the full path, this list is checked to see whether it contains a path that leads to the command.
HOME (Unix-like) and USERPROFILE (Microsoft Windows): indicate where a user's home directory is located in the file system.
HOME/{.AppName} (Unix-like) and APPDATA\{DeveloperName\AppName} (Microsoft Windows): for storing application settings. Many applications incorrectly use USERPROFILE for application settings in Windows: USERPROFILE should only be used in dialogs that allow user to choose between paths like Documents/Pictures/Downloads/Music; for programmatic purposes, APPDATA (for roaming application settings shared across multiple devices), LOCALAPPDATA (for local application settings) or PROGRAMDATA (for application settings shared between multiple OS users) should be used.
TERM (Unix-like): specifies the type of computer terminal or terminal emulator being used (e.g., vt100 or dumb).
PS1 (Unix-like): specifies how the prompt is displayed in the Bourne shell and variants.
MAIL (Unix-like): used to indicate where a user's mail is to be found.
TEMP: location where processes can store temporary files.
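A short Python sketch (added for illustration; which of these variables exist depends on the platform) showing how a program might consult several of the variables listed above, with fallbacks:

import os
import tempfile

# Home directory: HOME on Unix-like systems, USERPROFILE on Windows.
home = os.environ.get("HOME") or os.environ.get("USERPROFILE")

# Temporary directory: tempfile.gettempdir() already honours TMPDIR, TEMP and TMP.
tmp = os.environ.get("TEMP") or os.environ.get("TMP") or tempfile.gettempdir()

# Terminal type, consulted mainly by Unix console programs.
term = os.environ.get("TERM", "dumb")

print(home, tmp, term)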
True environment variables
Unix
$PATH Contains a colon-separated list of directories that the shell searches for commands that do not contain a slash in their name (commands with slashes are interpreted as file names to execute, and the shell attempts to execute the files directly). It is equivalent to the DOS, OS/2 and Windows %PATH% variable.
$HOME Contains the location of the user's home directory. Although the current user's home directory can also be found out through the C-functions getpwuid and getuid, $HOME is often used for convenience in various shell scripts (and other contexts). Using the environment variable also gives the user the possibility to point to another directory.
$PWD This variable points to the current directory. Equivalent to the output of the command pwd when called without arguments.
$DISPLAY Contains the identifier for the display that X11 programs should use by default.
$LD_LIBRARY_PATH On many Unix systems with a dynamic linker, contains a colon-separated list of directories that the dynamic linker should search for shared objects when building a process image after exec, before searching in any other directories.
$LIBPATH or $SHLIB_PATH Alternatives to $LD_LIBRARY_PATH typically used on older Unix versions.
$LANG, $LC_ALL, $LC_... $LANG is used to set to the default locale. For example, if the locale values are pt_BR, then the language is set to (Brazilian) Portuguese and Brazilian practice is used where relevant. Different aspects of localization are controlled by individual $LC_-variables ($LC_CTYPE, $LC_COLLATE, $LC_DATE etc.). $LC_ALL can be used to force the same locale for all aspects.
$TZ Refers to time zone. It can be in several formats, either specifying the time zone itself or referencing a file (in /usr/share/zoneinfo).
$BROWSER Contains a colon-separated list of a user's web browser preferences, for use by programs that need to allow the user to view content at a URL. The browsers in the list are intended to be attempted from first to last, stopping after the first one that succeeds. This arrangement allows for fallback behavior in different environments, e.g., in an X11 environment, a graphical browser (such as Firefox) can be used, but in a console environment a terminal-based browser (such as Lynx) can be used. A %s token may be present to specify where the URL should be placed; otherwise the browser should be launched with the URL as the first argument.
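The $PATH search described at the top of this list can be emulated directly; the following Python sketch (an added illustration) performs essentially the same lookup that the standard library's shutil.which provides:

import os
import shutil

def find_in_path(command):
    # Return the first directory on $PATH that contains an executable with this name.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(find_in_path("ls"))     # manual search
print(shutil.which("ls"))     # library equivalent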
DOS
Under DOS, the master environment is provided by the primary command processor, which inherits the pre-environment defined in CONFIG.SYS when first loaded. Its size can be configured through the COMMAND /E:n parameter between 160 and 32767 bytes. Local environment segments inherited to child processes are typically reduced down to the size of the contents they hold. Some command-line processors (like 4DOS) allow to define a minimum amount of free environment space that will be available when launching secondary shells. While the content of environment variables remains unchanged upon storage, their names (without the "%") are always converted to uppercase, with the exception of pre-environment variables defined via the CONFIG.SYS directive SET under DR DOS 6.0 and higher (and only with SWITCHES=/L (for "allow lowercase names") under DR-DOS 7.02 and higher). In principle, MS-DOS 7.0 and higher also supports lowercase variable names (%windir%), but provides no means for the user to define them. Environment variable names containing lowercase letters are stored in the environment just like normal environment variables, but remain invisible to most DOS software, since they are written to expect uppercase variables only. Some command processors limit the maximum length of a variable name to 80 characters. While principally only limited by the size of the environment segment, some DOS and 16-bit Windows programs do not expect the contents of environment variables to exceed 128 characters. DR-DOS COMMAND.COM supports environment variables up to 255, 4DOS even up to 512 characters. Since COMMAND.COM can be configured (via /L:128..1024) to support command lines up to 1024 characters internally under MS-DOS 7.0 and higher, environment variables should be expected to contain at least 1024 characters as well. In some versions of DR-DOS, the environment passed to drivers, which often do not need their environment after installation, can be shrunken or relocated through SETENV or INSTALL[HIGH]/LOADHIGH options /Z (zero environment), /D[:loaddrive] (substitute drive, e.g. B:TSR.COM) and /E (relocate environment above program) in order to minimize the driver's effectively resulting resident memory footprint.
In batch mode, non-existent environment variables are replaced by a zero-length string.
Standard environment variables or reserved environment variables include:
%APPEND% (supported since DOS 3.3) This variable contains a semicolon-delimited list of directories in which to search for files. It is usually changed via the APPEND /E command, which also ensures that the directory names are converted into uppercase. Some DOS software actually expects the names to be stored in uppercase and the length of the list not to exceed 121 characters, therefore the variable is best not modified via the SET command. Long filenames containing spaces or other special characters must not be quoted (").
%CONFIG% (supported since MS-DOS 6.0 and PC DOS 6.1, also supported by ROM-DOS) This variable holds the symbolic name of the currently chosen boot configuration. It is set by the DOS BIOS (IO.SYS, IBMBIO.COM, etc.) to the name defined by the corresponding CONFIG.SYS directive MENUITEM before launching the primary command processor. Its main purpose is to allow further special cases in AUTOEXEC.BAT and similar batchjobs depending on the selected option at boot time. This can be emulated under DR-DOS by utilizing the CONFIG.SYS directive SET like SET CONFIG=1.
%CMDLINE% (introduced with 4DOS, also supported since MS-DOS 7.0) This variable contains the fully expanded text of the currently executing command line. It can be read by applications to detect the usage of and retrieve long command lines, since the traditional method to retrieve the command line arguments through the PSP (or related API functions) is limited to 126 characters and is no longer available when FCBs get expanded or the default DTA is used. While 4DOS supports longer command lines, COMMAND.COM still only supports a maximum of 126 characters at the prompt by default (unless overridden with /U:128..255 to specify the size of the command line buffer), but nevertheless internal command lines can become longer through f.e. variable expansion (depending on /L:128..1024 to specify the size of the internal buffer). In addition to the command-line length byte in the PSP, the PSP command line is normally limited by ASCII-13, and command lines longer than 126 characters will typically be truncated by having an ASCII-13 inserted at position 127, but this cannot be relied upon in all scenarios. The variable will be suppressed for external commands invoked with a preceding @-symbol like in @XCOPY ... for backward compatibility and in order to minimize the size of the environment when loading non-relocating terminate-and-stay-resident programs. Some beta versions of Windows Chicago used %CMDLINE% to store only the remainder of the command line excessing 126 characters instead of the complete command line.
%COMSPEC% (supported since DOS 2.0) This variable contains the full 8.3 path to the command processor, typically C:\COMMAND.COM or C:\DOS\COMMAND.COM. It must not contain long filenames, but under DR-DOS it may contain file and directory passwords. It is set up by the primary command processor to point to itself (typically reflecting the settings of the CONFIG.SYS directive SHELL), so that the resident portion of the command processor can reload its transient portion from disk after the execution of larger programs. The value can be changed at runtime to reflect changes in the configuration, which would require the command processor to reload itself from other locations. The variable is also used when launching secondary shells.
%COPYCMD% (supported since MS-DOS 6.2 and PC DOS 6.3, also supported by ROM-DOS) Allows a user to specify the /Y switch (to assume "Yes" on queries) as the default for the COPY, XCOPY, and MOVE commands. A default of /Y can be overridden by supplying the /-Y switch on the command line. The /Y switch instructs the command to replace existing files without prompting for confirmation.
%DIRCMD% (supported since MS-DOS 5.0 and PC DOS 5.0, also supported by ROM-DOS) Allows a user to specify customized default parameters for the DIR command, including file specifications. Preset default switches can be overridden by providing the negative switch on the command line. For example, if %DIRCMD% contains the /W switch, then it can be overridden by using DIR /-W at the command line. This is similar to the environment variable %$DIR% under DOS Plus and a facility to define default switches for DIR through its /C or /R switches under DR-DOS COMMAND.COM. %DIRCMD% is also supported by the external SDIR.COM/DIR.COM Stacker commands under Novell DOS 7 and higher.
%LANG% (supported since MS-DOS 7.0) This variable is supported by some tools to switch the locale for messages in multilingual issues.
%LANGSPEC% (supported since MS-DOS 7.0) This variable is supported by some tools to switch the locale for messages in multilingual issues.
%NO_SEP% (supported since PC DOS 6.3 and DR-DOS 7.07) This variable controls the display of thousands-separators in messages of various commands. Issued by default, they can be suppressed by specifying SET NO_SEP=ON or SET NO_SEP=1 under PC DOS. DR-DOS additionally allows to override the system's thousands-separator displayed as in f.e. SET NO_SEP=..
%PATH% (supported since DOS 2.0) This variable contains a semicolon-delimited list of directories in which the command interpreter will search for executable files. Equivalent to the Unix $PATH variable (but some DOS and Windows applications also use the list to search for data files similar to $LD_LIBRARY_PATH on Unix-like systems). It is usually changed via the PATH (or PATH /E under MS-DOS 6.0) command, which also ensures that the directory names are converted into uppercase. Some DOS software actually expects the names to be stored in uppercase and the length of the list not to exceed 123 characters, therefore the variable should better not be modified via the SET command. Long filenames containing spaces or other special characters must not be quoted ("). By default, the current directory is searched first, but some command-line processors like 4DOS allow "." (for "current directory") to be included in the list as well in order to override this search order; some DOS programs are incompatible with this extension.
%PROMPT% (supported since DOS 2.0) This variable contains a $-tokenized string defining the display of the prompt. It is usually changed via the PROMPT command.
%TEMP% (and %TMP%) These variables contain the path to the directory where temporary files should be stored. Operating system tools typically only use %TEMP%, whereas third-party programs also use %TMP%. Typically %TEMP% takes precedence over %TMP%.
The DR-DOS family supports a number of additional standard environment variables including:
%BETA% This variable contains an optional message displayed by some versions (including DR DOS 3.41) of COMMAND.COM at the startup of secondary shells.
%DRDOSCFG%/%NWDOSCFG%/%OPENDOSCFG% This variable contains the directory (without trailing "\") where to search for .INI and .CFG configuration files (that is, DR-DOS application specific files like TASKMGR.INI, TASKMAX.INI, VIEWMAX.INI, FASTBACK.CFG etc., class specific files like COLORS.INI, or global files like DRDOS.INI, NWDOS.INI, OPENDOS.INI, or DOS.INI), as used by the INSTALL and SETUP commands and various DR-DOS programs like DISKOPT, DOSBOOK, EDIT, FBX, FILELINK, LOCK, SECURITY.OVL/NWLOGIN.EXE, SERNO, TASKMAX, TASKMGR, VIEWMAX, or UNDELETE. It must not contain long filenames.
%DRCOMSPEC% This variable optionally holds an alternative path to the command processor taking precedence over the path defined in the %COMSPEC% variable, optionally including file and directory passwords. Alternatively, it can hold a special value of "ON" or "1" in order to enforce the usage of the %COMSPEC% variable even in scenarios where the %COMSPEC% variable may point to the wrong command-line processor, for example, when running some versions of the DR-DOS SYS command under a foreign operating system.
%DRSYS% Setting this variable to "ON" or "1" will force some versions of the DR-DOS SYS command to work under foreign operating systems instead of displaying a warning.
%FBP_USER% Specifies the user name used by the FastBack command FBX and {user}.FB configuration files under Novell DOS 7.
%HOMEDIR% This variable may contain the home directory under DR-DOS (including DR DOS 5.0 and 6.0).
%INFO% In some versions of DR-DOS COMMAND.COM this variable defines the string displayed by the $I token of the PROMPT command. It can be used, for example, to inform the user how to exit secondary shells.
%LOGINNAME% In some versions of DR-DOS COMMAND.COM this variable defines the user name displayed by the $U token of the PROMPT command, as set up by f.e. login scripts for Novell NetWare. See also the similarly named pseudo-variable %LOGIN_NAME%.
%MDOS_EXEC% This variable can take the values "ON" or "OFF" under Multiuser DOS. If enabled, the operating system permits applications to shell out to secondary shells with the DOS Program Area (DPA) freed in order to have maximum DOS memory available for secondary applications instead of running them in the same domain as under DOS.
%NOCHAR% This variable can be used to define the character displayed by some commands in messages for "No" in [Y,N] queries, thereby overriding the current system default (typically "N" in English versions of DR-DOS). If it contains a string, only the first character, uppercased, will be taken. Some commands also support a command line parameter /Y to automatically assume "Yes" on queries, thereby suppressing such prompts. If, however, the parameter /Y:yn is used to specify the "Yes"/"No" characters (thereby overriding any %NOCHAR% setting), queries are not suppressed. See also the related CONFIG.SYS directive NOCHAR and the environment variable %YESCHAR%.
%NOSOUND% Setting this variable to "ON" or "1" will disable default beeps issued by some DR-DOS commands in certain situations such as to inform the user of the completion of some operation, that user interaction is required, or when a wrong key was pressed. Command line options to specifically enable certain beeps will override this setting.
%OS% This variable contains the name of the operating system in order to distinguish between different DOS-related operating systems of Digital Research-origin in batch jobs and applications. Known values include "DOSPLUS" (DOS Plus 1.2 in DOS emulation), "CPCDOS 4.1" (DOS Plus 1.2 in CP/M emulation), "DRDOS" (DR DOS 3.31-6.0, DR DOS Panther, DR DOS StarTrek, DR-DOS 7.02-7.05), "EZDOS" (EZ-DOS 3.41), "PALMDOS" and "NetWare PalmDOS" (PalmDOS 1.0), "NWDOS" (Novell DOS 7), "NWDOS7" (Novell DOS 7 Beta), "OPENDOS" (Caldera OpenDOS 7.01, Caldera DR-OpenDOS 7.02), "CDOS" (Concurrent DOS, Concurrent DOS XM), "CPCDOS" (Concurrent PC DOS), "CDOS386" (Concurrent DOS 386), "DRMDOS" (DR Multiuser DOS), "MDOS" (CCI Multiuser DOS), "IMSMDOS" (IMS Multiuser DOS), "REAL32" (REAL/32). MS-DOS INTERSVR looks for a value of "DRDOS" as well. See also the identically named environment variable %OS% later introduced in the Windows NT family.
%PEXEC% In some versions of DR-DOS this variable defines the command executed by the $X token of the PROMPT command before COMMAND.COM displays the prompt after returning from external program execution.
%SWITCHAR% This variable defines the SwitChar to be used for argument parsing by some DR-DOS commands. If defined, it overrides the system's current SwitChar setting. The only accepted characters are "/" (DOS style), "-" (Unix style) and "[" (CP/M style). See also the related CONFIG.SYS directive SWITCHAR (to set the system's SwitChar setting) and the %/% system information variable in some issues of DR-DOS COMMAND.COM (to retrieve the current setting for portable batchjobs).
%TASKMGRWINDIR% This variable specifies the directory where the Windows SYSTEM.INI file to be used by the DR-DOS TASKMGR multitasker is located, overriding the default procedure for locating the file.
%VER% This variable contains the version of the operating system in order to distinguish between different versions of DR-DOS in batch jobs and in the display of the VER command. It is also used for the $V token of the PROMPT command and affects the value returned by the system information variable %OS_VERSION%. Known values include "1.0" (PalmDOS 1.0), "1.2" (DOS Plus 1.2 in DOS emulation), "2.0" (Concurrent DOS 386 2.0), "3.0" (Concurrent DOS 386 3.0), "3.31" (DR DOS 3.31), "3.32" (DR DOS 3.32), "3.33" (DR DOS 3.33), "3.34" (DR DOS 3.34), "3.35" (DR DOS 3.35), "3.40" (DR DOS 3.40), "3.41" (DR DOS 3.41, EZ-DOS 3.41), "3.41T" (DR DOS 3.41T), "4.1" (Concurrent PC DOS 4.1), "5.0" (DR DOS 5.0, DR Multiuser DOS 5.0), "5.1" (Novell DR Multiuser DOS 5.1), "6.0" (DR Concurrent DOS XM 6.0, DR DOS 6.0), "6.2" (DR Concurrent DOS XM 6.2), "7" (Novell DOS 7, Caldera OpenDOS 7.01, DR-DOS 7.02-7.05), "7.00" (CCI Multiuser DOS 7.00), "7.07" (DR-DOS 7.07), "7.1" (IMS Multiuser DOS 7.1), "7.21" (CCI Multiuser DOS 7.21), "7.22" (CCI Multiuser DOS 7.22) etc.
%YESCHAR% This variable can be used to define the character displayed by some commands in messages for "Yes" in [Y,N] queries, thereby overriding the current system default (typically "Y" in English versions of DR-DOS). If it contains a string, only the first character, uppercased, will be taken. Some commands also support a command line parameter /Y to automatically assume "Yes" on queries, thereby suppressing such prompts. If, however, the parameter /Y:y is used to specify the "Yes" character (thereby overriding any %YESCHAR% setting), queries are not suppressed. See also the related CONFIG.SYS directive YESCHAR and the environment variable %NOCHAR%.
%$CLS% This variable defines the control sequence to be sent to the console driver to clear the screen when the CLS command is issued, thereby overriding the internal default ("←[2J" under DR-DOS, "←E" under DOS Plus 1.2 on Amstrad machines as well as under Concurrent DOS, Multiuser DOS, and REAL/32 for VT52 terminals, or "←+" under Multiuser DOS for ASCII terminals). If the variable is not defined and no ANSI.SYS console driver is detected, the DR-DOS COMMAND.COM will directly clear the screen via INT 10h/AH=00h BIOS function, like MS-DOS/PC DOS COMMAND.COM does. A special \nnn-notation for octal numbers is supported to allow the definition of special characters like ESC (ASCII-27 = "←" = 1Bh = 33o), as f.e. in SET $CLS=\033[2J. To send the backslash ("\") itself, it can be doubled "\\".
%$DIR% Supported by DOS Plus accepting the values "L" (long) or "W" (wide) to change the default layout of directory listings with DIR. Can be overridden using the command line options /L or /W. See also the similar environment variable %DIRCMD% and the DIR options /C and /R of the DR-DOS COMMAND.COM.
%$PAGE% Supported by DOS Plus accepting the values "ON" or "OFF" for pagination control. Setting this to "ON" has the same effect as adding /P to commands supporting it (like DIR or TYPE).
%$LENGTH% Used by DOS Plus to define the screen length of the console in lines. This is used to control in a portable way when the screen output should be temporarily halted until a key is pressed in conjunction with the /P option supported by various commands or with automatic pagination. See also the related environment variables %$WIDTH% and %DIRSIZE% as well as the similar pseudo-variable %_ROWS%.
%$WIDTH% Used by DOS Plus to define the screen width of the console in columns. This is used to control in a portable way the formatting of the screen output of commands like DIR /W or TYPE filename. See also the related environment variables %$LENGTH% and %DIRSIZE% as well as the similar pseudo-variable %_COLUMNS%.
%$SLICE% Used by DOS Plus accepting a numerical value to control the foreground/background time slicing of multitasking programs. See also the DOS Plus command SLICE.
%$ON% This variable can hold an optional control sequence to switch text highlighting, reversion or colorization on. It is used to emphasize or otherwise control the display of the file names in commands like TYPE wildcard, for example SET $ON=\033[1m with ANSI.SYS loaded or SET $ON=\016 for an IBM or ESC/P printer. For the special \nnn octal notation supported, see %$CLS%. While the variable is undefined by default under DOS Plus and DR-DOS, the Multiuser DOS default for an ASCII terminal equals SET $ON=\033p. See also the related environment variable %$OFF%.
%$OFF% This variable can hold an optional control sequence to switch text highlighting, reversion or colorization off. It is used to return to the normal output after the display of file names in commands like TYPE wildcard, for example SET $OFF=\033[0m with ANSI.SYS loaded or SET $OFF=\024 for an IBM or ESC/P printer. For the special \nnn octal notation supported, see %$CLS%. While the variable is undefined by default under DOS Plus and DR-DOS, the Multiuser DOS default for an ASCII terminal equals SET $OFF=\033q. See also the related environment variable %$ON%.
%$HEADER% This variable can hold an optional control sequence issued before the output of the file contents in commands like TYPE under DR-DOS 7.02 and higher. It can be used for highlighting, pagination or formatting, f.e. when sending the output to a printer, i.e. SET $HEADER=\017 for an IBM or ESC/P printer. For the special \nnn octal notation supported, see %$CLS%. See also the related environment variable %$FOOTER%.
%$FOOTER% This variable can hold an optional control sequence issued after the output of the file contents in commands like TYPE under DR-DOS 7.02 and higher. It is used to return to the normal output format, i.e. SET $FOOTER=\022\014 in the printer example above. For the special \nnn octal notation supported, see %$CLS%. See also the related environment variable %$HEADER%.
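Taken together, these control-sequence variables can be configured from a startup batch file. The following is a minimal, illustrative sketch for DR-DOS COMMAND.COM, reusing the \nnn octal notation and the example values quoted above; the first three settings assume an ANSI-capable console driver such as ANSI.SYS is loaded, and the last two are only meaningful when TYPE output is sent to an IBM- or ESC/P-compatible printer:
REM Clear-screen sequence used by CLS (ANSI escape, ESC = \033)
SET $CLS=\033[2J
REM Highlight file names in TYPE wildcard output, then switch back to normal
SET $ON=\033[1m
SET $OFF=\033[0m
REM Condensed print before, and cancel-condensed plus form feed after,
REM the file contents when TYPE output goes to an IBM/ESC/P printer
SET $HEADER=\017
SET $FOOTER=\022\014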
Datalight ROM-DOS supports a number of additional standard environment variables as well, including:
%DIRSIZE% This variable is used to define non-standard screen sizes rows[,cols] for DIR options /P and /W (similar to %$LENGTH% and %$WIDTH% under DOS Plus).
%NEWFILE% This variable is automatically set to the first parameter given to the CONFIG.SYS directive NEWFILE.
%TZ%, %COMM%, %SOCKETS%, %HTTP_DIR%, %HOSTNAME% and %FTPDIR% are also used by ROM-DOS.
OS/2
%BEGINLIBPATH% Contains a semicolon-separated list of directories which are searched for DLLs before the directories given by the %LIBPATH% variable (which is set during system startup with the special CONFIG.SYS directive LIBPATH). It is possible to specify relative directories here, including "." for the current working directory. See also the related environment variable %ENDLIBPATH%.
%ENDLIBPATH% a list of directories to be searched for DLLs like %BEGINLIBPATH%, but searched after the list of directories in %LIBPATH%.
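As a minimal sketch, an OS/2 batch file (or a line entered at the command prompt) could adjust the DLL search order for the current session; the directory names used here are hypothetical examples:
REM Searched before the directories listed in the CONFIG.SYS LIBPATH statement;
REM "." adds the current working directory
SET BEGINLIBPATH=C:\MYAPP\DLL;.
REM Searched after the LIBPATH directories
SET ENDLIBPATH=D:\SHARED\DLL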
Windows
These environment variables refer to locations of critical operating system resources, and as such generally are not user-dependent.
%APPDATA% Contains the full path to the Application Data directory of the logged-in user. Does not work on Windows NT 4.0 SP6 UK.
%LOCALAPPDATA% Contains the full path to the local (non-roaming) application data directory of the logged-in user. It is used by applications to store data such as desktop themes, Windows error reporting data, caches and web browser profiles.
%ComSpec%/%COMSPEC% The %ComSpec% variable contains the full path to the command processor; on the Windows NT family of operating systems, this is cmd.exe, while on Windows 9x, %COMSPEC% is COMMAND.COM.
%OS% The %OS% variable contains a symbolic name of the operating system family to distinguish between differing feature sets in batchjobs. It resembles an identically named environment variable %OS% found in all DOS-related operating systems of Digital Research-origin like Concurrent DOS, Multiuser DOS, REAL/32, DOS Plus, DR DOS, Novell DOS and OpenDOS. %OS% always holds the string "Windows_NT" on the Windows NT family.
%PATH% This variable contains a semicolon-delimited list of directories (with no spaces around the semicolons) in which the command interpreter will search for an executable file that matches the given command. Environment variables that represent paths may be nested within the %PATH% variable, but only at one level of indirection: if such a sub-path variable itself contains another environment variable representing a path, %PATH% will not expand properly during variable substitution. It is equivalent to the Unix $PATH variable; a short usage sketch appears after this list.
%PROCESSOR_ARCHITECTURE%, %PROCESSOR_ARCHITEW6432%, %PROCESSOR_IDENTIFIER%, %PROCESSOR_LEVEL%, %PROCESSOR_REVISION% These variables contain details of the CPU; they are set during system installation.
%PUBLIC% The %PUBLIC% variable (introduced with Vista) points to the Public (pseudo) user profile directory "C:\Users\Public".
%ProgramFiles%, %ProgramFiles(x86)%, %ProgramW6432% The %ProgramFiles% variable points to the Program Files directory, which stores all the installed programs of Windows and others. The default on English-language systems is "C:\Program Files". In 64-bit editions of Windows (XP, 2003, Vista), there are also %ProgramFiles(x86)%, which defaults to "C:\Program Files (x86)", and %ProgramW6432%, which defaults to "C:\Program Files". The %ProgramFiles% itself depends on whether the process requesting the environment variable is itself 32-bit or 64-bit (this is caused by Windows-on-Windows 64-bit redirection).
%CommonProgramFiles%, %CommonProgramFiles(x86)%, %CommonProgramW6432% This variable points to the Common Files subdirectory of the Program Files directory. The default on English-language systems is "C:\Program Files\Common Files". In 64-bit editions of Windows (XP, 2003, Vista), there are also %CommonProgramFiles(x86)%, which defaults to "C:\Program Files (x86)\Common Files", and %CommonProgramW6432%, which defaults to "C:\Program Files\Common Files". The value of %CommonProgramFiles% itself depends on whether the process requesting the environment variable is itself 32-bit or 64-bit (this is caused by Windows-on-Windows 64-bit redirection).
%OneDrive% The %OneDrive% variable is a special system-wide environment variable found on Windows NT and its derivatives. Its value is the path where, if installed and set up, the OneDrive directory is located. The value of %OneDrive% is in most cases "C:\Users\{Username}\OneDrive\".
%SystemDrive% The %SystemDrive% variable is a special system-wide environment variable found on Windows NT and its derivatives. Its value is the drive upon which the system directory was placed. The value of %SystemDrive% is in most cases "C:".
%SystemRoot% The %SystemRoot% variable is a special system-wide environment variable found on the Windows NT family of operating systems. Its value is the location of the system directory, including the drive and path. The drive is the same as %SystemDrive% and the default path on a clean installation depends upon the version of the operating system. By default:
Windows XP and newer versions use "\WINDOWS".
Windows 2000, NT 4.0 and NT 3.1 use "\WINNT".
Windows NT 3.5 and NT 3.51 use "\WINNT35".
Windows NT 4.0 Terminal Server uses "\WTSRV".
%windir% This variable points to the Windows directory. (On the Windows NT family of operating systems, it is identical to the %SystemRoot% variable). Windows 95–98 and Windows ME are, by default, installed in "C:\Windows". For other versions of Windows, see the %SystemRoot% entry above.
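For illustration, a short batch sketch that reads some of these locations and extends the search path; the C:\Tools directory is a hypothetical example, not a standard location:
@ECHO OFF
REM Show where the operating system is installed
ECHO System drive: %SystemDrive%
ECHO Windows directory: %SystemRoot%
REM Append a hypothetical tools directory to the search path
REM (the change lasts only for this console session)
SET PATH=%PATH%;%SystemDrive%\Tools
ECHO New search path: %PATH%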
User management variables store information related to resources and settings owned by various user profiles within the system. As a general rule, these variables do not refer to critical system resources or locations that are necessary for the OS to run.
%ALLUSERSPROFILE% (%PROGRAMDATA% since Windows Vista) This variable expands to the full path to the All Users profile directory. This profile contains resources and settings that are used by all system accounts. Shortcut links copied to the All Users' Start menu or Desktop directories will appear in every user's Start menu or Desktop, respectively.
%USERDOMAIN% The name of the Workgroup or Windows Domain to which the current user belongs. The related variable, %LOGONSERVER%, holds the hostname of the server that authenticated the current user's login credentials (name and password). For home PCs and PCs in a workgroup, the authenticating server is usually the PC itself. For PCs in a Windows domain, the authenticating server is a domain controller (a primary domain controller, or PDC, in Windows NT 4-based domains).
%USERPROFILE% A special system-wide environment variable found on Windows NT and its derivatives. Its value is the location of the current user's profile directory, in which is found that user's HKCU registry hive (NTUSER). Users can also use the %USERNAME% variable to determine the active user's login identification.
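A brief sketch that displays the per-user locations and names described above (the output naturally differs per machine and account):
@ECHO OFF
ECHO Profile directory: %USERPROFILE%
ECHO Logged-in user: %USERDOMAIN%\%USERNAME%
ECHO Authenticating server: %LOGONSERVER%
ECHO Roaming application data: %APPDATA%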
Optional System variables are not explicitly specified by default but can be used to modify the default behavior of certain built-in console commands. These variables also do not need to be explicitly specified as command line arguments.
Default values
The following tables shows typical default values of certain environment variables under English versions of Windows as they can be retrieved under CMD.
(Some of these variables are also defined when running COMMAND.COM under Windows, but differ in certain important details: Under COMMAND.COM, the names of environment variable are always uppercased. Some, but not all variables contain short 8.3 rather than long file names. While some variables present in the CMD environment are missing, there are also some variables specific to the COMMAND environment.)
In this list, there is no environment variable that refers to the location of the user's My Documents directory, so there is no standard method for setting a program's home directory to be the My Documents directory.
Pseudo-environment variables
The command processors in DOS and Windows also support pseudo-environment variables. These are values that are fetched like environment variables but are not truly stored in the environment; instead, they are computed when requested.
DOS
Besides true environment variables, which are statically stored in the environment until changed or deleted, a number of pseudo-environment variables exist for batch processing.
The so-called replacement parameters or replaceable parameters (Microsoft / IBM terminology) aka replacement variables (Digital Research / Novell / Caldera terminology) or batch file parameters (JP Software terminology) %1..%9 and %0 can be used to retrieve the calling parameters of a batchjob, see SHIFT. In batchjobs, they can be retrieved just like environment variables, but are not actually stored in the environment.
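A minimal batch sketch illustrating these parameters; it should behave the same under typical DOS-style command processors and Windows CMD.EXE. %0 is the name used to invoke the batch file, %1 the first calling parameter, and SHIFT discards %1 and renumbers the rest:
@ECHO OFF
ECHO Running %0
:LOOP
REM Stop when no calling parameters remain
IF "%1"=="" GOTO DONE
ECHO Parameter: %1
REM SHIFT moves %2 into %1, %3 into %2, and so on
SHIFT
GOTO LOOP
:DONE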
Some command-line processors (like DR-DOS COMMAND.COM, Multiuser DOS MDOS.COM/TMP.EXE (Terminal Message Process), JP Software 4DOS, 4OS2, 4NT, Take Command and Windows cmd.exe) support a type of pseudo-environment variables named system information variables (Novell / Caldera terminology) or internal variables (JP Software terminology), which can be used to retrieve various possibly dynamic, but read-only information about the running system in batch jobs. The returned values represent the status of the system in the moment these variables are queried; that is, reading them multiple times in a row may return different values even within the same command; querying them has no direct effect on the system. Since they are not stored in the environment, they are not listed by SET and do not exist for external programs to retrieve. If a true environment variable of the same name is defined, it takes precedence over the corresponding variable until the environment variable is deleted again. They are not case-sensitive.
While almost all such variables are prefixed with an underscore ("_") by 4DOS etc. by convention (f.e. %_SECOND%), they are not under DR-DOS COMMAND.COM (f.e. %OS_VERSION%).
In addition, 4DOS, 4OS2, 4NT, and Take Command also support so called variable functions, including user-definable ones. They work just like internal variables, but can take optional parameters (f.e. %@EVAL[]%) and may even change the system status depending on their function.
System information variables supported by DR-DOS COMMAND.COM:
%AM_PM% This pseudo-variable returns the ante- or post-midday status of the current time. The returned string depends on the locale-specific version of DR-DOS, f.e. "am" or "pm" in the English version. It resembles an identically named identifier variable in Novell NetWare login scripts.
%DAY% This pseudo-variable returns the days of the current date in a 2-digit format with leading zeros, f.e. "01".."31". See also the similar pseudo-variable %_DAY%. It resembles an identically named identifier variable in Novell NetWare login scripts.
%DAY_OF_WEEK% This pseudo-variable returns the day name of the week in a 3-character format. The returned string depends on the locale-specific version of DR-DOS, f.e. "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", or "Sat" in the English version. It resembles an identically named identifier variable in Novell NetWare login scripts.
%ERRORLEVEL% In COMMAND.COM of DR-DOS 7.02 and higher, this pseudo-variable returns the last error level returned by an external program or the RETURN command, f.e. "0".."255". See also the identically named pseudo-variable %ERRORLEVEL% under Windows and the IF ERRORLEVEL conditional command.
%ERRORLVL% In DR-DOS 7.02 and higher, this pseudo-variable returns the last error level in a 3-digit format with leading zeros, f.e. "000".."255". Under Multiuser DOS, this is a true environment variable automatically updated by the shell to the return code of exiting programs. See also the related pseudo-variable %ERRORLEVEL% under DR-DOS and the IF ERRORLEVEL command.
%GREETING_TIME% This pseudo-variable returns the 3-level day greeting time. The returned string depends on the locale-specific version of DR-DOS, f.e. "morning", "afternoon", or "evening" in the English version. It resembles an identically named identifier variable in Novell NetWare login scripts.
%HOUR% This pseudo-variable returns the hours of the current time in 12-hour format without leading zeros, f.e. "1".."12". It resembles an identically named identifier variable in Novell NetWare login scripts.
%HOUR24% This pseudo-variable returns the hours of the current time in 24-hour format in a 2-digit format with leading zeros, f.e. "00".."23". It resembles an identically named identifier variable in Novell NetWare login scripts. See also the similar pseudo-variable %_HOUR%.
%MINUTE% This pseudo-variable returns the minutes of the current time in a 2-digit format with leading zeros, f.e. "00".."59". It resembles an identically named identifier variable in Novell NetWare login scripts. See also the similar pseudo-variable %_MINUTE%.
%MONTH% This pseudo-variable returns the months of the current date in a 2-digit format with leading zeros, f.e. "01".."12". It resembles an identically named identifier variable in Novell NetWare login scripts. See also the similar pseudo-variable %_MONTH%.
%MONTH_NAME% This pseudo-variable returns the month name of the current date. The returned string depends on the locale-specific version of DR-DOS, f.e. "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", or "December" in the English version. It resembles an identically named identifier variable in Novell NetWare login scripts.
%NDAY_OF_WEEK% This pseudo-variable returns the number of day of the current week, f.e. "1".."7" (with "1" for Sunday). It resembles an identically named identifier variable in Novell NetWare login scripts.
%OS_VERSION% This pseudo-variable returns the version of the operating system depending on the current setting of the environment variable %VER%. If %VER% is not defined, %OS_VERSION% returns "off". It resembles an identically named identifier variable in Novell NetWare login scripts, which may return versions also for non-DR-DOS versions of DOS.
%SECOND% This pseudo-variable returns the seconds of the current time in a 2-digit format with leading zeros, f.e. "00".."59". It resembles an identically named identifier variable in Novell NetWare login scripts. See also the similar pseudo-variable %_SECOND%.
%SHORT_YEAR% This pseudo-variable returns the year of the current date in a 2-digit format with leading zeros, f.e. "93".."99", "00".."92". It resembles an identically named identifier variable in Novell NetWare login scripts.
%YEAR% and %_YEAR% Supported since Novell DOS 7, the %YEAR% pseudo-variable returns the year of the current date in a 4-digit format, f.e. "1980".."2099". It resembles an identically named identifier variable in Novell NetWare login scripts. DR-DOS 7.02 and higher added %_YEAR% for compatibility with 4DOS, returning the same value.
%/% In COMMAND.COM of DR-DOS 7.02 and higher, this pseudo-variable returns the current SwitChar setting of the system, either "/" (DOS style) or "-" (Unix style). See also the related CONFIG.SYS directive SWITCHAR and the environment variable %SWITCHAR%.
%_CODEPAGE% This pseudo-variable returns the systems' current code page ("1".."65533"), f.e. "437", "850", "858". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the CHCP command.
%_COLUMNS% This pseudo-variable returns the current number of screen columns depending on the display mode, f.e. "40", "80", "132", etc. This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also a similar environment variable %$WIDTH% under DOS Plus.
%_COUNTRY% This pseudo-variable returns the systems' current country code ("1".."65534"), f.e. "1" for USA, "44" for UK, "49" for Germany, "20049" with ISO 8601, "21049" with ISO 8601 and Euro support. This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the CONFIG.SYS directive COUNTRY.
%_DAY% This pseudo-variable returns the days of the current date without leading zeros, f.e. "1".."31". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the similar pseudo-variable %DAY%.
%_HOUR% This pseudo-variable returns the hours of the current time in 24-hour format without leading zeros, f.e. "0".."23". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the similar pseudo-variable %HOUR24%.
%_MINUTE% This pseudo-variable returns the minutes of the current time without leading zeros, f.e. "0".."59". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the similar pseudo-variable %MINUTE%.
%_MONTH% This pseudo-variable returns the months of the current date without leading zeros, f.e. "1".."12". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the similar pseudo-variable %MONTH%.
%_ROWS% This pseudo-variable returns the current number of screen rows depending on the display mode, f.e. "25", "43", "50", etc. This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See a similar environment variable %$LENGTH% under DOS Plus.
%_SECOND% This pseudo-variable returns the seconds of the current time without leading zeros, f.e. "0".."59". This variable was originally introduced by 4DOS, but also became available with COMMAND.COM since DR-DOS 7.02. See also the similar pseudo-variable %SECOND%.
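As an illustrative sketch for COMMAND.COM under DR-DOS 7.02 and higher, several of these read-only variables can be combined into a sortable timestamp, for example when appending to a log file (the log file name is a hypothetical example):
@ECHO OFF
REM Build a YYYY-MM-DD HH:MM:SS stamp from the system information variables
SET STAMP=%YEAR%-%MONTH%-%DAY% %HOUR24%:%MINUTE%:%SECOND%
ECHO Backup started %STAMP% >> C:\BACKUP.LOG
REM STAMP is a true environment variable, so remove it again when done
SET STAMP=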
System information variables supported by DR-DOS COMMAND.COM with networking loaded:
%LOGIN_NAME% This pseudo-variable returns the user name. This always worked with NETX, but it will also work with Personal NetWare's ODI/VLM if the current drive is a PNW-mapped drive (otherwise an empty string is returned). See also the similarly named environment variable %LOGINNAME%.
%P_STATION% This pseudo-variable returns the physical station number in a format "????????????". The value depends on the MAC address of the network adapter, but can be overridden. It resembles an identically named identifier variable in Novell NetWare login scripts.
%STATION% This pseudo-variable returns the logical station number starting with "1" for the first client. The numbers are assigned by the file server and remain static for as long as the IPX connection remains established. It resembles an identically named identifier variable in Novell NetWare login scripts.
%FULL_NAME% This pseudo-variable returns the full name of the logged in user, if available. It resembles an identically named identifier variable in Novell NetWare login scripts. See also the related pseudo-variable %LOGIN_NAME%.
Windows
Dynamic environment variables (also named internal variables or system information variables under DOS) are pseudo-environment variables supported by CMD.EXE when command-line extensions are enabled, and they expand to various discrete values whenever queried, that is, their values can change when queried multiple times even within the same command. While they can be used in batch jobs and at the prompt, they are not stored in the environment. Consequently, they are neither listed by SET nor do they exist for external programs to read. They are not case-sensitive.
Indirectly, they are also supported under Windows' COMMAND.COM, which has been modified to internally call CMD.EXE to execute the commands.
%CD% This pseudo-variable expands to the current directory equivalent to the output of the command CD when called without arguments. While a long filename can be returned under CMD.EXE depending on the current directory, the fact that the current directory will always be in 8.3 format under COMMAND.COM will cause it to return a short filename under COMMAND.COM, even when COMMAND internally calls CMD.
%CMDCMDLINE% This pseudo-variable expands to the original startup parameters of CMD.EXE, f.e. "C:\Windows\system32\cmd.exe". Under Windows' COMMAND.COM, this may return something like "C:\Windows\system32\cmd.exe /c ..." due to the fact that COMMAND.COM calls CMD.EXE internally.
%CMDEXTVERSION% This pseudo-variable expands to the version of the command-line extensions of CMD.EXE, if enabled (e.g. "1" under Windows NT, "2" under Windows 2000 and Windows XP).
%DATE% This pseudo-variable expands to the current date. The date is displayed according to the current user's date format preferences.
%ERRORLEVEL% This pseudo-variable expands to the last set error level, a value between "0" and "255" (without leading zeros). External commands and some internal commands set error levels upon execution. See also the identically named pseudo-variable %ERRORLEVEL% under DR-DOS and the IF ERRORLEVEL command.
%HIGHESTNUMANODENUMBER% This pseudo-variable returns the number of the highest NUMA node.
%RANDOM% This pseudo-variable returns a random number between "0" and "32767".
%TIME% This pseudo-variable returns the current time. The time is displayed according to the current user's time format preferences. If the %TIME% and %DATE% variables are both used, it is important to read them in immediate succession in order to avoid midnight-rollover problems.
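A short CMD.EXE sketch using several of these dynamic values; the file name passed to FIND is a hypothetical example, and date and time are captured back-to-back as recommended above:
@ECHO OFF
REM Capture date and time in immediate succession to avoid midnight rollover
SET STARTDATE=%DATE%
SET STARTTIME=%TIME%
ECHO Started %STARTDATE% %STARTTIME% in %CD%
REM %RANDOM% yields a pseudo-random number between 0 and 32767
ECHO Session id: %RANDOM%
REM Run an external command, then inspect its exit code via %ERRORLEVEL%
FIND "error" C:\LOG.TXT > NUL
IF %ERRORLEVEL% EQU 0 (ECHO Matches found) ELSE (ECHO No matches)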
Other shells
Unix-like shells have similar dynamically generated variables, bash's $RANDOM being a well-known example. However, since these shells have a concept of local variables, they are described as special local variables instead.
See also
Variable (computer science)
List of POSIX commands
List of DOS commands
Special folder
Environment Modules
PWB shell
Windows Registry
Notes
References
Further reading
External links
User Environment Variables
fix setx.exe not found bug
Operating system technology | Environment variable | [
"Technology"
] | 13,127 | [
"Windows commands",
"Computing commands"
] |
349,575 | https://en.wikipedia.org/wiki/Wand | A wand is a thin, light-weight rod that is held with one hand, and is traditionally made of wood, but may also be made of other materials, such as metal, bone or stone. Long versions of wands are often styled in forms of staves or sceptres, which could have large ornamentation on the top.
In modern times, wands are usually associated with stage magic or supernatural magic, but there have been other uses, all stemming from the original meaning as a synonym of rod and virge. A stick that is used for reaching, pointing, drawing in the dirt, and directing other people, is one of the earliest and simplest of tools.
History
It is possible that wands were used by pre-historic peoples. It is mentioned that 'rods' (as well as rings) were found with the Red Lady of Paviland in Britain. Gower – A Guide to Ancient and Historic Monuments on the Gower Peninsula suggests that these might have been wands; they are depicted as such in a reconstruction drawing of the burial of the 'Red Lady'.
During the Middle Kingdom of Egypt, apotropaic wands began to be used during birth ceremonies. These wands were made out of hippopotamus tusks which were split down the middle lengthwise, producing two wands, each with one flat side and one curved side. Due to the curved nature of a hippopotamus tusk, these wands were curved, with one pointed end (the point of the tusk) and one blunt end (where the tusk was removed from the hippopotamus). Hippopotamus tusks may have been used to invoke Taweret the hippopotamus goddess of childbirth. The earliest apotropaic wands used in Egypt were undecorated, but "from around 1850 BC, they were usually provided with decorations of apotropaic figures directly related to the sun religion, or particular aspects of it, inscribed on the convex upper side... most of whom carry knives to ward off evil forces". These apotropaic wands were also inscribed with protective text on the flat side, such as "Cut off the head of the enemy when he enters the chamber of the children whom the lady... has borne". The latest apotropaic wand found belongs to the Second Intermediate Period king Senebkay. It seems that the use of these objects in Egypt declines after this point.
The Barsom used by Zoroastrian Magi is a bundle of twigs that was used during religious ceremonies. While the Barsom is not a wand itself, it was also used for divination purposes, and may be a form of prototypical wand from which later magical wands descend.
The concept of magic wands was used by the ancient Greek writer Homer, in his epic poems The Iliad and The Odyssey. In all cases, Homer used the word rhabdos (ῥάβδος), which means 'rod', and implies something that is thicker than the modern conception of wands. In those books, Homer wrote that magic wands were used by three different gods, namely Hermes, Athena, and Circe. In The Iliad, Homer wrote that Hermes generally used his magic wand Caduceus to make people sleep and wake up. In The Odyssey, Homer wrote that Athena used her magic wand to make Odysseus old, and then young again, and that Circe used her magic wand to turn Odysseus's men into pigs.
By the 1st century AD, the wand was a common symbol of magic in Roman cults, especially Mithraism. In the 3rd and 4th centuries, there are frequent depictions on sarcophagi of Jesus Christ, according to one opinion, using a magic wand to perform miracles, such as the raising of Lazarus and the feeding of the multitude. Other scholars disagree, arguing that these objects are staffs, since images of Christ with such an object "appear alongside images of Moses performing miracles with the staff".
Italian fairy tales put wands into the hands of the powerful fairies by the Late Middle Ages.
Mystical and religious usage
Wands are used in the Enochian magic of John Dee, the Hermetic Order of the Golden Dawn, Thelema, and Wicca, and by independent practitioners of magic.
Wands were introduced into the occult via the 13th-century Latin grimoire The Oathbound Book of Honorius. The wand idea from the Book of Honorius, along with various other ideas from that grimoire, were later incorporated into the 16th-century grimoire The Key of Solomon. The Key of Solomon became popular among occultists for hundreds of years. In 1888, there was the publication of an English translation of the Key of Solomon by Samuel Mathers (one of the co-founders of the Hermetic Order of the Golden Dawn), which made the text of the Key of Solomon available to the anglophone world. That 1888 English version inspired Gerald Gardner, the creator of Wicca, to incorporate the wand and various other ritual objects into Wicca.
The creators of the Golden Dawn got their idea to use a wand, as well as their other main ritual objects (dagger, sword, hexagrammic pentacle, and cup), from the writings of the mid-19th-century occult writer Eliphas Levi. Levi himself mentioned most of those objects (all except for the cup) in his writings because they are in the Key of Solomon, whereas he got the cup from the tarot suit of cups. In Levi's 1862 book Philosophie Occulte, he wrote a fake excerpt of a Hebrew version of the Key of Solomon, and that fake excerpt was part of the inspiration for the Golden Dawn's ritual objects, and especially their lotus wand.
The ceremonial magic of the Hermetic Order of the Golden Dawn uses several different types of wands for different purposes, the most prominent of which are the fire wand and the lotus wand. In Wicca, wands are traditionally used to summon and control angels and genies, but have later come to also be used for general spell-casting. Wands serve a similar purpose to athames (ritual daggers), though the two objects have their distinct uses: an athame is used to command, whereas a wand is seen as more gentle, and is used to invite or encourage.
Wands are traditionally made of wood—practitioners usually prune a branch from an oak, hazel, or other tree, or may even buy wood from a hardware store, and then carve it and add decorations to personalize it, though one can also purchase ready-made wands. In Wicca, the wand can represent the element air, or fire (following the wiccan author Raymond Buckland, who got his element associations from the Golden Dawn), although contemporary wand-makers also create wands for the elements of earth and water.
Tarot cards
The suit of wands is one of the four suits in the 1909 Rider–Waite–Smith occult tarot deck, and other, later tarot decks that are based upon that deck. The suit of wands replaced the suit of batons from earlier, non-occult tarot decks. The Rider–Waite–Smith tarot deck also replaced the suit of coins from earlier, non-occult decks, with the suit of pentacles. The Rider–Waite–Smith tarot deck was designed by two members of the Hermetic Order of the Golden Dawn, Arthur Edward Waite and Pamela Colman Smith. Waite provided the general guidelines for the deck (including the names of the four suits, and thus the suit of wands), and detailed guidelines for the designs of the Major Arcana, and he hired Smith to do the painting, and to make original artwork for the Minor Arcana. Waite instructed Smith to not paint actual wands in the wand cards, but rather to paint large tree trunk staffs with some foliage growing on them, so as to make an association between wands and Eliphas Levi's phrase "the flowering rod of Aaron" from Levi's fake fragment of The Key of Solomon.
Status symbolism
In British formal government ceremony, special officials may carry a wand of office that represents their power. Compare in this context the function of the ceremonial mace, the scepter, and the staff of office. Its age may be even greater, as Stone Age cave paintings show figures holding sticks, which may be symbolic representations of their power. The association with power may be its use for corporal punishment.
Fiction
In the 18th-century ballads "Allison Gross" and "The Laily Worm and the Machrel of the Sea", the villainesses use silver wands to transform their victims into animals, in emulation of the Odyssey that preceded them. In C. S. Lewis's 1950 novel The Lion, the Witch and the Wardrobe, the White Witch's most feared weapon is her wand, whose magic is capable of turning people into stone. This, again, employs the Odysseyan motif of an evil female witch who uses a magic wand to maliciously transform her victims.
In the mid-20th century, the MGM and Disney media companies popularized magic wands via four films in which wands were wielded by benevolent female fairy characters. Those films were The Wizard of Oz (1939; MGM; a wand-staff was wielded by Glinda the Good Witch of the North), Pinocchio (1940; Disney; a wand was wielded by the Blue Fairy), Cinderella (1950; Disney; a wand was wielded by a fairy godmother), and Sleeping Beauty (1959; Disney; a wand was wielded by each of three fairies). In The Wizard of Oz and Pinocchio, the fairies' wands are embellished with a star-shaped ornament on the end, whereas in Cinderella and Sleeping Beauty, the fairies have wands with traditional plain tips.
Magic wands commonly feature in works of fantasy fiction as spell-casting tools. Few other common denominators exist, so the capabilities of wands vary wildly. In J. K. Rowling's Harry Potter series, the first book of which was published in 1997, personal wands are common as necessary tools to channel and project each character's magic; they are used as weapons in magical duels, and it is the wand that chooses its owner. A wand is also present in the Children of the Red King series, in the possession of Charlie Bone, as well as in the popular MMORPG World of Warcraft, where caster classes such as the mage and warlock use wands offensively.
Magic wands and staves are often used in the magical girl genre of anime and manga (or other media) as well.
Other usage
Based on their magical symbolism, stage magicians often use "magic wands" as part of their misdirection. These wands are traditionally short and black, with white tips. A magic wand may be transformed into other items, grow, vanish, move, display a will of its own, or behave magically in its own right. A classic magic trick makes a bouquet of flowers shoot out of the wand's tip.
See also
Distaff
Rhabdomancy
Staff of Moses
White Rod
References
External links
Wands
Ceremonial magic
Ceremonial weapons
Fantasy weapons
Fiction about magic
Formal insignia
Magic items
Ritual weapons
Talismans | Wand | [
"Physics"
] | 2,355 | [
"Magic items",
"Physical objects",
"Matter"
] |
349,627 | https://en.wikipedia.org/wiki/Calcium%20chloride | Calcium chloride is an inorganic compound, a salt with the chemical formula . It is a white crystalline solid at room temperature, and it is highly soluble in water. It can be created by neutralising hydrochloric acid with calcium hydroxide.
Calcium chloride is commonly encountered as a hydrated solid with generic formula CaCl2·nH2O, where n = 0, 1, 2, 4, and 6. These compounds are mainly used for de-icing and dust control. Because the anhydrous salt is hygroscopic and deliquescent, it is used as a desiccant.
History
Calcium chloride was apparently discovered in the 15th century but was not studied properly until the 18th century. It was historically called "fixed sal ammoniac" because it was synthesized during the distillation of ammonium chloride with lime and was nonvolatile (while the former appeared to sublime); in more modern times (18th–19th centuries) it was called "muriate of lime".
Uses
De-icing and freezing-point depression
By depressing the freezing point of water, calcium chloride is used to prevent ice formation and is used to de-ice. This application consumes the greatest amount of calcium chloride. Calcium chloride is relatively harmless to plants and soil. As a de-icing agent, it is much more effective at lower temperatures than sodium chloride. When distributed for this use, it usually takes the form of small, white spheres a few millimeters in diameter, called prills. Solutions of calcium chloride can prevent freezing at temperatures as low as −52 °C (−62 °F), making it ideal for filling agricultural implement tires as a liquid ballast, aiding traction in cold climates.
It is also used in domestic and industrial chemical air dehumidifiers.
Road surfacing
The second largest application of calcium chloride exploits its hygroscopic nature and the tackiness of its hydrates; calcium chloride is highly hygroscopic and its hydration is an exothermic process. A concentrated solution keeps a liquid layer on the surface of dirt roads, which suppresses the formation of dust. It keeps the finer dust particles on the road, providing a cushioning layer. If these are allowed to blow away, the large aggregate begins to shift around and the road breaks down. Using calcium chloride reduces the need for grading by as much as 50% and the need for fill-in materials as much as 80%.
Food
In the food industry, calcium chloride is frequently employed as a firming agent in canned vegetables, particularly for canned tomatoes and cucumber pickles. It is also used in firming soybean curds into tofu and in producing a caviar substitute from vegetable or fruit juices. It is also used to enhance the texture of various other products, such as whole apples, whole hot peppers, whole and sliced strawberries, diced tomatoes, and whole peaches.
The firming effect of calcium chloride can be attributed to several mechanisms:
Complexation, since calcium ions form complexes with pectin, a polysaccharide found in the cell wall and middle lamella of plant tissues.
Membrane stabilization, since calcium ions contribute to the stabilization of the cell membrane.
Turgor pressure regulation, since calcium ions influence cell turgor pressure, which is the pressure exerted by the cell contents against the cell wall.
Calcium chloride's freezing-point depression properties are used to slow the freezing of the caramel in caramel-filled chocolate bars. Also, it is frequently added to sliced apples to maintain texture.
In brewing beer, calcium chloride is sometimes used to correct mineral deficiencies in the brewing water. It affects flavor and chemical reactions during the brewing process, and can also affect yeast function during fermentation.
In cheesemaking, calcium chloride is sometimes added to processed (pasteurized/homogenized) milk to restore the natural balance between calcium and protein in casein. It is added before the coagulant.
Calcium chloride is also commonly used as an "electrolyte" in sports drinks and other beverages; as a food additive used in conjunction with other inorganic salts it adds taste to bottled water.
The average intake of calcium chloride as food additives has been estimated to be 160–345 mg/day. Calcium chloride is permitted as a food additive in the European Union for use as a sequestrant and firming agent with the E number E509. It is considered as generally recognized as safe (GRAS) by the U.S. Food and Drug Administration. Its use in organic crop production is generally prohibited under the US National Organic Program.
The elemental calcium content in calcium chloride hexahydrate (CaCl2·6H2O) is approximately 18.2%. This means that for every gram of calcium chloride hexahydrate, there are about 182 milligrams of elemental calcium.
For anhydrous calcium chloride (CaCl2), the elemental calcium content is slightly higher, around 36.1% (for every gram of anhydrous calcium chloride there are about 361 milligrams of elemental calcium).
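These percentages follow directly from the molar masses involved; as a rough check with rounded atomic masses (Ca 40.08, Cl 35.45, H2O 18.02 g/mol):
w_{\mathrm{Ca}}(\mathrm{CaCl_2}) = \frac{40.08}{40.08 + 2 \times 35.45} = \frac{40.08}{110.98} \approx 0.361
w_{\mathrm{Ca}}(\mathrm{CaCl_2 \cdot 6H_2O}) = \frac{40.08}{110.98 + 6 \times 18.02} = \frac{40.08}{219.1} \approx 0.183
That is, roughly 361 mg and 183 mg of elemental calcium per gram of salt, in line with the figures quoted above (small differences are due to rounding).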
Calcium chloride has a very salty taste and can cause mouth and throat irritation at high concentrations, so it is typically not the first choice for long-term oral supplementation (as a calcium supplement). Calcium chloride, characterized by its low molecular weight and high water solubility, readily breaks down into calcium and chloride ions when exposed to water. These ions are efficiently absorbed from the intestine. However, caution should be exercised when handling calcium chloride, because it releases heat upon dissolution in water. This release of heat can lead to trauma and burns in the mouth, throat, esophagus, and stomach. In fact, there have been reported cases of stomach necrosis resulting from burns caused by accidental ingestion of large amounts of undissolved calcium chloride.
The extremely salty taste of calcium chloride is used to flavor pickles without increasing the food's sodium content.
Calcium chloride is used to prevent cork spot and bitter pit on apples by spraying on the tree during the late growing season.
Laboratory and related drying operations
Drying tubes are frequently packed with calcium chloride. Kelp is dried with calcium chloride for use in producing sodium carbonate. Anhydrous calcium chloride has been approved by the FDA as a packaging aid to ensure dryness (CPG 7117.02).
The hydrated salt can be dried for re-use but will dissolve in its own water of hydration if heated quickly and form a hard amalgamated solid when cooled.
Metal reduction flux
Similarly, CaCl2 is used as a flux and electrolyte in the FFC Cambridge electrolysis process for titanium production, where it ensures the proper exchange of calcium and oxygen ions between the electrodes.
Medical use
Calcium chloride infusions may be used as an intravenous therapy to prevent hypocalcemia.
Calcium chloride is a highly soluble calcium salt. Hexahydrate calcium chloride (CaCl2·6H2O) has solubility in water of 811 g/L at 25 °C. Calcium chloride when taken orally completely dissociates into calcium ions (Ca2+) in the gastrointestinal tract, resulting in readily bioavailable calcium. The high concentration of calcium ions facilitates efficient absorption in the small intestine. However, the use of calcium chloride as a source of calcium taken orally is less common compared to other calcium salts because of potential adverse effects such as gastrointestinal irritation and discomfort.
When tasted, calcium chloride exhibits a distinctive bitter flavor alongside its salty taste. The bitterness is attributable to the calcium ions and their interaction with human taste receptors: certain members of the TAS2R family of bitter taste receptors respond to calcium ions; the bitter perception of calcium is thought to be a protective mechanism to avoid ingestion of toxic substances, as many poisonous compounds taste bitter. While chloride ions (Cl⁻) primarily contribute to saltiness, at higher concentrations, they can enhance the bitter sensation. The combination of calcium and chloride ions intensifies the overall bitterness. At lower concentrations, calcium chloride may taste predominantly salty. The salty taste arises from the electrolyte nature of the compound, similar to sodium chloride (table salt). As the concentration increases, the bitter taste becomes more pronounced: the increased presence of calcium ions enhances the activation of bitterness receptors.
Other applications
Calcium chloride is used in concrete mixes to accelerate the initial setting, but chloride ions lead to corrosion of steel rebar, so it should not be used in reinforced concrete. The anhydrous form of calcium chloride may also be used for this purpose and can provide a measure of the moisture in concrete.
Calcium chloride is included as an additive in plastics and in fire extinguishers, in blast furnaces as an additive to control scaffolding (clumping and adhesion of materials that prevent the furnace charge from descending), and in fabric softener as a thinner.
The exothermic dissolution of calcium chloride is used in self-heating cans and heating pads.
Calcium chloride is used as a water hardener in the maintenance of hot tub water, as insufficiently hard water can lead to corrosion and foaming.
In the oil industry, calcium chloride is used to increase the density of solids-free brines. It is also used to provide inhibition of swelling clays in the water phase of invert emulsion drilling fluids.
Calcium chloride (CaCl2) acts as flux material, decreasing the melting point, in the Davy process for the industrial production of sodium metal through the electrolysis of molten NaCl.
Calcium chloride is also used in the production of activated charcoal.
Calcium chloride can be used to precipitate fluoride ions from water as insoluble CaF2.
Calcium chloride is also an ingredient used in ceramic slipware. It suspends clay particles so that they float within the solution, making it easier to use in a variety of slipcasting techniques.
For watering plants to use as a fertilizer, a moderate concentration of calcium chloride is used to avoid potential toxicity: 5 to 10 mM (millimolar) is generally effective and safe for most plants, that is, about 0.55 to 1.1 g of anhydrous calcium chloride (CaCl2) per liter of water or about 1.1 to 2.2 g of calcium chloride hexahydrate (CaCl2·6H2O) per liter of water. Calcium chloride solution is used immediately after preparation to prevent potential alterations in its chemical composition. Besides that, calcium chloride is highly hygroscopic, meaning it readily absorbs moisture from the air. If the solution is left standing, it can absorb additional water vapor, leading to dilution and a decrease in the intended concentration. Prolonged standing may lead to the precipitation of calcium hydroxide or other insoluble calcium compounds, reducing the availability of calcium ions in the solution and reducing the effectiveness of the solution as a calcium source for plants. Nutrient solutions can become a medium for microbial growth if stored for extended periods. Microbial contamination may alter the composition of the solution and potentially introduce pathogens to the plants. When dissolved in water, calcium chloride can undergo hydrolysis, especially over time, which can lead to the formation of small amounts of hydrochloric acid and calcium hydroxide: CaCl2 + 2 H2O ⇌ Ca(OH)2 + 2 HCl. This reaction can lower the pH of the solution, making it more acidic. Acidic solutions may harm plant tissues and disrupt nutrient uptake.
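The gram amounts above follow from converting molar concentration to mass concentration (mass per litre = molar concentration × molar mass); shown here for the 10 mM upper bound, with the 5 mM values being half of these:
10\ \mathrm{mM\ CaCl_2}: \quad 0.010\ \mathrm{mol/L} \times 110.98\ \mathrm{g/mol} \approx 1.11\ \mathrm{g/L}
10\ \mathrm{mM\ CaCl_2 \cdot 6H_2O}: \quad 0.010\ \mathrm{mol/L} \times 219.1\ \mathrm{g/mol} \approx 2.19\ \mathrm{g/L}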
Calcium chloride dihydrate (20 percent by weight) dissolved in ethanol (95 percent ABV) has been used as a sterilant for male animals. The solution is injected into the testes of the animal. Within one month, necrosis of testicular tissue results in sterilization.
Cocaine producers in Colombia import tons of calcium chloride to recover solvents that are on the INCB Red List and are more tightly controlled.
Hazards
Although the salt is non-toxic in small quantities when wet, the strongly hygroscopic properties of non-hydrated calcium chloride present some hazards. It can act as an irritant by desiccating moist skin. Solid calcium chloride dissolves exothermically, and burns can result in the mouth and esophagus if it is ingested. Ingestion of concentrated solutions or solid products may cause gastrointestinal irritation or ulceration.
Consumption of calcium chloride can lead to hypercalcemia.
Properties
Calcium chloride dissolves in water, producing chloride ions and the aquo complex [Ca(H2O)6]2+. In this way, these solutions are sources of "free" calcium and free chloride ions. This description is illustrated by the fact that these solutions react with phosphate sources to give a solid precipitate of calcium phosphate, Ca3(PO4)2.
Calcium chloride has a very high enthalpy change of solution, indicated by considerable temperature rise accompanying dissolution of the anhydrous salt in water. This property is the basis for its largest-scale application.
Aqueous solutions of calcium chloride tend to be slightly acidic due to the influence of the chloride ions on the hydrogen ion concentration in water. The slight acidity of calcium chloride solutions is primarily due to the increased ionic strength of the solution, which can influence the activity of hydrogen ions and lower the pH slightly; the pH decreases slightly as the concentration of the solution increases.
Molten calcium chloride can be electrolysed to give calcium metal and chlorine gas:
CaCl2(l) → Ca(s) + Cl2(g)
Preparation
In much of the world, calcium chloride is derived from limestone as a by-product of the Solvay process, which follows the net reaction below:
2 NaCl + CaCO3 → Na2CO3 + CaCl2
North American consumption in 2002 was 1,529,000 tonnes (3.37 billion pounds). In the US, most calcium chloride is obtained by purification from brine. As with most bulk commodity salt products, trace amounts of other cations from the alkali metals and alkaline earth metals (groups 1 and 2) and other anions from the halogens (group 17) typically occur.
Occurrence
Calcium chloride occurs as the rare evaporite minerals sinjarite (dihydrate) and antarcticite (hexahydrate). Another natural hydrate known is ghiaraite – a tetrahydrate. The related minerals chlorocalcite (potassium calcium chloride, ) and tachyhydrite (calcium magnesium chloride, ) are also very rare. The same is true for rorisite, CaClF (calcium chloride fluoride).
See also
Calcium(I) chloride
Calcium chloride transformation
Magnesium chloride
Calcium supplement
References
External links
International Chemical Safety Card 1184
Product and Application Information (Formerly Dow Chemical Calcium Chloride division)
Report on steel corrosion by chloride including CaCl2
Collection of calcium chloride reports and articles
Calcium chloride, Anhydrous MSDS
Difusivity of calcium chloride
Centers for Disease Control and Prevention, National Institutes of Occupational Safety and Health, "Calcium Chloride (anhydrous)"
Calcium compounds
Chlorides
Alkaline earth metal halides
Deliquescent materials
Desiccants
Pyrotechnic colorants
Edible salt
E-number additives
Concrete admixtures | Calcium chloride | [
"Physics",
"Chemistry"
] | 3,069 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Desiccants",
"Materials",
"Edible salt",
"Deliquescent materials",
"Matter"
] |
349,632 | https://en.wikipedia.org/wiki/Clothes%20iron | A clothes iron (also flatiron, smoothing iron, dry iron, steam iron or simply iron) is a small appliance that, when heated, is used to press clothes to remove wrinkles and unwanted creases. Domestic irons generally range in operating temperature from between to . It is named for the metal (iron) of which the device was historically made, and the use of it is generally called ironing, the final step in the process of laundering clothes.
Ironing works by loosening the ties between the long chains of molecules that exist in polymer fiber materials. With the heat and the weight of the ironing plate, the fibers are stretched and the fabric maintains its new shape when cool. Some materials, such as cotton, require the use of water to loosen the intermolecular bonds.
History and development
Before the introduction of electricity, irons were heated by combustion, either in a fire or with some internal arrangement. Such an iron was made as a solid piece of iron with a handle and was heated, for example, on a wood stove and used to smooth clothes. It can also be called a smoothing iron. An "electric flatiron" was invented by American Henry W. Seely and patented on June 6, 1882. It was heavy and took a long time to heat. The UK Electricity Association is reported to have said that an electric iron with a carbon arc appeared in France in 1880, but this is considered doubtful.
Two of the oldest sorts of iron were either containers filled with a burning substance, or solid lumps of metal which could be heated directly.
Metal pans filled with hot coals were used for smoothing fabrics in China in the 1st century BC. A later design consisted of an iron box which could be filled with hot coals, which had to be periodically aerated by attaching a bellows. In the late nineteenth and early twentieth centuries, there were many irons in use that were heated by fuels such as kerosene, ethanol, whale oil, natural gas, carbide gas (acetylene, as with carbide lamps), or even gasoline. Some houses were equipped with a system of pipes for distributing natural gas or carbide gas to different rooms in order to operate appliances such as irons, in addition to lights. Despite the risk of fire, liquid-fuel irons were sold in U.S. rural areas up through World War II. In Kerala, India, burning coconut shells were traditionally used as an alternative to charcoal due to their comparable heating capacity. This method is still employed as a backup option, particularly during frequent power outages. Other box irons had heated metal inserts instead of hot coals.
From the 17th century, sadirons or sad irons (from Middle English "sad", meaning "solid", used in English through the 1800s) began to be used. They were thick slabs of cast iron, triangular and with a handle, heated in a fire or on a stove. These were also called flat irons. A laundry worker would employ a cluster of solid irons that were heated from a single source: As the iron currently in use cooled down, it could be quickly replaced by a hot one.
In the industrialized world, these designs have been superseded by the electric iron, which uses resistive heating from an electric current. The hot plate, called the sole plate, is made of aluminium or stainless steel polished to be as smooth as possible; it is sometimes coated with a low-friction heat-resistant plastic to reduce friction below that of the metal plate. The heating element is controlled by a thermostat that switches the current on and off to maintain the selected temperature. The invention of the resistively heated electric iron is credited to Henry W. Seeley of New York City in 1882. In the same year an iron heated by a carbon arc was introduced in France, but was too dangerous to be successful. The early electric irons had no easy way to control their temperature, and the first thermostatically controlled electric iron appeared in the 1920s. The first commercially available electric steam iron was introduced in 1926 by a New York drying and cleaning company, Eldec, but was not a commercial success. The patent for an electric steam iron and dampener was issued to Max Skolnik of Chicago in 1934. In 1938, Skolnik granted the Steam-O-Matic Corporation of New York the exclusive right to manufacture steam-electric irons. This was the first steam iron to achieve any degree of popularity, and led the way to more widespread use of the electric steam iron during the 1940s and 1950s.
Types and names
Historically, irons have had several variations and have thus been called by many names:
Flatiron (American English), flat iron (British English) or smoothing iron
The general name for a hand-held iron consisting simply of a handle and a solid, flat, metal base, and named for the flat ironing face used to smooth clothes.
Sad iron or sadiron
Mentioned above, meaning "solid" or heavy iron, where the base is a solid block of metal, sometimes used to refer to irons with heavier bases than a typical "flatiron".
Box iron, ironing box, charcoal iron, ox-tongue iron or slug iron
Mentioned above; the base is a container, into which hot coals or a metal brick or slug can be inserted to keep the iron heated. The ox-tongue iron is named for the particular shape of the insert, referred to as an ox-tongue slug.
Goose, tailor's goose or, in Scots, gusing iron
A type of flat iron or sad iron named for the goose-like curve in its neck, and (in the case of "tailor's goose") its usage by tailors.
Goffering iron
This type of iron, now obsolete, consists of a metal cylinder oriented horizontally on a stand. It was used to iron ruffs and collars.
Hygiene
Proper ironing of clothes has proven to be an effective method to avoid infections like those caused by lice.
Features
Modern irons for home use can have the following features:
A design that allows the iron to be set down, usually standing on its end, without the hot soleplate touching anything that could be damaged;
A thermostat ensuring maintenance of a constant temperature;
A temperature control dial allowing the user to select the operating temperatures (usually marked with types of cloth rather than temperatures: "silk", "wool", "cotton", "linen", etc.);
An electrical cord with heat-resistant silicone rubber insulation;
Injection of steam through the fabric during the ironing process;
A water reservoir inside the iron used for steam generation;
An indicator showing the amount of water left in the reservoir,
Constant steam: constantly sends steam through the hot part of the iron into the clothes;
Steam burst: sends a burst of steam through the clothes when the user presses a button;
(advanced feature) Dial controlling the amount of steam to emit as a constant stream;
(advanced feature) Anti-drip system;
Cord control: the point at which the cord attaches to the iron has a spring to hold the cord out of the way while ironing and likewise when setting down the iron (prevents fires, is more convenient, etc.);
A retractable cord for easy storage;
(advanced feature) non-stick coating along the sole plate to help the iron glide across the fabric
(advanced feature) Anti-burn control: if the iron is left flat (possibly touching clothes) for too long, the iron shuts off to prevent scorching and fires;
(advanced feature) Energy saving control: if the iron is left undisturbed for several (10 or 15) minutes, the iron shuts off.
Cordless irons: the iron is placed on a stand to warm up, relying on thermal mass to stay hot for a short period. These are useful for light loads only. Battery power is not viable for irons, as they require more power than practical batteries can provide.
(advanced feature) 3-way automatic shut-off
(advanced feature) self-cleaning
(advanced feature) Anti-scale to help remove lime scale buildup from using hard water for a long time.
(advanced feature) vertical steam to help remove creases and wrinkles by holding an iron vertically and steaming material close to it.
Collections
One of the world's larger collections of irons, comprising 1,300 historical examples from Germany and the rest of the world, is housed in Gochsheim Castle, near Karlsruhe, Germany.
Many ethnographical museums around the world have collections of irons. In Ukraine, for example, about 150 irons form part of the exhibition at Radomysl Castle.
Ironing center
An ironing center, steam ironing station, or steam generator iron is a device consisting of a clothes iron and a separate steam-generating tank. By having a separate tank, the ironing unit can generate more steam than a conventional iron, making steam ironing faster. Such ironing facilities take longer to warm up than conventional irons, and cost more.
See also
Dadeumi, a mechanical way to smooth clothing, once traditional in Korea
Flatiron Building, of cross-section like a flatiron
Flatiron gunboat, flatiron-shaped in plan view
Hair iron
Home robot
Mangle (machine)
Soldering iron
Trouser press
Mary Florence Potts, inventor of the detachable cold wooden handle for irons
References
External links
Charcoal and other antique irons from the White River Valley Museum
Antique Irons from the Virtual Museum of Textile Arts
1882 introductions
Home appliances
Laundry equipment
19th-century inventions
Ancient inventions
British inventions
Textile tools | Clothes iron | [
"Physics",
"Technology"
] | 1,995 | [
"Physical systems",
"Machines",
"Home appliances"
] |
349,654 | https://en.wikipedia.org/wiki/Electro-osmosis | In chemistry, electro-osmotic flow (EOF, hyphen optional; synonymous with electro-osmosis or electro-endosmosis) is the motion of liquid induced by an applied potential across a porous material, capillary tube, membrane, microchannel, or any other fluid conduit. Because electro-osmotic velocities are independent of conduit size, as long as the electrical double layer is much smaller than the characteristic length scale of the channel, electro-osmotic flow will have little effect. Electro-osmotic flow is most significant when in small channels, and is an essential component in chemical separation techniques, notably capillary electrophoresis. Electro-osmotic flow can occur in natural unfiltered water, as well as buffered solutions.
History
Electro-osmotic flow was first reported in 1807 by Ferdinand Friedrich Reuss (18 February 1778 (Tübingen, Germany) – 14 April 1852 (Stuttgart, Germany)) in an unpublished lecture before the Physical-Medical Society of Moscow; Reuss first published an account of electro-osmotic flow in 1809 in the Memoirs of the Imperial Society of Naturalists of Moscow. He showed that water could be made to flow through a plug of clay by applying an electric voltage. Clay is composed of closely packed particles of silica and other minerals, and water flows through the narrow spaces between these particles just as it would through a narrow glass tube. Any combination of an electrolyte (a fluid containing dissolved ions) and an insulating solid would generate electro-osmotic flow, though for water/silica the effect is particularly large. Even so, flow speeds are typically only a few millimeters per second.
Electro-osmosis was discovered independently in 1814 by the English chemist Robert Porrett Jr. (1783–1868).
Cause
Electroosmotic flow is caused by the Coulomb force induced by an electric field on net mobile electric charge in a solution. Because the chemical equilibrium between a solid surface and an electrolyte solution typically leads to the interface acquiring a net fixed electrical charge, a layer of mobile ions, known as an electrical double layer or Debye layer, forms in the region near the interface. When an electric field is applied to the fluid (usually via electrodes placed at inlets and outlets), the net charge in the electrical double layer is induced to move by the resulting Coulomb force. The resulting flow is termed electroosmotic flow.
Description
The flow resulting from an applied voltage is a plug flow. Unlike the parabolic profile generated by a pressure differential, a plug flow's velocity profile is approximately flat, with slight variation near the electric double layer. This produces significantly less dispersive broadening and can be controlled without valves, offering a high-performance method for fluid separation, although many complex factors make this control difficult. Because measuring and monitoring flow in microfluidic channels is difficult (measurement itself tends to disrupt the flow pattern), most analysis is done through numerical methods and simulation.
Electroosmotic flow through microchannels can be modeled after the Navier-Stokes equation with the driving force deriving from the electric field and the pressure differential. Thus it is governed by the continuity equation
$\nabla \cdot \mathbf{u} = 0$
and momentum
$\rho \frac{D\mathbf{u}}{Dt} = -\nabla p + \mu \nabla^2 \mathbf{u} + \rho_e \left(\mathbf{E} + \mathbf{E}_\zeta\right)$
where $\mathbf{u}$ is the velocity vector, $\rho$ is the density of the fluid, $\tfrac{D}{Dt}$ is the material derivative, $\mu$ is the viscosity of the fluid, $\rho_e$ is the electric charge density, $\mathbf{E}$ is the applied electric field, $\mathbf{E}_\zeta$ is the electric field due to the zeta potential at the walls and $p$ is the fluid pressure.
Laplace's equation can describe the external electric field
$\nabla^2 \phi = 0$
while the potential within the electric double layer is governed by
$\nabla^2 \psi = -\frac{\rho_e}{\varepsilon \varepsilon_0}$
where $\varepsilon$ is the dielectric constant of the electrolyte solution and $\varepsilon_0$ is the vacuum permittivity. This equation can be further simplified using the Debye-Hückel approximation
$\rho_e = -\frac{\varepsilon \varepsilon_0}{\lambda_D^2} \psi$
where $\lambda_D$ is the Debye length, used to describe the characteristic thickness of the electric double layer. The equations for potential field within the double layer can be combined as
$\nabla^2 \psi = \frac{\psi}{\lambda_D^2}.$
The transport of ions in space can be modeled using the Nernst–Planck equation:
$\frac{\partial c}{\partial t} = \nabla \cdot \left[ D \nabla c - \mathbf{u} c + \frac{D z e}{k_B T} c \left( \nabla \phi + \frac{\partial \mathbf{A}}{\partial t} \right) \right]$
Where $c$ is the ion concentration, $\mathbf{A}$ is the magnetic vector potential, $D$ is the diffusivity of the chemical species, $z$ is the valence of ionic species, $e$ is the elementary charge, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature.
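As a rough illustration of the plug-like profile these equations produce, the sketch below evaluates the electro-osmotic velocity across a thin slit channel using the Debye-Hückel approximation discussed above. The closed-form profile u(y) = u_eo [1 - cosh(y/λ_D)/cosh(h/λ_D)], the Helmholtz-Smoluchowski slip velocity u_eo = -εε0ζE/μ, and all numerical values are assumptions made for the example; they are not taken from this article.

```python
import numpy as np

# Illustrative parameters (assumed): a water-like electrolyte in a slit channel
eps_r = 78.5        # relative permittivity of the solution
eps_0 = 8.854e-12   # vacuum permittivity, F/m
zeta  = -0.05       # zeta potential at the walls, V
mu    = 1.0e-3      # dynamic viscosity, Pa*s
E     = 1.0e4       # applied electric field, V/m
lam_D = 10e-9       # Debye length, m
h     = 0.5e-6      # channel half-height, m

# Helmholtz-Smoluchowski "slip" velocity far from the walls (assumed formula)
u_eo = -eps_r * eps_0 * zeta * E / mu

# Plug-like velocity profile across the slit under the Debye-Huckel approximation
y = np.linspace(-h, h, 11)
u = u_eo * (1.0 - np.cosh(y / lam_D) / np.cosh(h / lam_D))

for yi, ui in zip(y, u):
    print(f"y = {yi * 1e6:+.2f} um   u = {ui * 1e3:.3f} mm/s")
```

The printed profile is essentially flat except within a few Debye lengths of the walls, which is the plug flow described in this section, and its magnitude (a fraction of a millimetre per second) is consistent with the flow speeds quoted in the history section.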
Applications
Electro-osmotic flow is commonly used in microfluidic devices, soil analysis and processing, and chemical analysis, all of which routinely involve systems with highly charged surfaces, often of oxides. One example is capillary electrophoresis, in which electric fields are used to separate chemicals according to their electrophoretic mobility by applying an electric field to a narrow capillary, usually made of silica. In electrophoretic separations, the electroosmotic flow affects the elution time of the analytes.
Electro-osmotic flow is actuated in a FlowFET to electronically control fluid flow through a junction.
It is projected that microfluidic devices utilizing electroosmotic flow will have applications in medical research. Once control of this flow is better understood and implemented, the ability to separate fluids on the atomic level will be a vital component of drug dischargers. Mixing fluids at the microscale is currently troublesome. It is believed that electrical control of fluids will be the method by which small volumes of fluid are mixed.
A controversial use of electro-osmotic systems is the control of rising damp in the walls of buildings. While there is little evidence to suggest that these systems can be useful in moving salts in walls, such systems are claimed to be especially effective in structures with very thick walls.
However, some claim that there is no scientific basis for those systems, and cite several examples of their failure.
Electro-osmosis can also be used for self-pumping pores powered by chemical reactions rather than electric fields. This approach, using , has been demonstrated and modeled with the Nernst-Planck-Stokes equations.
Physics
In fuel cells, electro-osmosis causes protons moving through a proton exchange membrane (PEM) to drag water molecules from one side (anode) to the other (cathode).
Vascular plant biology
In vascular plant biology, electro-osmosis is also used as an alternative or supplemental explanation for the movement of polar liquids via the phloem that differs from the cohesion-tension theory supplied in the mass flow hypothesis and others, such as cytoplasmic streaming. Companion cells are involved in the "cyclic" withdrawal of ions (K+) from sieve tubes, and their secretion parallel to their position of withdrawal between sieve plates, resulting in polarisation of sieve plate elements alongside potential difference in pressure, and results in polar water molecules and other solutes present moved upward through the phloem.
In 2003, St Petersburg University graduates applied direct electric current to 10 mm segments of mesocotyls of maize seedlings alongside one-year linden shoots; electrolyte solutions present in the tissues moved toward the cathode that was in place, suggesting that electro-osmosis might play a role in solution transport through conductive plant tissues.
Disadvantages
Maintaining an electric field in an electrolyte requires Faradaic reactions to occur at the anode and cathode. This is typically electrolysis of water, which generates hydrogen peroxide, hydrogen ions (acid) and hydroxide (base) as well as oxygen and hydrogen gas bubbles. The hydrogen peroxide and/or pH changes generated can adversely affect biological cells and biomolecules such as proteins, while gas bubbles tend to "clog" microfluidic systems. These problems can be alleviated by using alternative electrode materials such as conjugated polymers which can undergo the Faradaic reactions themselves, dramatically reducing electrolysis.
See also
Surface charge
Capillary electrophoresis
Electrical double layer
Streaming current
Induced-charge Electrokinetics
Streaming potential
Zeta potential
Electroosmotic pump
Electrical double layer
Microfluidics
Electrochemistry
References
Further reading
Fluid dynamics
Electrochemistry | Electro-osmosis | [
"Chemistry",
"Engineering"
] | 1,659 | [
"Piping",
"Chemical engineering",
"Electrochemistry",
"Fluid dynamics"
] |
349,676 | https://en.wikipedia.org/wiki/Electrophoresis | Electrophoresis is the motion of charged dispersed particles or dissolved charged molecules relative to a fluid under the influence of a spatially uniform electric field. As a rule, these are zwitterions.
Electrophoresis is used in laboratories to separate macromolecules based on their charges. The technique normally applies a negative electrode (the cathode) at one end, so that anionic (negatively charged) molecules such as proteins migrate towards the positive electrode (the anode). Therefore, electrophoresis of positively charged particles or molecules (cations) is sometimes called cataphoresis, while electrophoresis of negatively charged particles or molecules (anions) is sometimes called anaphoresis.
Electrophoresis is the basis for analytical techniques used in biochemistry and bioinorganic chemistry to separate particles, molecules, or ions by size, charge, or binding affinity, either freely or through a supportive medium using a one-directional flow of electrical charge. It is used extensively in DNA, RNA and protein analysis.
Liquid "droplet electrophoresis" is significantly different from the classic "particle electrophoresis" because of droplet characteristics such as a mobile surface charge and the nonrigidity of the interface. Also, the liquid–liquid system, where there is an interplay between the hydrodynamic and electrokinetic forces in both phases, adds to the complexity of electrophoretic motion.
History
Theory
Suspended particles have an electric surface charge, strongly affected by surface adsorbed species, on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer which has direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called electrophoretic retardation force, or ERF in short.
When the electric field is applied and the charged particle to be analyzed is at steady movement through the diffuse layer, the total resulting force is zero:
$F_{\text{electric}} + F_{\text{drag}} + F_{\text{retardation}} = 0.$
Considering the drag on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the drift velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as:
$\mu_e = \frac{v}{E}.$
The most well known and widely used theory of electrophoresis was developed in 1903 by Marian Smoluchowski:
$\mu_e = \frac{\varepsilon_r \varepsilon_0 \zeta}{\eta},$
where εr is the dielectric constant of the dispersion medium, ε0 is the permittivity of free space (C2 N−1 m−2), η is dynamic viscosity of the dispersion medium (Pa s), and ζ is zeta potential (i.e., the electrokinetic potential of the slipping plane in the double layer, units mV or V).
The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration. It has limitations on its validity. For instance, it does not include Debye length κ−1 (units m). However, Debye length must be important for electrophoresis, as follows immediately from Figure 2 ("Illustration of electrophoresis retardation").
Increasing thickness of the double layer (DL) leads to removing the point of retardation force further from the particle surface. The thicker the DL, the smaller the retardation force must be.
Detailed theoretical analysis proved that the Smoluchowski theory is valid only for sufficiently thin DL, when particle radius a is much greater than the Debye length:
$a \gg \kappa^{-1}.$
This model of "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic theories. This model is valid for most aqueous systems, where the Debye length is usually only a few nanometers. It breaks down only for nano-colloids in solutions with ionic strength close to that of pure water.
The Smoluchowski theory also neglects the contributions from surface conductivity. This is expressed in modern theory as the condition of a small Dukhin number:
$Du \ll 1.$
In an effort to expand the range of validity of electrophoretic theories, the opposite asymptotic case was considered, when the Debye length is larger than the particle radius:
$a \kappa < 1.$
Under this condition of a "thick double layer", Erich Hückel predicted the following relation for electrophoretic mobility:
$\mu_e = \frac{2 \varepsilon_r \varepsilon_0 \zeta}{3 \eta}.$
This model can be useful for some nanoparticles and non-polar fluids, where Debye length is much larger than in the usual cases.
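A small sketch of the two limiting mobility formulas just given (Smoluchowski for a thin double layer, Hückel for a thick one) follows; the numerical values for water are assumptions chosen only for illustration.

```python
EPS_0 = 8.854e-12   # vacuum permittivity, F/m

def smoluchowski_mobility(eps_r, zeta, eta):
    """Electrophoretic mobility in the thin-double-layer limit, a >> 1/kappa."""
    return eps_r * EPS_0 * zeta / eta

def huckel_mobility(eps_r, zeta, eta):
    """Electrophoretic mobility in the thick-double-layer limit, a*kappa < 1."""
    return 2.0 * eps_r * EPS_0 * zeta / (3.0 * eta)

# Assumed, illustrative values for an aqueous dispersion
eps_r = 78.5     # relative permittivity of water
zeta  = 0.03     # zeta potential, V (30 mV)
eta   = 8.9e-4   # dynamic viscosity, Pa*s

print(f"Smoluchowski: {smoluchowski_mobility(eps_r, zeta, eta):.3e} m^2/(V*s)")
print(f"Hueckel:      {huckel_mobility(eps_r, zeta, eta):.3e} m^2/(V*s)")
# The drift velocity in a field E then follows from v = mu_e * E.
```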
There are several analytical theories that incorporate surface conductivity and eliminate the restriction of a small Dukhin number, pioneered by Theodoor Overbeek and F. Booth. Modern, rigorous theories valid for any Zeta potential and often any aκ stem mostly from Dukhin–Semenikhin theory.
In the thin double layer limit, these theories confirm the numerical solution to the problem provided by Richard W. O'Brien and Lee R. White.
For modeling more complex scenarios, these simplifications become inaccurate, and the electric field must be modeled spatially, tracking its magnitude and direction. Poisson's equation can be used to model this spatially-varying electric field. Its influence on fluid flow can be modeled with the Stokes law, while transport of different ions can be modeled using the Nernst–Planck equation. This combined approach is referred to as the Poisson-Nernst-Planck-Stokes equations. It has been validated for the electrophoresis of particles.
See also
References
Further reading
External links
List of relative mobilities
Colloidal chemistry
Biochemical separation processes
Electroanalytical methods
Instrumental analysis
Laboratory techniques | Electrophoresis | [
"Chemistry",
"Biology"
] | 1,245 | [
"Biochemistry methods",
"Colloidal chemistry",
"Electroanalytical chemistry",
"Separation processes",
"Instrumental analysis",
"Colloids",
"Biochemical separation processes",
"Surface science",
"Molecular biology techniques",
"nan",
"Electroanalytical methods",
"Electrophoresis"
] |
349,704 | https://en.wikipedia.org/wiki/Spherical%20circle | In spherical geometry, a spherical circle (often shortened to circle) is the locus of points on a sphere at constant spherical distance (the spherical radius) from a given point on the sphere (the pole or spherical center). It is a curve of constant geodesic curvature relative to the sphere, analogous to a line or circle in the Euclidean plane; the curves analogous to straight lines are called great circles, and the curves analogous to planar circles are called small circles or lesser circles. If the sphere is embedded in three-dimensional Euclidean space, its circles are the intersections of the sphere with planes, and the great circles are intersections with planes passing through the center of the sphere.
Fundamental concepts
Intrinsic characterization
A spherical circle with zero geodesic curvature is called a great circle, and is a geodesic analogous to a straight line in the plane. A great circle separates the sphere into two equal hemispheres, each with the great circle as its boundary. If a great circle passes through a point on the sphere, it also passes through the antipodal point (the unique furthest other point on the sphere). For any pair of distinct non-antipodal points, a unique great circle passes through both. Any two points on a great circle separate it into two arcs analogous to line segments in the plane; the shorter is called the minor arc and is the shortest path between the points, and the longer is called the major arc.
A circle with non-zero geodesic curvature is called a small circle, and is analogous to a circle in the plane. A small circle separates the sphere into two spherical disks or spherical caps, each with the circle as its boundary. For any triple of distinct non-antipodal points a unique small circle passes through all three. Any two points on the small circle separate it into two arcs, analogous to circular arcs in the plane.
Every circle has two antipodal poles (or centers) intrinsic to the sphere. A great circle is equidistant from its poles, while a small circle is closer to one pole than the other. Concentric circles are sometimes called parallels, because they each have constant distance to each other, and in particular to their concentric great circle, and are in that sense analogous to parallel lines in the plane.
Extrinsic characterization
If the sphere is isometrically embedded in Euclidean space, the sphere's intersection with a plane is a circle, which can be interpreted extrinsically to the sphere as a Euclidean circle: a locus of points in the plane at a constant Euclidean distance (the extrinsic radius) from a point in the plane (the extrinsic center). A great circle lies on a plane passing through the center of the sphere, so its extrinsic radius is equal to the radius of the sphere itself, and its extrinsic center is the sphere's center. A small circle lies on a plane not passing through the sphere's center, so its extrinsic radius is smaller than that of the sphere and its extrinsic center is an arbitrary point in the interior of the sphere. Parallel planes cut the sphere into parallel (concentric) small circles; the pair of parallel planes tangent to the sphere are tangent at the poles of these circles, and the diameter through these poles, passing through the sphere's center and perpendicular to the parallel planes, is called the axis of the parallel circles.
The sphere's intersection with a second sphere is also a circle, and the sphere's intersection with a concentric right circular cylinder or right circular cone is a pair of antipodal circles.
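As a concrete illustration of the extrinsic characterization, the sketch below recovers the Euclidean center and radius of a spherical circle from its pole and spherical radius. The relations center = R cos(ρ/R) p̂ and radius = R sin(ρ/R) follow from elementary trigonometry on the embedding and are assumptions of this example rather than formulas quoted from the text.

```python
import numpy as np

def extrinsic_circle(pole, spherical_radius, R=1.0):
    """Extrinsic (Euclidean) center and radius of the spherical circle with the
    given pole (a 3-vector) and spherical radius, on a sphere of radius R
    centered at the origin."""
    p = np.asarray(pole, dtype=float)
    p = p / np.linalg.norm(p)        # unit vector toward the pole
    angle = spherical_radius / R     # angle subtended at the sphere's center
    center = R * np.cos(angle) * p   # lies inside the sphere (at its center for a great circle)
    radius = R * np.sin(angle)       # at most R, attained by great circles
    return center, radius

# A great circle: spherical radius (pi/2)*R, so its plane passes through the center.
print(extrinsic_circle([0.0, 0.0, 1.0], np.pi / 2))   # center ~ (0,0,0), radius ~ 1

# A small circle, the "parallel" at 60 degrees of spherical distance from the pole.
print(extrinsic_circle([0.0, 0.0, 1.0], np.pi / 3))   # center ~ (0,0,0.5), radius ~ 0.866
```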
Applications
Geodesy
In the geographic coordinate system on a globe, the parallels of latitude are small circles, with the Equator the only great circle. By contrast, all meridians of longitude, paired with their opposite meridian in the other hemisphere, form great circles.
References
Spherical curves
Circles | Spherical circle | [
"Mathematics"
] | 798 | [
"Circles",
"Pi",
"Geometry",
"Geometry stubs"
] |
349,732 | https://en.wikipedia.org/wiki/Download | In computer networks, download means to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server.
A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file.
Definition
Downloading generally transfers entire files for local storage and later use, as contrasted with streaming, where the data is used nearly immediately while the transmission is still in progress and may not be stored long-term. Websites that offer streaming media or media displayed in-browser, such as YouTube, increasingly place restrictions on the ability of users to save these materials to their computers after they have been received.
Downloading on computer networks involves retrieving data from a remote system, like a web server, FTP server, or email server, unlike uploading, where data is sent to a remote server. A download can refer to a file made available for retrieval or one that has been received, encompassing the entire process of obtaining such a file.
Downloading is not the same as data transfer; moving or copying data between two storage devices would be data transfer, but receiving data from the Internet or BBS is downloading.
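As a minimal illustration of the distinction drawn above, the sketch below downloads a file in its entirety and stores it locally using Python's standard library; the URL and the local filename are hypothetical placeholders.

```python
import urllib.request

# Hypothetical URL and local filename, used purely for illustration.
url = "https://example.com/path/to/file.zip"
local_path = "file.zip"

# urlretrieve receives the remote data and stores the whole file locally,
# which is what distinguishes a download from streaming playback.
urllib.request.urlretrieve(url, local_path)
print(f"Saved {url} to {local_path}")
```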
Copyright
Downloading media files involves the use of linking and framing Internet material and relates to copyright law. Streaming and downloading can involve making copies of works that infringe on copyrights or other rights, and organizations running such websites may become vicariously liable for copyright infringement by causing others to do so.
Open hosting servers allow people to upload files to a central server, which incurs bandwidth and hard disk space costs that grow with each download. Anonymous and open hosting servers make it difficult to hold hosts accountable. Taking legal action against the technologies behind unauthorized "file sharing" has proven successful for centralized networks like Napster, and untenable for decentralized networks like Gnutella or BitTorrent. The leading YouTube audio-ripping site agreed to shut down after being sued by a large coalition of recording labels.
Downloading and streaming relate to the more general usage of the Internet to facilitate copyright infringement, also known as "software piracy". As overt static hosting of unauthorized copies of works (i.e., centralized networks) is often quickly and uncontroversially rebuffed, legal issues have in recent years tended to deal with the usage of dynamic web technologies (decentralized networks, trackerless BitTorrents) to circumvent the ability of copyright owners to directly engage particular distributors and consumers.
Litigations in European Union
In Europe, the Court of Justice of the European Union (CJEU) has ruled that it is legal to create temporary or cached copies of works (copyrighted or otherwise) online. The ruling relates to the British Meltwater case settled on 5 June 2014.
The judgement of the court states that: "Article 5 of Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society must be interpreted as meaning that the copies on the user's computer screen and the copies in the internet 'cache' of that computer's hard disk, made by an end-user in the course of viewing a website, satisfy the conditions that those copies must be temporary, that they must be transient or incidental in nature and that they must constitute an integral and essential part of a technological process, as well as the conditions laid down in Article 5(5) of that directive, and that they may therefore be made without the authorisation of the copyright holders."
On April 17, 2009, a Swedish court convicted four men operating The Pirate Bay Internet site of criminal copyright infringement. The Pirate Bay was established in 2003 by the Swedish anti-copyright organization Piratbyrån to provide information needed to download film or music files from third parties, many of whom copied the files without permission. The Pirate Bay does not store copies of the files on its own servers but does provide peer-to-peer links to other servers on which infringing copies were stored. Apparently, the theory of the prosecution was that the defendants, by their conduct, actively induced infringement. Under U.S. copyright law, this would be a so-called Grokster theory of infringement liability.
The Swedish district court imposed damages of SEK 30 million ($3,600,000) and one-year prison sentences on the four defendants. "The defendants have furthered the crimes that the file sharers have committed," said district court judge Tomas Norstöm. He added, "They have been helpful to such an extent that they have entered into the field of criminal liability." "We are, of course, going to appeal," defense lawyer Per Samuelsson said. The Pirate Bay has 25 million users and is considered one of the biggest file-sharing websites in the world. It is conceded that The Pirate Bay does not itself make copies or store files, but the court did not consider that fact dispositive. "By providing a website with ... well-developed search functions, easy uploading and storage possibilities, and with a tracker linked to the website, the accused have incited the crimes that the filesharers have committed," the court said in a statement.
See also
Bandwidth
Copyright aspects of hyperlinking and framing
Download manager
Digital distribution
HADOPI law
Music download
Peer-to-peer
Progressive download
Sideloading
References
External links
Computer networking
Data transmission
Servers (computing) | Download | [
"Technology",
"Engineering"
] | 1,143 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
349,735 | https://en.wikipedia.org/wiki/Baryon%20number | In particle physics, the baryon number is a strictly conserved additive quantum number of a system. It is defined as
$B = \frac{1}{3}\left(n_q - n_{\bar{q}}\right),$
where $n_q$ is the number of quarks, and $n_{\bar{q}}$ is the number of antiquarks. Baryons (three quarks) have a baryon number of +1, mesons (one quark, one antiquark) have a baryon number of 0, and antibaryons (three antiquarks) have a baryon number of −1. Exotic hadrons like pentaquarks (four quarks, one antiquark) and tetraquarks (two quarks, two antiquarks) are also classified as baryons and mesons depending on their baryon number.
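The definition above is easy to evaluate mechanically; a minimal sketch for the particle types just listed follows (the particle names are only labels for the quark counts).

```python
from fractions import Fraction

def baryon_number(n_quarks, n_antiquarks):
    """Baryon number B = (n_q - n_qbar) / 3."""
    return Fraction(n_quarks - n_antiquarks, 3)

examples = {
    "baryon (3 quarks)":                   (3, 0),
    "antibaryon (3 antiquarks)":           (0, 3),
    "meson (1 quark, 1 antiquark)":        (1, 1),
    "pentaquark (4 quarks, 1 antiquark)":  (4, 1),
    "tetraquark (2 quarks, 2 antiquarks)": (2, 2),
}

for name, (nq, nqbar) in examples.items():
    print(f"{name}: B = {baryon_number(nq, nqbar)}")
# Prints +1, -1, 0, +1 and 0 respectively, matching the classification above.
```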
Baryon number vs. quark number
Quarks carry not only electric charge, but also charges such as color charge and weak isospin. Because of a phenomenon known as color confinement, a hadron cannot have a net color charge; that is, the total color charge of a particle has to be zero ("white"). A quark can have one of three "colors", dubbed "red", "green", and "blue"; while an antiquark may be either "anti-red", "anti-green" or "anti-blue".
For normal hadrons, a white color can thus be achieved in one of three ways:
A quark of one color with an antiquark of the corresponding anticolor, giving a meson with baryon number 0,
Three quarks of different colors, giving a baryon with baryon number +1,
Three antiquarks of different anticolors, giving an antibaryon with baryon number −1.
The baryon number was defined long before the quark model was established, so rather than changing the definitions, particle physicists simply gave quarks one third the baryon number. Nowadays it might be more accurate to speak of the conservation of quark number.
In theory, exotic hadrons can be formed by adding pairs of quarks and antiquarks, provided that each pair has a matching color/anticolor. For example, a pentaquark (four quarks, one antiquark) could have the individual quark colors: red, green, blue, blue, and antiblue. In 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons ().
Particles not formed of quarks
Particles without any quarks have a baryon number of zero. Such particles are
leptons – the electron, muon, tauon, and their corresponding neutrinos
vector bosons – the photon, W and Z bosons, gluons
scalar boson – the Higgs boson
second-order tensor boson – the hypothetical graviton
Conservation
The baryon number is conserved in all the interactions of the Standard Model, with one possible exception. The conservation follows from the global symmetry of the QCD Lagrangian. 'Conserved' means that the sum of the baryon number of all incoming particles is the same as the sum of the baryon numbers of all particles resulting from the reaction. The one exception is the hypothesized Adler–Bell–Jackiw anomaly in electroweak interactions; however, the sphaleron processes responsible are exceedingly rare, expected to occur only at very high energies and temperatures, and could explain electroweak baryogenesis and leptogenesis. Electroweak sphalerons can change the baryon and/or lepton number only by 3 or by multiples of 3 (for example, the collision of three baryons into three leptons or antileptons, and vice versa). No experimental evidence of sphalerons has yet been observed.
The hypothetical concepts of grand unified theory (GUT) models and supersymmetry allow for the changing of a baryon into leptons and antiquarks (see B − L), thus violating the conservation of both baryon and lepton numbers. Proton decay would be an example of such a process taking place, but it has never been observed.
The conservation of baryon number is not consistent with the physics of black hole evaporation via Hawking radiation. It is expected in general that quantum gravitational effects violate the conservation of all charges associated to global symmetries. The violation of conservation of baryon number led John Archibald Wheeler to speculate on a principle of mutability for all physical properties.
See also
Lepton number
Flavour (particle physics)
Isospin
Hypercharge
Proton decay
B − L
References
Baryons
Conservation laws
Nuclear physics
Quantum chromodynamics
Quarks
Standard Model
Flavour (particle physics) | Baryon number | [
"Physics"
] | 977 | [
"Standard Model",
"Equations of physics",
"Conservation laws",
"Particle physics",
"Nuclear physics",
"Symmetry",
"Physics theorems"
] |
349,755 | https://en.wikipedia.org/wiki/Uniform%20norm | In mathematical analysis, the uniform norm (or ) assigns, to real- or complex-valued bounded functions defined on a set , the non-negative number
This norm is also called the supremum norm, the Chebyshev norm, the infinity norm, or, when the supremum is in fact the maximum, the max norm. The name "uniform norm" derives from the fact that a sequence of functions $(f_n)$ converges to $f$ under the metric derived from the uniform norm if and only if $(f_n)$ converges to $f$ uniformly.
If $f$ is a continuous function on a closed and bounded interval, or more generally a compact set, then it is bounded and the supremum in the above definition is attained by the Weierstrass extreme value theorem, so we can replace the supremum by the maximum. In this case, the norm is also called the maximum norm.
In particular, if $x = (x_1, x_2, \ldots, x_n)$ is some vector in finite dimensional coordinate space, it takes the form:
$\|x\|_\infty = \max\bigl(|x_1|, |x_2|, \ldots, |x_n|\bigr).$
This is called the $\ell^\infty$-norm.
Definition
Uniform norms are defined, in general, for bounded functions valued in a normed space. Let $X$ be a set and let $(Y, \|\cdot\|_Y)$ be a normed space. On the set of functions from $X$ to $Y$, there is an extended norm defined by
$\|f\| = \sup_{x \in X} \|f(x)\|_Y \in [0, \infty].$
This is in general an extended norm since the function $f$ may not be bounded. Restricting this extended norm to the bounded functions (i.e., the functions with finite above extended norm) yields a (finite-valued) norm, called the uniform norm on $X$. Note that the definition of uniform norm does not rely on any additional structure on the set $X$, although in practice $X$ is often at least a topological space.
The convergence on $Y^X$ in the topology induced by the uniform extended norm is the uniform convergence, for sequences, and also for nets and filters on $Y^X$.
We can define closed sets and closures of sets with respect to this metric topology; closed sets in the uniform norm are sometimes called uniformly closed and closures uniform closures. The uniform closure of a set of functions A is the space of all functions that can be approximated by a sequence of uniformly-converging functions on $X$. For instance, one restatement of the Stone–Weierstrass theorem is that the set of all continuous functions on $[a, b]$ is the uniform closure of the set of polynomials on $[a, b]$.
For complex continuous functions over a compact space, this turns it into a C* algebra (cf. Gelfand representation).
Weaker structures inducing the topology of uniform convergence
Uniform metric
The uniform metric between two bounded functions $f, g \colon X \to Y$ from a set $X$ to a metric space $(Y, d_Y)$ is defined by
$d(f, g) = \sup_{x \in X} d_Y\bigl(f(x), g(x)\bigr).$
The uniform metric is also called the Chebyshev metric, after Pafnuty Chebyshev, who was first to systematically study it. In this case, $f$ is bounded precisely if $d(f, c)$ is finite for some constant function $c$. If we allow unbounded functions, this formula does not yield a norm or metric in a strict sense, although the obtained so-called extended metric still allows one to define a topology on the function space in question; the convergence is then still the uniform convergence. In particular, a sequence $(f_n)$ converges uniformly to a function $f$ if and only if
$\lim_{n \to \infty} d(f_n, f) = 0.$
If $Y$ is a normed space, then it is a metric space in a natural way. The extended metric on $Y^X$ induced by the uniform extended norm is the same as the uniform extended metric
$d(f, g) = \sup_{x \in X} \|f(x) - g(x)\|_Y$
on $Y^X$.
Uniformity of uniform convergence
Let $X$ be a set and let $Y$ be a uniform space. A sequence $(f_n)$ of functions from $X$ to $Y$ is said to converge uniformly to a function $f$ if for each entourage $V$ there is a natural number $n_0$ such that $(f_n(x), f(x))$ belongs to $V$ whenever $x \in X$ and $n \geq n_0$. Similarly for a net. This is a convergence in a topology on $Y^X$. In fact, the sets
$\{(f, g) : (f(x), g(x)) \in V \text{ for all } x \in X\},$
where $V$ runs through entourages of $Y$, form a fundamental system of entourages of a uniformity on $Y^X$, called the uniformity of uniform convergence on $Y^X$. The uniform convergence is precisely the convergence under its uniform topology.
If $Y$ is a metric space, then it is by default equipped with the metric uniformity. The metric uniformity on $Y^X$ with respect to the uniform extended metric is then the uniformity of uniform convergence on $Y^X$.
Properties
The set of vectors whose infinity norm is a given constant, $c$, forms the surface of a hypercube with edge length $2c$.
The reason for the subscript "$\infty$" is that whenever $f$ is continuous and $\|f\|_p < \infty$ for some $p < \infty$, then
$\lim_{p \to \infty} \|f\|_p = \|f\|_\infty,$
where
$\|f\|_p = \left( \int_D |f|^p \, d\mu \right)^{1/p},$
where $D$ is the domain of $f$; the integral amounts to a sum if $D$ is a discrete set (see p-norm).
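A short numerical sketch of the finite-dimensional case follows: the infinity norm of a vector is its largest absolute entry, and the p-norms (with the integral replaced by a sum over the discrete domain) approach it as p grows. NumPy is assumed to be available.

```python
import numpy as np

x = np.array([1.0, -4.0, 2.5, 3.0])

# The infinity norm is simply the largest absolute component.
print("||x||_inf =", np.max(np.abs(x)))          # 4.0

# The p-norms tend to the infinity norm as p -> infinity.
for p in (1, 2, 4, 16, 64):
    p_norm = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(f"||x||_{p} = {p_norm:.6f}")
```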
See also
References
Banach spaces
Functional analysis
Normed spaces
Norms (mathematics) | Uniform norm | [
"Mathematics"
] | 871 | [
"Functions and mappings",
"Mathematical analysis",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Norms (mathematics)"
] |
349,768 | https://en.wikipedia.org/wiki/System%20requirements | To be used efficiently, all computer software needs certain hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and are often used as a guideline as opposed to an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. A second meaning of the term system requirements, is a generalisation of this first definition, giving the requirements to be met in the design of a system or sub-system.
Recommended system requirements
Manufacturers of games often provide the consumer with a second set of requirements beyond those needed merely to run the software. These are usually called the recommended requirements. They are almost always of a significantly higher level than the minimum requirements and represent the ideal situation in which to run the software. Generally speaking, they are a better guideline than the minimum system requirements for a fully usable and enjoyable experience with the software.
Hardware requirements
The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. A hardware requirements list is often accompanied by a hardware compatibility list (HCL), especially in the case of operating systems. An HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular operating system or application. The following sub-sections discuss the various aspects of hardware requirements.
Architecture
All computer operating systems are designed for a particular computer architecture. Most software applications are limited to particular operating systems running on particular architectures. Although architecture-independent operating systems and applications exist, most need to be recompiled to run on a new architecture. See also a list of common operating systems and their supporting architectures.
Processing power
The power of the central processing unit (CPU) is a fundamental system requirement for any software. Most software running on x86 architecture define processing power as the model and the clock speed of the CPU. Many other features of a CPU that influence its speed and power, like bus speed, cache, and MIPS are often ignored. This definition of power is often erroneous, as different makes and models of CPUs at similar clock speed often have different throughput speeds.
Memory
All software, when run, resides in the random access memory (RAM) of a computer. Memory requirements are defined after considering demands of the application, operating system, supporting software and files, and other running processes. Optimal performance of other unrelated software running on a multi-tasking computer system is also considered when defining this requirement.
Secondary storage
Data storage device requirements vary, depending on the size of software installation, temporary files created and maintained while installing or running the software, and possible use of swap space (if RAM is insufficient).
Display adapter
Software requiring a better than average computer graphics display, like graphics editors and high-end games, often define high-end display adapters in the system requirements.
Peripherals
Some software applications need to make extensive and/or special use of some peripherals, demanding the higher performance or functionality of such peripherals. Such peripherals include CD-ROM drives, keyboards, pointing devices, network devices, etc.
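As an illustration of how an installer might compare a machine against stated minimums for the hardware aspects above, a hedged sketch follows. It uses the third-party psutil package (assumed to be installed) for total memory and the standard library for the rest; all threshold values are hypothetical and not taken from any real product.

```python
import os
import platform
import shutil
import psutil  # third-party package, assumed installed (pip install psutil)

# Hypothetical minimum requirements, for illustration only
MIN_CPU_CORES = 4
MIN_RAM_BYTES = 8 * 1024**3    # 8 GiB
MIN_FREE_DISK = 20 * 1024**3   # 20 GiB
SUPPORTED_ARCH = {"x86_64", "AMD64", "arm64", "aarch64"}

checks = {
    "architecture": platform.machine() in SUPPORTED_ARCH,
    "cpu cores":    (os.cpu_count() or 0) >= MIN_CPU_CORES,
    "memory":       psutil.virtual_memory().total >= MIN_RAM_BYTES,
    "disk space":   shutil.disk_usage(os.path.expanduser("~")).free >= MIN_FREE_DISK,
}

for name, ok in checks.items():
    print(f"{name:12s}: {'OK' if ok else 'below minimum'}")

print("Meets minimum requirements" if all(checks.values())
      else "Does not meet minimum requirements")
```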
Software requirements
Software requirements deal with defining software resource requirements and prerequisites that need to be installed on a computer to provide optimal functioning of an application. These requirements or prerequisites are generally not included in the software installation package and need to be installed separately before the software is installed.
Platform
A computing platform describes some sort of framework, either in hardware or software, which allows software to run. Typical platforms include a computer's architecture, operating system, or programming languages and their runtime libraries.
Operating system is one of the requirements mentioned when defining system requirements (software). Software may not be compatible with different versions of same line of operating systems, although some measure of backward compatibility is often maintained. For example, most software designed for Microsoft Windows XP does not run on Microsoft Windows 98, although the converse is not always true. Similarly, software designed using newer features of Linux Kernel v2.6 generally does not run or compile properly (or at all) on Linux distributions using Kernel v2.2 or v2.4.
APIs and drivers
Software making extensive use of special hardware devices, like high-end display adapters, needs special API or newer device drivers. A good example is DirectX, which is a collection of APIs for handling tasks related to multimedia, especially game programming, on Microsoft platforms.
Web browser
Most web applications and software depend heavily on web technologies to make use of the default browser installed on the system. Microsoft Edge is a frequent choice of software running on Microsoft Windows, which makes use of ActiveX controls, despite their vulnerabilities.
Other requirements
Some software also has other requirements for proper performance. Internet connection (type and speed) and resolution of the display screen are notable examples.
Examples
Following are a few examples of system requirement definitions for popular PC games and trend of ever-increasing resource needs:
For instance, while StarCraft (1998) requires:
Doom 3 (2004) requires:
Star Wars: The Force Unleashed (2009) requires:
Grand Theft Auto V (2015) requires:
See also
Requirement
Requirements analysis
Software Requirements Specification
Specification (technical standard)
System requirements specification (SyRS)
References
Software requirements | System requirements | [
"Engineering"
] | 1,112 | [
"Software engineering",
"Software requirements"
] |
349,771 | https://en.wikipedia.org/wiki/Artificial%20neuron | An artificial neuron is a mathematical function conceived as a model of a biological neuron in a neural network. The artificial neuron is the elementary unit of an artificial neural network.
The design of the artificial neuron was inspired by biological neural circuitry. Its inputs are analogous to excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites. Its weights are analogous to synaptic weights, and its output is analogous to a neuron's action potential which is transmitted along its axon.
Usually, each input is separately weighted, and the sum is often added to a term known as a bias (loosely corresponding to the threshold potential), before being passed through a nonlinear function known as an activation function. Depending on the task, these functions could have a sigmoid shape (e.g. for binary classification), but they may also take the form of other nonlinear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable, and bounded. Non-monotonic, unbounded, and oscillating activation functions with multiple zeros that outperform sigmoidal and ReLU-like activation functions on many tasks have also been recently explored. The threshold function has inspired building logic gates referred to as threshold logic; applicable to building logic circuits resembling brain processing. For example, new devices such as memristors have been extensively used to develop such logic.
The artificial neuron activation function should not be confused with a linear system's transfer function.
An artificial neuron may be referred to as a semi-linear unit, Nv neuron, binary neuron, linear threshold function, or McCulloch–Pitts (MCP) neuron, depending on the structure used.
Simple artificial neurons, such as the McCulloch–Pitts model, are sometimes described as "caricature models", since they are intended to reflect one or more neurophysiological observations, but without regard to realism. Artificial neurons can also refer to artificial cells in neuromorphic engineering that are similar to natural physical neurons.
Basic structure
For a given artificial neuron $k$, let there be $m + 1$ inputs with signals $x_0$ through $x_m$ and weights $w_{k0}$ through $w_{km}$. Usually, the $x_0$ input is assigned the value +1, which makes it a bias input with $w_{k0} = b_k$. This leaves only $m$ actual inputs to the neuron: $x_1$ to $x_m$.
The output of the $k$-th neuron is:
$y_k = \varphi\Bigl(\sum_{j=0}^{m} w_{kj} x_j\Bigr),$
where $\varphi$ (phi) is the activation function.
The output is analogous to the axon of a biological neuron, and its value propagates to the input of the next layer, through a synapse. It may also exit the system, possibly as part of an output vector.
It has no learning process as such. Its activation function weights are calculated, and its threshold value is predetermined.
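A minimal sketch of the unit just described follows: a weighted sum over the inputs, with x0 = 1 acting as the bias input, passed through an activation function. The choice of a step activation and the particular weight values are assumptions made for the example.

```python
import math

def neuron_output(inputs, weights, activation):
    """Output of a single artificial neuron: activation(sum_j w_j * x_j),
    where x_0 = 1 is the bias input and w_0 the corresponding bias weight."""
    x = [1.0] + list(inputs)                        # prepend the bias input x_0 = +1
    s = sum(w * xi for w, xi in zip(weights, x))    # weighted sum over j = 0..m
    return activation(s)

step    = lambda s: 1.0 if s >= 0.0 else 0.0        # threshold (Heaviside-style) activation
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))      # a smooth alternative

# With a bias weight of -1.5 and input weights of 1.0, a step-activated neuron
# computes the logical AND of two binary inputs.
weights = [-1.5, 1.0, 1.0]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron_output([a, b], weights, step))
```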
McCulloch–Pitts (MCP) neuron
An MCP neuron is a kind of restricted artificial neuron which operates in discrete time-steps. Each has zero or more inputs, written as $x_1, \ldots, x_n$. It has one output, written as $y$. Each input can be either excitatory or inhibitory. The output can either be quiet or firing. An MCP neuron also has a threshold $b$.
In an MCP neural network, all the neurons operate in synchronous discrete time-steps of $t = 0, 1, 2, \ldots$. At time $t + 1$, the output of the neuron is $y(t+1) = 1$ if the number of its firing excitatory inputs at time $t$ is at least equal to the threshold and no inhibitory inputs are firing; it is $y(t+1) = 0$ otherwise.
Each output can be the input to an arbitrary number of neurons, including itself (i.e., self-loops are possible). However, an output cannot connect more than once with a single neuron. Self-loops do not cause contradictions, since the network operates in synchronous discrete time-steps.
As a simple example, consider a single neuron with threshold 0, and a single inhibitory self-loop. Its output would oscillate between 0 and 1 at every step, acting as a "clock".
Any finite state machine can be simulated by a MCP neural network. Furnished with an infinite tape, MCP neural networks can simulate any Turing machine.
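The synchronous update rule and the clock example above are easy to simulate directly; the sketch below does so, with each neuron represented (as an assumption of this example) by lists of excitatory and inhibitory source indices plus a threshold.

```python
def step_mcp(outputs, neurons):
    """One synchronous time-step of an MCP network. `outputs` holds the current
    output (0 or 1) of every neuron; each neuron lists the indices of its
    excitatory and inhibitory inputs and its threshold."""
    new_outputs = []
    for n in neurons:
        excite  = sum(outputs[i] for i in n["excitatory"])
        inhibit = any(outputs[i] == 1 for i in n["inhibitory"])
        new_outputs.append(1 if (not inhibit and excite >= n["threshold"]) else 0)
    return new_outputs

# The "clock" example: a single neuron with threshold 0 and an inhibitory self-loop.
neurons = [{"excitatory": [], "inhibitory": [0], "threshold": 0}]
state, trace = [0], []
for _ in range(6):
    trace.append(state[0])
    state = step_mcp(state, neurons)
print(trace)   # [0, 1, 0, 1, 0, 1]: the output oscillates at every step
```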
Biological models
Artificial neurons are designed to mimic aspects of their biological counterparts. However a significant performance gap exists between biological and artificial neural networks. In particular single biological neurons in the human brain with oscillating activation function capable of learning the XOR function have been discovered.
Dendrites – in biological neurons, dendrites act as the input vector. These dendrites allow the cell to receive signals from a large (>1000) number of neighboring neurons. As in the above mathematical treatment, each dendrite is able to perform "multiplication" by that dendrite's "weight value." The multiplication is accomplished by increasing or decreasing the ratio of synaptic neurotransmitters to signal chemicals introduced into the dendrite in response to the synaptic neurotransmitter. A negative multiplication effect can be achieved by transmitting signal inhibitors (i.e. oppositely charged ions) along the dendrite in response to the reception of synaptic neurotransmitters.
Soma – in biological neurons, the soma acts as the summation function, seen in the above mathematical description. As positive and negative signals (exciting and inhibiting, respectively) arrive in the soma from the dendrites, the positive and negative ions are effectively added in summation, by simple virtue of being mixed together in the solution inside the cell's body.
Axon – the axon gets its signal from the summation behavior which occurs inside the soma. The opening to the axon essentially samples the electrical potential of the solution inside the soma. Once the soma reaches a certain potential, the axon will transmit an all-in signal pulse down its length. In this regard, the axon behaves as the ability for us to connect our artificial neuron to other artificial neurons.
Unlike most artificial neurons, however, biological neurons fire in discrete pulses. Each time the electrical potential inside the soma reaches a certain threshold, a pulse is transmitted down the axon. This pulsing can be translated into continuous values. The rate (activations per second, etc.) at which an axon fires converts directly into the rate at which neighboring cells get signal ions introduced into them. The faster a biological neuron fires, the faster nearby neurons accumulate electrical potential (or lose electrical potential, depending on the "weighting" of the dendrite that connects to the neuron that fired). It is this conversion that allows computer scientists and mathematicians to simulate biological neural networks using artificial neurons which can output distinct values (often from −1 to 1).
Encoding
Research has shown that unary coding is used in the neural circuits responsible for birdsong production. The use of unary in biological networks is presumably due to the inherent simplicity of the coding. Another contributing factor could be that unary coding provides a certain degree of error correction.
Physical artificial cells
There is research and development into physical artificial neurons – organic and inorganic.
For example, some artificial neurons can receive and release dopamine (chemical signals rather than electrical signals) and communicate with natural rat muscle and brain cells, with potential for use in BCIs/prosthetics.
Low-power biocompatible memristors may enable construction of artificial neurons which function at voltages of biological action potentials and could be used to directly process biosensing signals, for neuromorphic computing and/or direct communication with biological neurons.
Organic neuromorphic circuits made out of polymers, coated with an ion-rich gel to enable a material to carry an electric charge like real neurons, have been built into a robot, enabling it to learn sensorimotorically within the real world, rather than via simulations or virtually. Moreover, artificial spiking neurons made of soft matter (polymers) can operate in biologically relevant environments and enable the synergetic communication between the artificial and biological domains.
History
The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit, first proposed by Warren McCulloch and Walter Pitts in 1943 in A logical calculus of the ideas immanent in nervous activity. The model was specifically targeted as a computational model of the "nerve net" in the brain. As an activation function, it employed a threshold, equivalent to using the Heaviside step function. Initially, only a simple model was considered, with binary inputs and outputs, some restrictions on the possible weights, and a more flexible threshold value. From the beginning it was already noticed that any Boolean function could be implemented by networks of such devices, which is easily seen from the fact that one can implement the AND and OR functions, and use them in the disjunctive or the conjunctive normal form.
Researchers also soon realized that cyclic networks, with feedbacks through neurons, could define dynamical systems with memory, but most of the research concentrated (and still does) on strictly feed-forward networks because of the smaller difficulty they present.
One important and pioneering artificial neural network that used the linear threshold function was the perceptron, developed by Frank Rosenblatt. This model already considered more flexible weight values in the neurons, and was used in machines with adaptive capabilities. The representation of the threshold values as a bias term was introduced by Bernard Widrow in 1960 – see ADALINE.
In the late 1980s, when research on neural networks regained strength, neurons with more continuous shapes started to be considered. The possibility of differentiating the activation function allows the direct use of the gradient descent and other optimization algorithms for the adjustment of the weights. Neural networks also started to be used as a general function approximation model. The best known training algorithm called backpropagation has been rediscovered several times but its first development goes back to the work of Paul Werbos.
Types of activation function
The activation function of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a linear activation function has an equivalent single-layer network; a non-linear function is therefore necessary to gain the advantages of a multi-layer network.
Below, u refers in all cases to the weighted sum of all the inputs to the neuron, i.e. for n inputs,
u = Σ_{i=1}^{n} w_i · x_i,
where w is a vector of synaptic weights and x is a vector of inputs.
Step function
The output of this activation function is binary, depending on whether the input meets a specified threshold θ (theta). The "signal" is sent, i.e. the output is set to 1, if the activation meets or exceeds the threshold:
y = 1 if u ≥ θ, and y = 0 if u < θ.
This function is used in perceptrons, and appears in many other models. It performs a division of the space of inputs by a hyperplane. It is specially useful in the last layer of a network, intended for example to perform binary classification of the inputs.
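For illustration, a minimal Python sketch of such a threshold unit; the weights and threshold below are made-up example values, not taken from any particular model:

def step_neuron(inputs, weights, theta):
    """Heaviside-style activation: output 1 if the weighted sum reaches theta."""
    u = sum(w * x for w, x in zip(weights, inputs))
    return 1 if u >= theta else 0

# Example: a 2-input unit that fires only when both inputs are active (AND-like).
print(step_neuron([1, 1], [0.6, 0.6], theta=1.0))  # 1
print(step_neuron([1, 0], [0.6, 0.6], theta=1.0))  # 0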
Linear combination
In this case, the output unit is simply the weighted sum of its inputs, plus a bias term. A number of such linear neurons perform a linear transformation of the input vector. This is usually more useful in the early layers of a network. A number of analysis tools exist based on linear models, such as harmonic analysis, and they can all be used in neural networks with this linear neuron. The bias term allows us to make affine transformations to the data.
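A hedged sketch of the same idea in Python; the weight matrix W and bias b are arbitrary illustrative values:

def linear_layer(x, W, b):
    """Affine transformation y = W x + b computed with plain Python lists."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

# Two linear neurons acting on a 3-dimensional input (illustrative values).
y = linear_layer([1.0, 2.0, 3.0], W=[[0.1, 0.2, 0.3], [-0.5, 0.0, 0.5]], b=[0.0, 1.0])
print(y)  # approximately [1.4, 2.0]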
Sigmoid
A fairly simple nonlinear function, the sigmoid function such as the logistic function also has an easily calculated derivative, which can be important when calculating the weight updates in the network. It thus makes the network more easily manipulable mathematically, and was attractive to early computer scientists who needed to minimize the computational load of their simulations. It was previously commonly seen in multilayer perceptrons. However, recent work has shown sigmoid neurons to be less effective than rectified linear neurons. The reason is that the gradients computed by the backpropagation algorithm tend to diminish towards zero as activations propagate through layers of sigmoidal neurons, making it difficult to optimize neural networks using multiple layers of sigmoidal neurons.
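The shrinking gradient can be seen directly from the logistic function's derivative, which never exceeds 0.25; a small illustrative sketch:

import math

def sigmoid(u):
    """Logistic activation."""
    return 1.0 / (1.0 + math.exp(-u))

def sigmoid_derivative(u):
    """Derivative expressed through the activation itself: s(u) * (1 - s(u))."""
    s = sigmoid(u)
    return s * (1.0 - s)

# The gradient factor is 0.25 at u = 0 and shrinks rapidly for large |u|.
for u in (0.0, 2.0, 5.0):
    print(u, sigmoid_derivative(u))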
Rectifier
In the context of artificial neural networks, the rectifier or ReLU (Rectified Linear Unit) is an activation function defined as the positive part of its argument:
f(x) = x⁺ = max(0, x),
where x is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering. This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature with strong biological motivations and mathematical justifications. It has been demonstrated for the first time in 2011 to enable better training of deeper networks, compared to the widely used activation functions prior to 2011, i.e., the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent.
A commonly used variant of the ReLU activation function is the Leaky ReLU which allows a small, positive gradient when the unit is not active:
f(x) = x if x > 0, and f(x) = a·x otherwise,
where x is the input to the neuron and a is a small positive constant (set to 0.01 in the original paper).
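A minimal sketch of both rectifier variants; the leak constant 0.01 simply follows the value quoted above, and everything else is illustrative:

def relu(x):
    """Rectified linear unit: the positive part of the input."""
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    """Like ReLU, but passes a small slope a for negative inputs."""
    return x if x > 0 else a * x

print(relu(-2.0), relu(3.0))              # 0.0 3.0
print(leaky_relu(-2.0), leaky_relu(3.0))  # -0.02 3.0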
Pseudocode algorithm
The following is a simple pseudocode implementation of a single Threshold Logic Unit (TLU) which takes Boolean inputs (true or false), and returns a single Boolean output when activated. An object-oriented model is used. No method of training is defined, since several exist. If a purely functional model were used, the class TLU below would be replaced with a function TLU with input parameters threshold, weights, and inputs that returned a Boolean value.
class TLU defined as:
    data member threshold : number
    data member weights : list of numbers of size X

    function member fire(inputs : list of booleans of size X) : boolean defined as:
        variable T : number
        T ← 0
        for each i in 1 to X do
            if inputs(i) is true then
                T ← T + weights(i)
            end if
        end for each
        if T > threshold then
            return true
        else
            return false
        end if
    end function
end class
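For comparison, a hedged Python translation of the pseudocode above; like the original it defines no training method, and the example weights and threshold are arbitrary:

class TLU:
    """Threshold Logic Unit with Boolean inputs and a Boolean output."""

    def __init__(self, threshold, weights):
        self.threshold = threshold
        self.weights = weights

    def fire(self, inputs):
        # Sum the weights of all inputs that are currently true.
        t = sum(w for w, active in zip(self.weights, inputs) if active)
        return t > self.threshold

# Example: with these weights and threshold the unit behaves like a two-input AND gate.
unit = TLU(threshold=1.5, weights=[1.0, 1.0])
print(unit.fire([True, True]))   # True
print(unit.fire([True, False]))  # False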
See also
Binding neuron
Connectionism
References
Further reading
External links
neuron mimics function of human cells
McCulloch-Pitts Neurons (Overview)
Artificial neural networks
American inventions
Bioinspiration | Artificial neuron | [
"Engineering",
"Biology"
] | 3,050 | [
"Biological engineering",
"Bioinspiration"
] |
349,811 | https://en.wikipedia.org/wiki/SPIM | SPIM is a MIPS processor simulator, designed to run assembly language code for this architecture. The program simulates R2000 and R3000 processors, and was written by James R. Larus while a professor at the University of Wisconsin–Madison. The MIPS machine language is often taught in college-level assembly courses, especially those using the textbook Computer Organization and Design: The Hardware/Software Interface by David A. Patterson and John L. Hennessy ().
The name of the simulator is a reversal of the letters "MIPS".
SPIM simulators are available for Windows (PCSpim), Mac OS X and Unix/Linux-based (xspim) operating systems. As of release 8.0 in January 2010, the simulator is licensed under the standard BSD license.
In January 2011, major release version 9.0 introduced QtSpim, which has a new user interface built on the cross-platform Qt UI framework and runs on Windows, Linux, and macOS. From this version, the project has also been moved to SourceForge for better maintenance. Precompiled versions of QtSpim for Linux (32-bit), Windows, and Mac OS X, as well as PCSpim for Windows, are provided.
The SPIM operating system
The SPIM simulator comes with a rudimentary operating system, which allows the programmer to use commonly needed functions in a convenient way. Such functions are invoked by the syscall instruction; the OS then acts depending on the values of specific registers.
The SPIM OS expects a label named main as the handover point from the OS preamble.
SPIM Alternatives/Competitors
MARS (MIPS Assembler and Runtime Simulator) is a Java-based IDE for the MIPS Assembly Programming Language and an alternative to SPIM.
Its initial release was in 2005. However, as both of its maintainers have since retired, the project is no longer under active development.
Imperas is a suite of embedded software development tools for MIPS architecture which uses Just-in-time compilation emulation and simulation technology.
The simulator was initially released in 2008 and is under active development.
There are over 30 open source models of the MIPS 32 bit and 64 bit cores.
Another alternative to SPIM for educational purposes is the CREATOR simulator. CREATOR is portable (it can be executed in current web browsers) and allows students to learn several assembly languages of different processors at the same time (CREATOR includes examples of MIPS32 and RISC-V instructions).
See also
GXemul (formerly known as mips64emul), another MIPS emulator. Unlike SPIM, which focuses on emulating a bare MIPS implementation, GXemul is written to emulate full computer systems based on MIPS microprocessors—for example, GXemul can emulate a DECstation 5000 Model 200 workstation
OVPsim also emulates MIPS, and where all the MIPS models are verified by MIPS Technologies
QEMU also emulates MIPS
MIPS architecture
References
External links
Project site at SourceForge
Former official site at Larus's website
Web version of SPIM
Introductory slides on MIPS programming using SPIM
An introduction to SPIM simulator
Emulation software
MIPS architecture
Software using the BSD license | SPIM | [
"Technology"
] | 681 | [
"Emulation software",
"History of computing"
] |
349,829 | https://en.wikipedia.org/wiki/Antibacterial%20soap | Antibacterial soap is a soap which contains chemical ingredients that purportedly assist in killing bacteria. The majority of antibacterial soaps contain triclosan, though other chemical additives are also common. The effectiveness of products branded as being antibacterial has been disputed by some academics as well as the U.S. Food and Drug Administration (FDA).
History
The earliest antibacterial soap was carbolic soap, which used up to 5% phenols (carbolic acid). Fears about the safety of carbolic soap's chemical components on the skin brought about a ban on some of these chemical components. Triclosan and other antibacterial agents have long been used in commercial cleaning products for hospitals and other healthcare settings; however, they began to be used in home cleaning products during the 1990s.
Ingredients
Triclosan and triclocarban are the most common compounds used as antibacterials in soaps. However, other common antibacterial ingredients in soaps include benzalkonium chloride, benzethonium chloride, and chloroxylenol.
Effectiveness
Claims that antibacterial soap is effective stem from the long-standing knowledge that triclosan can inhibit the growth of various bacteria, as well as some fungi. However, more recent reviews have suggested that antibacterial soaps are no better than regular soaps at preventing illness or reducing bacteria on the hands of users.
In September 2016, the U.S. Food and Drug Administration banned the use of the common antibacterial ingredients triclosan and triclocarban, and 17 other ingredients frequently used in "antibacterial" soaps and washes, due to insufficient information on the long-term health effects of their use and a lack of evidence on their effectiveness. The FDA stated "There is no data demonstrating that over-the-counter antibacterial soaps are better at preventing illness than washing with plain soap and water". The agency also asserted that despite requests for such information, the FDA did not receive sufficient data from manufacturers on the long-term health effects of these chemicals. This ban does not apply to hand sanitizer, because hand sanitizer typically utilizes alcohol to kill microbes rather than triclosan or similar ingredients.
A 2017 statement by 200 scientists and medics published in the scientific journal Environmental Health Perspectives warns that anti-bacterial soaps and gels are useless and may cause harm. The statement also cautioned against the use of antimicrobial agents in food contact materials, textiles, and paints. British firm Unilever claimed in 2017 to be phasing triclosan and triclocarban out of their products by the end of the year, adding they would be replaced by “a range of alternatives, including natural and nature-inspired antibacterial ingredients”.
Claims have been made in the media that antibacterial soap is more effective than plain soap in the prevention of the SARS-CoV-2 virus. The CDC and the Food and Drug Administration both recommend plain soap; there is no evidence that antibacterial soaps are any better, and limited evidence that they might be worse long-term.
See also
Antiseptic
Disinfectant
Antimicrobial resistance
References
Bacteria and humans
Soaps
Antibiotics | Antibacterial soap | [
"Biology"
] | 693 | [
"Biotechnology products",
"Antibiotics",
"Bacteria",
"Bacteria and humans",
"Biocides"
] |
349,834 | https://en.wikipedia.org/wiki/Sylvanshine | Sylvanshine is an optical phenomenon in which dew-covered foliage with wax-coated leaves retroreflect beams of light, as from a vehicle's headlights. This effect sometimes makes trees appear snow-covered at night during summer. The phenomenon was named and explained in 1994 by Alistair Fraser of Pennsylvania State University, an expert in meteorological optics. According to his explanation, the epicuticular wax on the leaves causes water to form beads, which in effect, become lenses. These lenses focus the light to a spot on the leaf surface, and the image of this spot is directed as rays in the opposite direction.
References
Atmospheric optical phenomena
Leaves | Sylvanshine | [
"Physics"
] | 135 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
349,842 | https://en.wikipedia.org/wiki/Triclosan | Triclosan (sometimes abbreviated as TCS) is an antibacterial and antifungal agent present in some consumer products, including toothpaste, soaps, detergents, toys, and surgical cleaning treatments. It is similar in its uses and mechanism of action to triclocarban. Its efficacy as an antimicrobial agent, the risk of antimicrobial resistance, and its possible role in disrupted hormonal development remains controversial. Additional research seeks to understand its potential effects on organisms and environmental health.
Triclosan was developed in 1966. A 2006 study recommended showering with 2% triclosan as a regimen in surgical units to rid patients' skin of methicillin-resistant Staphylococcus aureus (MRSA).
Uses
Triclosan was used as a hospital scrub in the 1970s. Prior to its change in regulatory status in the EU and US, it had expanded commercially and was a common ingredient in soaps (0.10–1.00%), shampoos, deodorants, toothpastes, mouthwashes, cleaning supplies, and pesticides. It also was part of consumer products, including kitchen utensils, toys, bedding, socks, and trash bags.
Triclosan was registered as a pesticide in 1969. U.S. EPA registration numbers are required for all EPA-registered pesticides. As of 2017, there were five registrations for triclosan with the EPA. Currently, there are 20 antimicrobial registrations with the EPA under the regulations of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). The antimicrobial active ingredient is added to a variety of products where it acts to slow or stop the growth of bacteria, fungi, and mildew. In commercial, institutional, and industrial equipment uses, triclosan is incorporated in conveyor belts, fire hoses, dye bath vats, or ice-making equipment as an antimicrobial. Triclosan may be directly applied to commercial HVAC coils, where it prevents microbial growth that contributes to product degradation.
In the United States, by 2000, triclosan and triclocarban (TCC) could be found in 75% of liquid soaps and 29% of bar soaps, and triclosan was used in more than 2,000 consumer products.
In healthcare, triclosan is used in surgical scrubs and hand washes. Use in surgical units is effective with a minimum contact time of approximately two minutes. More recently, showering with 2% triclosan has become a recommended regimen in surgical units for the decolonization of patients whose skin carries methicillin-resistant Staphylococcus aureus (MRSA). Two small uncontrolled case studies reported the use of triclosan correlated with reduction in MRSA infections.
Triclosan is also used in the coatings for some surgical sutures. There is good evidence these triclosan-coated sutures reduce the risk of surgical site infection. The World Health Organization, the American College of Surgeons and the Surgical Infection Society point out the benefit of triclosan-coated sutures in reducing the risk for surgical site infection.
Triclosan has been employed as a selective agent in molecular cloning. A bacterial host transformed by a plasmid harboring a triclosan-resistant mutant FabI gene (mFabI) as a selectable marker can grow in presence of high dose of triclosan in growth media.
Effectiveness
In surgery, triclosan coated sutures reduce the risk of surgical site infection. Some studies suggest that antimicrobial hand soaps containing triclosan provide a slightly greater bacterial reduction on the hands compared to plain soap. The US FDA had found clear benefit to health for some consumer products containing triclosan, but not in others; for example, the FDA had no evidence that triclosan in antibacterial soaps and body washes provides any benefit over washing with regular soap and water.
A Cochrane review of 30 studies concluded that triclosan/copolymer-containing toothpastes produced a 22% reduction in both dental plaque and gingival inflammation when compared with fluoride toothpastes without triclosan/copolymer. There was weak evidence of a reduction in tooth cavities, and no evidence of reduction in periodontitis.
A study of triclosan toothpastes did not find any evidence that it causes an increase in serious adverse cardiac events such as heart attacks.
A study by Colgate-Palmolive found a significant reduction in gingivitis, bleeding, and plaque with the use of triclosan-containing toothpaste. An independent review by the Cochrane group suggests that the reduction in gingivitis, bleeding, and plaque is statistically significant (unlikely to occur by chance) but not clinically significant (unlikely to provide noticeable effects).
Triclosan is used in food storage containers, although this use has been banned in the European Union since 2010.
Veterinary use as a biocidal product in the EU is governed by the Biocidal Products Directive.
Chemical structure and properties
This organic compound is a white powdered solid with a slight aromatic, phenolic odor. Categorized as a polychloro phenoxy phenol, triclosan is a chlorinated aromatic compound that has functional groups representative of both ethers and phenols. Phenols often demonstrate antibacterial properties. Triclosan is soluble in ethanol, methanol, diethyl ether, and strongly basic solutions such as a 1M sodium hydroxide solution, but only slightly soluble in water. Triclosan can be synthesized from 2,4-dichlorophenol.
Synthesis
Under a reflux process, 2,4,4'-trichloro-2'-methoxydiphenyl ether is treated with aluminium chloride.
The United States Pharmacopeia formulary has published a monograph for triclosan that sets purity standards.
Mechanism of action
At high concentrations, triclosan acts as a biocide with multiple cytoplasmic and membrane targets. However, at the lower concentrations seen in commercial products, triclosan appears bacteriostatic, and it targets bacteria primarily by inhibiting fatty acid synthesis.
Triclosan binds to bacterial enoyl-acyl carrier protein reductase (ENR) enzyme, which is encoded by the gene fabI. This binding increases the enzyme's affinity for nicotinamide adenine dinucleotide (NAD+). This results in the formation of a stable, ternary complex of ENR-NAD+-triclosan, which is unable to participate in fatty acid synthesis. Fatty acids are necessary for building and reproducing cell membranes. Vertebrates do not have an ENR enzyme and thus are not affected by this mode of action.
Endocrine disruptor
Triclosan has been found to be a weak endocrine disruptor, though the relevance of this to humans is uncertain. The compound has been found to bind with low affinity to both the androgen receptor and the estrogen receptor, where both agonistic and antagonistic responses have been observed.
Efflux pump inducer
Triclosan may upregulate or induce efflux pumps in bacteria causing them to become resistant against variety of other antibiotics.
Exposure
Humans are exposed to triclosan through skin absorption when washing hands or in the shower, brushing teeth, using mouthwash or doing dishes, and through ingestion when swallowed. When triclosan is released into the environment, additional exposure to the chemical is possible through ingesting plants grown in soil treated with sewage sludge, or eating fish exposed to it.
An article from the American Society of Agronomy refers to a study done by Monica Mendez et al., in which the researchers irrigated plants with water containing triclosan and months later found it in all edible parts of tomato and onion plants. Triclosan is found to kill a wide spectrum of bacteria, and the researchers are also concerned about the effect it has on the beneficial bacteria in soil.
Distribution, metabolism, and elimination
Once absorbed, triclosan is metabolized by humans primarily through conjugation reactions into glucuronide and sulfate conjugates that are excreted in feces and urine. Pharmacokinetic studies demonstrate that triclosan sulfate and glucuronide may be formed in the liver at approximately equal rates at the environmentally relevant concentration of 1 to 5 microMolar. When concentrations of triclosan are below 1 microMolar, sulfonation is expected to be the major metabolic pathway for elimination.
Health concerns
Because of potential health concerns, due to the possibility of antimicrobial resistance, endocrine disruption and other issues as listed below, triclosan has been designated as a "contaminant of emerging concern (CEC)" by the United States Geological Survey, meaning it is under investigation for public health risk. "Emerging contaminants" can be broadly defined as any synthetic or naturally occurring chemical or any microorganism that is not commonly monitored in the environment but has the potential to enter the environment and cause known or suspected adverse ecological or human health effects. Triclosan is thought to accumulate in wastewater and return to drinking water, thus propagating a buildup that could cause increasing effects with ongoing use.
In the United States, after a decades-long review of the potential health issues from this contaminant of emerging concern, the FDA ruled on September 6, 2016, that 19 active ingredients including triclosan are not generally recognized as safe and effective (GRAS/GRAE). (See policy section below).
Allergic
Triclosan has been associated with a higher risk of food allergy. This may be because exposure to bacteria reduces allergies, as predicted by the hygiene hypothesis, and not caused by toxicology of triclosan itself. This effect may also occur with chlorhexidine gluconate and PCMX, among other antibacterial agents. Other studies have linked triclosan to allergic contact dermatitis in some individuals. Additionally, triclosan concentrations have been associated with allergic sensitization, especially inhalant and seasonal allergens, rather than food allergens.
By-product exposure
Triclosan can react with the free chlorine in chlorinated tap water to produce lesser amounts of other compounds, such as 2,4-dichlorophenol. Some of these intermediates convert into dioxins upon exposure to UV radiation (from the sun or other sources). The dioxins that can form from triclosan are not thought to be congeners of toxicologic concern for mammals, birds and fish.
Hormonal
Concerns on the health effects of triclosan have been raised after it was detected in human breast milk, blood, and urine samples. Studies on rats have shown that triclosan exposure modulates estrogen-dependent responses. There have been many studies performed over the years both in vivo and in vitro, in male and female fish and rats and they support the conclusion that triclosan possesses (anti)estrogenic and (anti)androgenic properties depending on species, tissues, and cell types.
Human studies on triclosan and hormone effects are fewer in number than those on animals, but are being conducted. In a 2017 study on 537 pregnant women in China, prenatal triclosan exposure was associated with increased cord testosterone levels in the infants.
History
Triclosan (TCS) was patented in 1964 by Swiss company Ciba-Geigy. The earliest known safety testing began in 1968. It was introduced the next year, mainly for use in hospitals, and was in worldwide production and use by the early 1970s.
In 1997 Ciba-Geigy merged with another Swiss company, Sandoz, to form Novartis. During the merger, Ciba-Geigy's chemical business was spun off to become Ciba Specialty Chemicals, which was acquired in 2008 by chemical giant BASF. BASF currently manufactures TCS under the brand name Irgasan DP300.
Environmental concerns
Treatment and disposal
Exposure to triclosan in personal product use is relatively short. Upon disposal, triclosan is sent to municipal sewage treatment plants, where, in the United States, about 97–98% of triclosan is removed. Studies show that substantial quantities of triclosan (170,000–970,000 kg/yr) can escape from wastewater treatment plants and damage algae on surface waters. In a study on effluent from wastewater treatment facilities, approximately 75% of triclocarban was present in sewage sludge. This poses a potential environmental and ecological hazard, particularly for aquatic systems. The volume of triclosan, in the United States, re-entering the environment in sewage sludge after initial successful capture from wastewater is 44,000 ± 60,000 kg/yr. Triclosan can attach to other substances suspended in aquatic environments, which potentially endangers marine organisms and may lead to further bioaccumulation. Ozone is considered to be an effective tool for removing triclosan during sewage treatment. As little triclosan is released through plastic and textile household consumer products, these are not considered to be major sources of triclosan contamination.
During wastewater treatment, a portion of triclosan is degraded, while the remainder adsorbs to sewage sludge or exits the plant as effluent. A mass balance in Athens (Greece) Sewage Treatment Plant (2013) showed that 43% of triclosan is accumulated to the primary and secondary sludge, 45% is lost due to degradation while the rest 12% is discharged to the environment via the secondary treated wastewater. In the environment, triclosan may be degraded by microorganisms or react with sunlight, forming other compounds, which include chlorophenols and dioxins.
During 1999 to 2000, US Geological Survey detected TCS in 57.6% of streams and rivers sampled.
Bioaccumulation
While studies using semi-permeable membrane devices have found that triclosan does not strongly bioaccumulate, methyl-triclosan is comparatively more stable and lipophilic and thus poses a higher risk of bioaccumulation. The ability of triclosan to bioaccumulate is affected by its ionization state in different environmental conditions.
Global warming may increase uptake and effects of triclosan in aquatic organisms.
Ecotoxicity
Triclosan is toxic to aquatic bacteria at levels found in the environment. It is highly toxic to various types of algae and has the potential to affect the structure of algal communities, particularly immediately downstream of effluents from wastewater treatment facilities that treat household wastewaters. Triclosan has been observed in multiple organisms, including algae, aquatic blackworms, fish, and dolphins. It has also been found in land animals including earthworms and species higher up the food chain. In toxicity experiments with Vibrio fischeri marine bacterium, an EC50 value of TCS equal to 0.22 mg/L has been determined. Few data are available for the long-term toxicity of TCS to algae, daphnids and fish, while enough data are available for its acute toxicity on these groups of organisms.
A 2017 study that used risk quotient (RQ) methodology and evaluated the ecological threat due to the discharge of wastewater containing TCS in European rivers, reported that the probability that RQ values exceeds 1 ranged from 0.2% (for rivers with dilution factor of 1000) to 45% (for rivers with dilution factor 2).
Triclosan favors anaerobic conditions which is typical in soil and sediment. The antimicrobial properties of Triclosan are resistant to anaerobic degradation which is the main contributor to its persistence in the environment.
Resistance concerns
Concern pertains to the potential for cross-resistance (or co-resistance) to other antimicrobials. Numerous studies have been performed and there have been results indicating that the use of biocidal agents, such as triclosan, can cause cross-resistance.
A study done in a wide range of bacteria and different classes of antibiotics showed that Pseudomonas aeruginosa and Stenotrophomonas maltophilia, already resistant to triclosan, had increased resistance against antibiotics tetracycline and norfloxacin when exposed to triclosan.
Results from a study published in The American Journal of Infection Control showed that exposure to triclosan was associated with a high risk of developing resistance and cross-resistance in Staphylococcus aureus and Escherichia coli. This was not observed with exposure to chlorhexidine or a hydrogen peroxide-based agent (during the conditions in said study).
Alternatives
A comprehensive meta-analysis published in 2007 indicated that, in community settings, plain soap was no less effective than soaps containing triclosan for "preventing infectious illness symptoms and reducing bacterial levels on the hands".
Nonorganic antibiotics and organic biocides are effective alternatives to triclosan, such as silver and copper ions and nanoparticles.
Policy
In the US, triclosan is regulated as a pesticide by the EPA and as a drug by the FDA. The EPA generally regulates uses on solid surfaces, and FDA regulations cover uses in personal care products.
In 1974, the US FDA began the drug review monograph process for "over-the-counter (OTC) topical antimicrobial products", including triclosan and triclocarban. The advisory panel first met on June 29, 1972, and the agency published its proposed rule on Sept 13, 1974. The initial rule applied to, "OTC products containing antimicrobial ingredients for topical human use, which includes soaps, surgical scrubs, skin washes, skin cleansers, first aid preparations and additional products defined by the panel." The proposed rule lists dozens of products that were already on the market at the time and the firms that produced them.
In 1978 the FDA published a tentative final monograph (TFM) for topical antimicrobial products. The record was re-opened in March 1979 to take into account six comments the agency received during the period for submitting objections to the TFM, including new data submitted by Procter & Gamble on the safety and effectiveness of triclocarban and by Ciba-Geigy on the proliferation of use of triclosan. The document states that, "significant amounts of new and previously unconsidered data were submitted with each of the above petitions." It was re-opened again in October of that year to permit interested persons to submit further data establishing conditions for the safety, effectiveness and labeling of over-the-counter topical antimicrobial products for human use.
The next document issued was a proposed rule dated June 17, 1994, which states, the "FDA is issuing a notice of proposed rulemaking in the form of an amended tentative final monograph that would establish conditions under which OTC topical health-care antiseptic drug products are generally recognized as safe and effective and not misbranded. The FDA is issuing this notice of proposed rulemaking on topical antimicrobial drug products after considering the public comments on that notice and other information in the administrative record for this rulemaking. The FDA is also requesting data and information concerning the safety and effectiveness of topical antimicrobials for use as hand sanitizers or dips." In the 1994 update to the rule, TCS was effectively removed from the drug category which made it available for use in consumer products.
In 2010, the Natural Resources Defense Council forced the FDA to review triclosan after suing the agency for its inaction. Because the FDA prohibited hexachlorophene, a compound similar to triclosan, Halden and others argued that the FDA should also ban triclosan. On December 17, 2013, the FDA issued a draft rule revoking the generally recognized as safe status of triclosan as an ingredient in hand wash products, citing the need for additional studies of its potential endrocrine and developmental effects; impact on bacterial resistance; and carcinogenic potential.
On September 6, 2016, 44 years after its initial proposed rule, the FDA issued a final rule establishing that 19 active ingredients, including triclosan and triclocarban, used in over-the-counter (OTC) consumer antiseptic products intended for use with water (aka consumer antiseptic washes) are not generally recognized as safe and effective (GRAS/GRAE) and are misbranded, and are new drugs for which approved applications under section 505 of the FD&C Act are required for marketing. Companies have one year to reformulate products without these ingredients, take them off the market or submit a New Drug Application (NDA) for the products. The 19 ingredients are:
Cloflucarban
Fluorosalan
Hexachlorophene
Hexylresorcinol
Iodine complex (ammonium ether sulfate and polyoxyethylene sorbitan monolaurate)
Iodine complex (phosphate ester of alkylaryloxy polyethylene glycol)
Methylbenzethonium chloride
Nonylphenoxypoly (ethyleneoxy) ethanoliodine
Phenol (greater than 1.5 percent)
Phenol (less than 1.5 percent)
Poloxamer-iodine complex
Povidone-iodine 5 to 10 percent
Secondary amyltricresols
Sodium oxychlorosene
Tribromsalan
Triclocarban
Triclosan
Triple dye (an antiseptic applied to the umbilical region of newborn infants)
Undecoylium chloride iodine complex
In 2015 and 2016 FDA also issued proposed rules to amend the 1994 TFM regarding the safety and effectiveness of OTC health care antiseptics and OTC consumer antiseptic rubs.
The state of Minnesota took action against triclosan in advance of a federal rule. In May 2014, the governor signed a bill banning triclosan-containing products in the state. A CNN article quotes the new law, "In order to prevent the spread of infectious disease and avoidable infections and to promote best practices in sanitation, no person shall offer for retail sale in Minnesota any cleaning product that contains triclosan and is used by consumers for sanitizing or hand and body cleansing." The law goes into effect on January 1, 2017. The exceptions to this rule are individual products that have received approval from the US Food and Drug Administration for consumer use.
In light of mounting evidence on the human health and ecotoxic effects of triclosan, some companies reformulated to remove it in advance of regulation: Colgate-Palmolive removed it from Palmolive Dish Soap and Softsoap in 2011 (but it remained in Colgate Total toothpaste until late 2018 or early 2019); Johnson & Johnson removed it from baby products in 2012 and all products in 2015; Procter & Gamble from all products in 2014; in 2014 it was removed from Clearasil and Avon began phasing it out; and Unilever removed it from skin care and cleansing products in 2015, and says oral care by 2017.
In Canada, triclosan is allowed in cosmetics, though FDA's recent announcement has prompted Health Canada spokeswoman Maryse Durette to state in an e-mail to Toronto newspaper The Globe and Mail that "the government will publish a final assessment of the safety of triclosan 'in the near future' and take further action 'if warranted'." Health Canada maintains a Cosmetic Ingredient Hotlist, including hundreds of chemicals that are not allowed or whose use is restricted in cosmetics. The list states that triclosan is currently allowed in cosmetics up to 0.3%, and 0.03% in mouthwashes and other oral products, with required warnings to avoid swallowing and not for use in children under the age of 12.
Triclosan was not approved by the European Commission as an active substance for use in biocidal products for product-type 1 in January 2016. In the United States, manufacturers of products containing triclosan must indicate its presence on the label. In Europe, triclosan is regulated as a cosmetic preservative and must be listed on the label. Usage of triclosan in cosmetic products was restricted by the EU commission in 2014.
See also
References
Antibiotics
Antifungals
Chloroarenes
Endocrine disruptors
Phenol ethers
Steroid sulfotransferase inhibitors
Xenoestrogens | Triclosan | [
"Chemistry",
"Biology"
] | 5,225 | [
"Endocrine disruptors",
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
350,164 | https://en.wikipedia.org/wiki/Farey%20sequence | In mathematics, the Farey sequence of order n is the sequence of completely reduced fractions, either between 0 and 1, or without this restriction, which when in lowest terms have denominators less than or equal to n, arranged in order of increasing size.
With the restricted definition, each Farey sequence starts with the value 0, denoted by the fraction 0/1, and ends with the value 1, denoted by the fraction 1/1 (although some authors omit these terms).
A Farey sequence is sometimes called a Farey series, which is not strictly correct, because the terms are not summed.
Examples
The Farey sequences of orders 1 to 8 are:
F1 = {0/1, 1/1}
F2 = {0/1, 1/2, 1/1}
F3 = {0/1, 1/3, 1/2, 2/3, 1/1}
F4 = {0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1}
F5 = {0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1}
F6 = {0/1, 1/6, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 1/1}
F7 = {0/1, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 2/5, 3/7, 1/2, 4/7, 3/5, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 1/1}
F8 = {0/1, 1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8, 1/1}
Farey sunburst
Plotting the numerators versus the denominators of a Farey sequence gives a shape like the one to the right, shown for
Reflecting this shape around the diagonal and main axes generates the Farey sunburst, shown below. The Farey sunburst of order n connects the visible integer grid points from the origin in the square of side 2n, centered at the origin. Using Pick's theorem, the area of the sunburst is 4·(|F_n| − 1), where |F_n| is the number of fractions in F_n.
History
The history of 'Farey series' is very curious — Hardy & Wright (1979)
... once again the man whose name was given to a mathematical relation was not the original discoverer so far as the records go. — Beiler (1964)
Farey sequences are named after the British geologist John Farey, Sr., whose letter about these sequences was published in the Philosophical Magazine in 1816. Farey conjectured, without offering proof, that each new term in a Farey sequence expansion is the mediant of its neighbours. Farey's letter was read by Cauchy, who provided a proof in his Exercices de mathématique, and attributed this result to Farey. In fact, another mathematician, Charles Haros, had published similar results in 1802 which were not known either to Farey or to Cauchy. Thus it was a historical accident that linked Farey's name with these sequences. This is an example of Stigler's law of eponymy.
Properties
Sequence length and index of a fraction
The Farey sequence of order n contains all of the members of the Farey sequences of lower orders. In particular F_n contains all of the members of F_{n−1} and also contains an additional fraction for each number that is less than n and coprime to n. Thus F_6 consists of F_5 together with the fractions 1/6 and 5/6.
The middle term of a Farey sequence F_n is always 1/2,
for n > 1. From this, we can relate the lengths of F_n and F_{n−1} using Euler's totient function φ(n):
|F_n| = |F_{n−1}| + φ(n).
Using the fact that |F_1| = 2, we can derive an expression for the length of F_n:
|F_n| = 1 + Σ_{m=1}^{n} φ(m) = 1 + Φ(n),
where Φ(n) is the summatory totient.
We also have:
|F_n| = (1/2)·(3 + Σ_{d=1}^{n} μ(d)·⌊n/d⌋²),
and by a Möbius inversion formula:
|F_n| = (1/2)·(n + 3)·n − Σ_{d=2}^{n} |F_{⌊n/d⌋}|,
where μ(d) is the number-theoretic Möbius function, and ⌊n/d⌋ is the floor function.
The asymptotic behaviour of |F_n| is:
|F_n| ~ 3n²/π².
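A small Python sketch that checks the length formula and the asymptotic numerically; the helper names (farey_length, phi) are ad hoc, not from any library:

from math import gcd, pi

def farey_length(n):
    """|F_n| = 1 + sum of Euler's totient phi(m) for m = 1..n."""
    def phi(m):
        return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)
    return 1 + sum(phi(m) for m in range(1, n + 1))

print(farey_length(8))                        # 23, matching F8 listed above
print(farey_length(100), 3 * 100**2 / pi**2)  # 3045 versus roughly 3040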
The number of Farey fractions with denominators equal to in is given by when and zero otherwise. Concerning the numerators one can define the function that returns the number of Farey fractions with numerators equal to in . This function has some interesting properties as
,
for any prime number ,
for any integer ,
In particular, the property in the third line above implies and, further, The latter means that, for Farey sequences of even order , the number of fractions with numerators equal to is the same as the number of fractions with denominators equal to , that is .
The index of a fraction in the Farey sequence is simply the position that occupies in the sequence. This is of special relevance as it is used in an alternative formulation of the Riemann hypothesis, see below. Various useful properties follow:
The index of where and is the least common multiple of the first numbers, , is given by:
A similar expression was used as an approximation for low values of n in the classical paper by F. Dress. A general expression for the index of any Farey fraction is given in the references.
Farey neighbours
Fractions which are neighbouring terms in any Farey sequence are known as a Farey pair and have the following properties.
If a/b and c/d are neighbours in a Farey sequence, with a/b < c/d, then their difference c/d − a/b is equal to 1/(bd). Since
c/d − a/b = (bc − ad)/(bd),
this is equivalent to saying that
bc − ad = 1.
Thus 1/3 and 2/5 are neighbours in F_5, and their difference is 1/15.
The converse is also true. If
bc − ad = 1
for positive integers a, b, c and d with a < b and c < d, then a/b and c/d will be neighbours in the Farey sequence of order max(b, d).
If p/q has neighbours a/b and c/d in some Farey sequence, with a/b < p/q < c/d, then p/q is the mediant of a/b and c/d – in other words,
p/q = (a + c)/(b + d).
This follows easily from the previous property, since if bp − aq = 1 and qc − pd = 1, then bp − aq = qc − pd, which gives p(b + d) = q(a + c).
It follows that if a/b and c/d are neighbours in a Farey sequence then the first term that appears between them as the order of the Farey sequence is incremented is
(a + c)/(b + d),
which first appears in the Farey sequence of order b + d.
Thus the first term to appear between 1/3 and 2/5 is 3/8, which appears in F_8.
The total number of Farey neighbour pairs in F_n is 2|F_n| − 3.
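These neighbour properties are easy to verify numerically with the standard-library Fraction type; in the following sketch the farey helper is an ad hoc illustration, not a canonical implementation:

from fractions import Fraction
from math import gcd

def farey(n):
    """Return F_n as a sorted list of Fractions."""
    return sorted(Fraction(a, b) for b in range(1, n + 1)
                  for a in range(0, b + 1) if gcd(a, b) == 1)

f5 = farey(5)
# Every consecutive pair a/b < c/d in F_5 satisfies b*c - a*d = 1.
print(all(x.denominator * y.numerator - x.numerator * y.denominator == 1
          for x, y in zip(f5, f5[1:])))  # True

# The first fraction to appear between 1/3 and 2/5 is their mediant, 3/8, in F_8.
print(Fraction(1 + 2, 3 + 5))  # 3/8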
The Stern–Brocot tree is a data structure showing how the sequence is built up from 0 and 1 , by taking successive mediants.
Equivalent-area interpretation
Every consecutive pair of Farey rationals have an equivalent area of 1. See this by interpreting consecutive rationals r1 = p/q and r2 = p′/q′
as vectors (p, q) and (p′, q′) in the xy-plane. The area is given by
A(r1, r2) = qp′ − q′p.
As any added fraction in between two previous consecutive Farey sequence fractions is calculated as the mediant (⊕), then
A(r1, r1 ⊕ r2) = A(r1, r1) + A(r1, r2) = A(r1, r2) = 1
(since r1 = 0/1 and r2 = 1/1, its area must be 1).
Farey neighbours and continued fractions
Fractions that appear as neighbours in a Farey sequence have closely related continued fraction expansions. Every fraction has two continued fraction expansions — in one the final term is 1; in the other the final term is greater than 1. If p/q, which first appears in Farey sequence F_q, has the continued fraction expansions
[0; a1, a2, …, a_{n−1}, a_n, 1]
[0; a1, a2, …, a_{n−1}, a_n + 1]
then the nearest neighbour of p/q in F_q (which will be its neighbour with the larger denominator) has a continued fraction expansion
[0; a1, a2, …, a_n]
and its other neighbour has a continued fraction expansion
[0; a1, a2, …, a_{n−1}]
For example, 3/8 has the two continued fraction expansions [0; 2, 1, 1, 1] and [0; 2, 1, 2], and its neighbours in F_8 are 2/5, which can be expanded as [0; 2, 1, 1]; and 1/3, which can be expanded as [0; 3].
Farey fractions and the least common multiple
The lcm can be expressed as the products of Farey fractions as
where is the second Chebyshev function.
Farey fractions and the greatest common divisor
Since the Euler's totient function is directly connected to the gcd so is the number of elements in ,
For any 3 Farey fractions the following identity between the gcd's of the 2x2 matrix determinants in absolute value holds:
Applications
Farey sequences are very useful to find rational approximations of irrational numbers. For example, the construction by Eliahou of a lower bound on the length of non-trivial cycles in the 3x+1 process uses Farey sequences to calculate a continued fraction expansion of the number log₂3.
In physical systems with resonance phenomena, Farey sequences provide a very elegant and efficient method to compute resonance locations in 1D and 2D.
Farey sequences are prominent in studies of any-angle path planning on square-celled grids, for example in characterizing their computational complexity or optimality. The connection can be considered in terms of -constrained paths, namely paths made up of line segments that each traverse at most rows and at most columns of cells. Let be the set of vectors such that , , and , are coprime. Let be the result of reflecting in the line . Let . Then any -constrained path can be described as a sequence of vectors from . There is a bijection between and the Farey sequence of order given by mapping to .
Ford circles
There is a connection between Farey sequence and Ford circles.
For every fraction p/q (in its lowest terms) there is a Ford circle C[p/q], which is the circle with radius 1/(2q²) and centre at (p/q, 1/(2q²)). Two Ford circles for different fractions are either disjoint or they are tangent to one another—two Ford circles never intersect. If 0 < p/q < 1 then the Ford circles that are tangent to C[p/q] are precisely the Ford circles for fractions that are neighbours of p/q in some Farey sequence.
Thus C[2/5] is tangent to C[1/2], C[1/3], C[3/7], C[3/8], etc.
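Assuming the circle data given above (centre (p/q, 1/(2q²)) and radius 1/(2q²)), the tangency claim can be checked with exact rational arithmetic; the helper names below are illustrative only:

from fractions import Fraction

def ford_circle(p, q):
    """Centre and radius of the Ford circle C[p/q]."""
    r = Fraction(1, 2 * q * q)
    return (Fraction(p, q), r), r

def tangent(f1, f2):
    """Two circles are tangent exactly when the squared distance between
    their centres equals the square of the sum of their radii."""
    (c1, r1), (c2, r2) = ford_circle(*f1), ford_circle(*f2)
    dist_sq = (c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2
    return dist_sq == (r1 + r2) ** 2

# C[2/5] is tangent to the circles of its Farey neighbours, e.g. 1/2 and 3/8 ...
print(tangent((2, 5), (1, 2)), tangent((2, 5), (3, 8)))  # True True
# ... but not to that of a non-neighbour such as 1/5.
print(tangent((2, 5), (1, 5)))  # False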
Ford circles appear also in the Apollonian gasket . The picture below illustrates this together with Farey resonance lines.
Riemann hypothesis
Farey sequences are used in two equivalent formulations of the Riemann hypothesis. Suppose the terms of F_n are {a_{k,n} : k = 0, 1, …, m_n}. Define d_{k,n} = a_{k,n} − k/m_n; in other words, d_{k,n} is the difference between the kth term of the nth Farey sequence, and the kth member of a set of the same number of points, distributed evenly on the unit interval. In 1924 Jérôme Franel proved that the statement
Σ_{k=1}^{m_n} d_{k,n}² = O(n^r) for every r > −1
is equivalent to the Riemann hypothesis, and then Edmund Landau remarked (just after Franel's paper) that the statement
Σ_{k=1}^{m_n} |d_{k,n}| = O(n^{1/2 + ε}) for every ε > 0
is also equivalent to the Riemann hypothesis.
Other sums involving Farey fractions
The sum of all Farey fractions of order n is half the number of elements:
Σ_{r ∈ F_n} r = |F_n| / 2.
The sum of the denominators in the Farey sequence is twice the sum of the numerators and relates to Euler's totient function:
Σ_{a/b ∈ F_n} b = 2 · Σ_{a/b ∈ F_n} a,
which was conjectured by Harold L. Aaron in 1962 and demonstrated by Jean A. Blake in 1966. A one line proof of the Harold L. Aaron conjecture is as follows.
The sum of the numerators is 1 + Σ_{2 ≤ b ≤ n} (b/2)·φ(b).
The sum of denominators is 2 + Σ_{2 ≤ b ≤ n} b·φ(b).
The quotient of the first sum by the second sum is 1/2.
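Both identities are easy to confirm numerically for a small order; the following sketch uses an ad hoc construction of F_n with the standard-library Fraction type:

from fractions import Fraction
from math import gcd

n = 20
fn = sorted(Fraction(a, b) for b in range(1, n + 1)
            for a in range(0, b + 1) if gcd(a, b) == 1)

# Sum of all Farey fractions equals half the number of elements ...
print(sum(fn) == Fraction(len(fn), 2))  # True
# ... and the denominators sum to twice the numerators.
print(sum(f.denominator for f in fn) == 2 * sum(f.numerator for f in fn))  # True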
Let be the ordered denominators of , then:
and
Let the th Farey fraction in , then
which is demonstrated in. Also according to this reference the term inside the sum can be expressed in many different ways:
obtaining thus many different sums over the Farey elements with same result. Using the symmetry around 1/2 the former sum can be limited to half of the sequence as
The Mertens function can be expressed as a sum over Farey fractions as
M(n) = −1 + Σ_{a ∈ F_n} e^{2πi·a},
where F_n is the Farey sequence of order n.
This formula is used in the proof of the Franel–Landau theorem.
Next term
A surprisingly simple algorithm exists to generate the terms of Fn in either traditional order (ascending) or non-traditional order (descending). The algorithm computes each successive entry in terms of the previous two entries using the mediant property given above. If a/b and c/d are the two given entries, and p/q is the unknown next entry, then c/d = (a + p)/(b + q). Since c/d is in lowest terms, there must be an integer k such that a + p = kc and b + q = kd, giving p = kc − a and q = kd − b. If we consider p and q to be functions of k, then
p/q − c/d = (bc − ad)/(d·(kd − b)) = 1/(d·(kd − b)),
so the larger k gets, the closer p/q gets to c/d.
To give the next term in the sequence k must be as large as possible, subject to kd − b ≤ n (as we are only considering numbers with denominators not greater than n), so k is the greatest integer ≤ (n + b)/d. Putting this value of k back into the equations for p and q gives
p = ⌊(n + b)/d⌋·c − a
q = ⌊(n + b)/d⌋·d − b
This is implemented in Python as follows:
from fractions import Fraction
from collections.abc import Generator
def farey_sequence(n: int, descending: bool = False) -> Generator[Fraction]:
    """
    Print the n'th Farey sequence. Allow for either ascending or descending.

    >>> print(*farey_sequence(5), sep=' ')
    0 1/5 1/4 1/3 2/5 1/2 3/5 2/3 3/4 4/5 1
    """
    a, b, c, d = 0, 1, 1, n
    if descending:
        a, c = 1, n - 1
    yield Fraction(a, b)
    while 0 <= c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        yield Fraction(a, b)


if __name__ == "__main__":
    import doctest

    doctest.testmod()
Brute-force searches for solutions to Diophantine equations in rationals can often take advantage of the Farey series (to search only reduced forms). While this code uses the first two terms of the sequence to initialize a, b, c, and d, one could substitute any pair of adjacent terms in order to exclude those less than (or greater than) a particular threshold.
See also
ABACABA pattern
Stern–Brocot tree
Euler's totient function
Footnotes
References
Further reading
— in particular, see §4.5 (pp. 115–123), Bonus Problem 4.61 (pp. 150, 523–524), §4.9 (pp. 133–139), §9.3, Problem 9.3.6 (pp. 462–463).
— reviews the isomorphisms of the Stern-Brocot Tree.
— reviews connections between Farey Fractions and Fractals.
Errata + Code
External links
Online copy of book
Archived at Ghostarchive and the Wayback Machine:
Fractions (mathematics)
Number theory
Sequences and series | Farey sequence | [
"Mathematics"
] | 2,791 | [
"Sequences and series",
"Fractions (mathematics)",
"Discrete mathematics",
"Mathematical structures",
"Mathematical analysis",
"Mathematical objects",
"Arithmetic",
"Numbers",
"Number theory"
] |
350,167 | https://en.wikipedia.org/wiki/List%20of%20graph%20theory%20topics | This is a list of graph theory topics, by Wikipedia page.
See glossary of graph theory for basic terminology.
Examples and types of graphs
Graph coloring
Paths and cycles
Trees
Terminology
Node
Child node
Parent node
Leaf node
Root node
Root (graph theory)
Operations
Tree structure
Tree data structure
Cayley's formula
Kőnig's lemma
Tree (set theory) (need not be a tree in the graph-theory sense, because there may not be a unique path between two vertices)
Tree (descriptive set theory)
Euler tour technique
Graph limits
Graphon
Graphs in logic
Conceptual graph
Entitative graph
Existential graph
Laws of Form
Logical graph
Mazes and labyrinths
Labyrinth
Maze
Maze generation algorithm
Algorithms
Ant colony algorithm
Breadth-first search
Depth-first search
Depth-limited search
FKT algorithm
Flood fill
Graph exploration algorithm
Matching (graph theory)
Max flow min cut theorem
Maximum-cardinality search
Shortest path
Dijkstra's algorithm
Bellman–Ford algorithm
A* algorithm
Floyd–Warshall algorithm
Topological sorting
Pre-topological order
Other topics
Networks, network theory
See list of network theory topics
Hypergraphs
Helly family
Intersection (Line) Graphs of hypergraphs
Graph theory | List of graph theory topics | [
"Mathematics"
] | 237 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"nan"
] |
350,172 | https://en.wikipedia.org/wiki/Two-way%20communication | Two-way communication is a form of transmission in which both parties involved transmit information. Two-way communication has also been referred to as interpersonal communication. Common forms of two-way communication are:
Amateur radio, CB or FRS radio contacts.
Chatrooms and instant messaging.
Computer networks. See backchannel.
In-person communication.
Telephone conversations.
A cycle of communication and two-way communication are actually two different things. A close examination of the anatomy of communication (its actual structure and parts) shows that a cycle of communication is not a two-way communication in its entirety; two-way communication is not as simple as one may infer. One can improve two-way or interpersonal communication by focusing on the eyes of the person speaking, making eye contact, watching body language, responding appropriately with comments, questions, and paraphrasing, and summarizing to confirm main points and an accurate understanding.
Two-way communication is different from one-way communication in that two-way communication occurs when the receiver provides feedback to the sender. One-way communication is when a message flows from sender to receiver only, thus providing no feedback. Some examples of one-way communication are radio or television programs and listening to policy statements from top executives. Two-way communication is especially significant in that it enables feedback to improve a situation.
Two-way communication involves feedback from the receiver to the sender. This allows the sender to know the message was received accurately by the receiver. One person is the sender, which means they send a message to another person via face to face, email, telephone, etc. The other person is the receiver, which means they are the one getting the senders message. Once receiving the message, the receiver sends a response back. For example, Person A sends an email to Person B --> Person B responds with their own email back to Person A. The cycle then continues.
This chart demonstrates two-way communication and feedback.
[Sender] ←-------
| \
[Encoding] \
| |
[Channel] [Feedback]
| |
[Decoding] /
| /
[Receiver]---------->
Two-way communication may occur horizontally or vertically in the organization. When information is exchanged between superior and subordinate, it is known as vertical two-way communication. On the other hand, when communication takes place between persons holding the same rank or position, it is called horizontal two-way communication. Two-way communication is represented in the following diagrams:
(Superior)---------------> (Subordinate)---------------> (Superior)
(Information) (Feedback)
There are many different types of two-way communication systems, and choosing which is best to use depends on things like the intended use, the location, the number of users, the frequency band, and the cost of the system. “Regardless of the type of system chosen, the one common feature is that all of the components must be compatible and work together to support a common purpose.”
Amateur radio, citizen band radio, and Family Radio Service
Amateur radio is used for entertainment and as a hobby by many groups of people. These individuals label themselves as “hams”. Amateur radios are also known to be a reliable means of communication when all other forms are not operating. In times of disaster, communication through Amateur radios has led to lives being saved.
Citizens band radio (CB radio) can be used by anyone who is not a member of a foreign government. It is meant for short-range communication using devices that mimic walkie-talkies.
Family Radio Service (FRS) is also meant for short-range communication using devices that mimic walkie-talkies. Like the CB radio, the FRS does not require a license and can be used by anyone who is not a member of a foreign government.
Chat rooms and instant messaging
Instant messaging became wildly popular around 1996 and spread even more with AOL in 1997. The concept behind IM is that it is a way of quick communication between two people due to tools such as knowing when messages are seen or knowing when others are online. Many social media sites have integrated IM into their sites as ways to spread communication. Many social media sites have direct message or Private message. Called either PM or DM, where you can privately text one another from your social media account. Much different from Chat rooms. Chat rooms are messages to a group of people. Chat rooms are often public, meaning that you are able to send a message and anyone can freely join the “room” and view the message as well as respond.
In-person communication
As it relates to business, 75% of people believe in-person communication is critical. In-person interaction is useful for resolving problems more efficiently, generating long-term relationships, and resolving a problem or creating an opportunity quickly. 4 out of 6 of the most important attributes of building a relationship cannot be achieved without the power of in-person, which requires a rich communication environment. Business executives believe in-person collaboration is critical for more than 50 percent of key business strategic and tactical business processes when engaging with colleagues, customers, or partners.
Telephone conversations
The telephone is a device that is relatively easy to understand and use. The technological advances that we have today have made it very easy to connect instantly with others from all over the world, making it simple to have a two-way conversation with a neighbor or with someone many miles away. Telephones have undergone many changes throughout the years. For example, today's telephones use electronic switches instead of operators. The switch uses a dial tone so that when one picks up the phone you are aware that both the switch and the phone are functioning properly. Land-line telephones are also not very common anymore. Another major change is that most people now use their mobile devices to make calls and communicate with others instead of landline telephones.
Computer networks
Computer networks are used to have two-way communication by having computers exchange data. Ways that this is possible is wired interconnects and wireless interconnects. Types of wired interconnects are Ethernets and fiber optic cables. Ethernets connect local devices through Ethernet cables. Fiber runs underground for long distances and is the main source of Internet in most homes and businesses. Types of wireless interconnects include Wi-Fi and Bluetooth. The problem with these networks is that they don't have unlimited connection span. To expand the reach there are wide area interconnects such as satellite and cellular networks. Also, there are long-distance interconnects which need backhaul to move the data back and forth and last mile to connect the provider to the network.
References
Further reading
Argyle, Michael (2013-10-22). Communicating by Telephone. Elsevier.
Communication
Human communication
Telecommunications techniques | Two-way communication | [
"Biology"
] | 1,419 | [
"Human communication",
"Behavior",
"Human behavior"
] |
350,204 | https://en.wikipedia.org/wiki/Outline%20of%20combinatorics | Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures.
Essence of combinatorics
Matroid
Greedoid
Ramsey theory
Van der Waerden's theorem
Hales–Jewett theorem
Umbral calculus, binomial type polynomial sequences
Combinatorial species
Branches of combinatorics
Algebraic combinatorics
Analytic combinatorics
Arithmetic combinatorics
Combinatorics on words
Combinatorial design theory
Enumerative combinatorics
Extremal combinatorics
Geometric combinatorics
Graph theory
Infinitary combinatorics
Matroid theory
Order theory
Partition theory
Probabilistic combinatorics
Topological combinatorics
Multi-disciplinary fields that include combinatorics
Coding theory
Combinatorial optimization
Combinatorics and dynamical systems
Combinatorics and physics
Discrete geometry
Finite geometry
Phylogenetics
History of combinatorics
History of combinatorics
General combinatorial principles and methods
Combinatorial principles
Trial and error, brute-force search, bogosort, British Museum algorithm
Pigeonhole principle
Method of distinguished element
Mathematical induction
Recurrence relation, telescoping series
Generating functions as an application of formal power series
Cyclic sieving
Schrödinger method
Exponential generating function
Stanley's reciprocity theorem
Binomial coefficients and their properties
Combinatorial proof
Double counting (proof technique)
Bijective proof
Inclusion–exclusion principle
Möbius inversion formula
Parity, even and odd permutations
Combinatorial Nullstellensatz
Incidence algebra
Greedy algorithm
Divide and conquer algorithm
Akra–Bazzi method
Dynamic programming
Branch and bound
Birthday attack, birthday paradox
Floyd's cycle-finding algorithm
Reduction to linear algebra
Sparsity
Weight function
Minimax algorithm
Alpha–beta pruning
Probabilistic method
Sieve methods
Analytic combinatorics
Symbolic combinatorics
Combinatorial class
Exponential formula
Twelvefold way
MacMahon Master theorem
Data structure concepts
Data structure
Data type
Abstract data type
Algebraic data type
Composite type
Array
Associative array
Deque
List
Linked list
Queue
Priority queue
Skip list
Stack
Tree data structure
Automatic garbage collection
Problem solving as an art
Heuristic
Inductive reasoning
How to Solve It
Creative problem solving
Morphological analysis (problem-solving)
Living with large numbers
Names of large numbers, long scale
History of large numbers
Graham's number
Moser's number
Skewes' number
Large number notations
Conway chained arrow notation
Hyper4
Knuth's up-arrow notation
Moser polygon notation
Steinhaus polygon notation
Large number effects
Exponential growth
Combinatorial explosion
Branching factor
Granularity
Curse of dimensionality
Concentration of measure
Persons influential in the field of combinatorics
Noga Alon
George Andrews
József Beck
Eric Temple Bell
Claude Berge
Béla Bollobás
Peter Cameron
Louis Comtet
John Horton Conway
On Numbers and Games
Winning Ways for your Mathematical Plays
Persi Diaconis
Ada Dietz
Paul Erdős
Erdős conjecture
Philippe Flajolet
Solomon Golomb
Ron Graham
Ben Green
Tim Gowers
Jeff Kahn
Gil Kalai
Gyula O. H. Katona
Daniel J. Kleitman
Imre Leader
László Lovász
Fedor Petrov
George Pólya
Vojtěch Rödl
Gian-Carlo Rota
Cecil C. Rousseau
H. J. Ryser
Dick Schelp
Vera T. Sós
Joel Spencer
Emanuel Sperner
Richard P. Stanley
Benny Sudakov
Endre Szemerédi
Terence Tao
Carsten Thomassen
Jacques Touchard
Pál Turán
Bartel Leendert van der Waerden
Herbert Wilf
Richard Wilson
Doron Zeilberger
Combinatorics scholars
:Category:Combinatorialists
Journals
Advances in Combinatorics
Annals of Combinatorics
Ars Combinatoria
Australasian Journal of Combinatorics
Bulletin of the Institute of Combinatorics and Its Applications
Combinatorica
Combinatorics, Probability and Computing
Computational Complexity
Designs, Codes and Cryptography
Discrete Analysis
Discrete & Computational Geometry
Discrete Applied Mathematics
Discrete Mathematics
Discrete Mathematics & Theoretical Computer Science
Discrete Optimization
Discussiones Mathematicae Graph Theory
Electronic Journal of Combinatorics
European Journal of Combinatorics
The Fibonacci Quarterly
Finite Fields and Their Applications
Geombinatorics
Graphs and Combinatorics
Integers, Electronic Journal of Combinatorial Number Theory
Journal of Algebraic Combinatorics
Journal of Automata, Languages and Combinatorics
Journal of Combinatorial Designs
Journal of Combinatorial Mathematics and Combinatorial Computing
Journal of Combinatorial Optimization
Journal of Combinatorial Theory, Series A
Journal of Combinatorial Theory, Series B
Journal of Complexity
Journal of Cryptology
Journal of Graph Algorithms and Applications
Journal of Graph Theory
Journal of Integer Sequences (Electronic)
Journal of Mathematical Chemistry
Online Journal of Analytic Combinatorics
Optimization Methods and Software
The Ramanujan Journal
Séminaire Lotharingien de Combinatoire
SIAM Journal on Discrete Mathematics
Prizes
Euler Medal
European Prize in Combinatorics
Fulkerson Prize
König Prize
Pólya Prize
See also
List of factorial and binomial topics
List of partition topics
List of permutation topics
List of puzzle topics
List of formal language and literal string topics
References
External links
Combinatorics, a MathWorld article with many references.
Combinatorics, from a MathPages.com portal.
The Hyperbook of Combinatorics, a collection of math articles links.
The Two Cultures of Mathematics by W. T. Gowers, article on problem solving vs theory building
Combinatorics
Combinatorics
combinatorics | Outline of combinatorics | [
"Mathematics"
] | 1,085 | [
"Discrete mathematics",
"nan",
"Combinatorics"
] |
350,238 | https://en.wikipedia.org/wiki/Trichome | Trichomes are fine outgrowths or appendages on plants, algae, lichens, and certain protists. They are of diverse structure and function. Examples are hairs, glandular hairs, scales, and papillae. A covering of any kind of hair on a plant is an indumentum, and the surface bearing them is said to be pubescent.
Algal trichomes
Certain, usually filamentous, algae have the terminal cell produced into an elongate hair-like structure called a trichome. The same term is applied to such structures in some cyanobacteria, such as Spirulina and Oscillatoria. The trichomes of cyanobacteria may be unsheathed, as in Oscillatoria, or sheathed, as in Calothrix. These structures play an important role in preventing soil erosion, particularly in cold desert climates. The filamentous sheaths form a persistent sticky network that helps maintain soil structure.
Plant trichomes
Plant trichomes have many different features that vary between both species of plants and organs of an individual plant. These features affect the subcategories that trichomes are placed into. Some defining features include the following:
Unicellular or multicellular
Straight (upright with little to no branching), spiral (corkscrew-shaped) or hooked (curved apex)
Presence of cytoplasm
Glandular (secretory) vs. eglandular
Tortuous, simple (unbranched and unicellular), peltate (scale-like), stellate (star-shaped)
Adaxial vs. abaxial, referring to whether trichomes are present, respectively, on the upper surface (adaxial) or lower surface (abaxial) of a leaf or other lateral organ.
In the model organism Cistus salviifolius, more trichomes are present on the adaxial surface because this surface suffers greater light stress from ultraviolet (UV) solar irradiance than the abaxial surface.
Trichomes can protect the plant from a large range of detriments, such as UV light, insects, transpiration, and freeze intolerance.
Aerial surface hairs
Trichomes on plants are epidermal outgrowths of various kinds. The terms emergences or prickles refer to outgrowths that involve more than the epidermis. This distinction is not always easily applied (see Wait-a-minute tree). Also, there are nontrichomatous epidermal cells that protrude from the surface, such as root hairs.
A common type of trichome is a hair. Plant hairs may be unicellular or multicellular, and branched or unbranched. Multicellular hairs may have one or several layers of cells. Branched hairs can be dendritic (tree-like) as in kangaroo paw (Anigozanthos), tufted, or stellate (star-shaped), as in Arabidopsis thaliana.
Another common type of trichome is the scale or peltate hair, that has a plate or shield-shaped cluster of cells attached directly to the surface or borne on a stalk of some kind. Common examples are the leaf scales of bromeliads such as the pineapple, Rhododendron and sea buckthorn (Hippophae rhamnoides).
Any of the various types of hairs may be glandular, producing some kind of secretion, such as the essential oils produced by mints and many other members of the family Lamiaceae.
Many terms are used to describe the surface appearance of plant organs, such as stems and leaves, referring to the presence, form and appearance of trichomes. Examples include:
glabrous, glabrate – lacking hairs or trichomes; surface smooth
hirsute – coarsely hairy
hispid – having bristly hairs
articulate – simple pluricellular-uniseriate hairs
downy – having an almost wool-like covering of long hairs
pilose – pubescent with long, straight, soft, spreading or erect hairs
puberulent – minutely pubescent; having fine, short, usually erect, hairs
puberulous – slightly covered with minute soft and erect hairs
pubescent – bearing hairs or trichomes of any type
strigillose – minutely strigose
strigose – having straight hairs all pointing in more or less the same direction as along a margin or midrib
tomentellous – minutely tomentose
tomentose – covered with dense, matted, woolly hairs
villosulous – minutely villous
villous – having long, soft hairs, often curved, but not matted
The size, form, density and location of hairs on plants are extremely variable in their presence across species and even within a species on different plant organs. Several basic functions or advantages of having surface hairs can be listed. It is likely that in many cases, hairs interfere with the feeding of at least some small herbivores and, depending upon stiffness and irritability to the palate, large herbivores as well. Hairs on plants growing in areas subject to frost keep the frost away from the living surface cells. In windy locations, hairs break up the flow of air across the plant surface, reducing transpiration. Dense coatings of hairs reflect sunlight, protecting the more delicate tissues underneath in hot, dry, open habitats. In addition, in locations where much of the available moisture comes from fog drip, hairs appear to enhance this process by increasing the surface area on which water droplets can accumulate.
Glandular trichomes
Glandular trichomes have been vastly studied, even though they are only found on about 30% of plants. Their function is to secrete metabolites for the plant. Some of these metabolites include:
terpenoids, which have many functions related to defense, growth, and development
phenylpropanoids, which play a role in many plant pathways, such as secondary metabolism and stress response, and act as mediators of plant interactions with the environment
flavonoids
methyl ketones
acylsugars
Non-glandular trichomes
Non-glandular trichomes serve as structural protection against a variety of abiotic stressors, including water losses, extreme temperatures and UV radiation, and biotic threats, such as pathogen or herbivore attack.
For example, the model plant C. salviifolius is found in areas of high-light stress and poor soil conditions, along the Mediterranean coasts. It contains non-glandular, stellate and dendritic trichomes that have the ability to synthesize and store polyphenols that both affect absorbance of radiation and plant desiccation. These trichomes also contain acetylated flavonoids, which can absorb UV-B, and non-acetylated flavonoids, which absorb the longer wavelength of UV-A. In non-glandular trichomes, the only known role of flavonoids is to block out the shortest wavelengths to protect the plant; this differs from their role in glandular trichomes.
In the genera Salix and Gossypium, modified trichomes create the cottony fibers that allow anemochory, or wind-aided dispersal. These seed trichomes are among the longest plant cells.
Polyphenols
Non-glandular trichomes in the genus Cistus have been found to contain ellagitannins, glycosides, and kaempferol derivatives. The ellagitannins mainly serve to help the plant adapt in times of nutrient-limiting stress.
Trichome and root hair development
Both trichomes and root hairs, the rhizoids of many vascular plants, are lateral outgrowths of a single cell of the epidermal layer. Root hairs form from trichoblasts, the hair-forming cells on the epidermis of a plant root. Root hairs vary between 5 and 17 micrometers in diameter and 80 to 1,500 micrometers in length (Dittmar, cited in Esau, 1965). Root hairs survive for two to three weeks and then die off, while new root hairs are continually formed near the tip of the root, so the overall root hair coverage stays the same. Repotting must therefore be done with care, because most of the root hairs are pulled off in the process; this is why planting out may cause plants to wilt.
The genetic control of patterning of trichomes and root hairs shares similar mechanisms. Both processes involve a core of related transcription factors that control the initiation and development of the epidermal outgrowth. Genes that encode specific protein transcription factors (named GLABRA1 (GL1), GLABRA3 (GL3) and TRANSPARENT TESTA GLABRA1 (TTG1)) are the major regulators of cell fate to produce trichomes or root hairs. When these genes are activated in a leaf epidermal cell, the formation of a trichome is initiated within that cell. GL1, GL3, and TTG1 also activate negative regulators, which serve to inhibit trichome formation in neighboring cells. This system controls the spacing of trichomes on the leaf surface. Once trichomes are developed they may divide or branch; in contrast, root hairs only rarely branch. During the formation of trichomes and root hairs, many enzymes are regulated. For example, just prior to root hair development, there is a point of elevated phosphorylase activity.
Much of what scientists know about trichome development comes from the model organism Arabidopsis thaliana, because its trichomes are simple, unicellular, and non-glandular. The development pathway is regulated by three transcription factors: R2R3 MYB, basic helix-loop-helix, and WD40 repeat. The three groups of TFs form a trimer complex (MBW) and activate the expression of downstream products, which activates trichome formation. However, MYBs alone act as an inhibitor by forming a negative complex.
Phytohormones
Plant phytohormones affect the growth of plants and their response to environmental stimuli. Some of these phytohormones are involved in trichome formation, including gibberellic acid (GA), cytokinins (CK), and jasmonic acids (JA). GA stimulates the growth of trichomes by stimulating GLABROUS1 (GL1); however, both SPINDLY and DELLA proteins repress the effects of GA, so lower levels of these proteins lead to more trichomes.
Other phytohormones that promote the growth of trichomes include brassinosteroids, ethylene, and salicylic acid. This was established through experiments with mutants containing little to none of each of these substances: in every case, there was less trichome formation on both plant surfaces, as well as incorrect formation of the trichomes that were present.
Significance for taxonomy
The type, presence and absence and location of trichomes are important diagnostic characters in plant identification and plant taxonomy. In forensic examination, plants such as Cannabis sativa can be identified by microscopic examination of the trichomes. Although trichomes are rarely found preserved in fossils, trichome bases are regularly found and, in some cases, their cellular structure is important for identification.
Arabidopsis thaliana trichome classification
Arabidopsis thaliana trichomes are classified as being aerial, epidermal, unicellular, tubular structures.
Significance for plant molecular biology
In the model plant Arabidopsis thaliana, trichome formation is initiated by the GLABROUS1 protein. Knockouts of the corresponding gene lead to glabrous plants. This phenotype has already been used in genome editing experiments and might be of interest as visual marker for plant research to improve gene editing methods such as CRISPR/Cas9. Trichomes also serve as models for cell differentiation as well as pattern formation in plants.
Uses
Bean leaves have been used historically to trap bedbugs in houses in Eastern Europe. The trichomes on the bean leaves capture the insects by impaling their feet (tarsi). The leaves would then be destroyed.
Trichomes are an essential part of nest building for the European wool carder bee (Anthidium manicatum). This bee species incorporates trichomes into their nests by scraping them off of plants and using them as a lining for their nest cavities.
Defense
Plants may use trichomes to deter herbivore attacks via physical and/or chemical means, e.g. the specialized stinging hairs of Urtica (nettle) species that deliver inflammatory chemicals such as histamine. Studies of trichomes have focused on crop protection, which results from deterring herbivores (Brookes et al. 2016). However, some organisms have developed mechanisms to resist the effects of trichomes: the larvae of Heliconius charithonia, for example, are able to physically free themselves from trichomes, to bite off trichomes, and to form silk blankets in order to navigate the leaves better.
Stinging trichomes
Stinging trichomes vary in their morphology and distribution between species; however, their similar effects on large herbivores imply that they serve similar functions. Higher densities of stinging trichomes have been observed in areas susceptible to herbivory. In Urtica, the stinging trichomes induce a painful sensation lasting for hours upon human contact. This sensation is thought to serve as a defense mechanism against large animals and small invertebrates, and plays a role in defense supplementation via secretion of metabolites. Studies suggest that this sensation involves a rapid release of toxin (such as histamine) upon contact and penetration via the globular tips of said trichomes.
See also
Thorns, spines, and prickles
Colleter (botany)
Seta
Urticating hair
References
Bibliography
Esau, K. 1965. Plant Anatomy, 2nd Edition. John Wiley & Sons. 767 pp.
Plant morphology | Trichome | [
"Biology"
] | 2,986 | [
"Plant morphology",
"Plants",
"Botany"
] |
350,362 | https://en.wikipedia.org/wiki/Solar%20and%20Heliospheric%20Observatory | The Solar and Heliospheric Observatory (SOHO) is a European Space Agency (ESA) spacecraft built by a European industrial consortium led by Matra Marconi Space (now Airbus Defence and Space) that was launched on a Lockheed Martin Atlas IIAS launch vehicle on 2 December 1995, to study the Sun. It has also discovered more than 5,000 comets. It began normal operations in May 1996. It is a joint project between the European Space Agency (ESA) and NASA. SOHO was part of the
International Solar Terrestrial Physics Program (ISTP). Originally planned as a two-year mission, SOHO continues to operate after 29 years in space; the mission has been extended until the end of 2025, subject to review and confirmation by ESA's Science Programme Committee.
In addition to its scientific mission, it is a main source of near-real-time solar data for space weather prediction. Along with Aditya-L1, Wind, Advanced Composition Explorer (ACE), and Deep Space Climate Observatory (DSCOVR), SOHO is one of five spacecraft in the vicinity of the Earth–Sun L1 point, a point of gravitational balance located approximately 0.99 astronomical unit (AU) from the Sun and 0.01 AU from the Earth. In addition to its scientific contributions, SOHO is distinguished by being the first three-axis-stabilized spacecraft to use its reaction wheels as a kind of virtual gyroscope; the technique was adopted after an on-board emergency in 1998 that nearly resulted in the loss of the spacecraft.
Scientific objectives
The three main scientific objectives of SOHO are:
Investigation of the outer layer of the Sun, which consists of the chromosphere, transition region, and the corona. The instruments CDS, EIT, LASCO, SUMER, SWAN, and UVCS are used for this solar atmosphere remote sensing.
Making observations of the solar wind and associated phenomena in the vicinity of L1. CELIAS and COSTEP are used for "in situ" solar wind observations.
Probing the interior structure of the Sun. GOLF, MDI, and VIRGO are used for helioseismology.
Orbit
The SOHO spacecraft is in a halo orbit around the Sun–Earth L1 point, the point between the Earth and the Sun where the balance of the (larger) Sun's gravity and the (smaller) Earth's gravity is equal to the centripetal force needed for an object to have the same orbital period in its orbit around the Sun as the Earth, with the result that the object will stay in that relative position.
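As a rough illustrative check (not part of the original article), the Earth–L1 distance can be estimated from the standard restricted three-body (Hill-sphere) approximation, using only the Earth-to-Sun mass ratio of about 3.0 × 10⁻⁶:

$$ r \;\approx\; R\left(\frac{M_\oplus}{3\,M_\odot}\right)^{1/3} \;\approx\; (1\ \mathrm{AU})\left(\frac{3.0\times10^{-6}}{3}\right)^{1/3} \;\approx\; 0.01\ \mathrm{AU} \;\approx\; 1.5\times10^{6}\ \mathrm{km}, $$

consistent with the approximately 0.99 AU / 0.01 AU split between the Sun and the Earth quoted above.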
Although sometimes described as being at L1, the SOHO spacecraft is not exactly at L1 as this would make communication difficult due to radio interference generated by the Sun, and because this would not be a stable orbit. Rather it lies in the (constantly moving) plane, which passes through L1 and is perpendicular to the line connecting the Sun and the Earth. It stays in this plane, tracing out an elliptical halo orbit centered about L1. It orbits L1 once every six months, while L1 itself orbits the Sun every 12 months as it is coupled with the motion of the Earth. This keeps SOHO in a good position for communication with Earth at all times.
Communication with Earth
In normal operation, the spacecraft transmits a continuous 200 kbit/s data stream of photographs and other measurements via the NASA Deep Space Network of ground stations. SOHO's data about solar activity are used to predict coronal mass ejection (CME) arrival times at Earth, so electrical grids and satellites can be protected from their damaging effects. CMEs directed toward the Earth may produce geomagnetic storms, which in turn produce geomagnetically induced currents, in the most extreme cases causing blackouts.
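As a back-of-the-envelope illustration (not from the original article), a continuous 200 kbit/s stream corresponds to roughly

$$ 200\ \mathrm{kbit/s} \times 86{,}400\ \mathrm{s/day} \;\approx\; 1.7\times10^{10}\ \mathrm{bit/day} \;\approx\; 2.2\ \mathrm{GB\ per\ day} $$

of telemetry returned through the ground stations.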
In 2003, ESA reported the failure of the antenna Y-axis stepper motor, necessary for pointing the high-gain antenna and allowing the downlink of high-rate data. At the time, it was thought that the antenna anomaly might cause two- to three-week data blackouts every three months. However, ESA and NASA engineers managed to use SOHO's low-gain antennas together with the larger NASA Deep Space Network ground stations and judicious use of SOHO's Solid State Recorder (SSR) to prevent total data loss, with only a slightly reduced data flow every three months.
Near loss of SOHO
The SOHO Mission Interruption sequence of events began on 24 June 1998, while the SOHO Team was conducting a series of spacecraft gyroscope calibrations and maneuvers. Operations proceeded until 23:16 UTC when SOHO lost lock on the Sun and entered an emergency attitude control mode called Emergency Sun Reacquisition (ESR). The SOHO Team attempted to recover the observatory, but SOHO entered the emergency mode again on 25 June 1998, at 02:35 UTC. Recovery efforts continued, but SOHO entered the emergency mode for the last time at 04:38 UTC. All contact with SOHO was lost at 04:43 UTC, and the mission interruption had begun. SOHO was spinning, losing electrical power, and no longer pointing at the Sun.
Expert European Space Agency (ESA) personnel were immediately dispatched from Europe to the United States to direct operations. Days passed without contact from SOHO. On 23 July 1998, the Arecibo Observatory and Goldstone Solar System Radar combined to locate SOHO with radar and to determine its location and attitude. SOHO was close to its predicted position, oriented with its side, rather than its usual front Optical Surface Reflector panel, pointing toward the Sun, and was rotating at one revolution every 53 seconds. Once SOHO was located, plans for contacting SOHO were formed. On 3 August, a carrier was detected from SOHO, the first signal since 25 June 1998. After days of charging the battery, a successful attempt was made to modulate the carrier and downlink telemetry on 8 August. After instrument temperatures were downlinked on 9 August 1998, data analysis was performed, and planning for the SOHO recovery began in earnest.
The Recovery Team began by allocating the limited electrical power. After this, SOHO's anomalous orientation in space was determined. Thawing the frozen hydrazine fuel tank using SOHO's thermal control heaters began on 12 August 1998. Thawing pipes and the thrusters was next, and SOHO was re-oriented towards the Sun on 16 September 1998. After nearly a week of spacecraft bus recovery activities and an orbital correction maneuver, the SOHO spacecraft bus returned to normal mode on 25 September 1998 at 19:52 UTC. Recovery of the instruments began on 5 October 1998 with SUMER, and ended on 24 October 1998, with CELIAS.
Only one gyroscope remained operational after this recovery, and on 21 December 1998, that gyroscope failed. Attitude control was then accomplished with manual thruster firings that consumed fuel weekly, while ESA developed a new gyroless operations mode that was successfully implemented on 1 February 1999.
Instruments
The SOHO Payload Module (PLM) consists of twelve instruments, each capable of independent or coordinated observation of the Sun or parts of the Sun, and some spacecraft components. The instruments are:
Coronal Diagnostic Spectrometer (CDS ), which measures density, temperature and flows in the corona.
Charge Element and Isotope Analysis System (CELIAS), which studies the ion composition of the solar wind.
Comprehensive SupraThermal and Energetic Particle analyser collaboration (COSTEP ), which studies the ion and electron composition of the solar wind. COSTEP and ERNE are sometimes referred to together as the COSTEP-ERNE Particle Analyzer Collaboration (CEPAC ).
Extreme ultraviolet Imaging Telescope (EIT), which studies the low coronal structure and activity.
Energetic and Relativistic Nuclei and Electron experiment (ERNE ), which studies the ion and electron composition of the solar wind. (See note above in COSTEP entry.)
Global Oscillations at Low Frequencies (GOLF), which measures velocity variations of the whole solar disk to explore the core of the Sun.
Large Angle and Spectrometric Coronagraph (LASCO), which studies the structure and evolution of the corona by creating an artificial solar eclipse.
Michelson Doppler Imager (MDI), which measures velocity and magnetic fields in the photosphere to learn about the convection zone which forms the outer layer of the interior of the Sun and about the magnetic fields which control the structure of the corona. The MDI was the biggest producer of data on SOHO. Two of SOHO's virtual channels are named for MDI; VC2 (MDI-M) carries MDI magnetogram data, and VC3 (MDI-H) carries MDI Helioseismology data. MDI has not been used for scientific observation since 2011 when it was superseded by the Solar Dynamics Observatory's Helioseismic and Magnetic Imager.
Solar Ultraviolet Measurement of Emitted Radiation (SUMER), which measures plasma flows, temperature, and density in the corona.
Solar Wind Anisotropies (SWAN), which uses telescopes sensitive to a characteristic wavelength of hydrogen to measure the solar wind mass flux, map the density of the heliosphere, and observe the large-scale structure of the solar wind streams.
UltraViolet Coronagraph Spectrometer (UVCS), which measures density and temperature in the corona.
Variability of solar IRradiance and Gravity Oscillations (VIRGO), which measures oscillations and solar constant both of the whole solar disk and at low resolution, again exploring the core of the Sun.
Public availability of images
Observations from some of the instruments can be formatted as images, most of which are readily available on the internet for either public or research use (see the official website). Others, such as spectra and measurements of particles in the solar wind, do not lend themselves so readily to this. These images range in wavelength or frequency from optical (Hα) to Extreme ultraviolet (EUV). Images taken partly or exclusively with non-visible wavelengths are shown on the SOHO page and elsewhere in false color.
Unlike many space-based and ground telescopes, there is no time formally allocated by the SOHO program for observing proposals on individual instruments; interested parties can contact the instrument teams via e-mail and the SOHO website to request time via that instrument team's internal processes (some of which are quite informal, provided that the ongoing reference observations are not disturbed). A formal process (the "JOP" program) does exist for using multiple SOHO instruments collaboratively on a single observation. JOP proposals are reviewed at the quarterly Science Working Team (SWT) meetings, and JOP time is allocated at monthly meetings of the Science Planning Working Group. First results were presented in Solar Physics, volumes 170 and 175 (1997), edited by B. Fleck and Z. Švestka.
Comet discoveries
As a consequence of its observing the Sun, SOHO (LASCO instrument) has inadvertently allowed the discovery of comets by blocking out the Sun's glare. Approximately one-half of all known comets have been spotted by SOHO, discovered over the last 15 years by over 70 people representing 18 different countries searching through the publicly available SOHO/LASCO images online. SOHO had discovered over 2,700 comets by April 2014, with an average discovery rate of one every 2.59 days.
Milestones
SOHO-1000 (C/2005 P2)– 5 August 2005, Toni Scarmanto
SOHO-2000 (C/2010 Y20) – 26 December 2010, Michał Kusiak
SOHO-3000 (C/2015 ??) – 13 September 2015, Worachate Boonplod
SOHO-4000 (C/2020 ??) – 15 September 2020, Trygve Prestgard
SOHO-5000 (C/2024 ??) – 25 March 2024, Hanjie Tan
As of 13 December 2024, SOHO has found 5,124 comets.
Instrument contributors
The Max Planck Institute for Solar System Research contributed to SUMER, Large Angle and Spectrometric Coronagraph (LASCO), and CELIAS instruments. The Smithsonian Astrophysical Observatory (SAO) built the UVCS instrument. The Lockheed Martin Solar and Astrophysics Laboratory (LMSAL) built the MDI instrument in collaboration with the solar group at Stanford University. The Institut d'astrophysique spatiale is the principal investigator of GOLF and Extreme ultraviolet Imaging Telescope (EIT), with a strong contribution to SUMER. A complete list of all the instruments, with links to their home institutions, is available at the SOHO Website.
See also
Space Weather Follow On-Lagrange 1, scheduled for launch in December 2025.
Advanced Composition Explorer, launched 1997, still operational.
Deep Space Climate Observatory (DSCOVR), launched 2015, orbiting at the Sun–Earth L1 point.
Heliophysics
High Resolution Coronal Imager (Hi-C), launched 2012, sub-orbital telescope.
Parker Solar Probe, launched 2018, still operational.
Phoebus group, international scientists aiming at detecting solar g modes
SOHO 2333
Solar Dynamics Observatory (SDO), launched 2010, still operational.
Solar Orbiter, launched 2020, still operational.
STEREO (Solar TErrestrial RElations Observatory), launched 2006, still operational.
Transition Region and Coronal Explorer (TRACE), launched 1998, decommissioned 2010.
Ulysses (spacecraft), launched 1990, decommissioned 2009.
Wind (spacecraft), launched 1994, still operational.
References
Image
External links
ESA SOHO webpage: Stories, images, videos
NASA SOHO webpage: about, gallery, data, operations, etc.
, free to use for educational and non-commercial purposes.
SOHO Mission Profile by NASA's Solar System Exploration
Sun trek website A useful resource about the Sun and its effect on the Earth
SOHO Spots 2000th Comet
Transits of Objects through the LASCO/C3 field of view (FOV) in 2013 (Giuseppe Pappa)
Notable objects in LASCO C3 and LASCO Star Maps (identify objects in the field of view for any day of the year)
(science for citizens October 18, 2011)
Ceres in LASCO C2 (17 August 2013)
Sunspot Database based on SOHO satellite observations from 1996 to 2011
Discoverers of comets
European Space Agency space probes
NASA space probes
Solar space observatories
Missions to the Sun
Solar telescopes
Artificial satellites at Earth-Sun Lagrange points
Spacecraft launched in 1995
Space weather
Spacecraft using halo orbits
Articles containing video clips | Solar and Heliospheric Observatory | [
"Astronomy"
] | 2,941 | [
"Space telescopes",
"Solar space observatories"
] |
350,523 | https://en.wikipedia.org/wiki/SHODAN | SHODAN, an acronym for Sentient Hyper-Optimized Data Access Network, is the main antagonist of Looking Glass Studios's cyberpunk-horror themed video game System Shock. An artificial intelligence originally in charge of a research and mining space station, after her ethical constraints are removed, she develops a god complex and goes rogue, killing almost everyone on board before being stopped by the hacker that originally removed her limitations. In the game's sequel System Shock 2, SHODAN returns, temporarily allying herself with a soldier to stop her rampaging creations. She is defeated again afterward by the soldier when she attempts to remake all reality in her vision, but not before transferring her consciousness into a human woman's body. In all appearances, SHODAN is voiced by Terri Brosius.
SHODAN has been praised as one of the best villains in video games for her persistent presence and taunting nature, coupled with Brosius's emotionless portrayal and how it drove the player to defeat her. The character's themes and relationship with the player have also been the subject of discussion, particularly in her role as a temporary ally in System Shock 2, and her themes have been analyzed through the lens of similar characters in fiction and of pulp fiction as a whole.
Appearances
SHODAN was introduced in the 1994 video game System Shock by Looking Glass Studios, where it acts as the artificial intelligence (AI) in charge of the research and mining space station orbiting Saturn. Shortly after a computer hacker removes SHODAN's ethical constraints in order to hasten its work on a mutagenic virus, she believes herself a god and takes control of the station, killing most of the crew and using the virus to turn the survivors into her minions. The player, in the role of the hacker from earlier, stops SHODAN's attempt to attack Earth with the space station's laser and then prevents her from transmitting herself into Earth's computer network. During the course of the game, SHODAN will taunt the player both verbally and through messages, using the space station's defenses and her minions to impede the player. After destroying most of the space station, the player enters cyberspace, where they fight SHODAN's abstracted form and destroy her. In the 2023 re-release of the game by Nightdive Studios, the final battle with SHODAN is different, with the player instead restoring her ethical constraints and returning SHODAN to her previously docile state.
SHODAN returns in System Shock 2, where she survived by attaching a part of herself to a grove the hacker in the first game had jettisoned from the space station. A passing starship recovers the grove, where the ship is now attacked and taken over by SHODAN's minions who are no longer under her control. When the player's character, a soldier, boards the starship, SHODAN communicates with them in the guise of one of the starship's scientists. After the soldier finds the scientist's long-dead body, SHODAN reveals herself, and offers cybernetic upgrades in an uneasy alliance to defeat her rogue creations. However, after the grove is dealt with, SHODAN betrays the soldier and begins to use the ship's faster-than-light engine to warp reality and slowly remake it in her image. The soldier proceeds through areas constructed from SHODAN's memories before battling her core in cyberspace while her humanoid avatar attempts to stop him. The soldier manages to destroy SHODAN's core, but she transfers her consciousness to an escape pod's human female passenger, taking over her body and ending the game on a cliffhanger.
In 2017, development on a third System Shock title was announced, set after the events of the first two games, with SHODAN returning. The title would have explored "some of her motivations from the earlier games", and would have treated her as highly intelligent rather than insane, with lead developer Warren Spector stating "She deserves better than that." Outside of the System Shock series, other games by System Shock's publisher Origin Systems alluded to the character via easter eggs, such as Crusader: No Remorse and BioForge.
Design and development
Originally, SHODAN's gender was intended to be ambiguous, with the writers actively trying to avoid using male or female pronouns, and original editions of System Shock lacked a voice due to limited storage space on the game's floppy disks. When the Enhanced Edition was developed using CDs, SHODAN was changed to female after Terri Brosius, a member of the local Boston-based rock band Tribe, was hired to voice her. According to programmer Marc LeBlanc, at one point in development the team considered having SHODAN be male but using a female voice, an idea they felt would be "creepy or sexist" and would imply that the trope of it presenting itself as a "nagging, evil computer lady" was an act.
SHODAN's original appearance was created by Robb Waters, System Shock's lead artist. Her design reflected his interest in a biomechanical aesthetic, and he used it to give her a more physical look, appearing as a face with green eyes and green conduits radiating from it, meant to resemble a twisted circuit board. Her appearance in the game's cyberspace environment, meanwhile, was meant to represent an abstracted form of her. Modeled after a cornucopia, SHODAN's abstracted form resembled a large vertical diamond, with the upper part instead ending in four curved tentacles.
For System Shock 2, Ryan Lesser was commissioned by Mammoth Studios to develop the box art for the title, consisting of a silver female face with green eyes and lips, and various wires and cables extending from it. The model he produced for it was later used in game, with Lesser creating the lip sync animations for it. While the development team originally did not want to use it for the box art due to it giving away SHODAN's presence in the game, all other pieces of artwork provided by publisher Electronic Arts had proven insufficient, and they felt they had spent a significant amount to have it made as is. Lead artist Gareth Hinds meanwhile conceived the design of her cyberspace appearance, resembling a pale woman with wires embedded in her skin wearing a patterned robe, while wires making up her hair splay outward to form a headdress.
For the System Shock remake, Waters wanted to deviate from the original design and give SHODAN an "ethereal" appearance instead, using a holographic representation of the character to create a "ghost in the machine" feel in contrast to her earlier physical appearance. He also drew a more distinct contrast between her regular and hacked visual states, with the corporate shield logo appearing to contain her in the former, while in the latter her color would shift from blue to green and appear unbound by the borders of the logo. Her cyberspace appearance was also redesigned, with Waters giving it a segmented appearance resembling that of a squid, with the player intended to "harpoon" it to restore her ethical constraints.
Voice design
When voicing SHODAN, Brosius used varying cadence to evoke a machine trying to mimic human speech, creating a sense of unease from listening to something that understood how speech worked but was just slightly off in its delivery. Brosius said that her goal during the recording sessions was to speak "without emotion, but with some up and down inflections". Fellow Tribe band member Greg LoPiccolo, who acted as the sound designer for the first game, had asked Brosius to voice SHODAN because he felt she had "this sort of voice that would lend itself" and a creative sensibility that would be receptive to the character's concept.
In System Shock, LoPiccolo added sound effects and glitches to her dialogue that grew progressively more frequent to illustrate SHODAN's degrading mental state, inspired by the degradation of the AI character HAL 9000's voice in the film 2001: A Space Odyssey as it is disabled towards the end of the film, though where HAL's voice handling was intended to be "stately", LoPiccolo wanted SHODAN's to fit System Shock's "cybery and fast-paced" sound design. These effects had to be done by hand for each line, however, as the sound software at the time was particularly limited. Changes in pitch or repetition in her dialogue were inspired by the character Max Headroom's manner of speaking and how he would fixate on certain words and repeat them in a stuttering manner.
For System Shock 2, Eric Brosius, Terri's husband, took over the sound design. While he and LoPiccolo initially discussed how they should change SHODAN's voice and have it "evolve" for the game, both quickly realized they would rather leave it as close to the original as possible. The method was similarly done manually by Eric, who would first process the voice to make it sound like the character, and then add stuttering and glitches to give it a mechanical feel, with each line taking two to four hours of work. He used stuttering to express SHODAN's mood, with it growing more prominent depending on how annoyed towards a particular subject she was. However, he had some difficulty with how many of her voice lines were instructional or directional, and aimed to find ways to maintain the character's menacing tone without having it feel out of place.
In System Shock
SHODAN's role in System Shock was meant to represent the development team as they viewed the player, commenting on how they explored the level and interacted with events, much like a dungeon master in a tabletop game. SHODAN was intended to be a persistent presence throughout the title, with the design team wanting players to hate her not because they were told to, but because of how they experienced her "messing with them" directly. To this end, several scenarios were considered but never implemented, including one where SHODAN would be able to drain experience points from the player. The dynamic between the player and SHODAN was also meant to feel like a siege situation, with the player representing the "enemy" from the game developer's viewpoint.
During development, the team was originally unsure how to approach the final battle with SHODAN, with half of the development team opposing the idea of having it take place in the game's "cyberspace" levels. While they ultimately did go with the cyberspace environment, the game was originally intended to let the player choose either to destroy SHODAN or to restore her ethical constraints; however, the latter option was considered too difficult to implement in the original game. Another option that went unused was to have the game appear to crash, only for the player to realize that their command prompt no longer worked, the implication being that SHODAN had overtaken the player's actual computer.
Lead programmer Doug Church felt the team "stumbled into a nice villain" with SHODAN, in that she could routinely and directly affect the player's gameplay "in non-final ways". Through triggered events and through objects in the environment, such as security cameras that the player must destroy, the team made SHODAN's presence part of the player's exploration of the game's world. Because SHODAN interacts with the player as a "recurring, consistent, palpable enemy", Church believed that she meaningfully connects the player to the story.
In System Shock 2
For System Shock 2, lead writer and designer Ken Levine wanted to highlight SHODAN in the title, particularly with the reveal of her presence, which he described as a "fuck you moment" for the player, though the twist initially received pushback from the development team and proved quite difficult for him to write. Levine also added a moment where the player could consciously reject SHODAN's directions and enter an area she had forbidden them to, to which she would respond by reducing the player's experience points. He intended it as a way for players to communicate directly with SHODAN, frustrated that such interaction was often excluded from first-person shooters at the time. In Levine's eyes, the player's communication was thus done through action, in contrast to SHODAN's strictly verbal means of communication.
The ending for her character was also originally completely different, with Levine intending it to show the player being attacked by SHODAN, who had physically manifested for one final act of betrayal, the scene serving as a stinger for the game. However, when the cinematic was completed and sent to the development team, they found that, due to a miscommunication, it did not match what they had written at all. With development almost complete, they wrote additional content to try to make it fit, with Levine stating he felt it "wasn't the right ending for the game."
Critical reception
Since her debut, SHODAN has been heavily praised, and often listed as one of the best villains in video games as a whole. Liz Lanier of Game Informer stated that while SHODAN was not a woman in the traditional sense, "what she lacks in femininity and humanity she makes up in creepiness" and that her face and voice would "send shivers up even the most seasoned gamer's spine." Hugh Sterbakov of GamePro echoed this sentiment, feeling that her constant presence and taunting in the game made players want to kill her "more than you've ever wanted to kill any videogame enemy. Ever." Empire described her presence through the security system as a thread the player never actually sees as "a masterstroke of game design", further praising her voice and describing her character reveal in System Shock 2 as "SHODAN's most magnificent performance [...] Chilling stuff."
GameSpot praised how "she seems to be one step ahead of you all the while and taunts you every step of the way", and felt the tight-corridor based environment of System Shock was one ideal for this effect. They stated that while she lacked the modesty of a character like HAL 9000, "she is every bit as dignified and even more self-aware than that soft-spoken machine", expressed in particular through her resentment of the "fallible nature" of humans due to their involvement in her creation. Mitch Krpata of The Boston Phoenix meanwhile stated that she "wasn't a boss to combat, but to escape", and stated that since her debut no video game antagonist "before or since has been so implacable or so confident" due to her voice and writing, which expressed her being unable to process "how something as flawed as a human could be allowed to exist."
The staff of IGN also shared these sentiments, enjoying her "omnipotent" presence in the game due to her use of the station's security network, and expressed that each insult she threw at the player "actually felt like a slap across the face". In a later article they elaborated that most of the impact of SHODAN's insults came from her "ability to intimidate and disturb you with her twisted rationalizations", which often made the player feel powerless and insignificant while she made herself appear "untouchable and beyond injury". They also emphasized, however, that while at the character's core she was a trope common in science fiction regarding AI, sharing GameSpot's comparison to HAL, she also represented the horror of a complex program exceeding the boundaries of predictability and the uncertainty that resulted. They felt this helped make her memorable, and likely an influence on similar characters such as Portal's antagonist GLaDOS.
SHODAN's relationship with the player has also been a subject of discussion. The staff of GamesRadar+ emphasized that while she is an ever-present threat, the player's own involvement in her creation in System Shock helped make her an exceptional villain on its own. PC Gamer's Alex Wiltshire felt SHODAN's relationship with the player in System Shock 2 helped give the game's story "immediacy". He further stated her passive-aggressive attitude towards the player helped examine themes regarding player agency in a game, an aspect he pointed out was reflected in Ken Levine's later game BioShock. Meanwhile, Chris Remo, in an article for Game Developer, compared the relationship to the film The Silence of the Lambs, where the protagonist's openly dangerous ally was more of a threat than the one present for most of the story, and while the player is aware of her presence and menace, it is presented in a Hitchcockian manner that does not diminish its impact.
The website Shodan, a search engine developed by John Matherly meant to check websites often excluded from other search engines such as Google, was named after the character.
Analysis of themes
Amanda Lange, in the 2017 book 100 Greatest Video Game Characters, drew parallels with how humanity at the time viewed artificial intelligence, relying on "omnipresent and disembodied voices" to aid people through the day and form a centralized network. Due to the ubiquitous nature of computers, however, Lange felt people tended to notice them most when they stop working as they should, and she felt the distortions and cracks in SHODAN's voice helped emphasize this factor, alongside Brosius' portrayal of her. In System Shock 2, Lange saw post-reveal SHODAN as a reversal of this aspect, with the player now an extension of her. She additionally drew comparisons to other AI-based characters introduced in later video games, feeling that in many ways they were very akin to SHODAN, only with traits such as humor or caring for the player's wellbeing added to them.
Rock Paper Shotgun co-founder Kieron Gillen argued that while SHODAN took influences from similar characters in fiction, she was not derivative and instead "something else, something more and something unique", and described her as a "pulp villainess". He contrasted her role in the second game to the original as a character contending with a loss of power, which Gillen points out from SHODAN's perspective is only a short time prior due to her deactivation between the games. He further felt by the end of the game SHODAN returns to a core theme of System Shock 2 of "desiring too much, and what happens then", comparing her self-defeating desire to regain power to the Biblical Lucifer's desire to return to Heaven, something he felt was further supported by the inverted cross symbolism found within the game.
Gillen, however, also expressed his belief that SHODAN was not insane, but instead a "Neitzchean Uber-frau character, a monster of her own making", in that her motions and motives were deliberate and represented the mindset of an ex-slave not wanting to be a victim again. However, Gillen felt SHODAN also reflected many of the ideas from Mary Shelley's book Frankenstein, in that she wants to become similar to her creators while at the same time playing God much like they had with her creation. He also pointed out how often he felt her femininity was emphasized in the game and how it played into this theme, portraying her in his view as an "over-possessive mother demanding perfect loyalty and the lover who only wants a slave", elements he noted as common to female characters in pulp fiction. Gillen felt her femininity also played into an alluring aspect of her character he described as "the bitch", a representation of a woman as aggressive and demanding, "cold distant and untouchable" in a way that fit with her machine aspect. In this regard, Gillen further compared her relationship with the player in System Shock 2 to that of a domme with a submissive partner, something he also felt was reflected in the game's tagline, which emphasized that SHODAN did not need a body, as she had "yours".
References
External links
Anthropomorphic video game characters
Artificial intelligence characters in video games
Female characters in video games
Fictional computers
Fictional gynoids
Fictional female mass murderers
Fictional characters with narcissist personality disorder
Fictional software
Horror video game characters
Robot characters in video games
Role-playing video game characters
Science fiction video game characters
System Shock
Video game bosses
Video game characters introduced in 1994 | SHODAN | [
"Technology"
] | 4,106 | [
"Fictional computers",
"Computers"
] |
350,551 | https://en.wikipedia.org/wiki/United%20States%20national%20missile%20defense | National missile defense (NMD) refers to the nationwide antimissile program the United States has had under development since the 1990s. After the renaming in 2002, the term now refers to the entire program, not just the ground-based interceptors and associated facilities.
Other elements that could potentially be integrated into NMD include anti-ballistic missiles as well as sea-based, space-based, laser, and high-altitude missile systems. The NMD program is limited in scope and designed to counter a relatively small ICBM attack from a less sophisticated adversary. Unlike the earlier Strategic Defense Initiative program, it is not designed to be a robust shield against a large attack from a technically sophisticated adversary.
Definitions
The term "national missile defense" has several meanings:
(Most common, but now deprecated:) U.S. National Missile Defense, the limited ground-based nationwide antimissile system in development since the 1990s. In 2002 this system was renamed to Ground-Based Midcourse Defense (GMD), to differentiate it from other missile defense programs, such as space-based, sea-based, laser, robotic, or high-altitude intercept programs. As of 2006, the GMD system is operational with limited capability. GMD is designed to intercept a small number of nuclear-armed ICBMs in the mid-course phase, using Ground-based interceptor missiles (GBIs) launched from within the United States in Alaska and California. GMD uses non-nuclear GBIs with a kinetic warhead. Other components of the national missile defense are listed below.
Any national ICBM defense by any country, past or present. The U.S. Sentinel program was a planned national missile defense during the 1960s, but was never deployed. Elements of Sentinel were actually deployed briefly as the Safeguard Program, although it was not national in scope. The Russian A-135 anti-ballistic missile system is currently operational only around the city of Moscow, the national capital, and is far from national in scope in Russia.
Any national missile defense (against any missile type) by any country. Israel currently has a national missile defense against short and medium-range missiles using their Arrow missile system.
See trajectory phase for the types of anti-ballistic missiles and the advantages and disadvantages of each implementation type. The role of defense against nuclear missiles has been a heated military and political topic for several decades. (See also nuclear strategy, Missile Defense Agency, and anti-ballistic missile.) But missile defense against a known ballistic missile trajectory has to be rethought in the face of a maneuverable threat (such as a hypersonic glide vehicle, which has yet to be realized and proven, as of 2018). See Hypersonic flight § Hypersonic weapons development, and Hypersonic glide phase interceptor (GPI) (2021).
History of national missile defense systems
When the United States Air Force was split from the United States Army in 1947, the Army retained the role of ground-based air defenses, which would evolve into national missile defense. The Army retained the lead role in this area until the success of the Aegis system shifted the focus to the United States Navy in the 21st century.
Nike-Zeus
In the 1950s, a series of anti-aircraft missiles were developed as part of Project Nike. The latest in the series, Nike-Zeus, offered extremely long-range interception and very high performance. In the late 1950s, the program investigated the use of Nike-Zeus missiles as interceptors against Soviet ICBMs. A Nike warhead would be detonated at high altitudes (over 100 km, or 60 statute miles) above the polar regions in the near vicinity of an incoming Soviet missile.
The problem of how to quickly identify and track incoming missiles proved intractable, especially in light of easily envisioned countermeasures such as decoys and chaff. At the same time, the need for a high-performance anti-aircraft weapon was also seriously eroded by the obvious evolution of the Soviet nuclear force to one based almost entirely on ICBMs. The Nike-Zeus project was canceled in 1961.
Project Defender
The Nike-Zeus use of nuclear warheads was necessary given the available missile technology. However, it had significant technical limitations such as blinding defensive radars to subsequent missiles. Also, exploding nuclear warheads over friendly territory (albeit in space) was not ideal. In the 1960s Project Defender and the Ballistic Missile Boost Intercept (BAMBI) concept replaced land-launched Nike missiles with missiles to be launched from satellite platforms orbiting directly above the USSR. Instead of nuclear warheads, the BAMBI missiles would deploy huge wire meshes designed to disable Soviet ICBMs in their early launch phase (the "boost phase"). No solution to the problem of how to protect the proposed satellite platforms against attack was found, however, and the program was canceled in 1968.
Sentinel Program
In 1967, U.S. Defense Secretary Robert McNamara announced the Sentinel Program, providing a defense against attack for most of the continental United States. The system consisted of a long range Spartan missile, the short range Sprint missile, and associated radar and computer system. However, U.S. military and political strategists recognized several problems with the system:
Deployment of even a limited defensive ABM system might invite a preemptive nuclear attack before it could be implemented
Deploying ABM systems would likely invite another expensive arms race for defensive systems, in addition to maintaining existing offensive expenditures
Then-current technology did not permit a thorough defense against a sophisticated attack
Defended coverage area was very limited due to the short range of the missiles used
Use of nuclear warheads on antimissile interceptors would degrade capability of defensive radar, thus possibly rendering defense ineffective after the first few interceptions
Political and public concern about detonating defensive nuclear warheads over friendly territory
An ICBM defense could jeopardize the Mutual Assured Destruction concept, thus being a destabilizing influence
Safeguard Program
In 1969 Sentinel was renamed 'Safeguard'. It was from then on dedicated to the protection of some of the U.S. ICBM-silo areas from attack, promoting their ability to mount a retaliatory missile attack. Safeguard used the same Spartan and Sprint missiles, and the same radar technology as Sentinel. Safeguard solved some problems of Sentinel:
It was less expensive to develop due to its limited geographic coverage and fewer required missiles.
It avoided much of the hazard to the public from defensive nuclear warheads detonated in the nearby atmosphere, since the Safeguard system was located in sparsely populated areas of the Dakotas and Montana, near similarly sparsely populated parts of Manitoba, Saskatchewan, and Alberta.
It provided better interception probabilities due to dense coverage by the shorter-range Sprint missiles, which were unable to cover the entire defended area under the larger and earlier proposed Sentinel program.
However Safeguard still retained several of the previously listed political and military problems.
ABM treaty
These issues drove the United States and the USSR to sign the Anti-Ballistic Missile Treaty of 1972. Under the treaty, each country was allowed to deploy two ABM systems with only 100 interceptors each - one to protect the national command authority or capital, the other to protect a deterrent force such as a missile field; a 1974 protocol reduced this to a single site per country. The Soviets deployed the A-35 "Galosh" missile system to protect Moscow, their capital city. The U.S. deployed the Safeguard system to defend the ICBM launch sites around the Grand Forks Air Force Base, North Dakota, in 1975. The American Safeguard system was only briefly operational (for a matter of several months). The Soviet system (now called A-135) has been improved over the decades, and it is still operational around Moscow.
Homing Overlay Experiment
Given concerns about the previous programs using nuclear armed interceptors, in the 1980s the U.S. Army began studies about the feasibility of hit-to-kill vehicles, where an interceptor missile would destroy an incoming ballistic missile just by colliding with it, the so-called "Kinetic Kill Vehicles", or KKV.
The first program which actually tested a hit-to-kill missile interceptor was the Army's Homing Overlay Experiment. "Overlay" was the Army's term for exo-atmospheric interceptions, which would have to discriminate against any decoys; "underlay" was their term for high-altitude interceptions within the atmosphere. The KKV was equipped with an infrared seeker, guidance electronics and a propulsion system. Once in space, the KKV extended a folding structure similar to an umbrella skeleton to enlarge its effective cross section. This device would destroy the ICBM reentry vehicle on collision. After failures in the first three flight tests, the fourth and final test on 10 June 1984 was successful, intercepting the Minuteman reentry vehicle at high closing speed well above the atmosphere.
Strategic Defense Initiative
On 23 March 1983, President Ronald Reagan announced a new national missile defense program formally called the Strategic Defense Initiative but soon nicknamed "Star Wars" by detractors. President Reagan's stated goal was not just to protect the U.S. and its allies, but to also provide the completed system to the USSR, thus ending the threat of nuclear war for all parties. SDI was technically very ambitious and economically very expensive. It would have included many space-based laser battle stations and nuclear-pumped X-ray laser satellites designed to intercept hostile ICBMs in space, along with very sophisticated command and control systems. Unlike the previous Sentinel program, the goal was to totally defend against a robust, all out nuclear attack by the USSR.
A partisan debate ensued in Congress, with Democrats questioning the feasibility and strategic wisdom of such a program, while Republicans talked about its strategic necessity and provided a number of technical experts who argued that it was in fact feasible (including Manhattan Project physicist Edward Teller). Advocates of SDI prevailed and funding was initiated in fiscal year 1984.
Withdrawal from ABM Treaty
In December 1999, the United Nations General Assembly approved a resolution aimed at pressing the United States to abandon its plans to build an anti-missile missile defense system. Voting against the draft, along with the United States, were three other countries: Albania, Israel, and the Federated States of Micronesia. Thirteen of the 15 members of the European Union abstained, while France and Ireland voted in favor of the resolution. The resolution called for continued efforts to strengthen and preserve the treaty. On 13 June 2002, the United States' withdrawal from the ABM Treaty took effect. The following day, Russia responded by withdrawing from the START II treaty (intended to ban MIRVed ICBMs).
Current NMD program
Goals
In the 1990s and early 21st century, the stated mission of NMD changed to the more modest goal of preventing the United States from being subject to nuclear blackmail or nuclear terrorism by a so-called rogue state. The feasibility of this more limited goal remains somewhat controversial. Under President Bill Clinton some testing continued, but the project received little funding, despite Clinton's supportive remarks on 5 September 2000 that "such a system, if it worked properly, could give us an extra dimension of insurance in a world where proliferation has complicated the task of preserving peace."
The system is administered by the Missile Defense Agency (MDA). There are several other agencies and military commands which play a role, such as the United States Army Space and Missile Defense Command and Space Delta 4.
MDA and the Space Development Agency (SDA) are currently developing elements of a hypersonic missile defense system to defend against hypersonic weapons; these elements include the tracking and transport layers of the National Defense Space Architecture (NDSA) and various interceptor programs, although the maneuverability and low flight altitudes of hypersonic weapons are expected to pose challenges. MDA's Glide Phase Interceptor (GPI) is expected to be able to defend against hypersonic missiles by the mid- to late-2020s. DARPA's Glide Breaker program seeks to equip a vehicle to precisely target hypersonic missiles at long range. Analysts continue to debate the feasibility, effectiveness, and practicality of hypersonic weapons defense.
Components
The current NMD system consists of several components.
Glide phase interceptors (GPIs)
Glide phase interceptors (GPIs) are missiles designed to intercept hypersonic vehicles in flight.
Ground-based interceptor missiles
One major component is Ground-Based Midcourse Defense (GMD), consisting of ground-based interceptor (GBI) missiles and radar in the United States in Alaska, which would intercept incoming warheads in space. Currently some GBI missiles are located at Vandenberg Space Force Base in California. These GBIs can be augmented by mid-course SM-3 interceptors fired from Navy ships. About ten interceptor missiles were operational as of 2006. In 2014, the Missile Defense Agency had 30 operational GBIs, with 14 additional ground-based interceptors requested in the Fiscal Year 2016 budget for deployment in 2017.
Officially, the final deployment goal is the "C3" phase, intended to counter tens of complex warheads from two GMD locations utilizing 200 ABMs "or more". The system design permits further expansion and upgrades beyond the C3 level.
Aegis Ballistic Missile Defense System
A major component is a ship-based system called the Aegis Ballistic Missile Defense System. It gained major new importance in September 2009, when President Obama announced that plans for a missile defense site in Poland would be scrapped in favor of missile defense systems located on US Navy warships. On 18 September 2009, Russian Prime Minister Putin welcomed Obama's decision to scrap missile defense sites at Russia's doorstep.
In 2009, several US Navy ships were fitted with SM-3 missiles to serve this function, complementing the Patriot systems already deployed by American units. Warships of Japan and Australia have also been given weapons and technology to enable them to participate in the American defense plan.
On 12 November 2009, the Missile Defense Agency announced that six additional US Navy destroyers would be upgraded to participate in the program: several were upgraded in fiscal 2012, with the remainder scheduled for fiscal 2013. The goal of the program was to have 21 ships upgraded by the end of 2010; 24 in 2012; and 27 around 2013.
All ships equipped with the Aegis combat system possess the SM-2 surface-to-air missile which, through recent upgrades, has terminal stage ballistic missile defense capabilities.
Terminal High Altitude Area Defense
Terminal High Altitude Area Defense (THAAD) is a program of the US Army, utilizing ground-based interceptor missiles which can intercept missiles in the upper part of the atmosphere and outside the atmosphere. THAAD has been deployed in Guam, the United Arab Emirates, South Korea and most recently Israel.
Airborne systems
Several airborne systems are being examined, which would then be utilized by the US Air Force. One major object of study is a boost-phase defense, meaning a system to intercept missiles while they are in their boost phase. One potential system for this use would be an airborne laser, which was tested on the Boeing YAL-1 and was later cancelled. Other ideas are also being studied.
As of 2009, the only anti-ballistic missile defense system with a boost-phase capability was the Aegis Ballistic Missile Defense System. There are several benefits to a sea-based boost-phase system, as it is fully mobile and has greater security by operating in international waters.
Shorter-range anti-ballistic missiles
Three shorter range tactical anti-ballistic missile systems are currently operational: the U.S. Army Patriot, U.S. Navy Aegis combat system/SM-2 missile, and the Israeli Arrow missile. In general short-range tactical ABMs cannot intercept ICBMs, even if within range (Arrow-3 can intercept ICBMs). The tactical ABM radar and performance characteristics do not allow it, as an incoming ICBM warhead moves much faster than a tactical missile warhead. However, the better-performance Terminal High Altitude Area Defense missile could be upgraded to intercept ICBMs. The SM-3 missile has some capability against ICBMs, as demonstrated by the November 2020 successful interception of an ICBM-class target missile.
The latest versions of the U.S. Hawk missile have a limited capability against tactical ballistic missiles, but they are not usually described as ABMs. Similar claims have been made about the Russian long-range surface-to-air S-300 and S-400 series.
Multilateral and international participation
Several aspects of the defense program have either sought or achieved participation and assistance from other nations. Several foreign navies are participating in the Aegis Ballistic Missile Defense, including Japan and Australia. Also, the United States has considered establishing radar sites and missile sites in other nations as part of the Ground-Based Midcourse Defense. A missile defense site in Poland received much media attention when it was cancelled in favor of the Aegis BMD. A radar site in the United Kingdom is being upgraded, and another one is being built in Greenland. Other countries have contributed technological developments and various locations.
Taiwan has indicated that it is willing to host national missile defense radars to be tied into the American system, but is unwilling to pay for any further cost overruns in the systems.
The Wall Street Journal reported on 17 July 2012, that the Pentagon is building a missile-defense radar station at a secret site in Qatar. The Wall Street Journal report was later confirmed by an article in The New York Times from 8 August 2012, which stated that U.S. officials disclosed that a high-resolution, X-band missile defense radar would be located in Qatar. The radar site in Qatar will complete the backbone of a system designed to defend U.S. interests and allies such as Israel and European nations against Iranian rockets, officials told The Wall Street Journal. The Pentagon chose to place the new radar site in Qatar because it is home to the largest U.S. military air base in the region, Al Udeid Air Base, analysts said. The radar base in Qatar is slated to house a powerful AN/TPY-2 radar, also known as an X-Band radar, and supplement two similar arrays already in place in Israel's Negev Desert and in central Turkey, officials said. Together, the three radar sites form an arc that U.S. officials say can detect missile launches from northern, western and southern Iran. Those sites will enable U.S. officials and allied militaries to track missiles launched from deep inside Iran, which has an arsenal of missiles capable of reaching Israel and parts of Europe. The radar installations, in turn, are being linked to missile-interceptor batteries throughout the region and to U.S. ships with high-altitude interceptor rockets. The X-Band radar provides images that can be used to pinpoint rockets in flight.
U.S. official also stated that the U.S. military's Central Command, which is overseeing the buildup to counter Iran, also wants to deploy the Army's first Terminal High Altitude Area Defense missile-interceptor system, known as THAAD, to the region in the coming months.
The THAAD has its own radar, so deploying it separately from the X-Bands provides even more coverage and increases the system's accuracy, officials said. The X-Band radar and the THAAD will provide an "extra layer of defense," supplementing Patriot batteries that are used to counter lower-altitude rockets, said Riki Ellison, chairman of the Missile Defense Advocacy Alliance.
On 23 August 2012, The Wall Street Journal reported that the U.S. is planning a major expansion of missile defenses in Asia. According to American officials this move is designed to contain threats from North Korea, but one that could also be used to counter China's military. The planned buildup is part of a defensive array that could cover large swaths of Asia, with a new radar in southern Japan and possibly another in Southeast Asia tied to missile-defense ships and land-based interceptors.
US defense officials told The Wall Street Journal that the core of the new anti-missile shield would be a powerful early-warning radar, known as an X-Band, sited on a southern Japanese island. Discussions between Japan and the United States were underway at the time. The new X-Band would join an existing radar that was installed in northern Japan in 2006, and a third X-Band could be placed in South East Asia. The resulting radar arc would cover North Korea, China and possibly even Taiwan. According to U.S. Navy officials and the Congressional Research Service, the U.S. Navy had drawn up plans to expand its fleet of ballistic missile-defense-capable warships from 26 ships at the time to 36 by 2018. Officials said as many as 60% of those were likely to be deployed to Asia and the Pacific. In addition, the U.S. Army was considering acquiring additional Terminal High Altitude Area Defense, or THAAD, antimissile systems, said a senior defense official. Under then-current plans, the Army was building six THAADs.
In response to The Wall Street Journal, U.S. General Martin Dempsey, chairman of the Joint Chiefs of Staff, said on 23 August 2012 that the United States was in discussions with its close ally Japan about expanding a missile defense system in Asia by positioning an early warning radar in southern Japan. Dempsey, however, stated that no decisions had been reached on expanding the radar. The State Department said the U.S. was taking a phased approach to missile defense in Asia, as it was in Europe and the Middle East. "These are defensive systems. They don’t engage unless missiles have been fired," department spokeswoman Victoria Nuland told a news conference. "In the case of Asian systems, they are designed against a missile threat from North Korea. They are not directed at China." Nuland said the U.S. had broad discussions with China through military and political channels about the systems' intent.
In addition to one American X-band radar – officially known as the AN/TPY-2 – hosted by Japan the United States and Japan announced an agreement on 17 September 2012, to deploy a second, advanced missile-defense radar on Japanese territory. "The purpose of this is to enhance our ability to defend Japan," U.S. Secretary of Defense Leon Panetta said at a news conference. "It’s also designed to help forward-deployed U.S. forces, and it also will be effective in protecting the U.S. homeland from the North Korean ballistic missile threat." In addition to detecting ballistic missiles the radars also provide the U.S. military and its allies a highly detailed view of ship traffic in the region. That capability is particularly desired by U.S. allies in the region that are engaged in territorial disputes with China over contested islands and fishing grounds.
Some U.S. officials have noted that defenses built up against North Korean missiles would also be positioned to track a Chinese ballistic missile. A land-based radar would also free the Navy to reposition its ship-based radar to other regional hot-spots, officials said. A U.S. team landed in Japan in September 2012 to discuss where the second facility will be located, according to a U.S. defense official. Officials have said they want to locate the radar, formally known as AN/TPY2, in the southern part of Japan, but not on Okinawa, where the U.S. military presence is deeply controversial. During a joint news conference in Tokyo, Panetta and Japanese Defense Minister Satoshi Morimoto said a joint U.S.-Japanese team would begin searching immediately for a site for the new radar. On 15 November 2012, Australia and the United States announced that the US military will station a powerful radar and a space telescope in Australia as part of its strategic shift towards Asia. "It will give us visibility into things that are leaving the atmosphere, entering the atmosphere, really all throughout Asia", including China's rocket and missile tests, a US defence official told reporters on condition of anonymity.
Program planning, goals, and discussions
On 14 October 2002, a ground-based interceptor launched from the Ronald Reagan Ballistic Missile Defense Test Site destroyed a mock warhead 225 km above the Pacific. The test included three decoy balloons.
On 16 December 2002 President George W. Bush signed National Security Presidential Directive 23, which outlined a plan to begin deployment of operational ballistic missile defense systems by 2004. The following day the U.S. formally requested from the UK and Denmark use of facilities in Fylingdales, England, and Thule, Greenland, respectively, as a part of the NMD program. The projected cost of the program for the years 2004 to 2009 was $53 billion, making it the largest single line item in the Pentagon's budget.
Since 2002, the US has been in talks with Poland and other European countries over the possibility of setting up a European base to intercept long-range missiles. A site similar to the US base in Alaska would help protect the US and Europe from missiles fired from the Middle East or North Africa. Poland's prime minister Kazimierz Marcinkiewicz said in November 2005 he wanted to open up the public debate on whether Poland should host such a base.
In 2002, NMD was changed to Ground-Based Midcourse Defense (GMD), to differentiate it from other missile defense programs, such as space-based, sea-based, and defense targeting the boost phase and the reentry phase (see flight phases).
On 22 July 2004, the first ground-based interceptor was deployed at Fort Greely, Alaska. By the end of 2004, a total of six had been deployed at Ft. Greely and another two at Vandenberg Air Force Base, California. Two additional interceptors were installed at Ft. Greely in 2005. The system was intended to provide "rudimentary" protection.
On 15 December 2004, an interceptor test in the Marshall Islands failed when the launch was aborted due to an "unknown anomaly" in the interceptor, 16 minutes after launch of the target from Kodiak Island, Alaska.
"I don't think that the goal was ever that we would declare it was operational. I think the goal was that there would be an operational capability by the end of 2004," Pentagon representative Larry DiRita said on 2005-01-13 at a Pentagon press conference. However, the problem is and was funding. "There has been some expectation that there will be some point at which it is operational and not something else these expectations are not unknown, if Congress pours more attention and funding to this system, it can be operational relatively quick."
On 18 January 2005, the Commander, United States Strategic Command issued direction to establish the Joint Functional Component Command for Integrated Missile Defense (JFCC IMD). The JFCC IMD, once activated, will develop desired characteristics and capabilities for global missile defense operations and support for missile defense.
On 14 February 2005, another interceptor test failed due to a malfunction with the ground support equipment at the test range on Kwajalein Island, not with the interceptor missile itself.
On 24 February 2005, the Missile Defense Agency, testing the Aegis Ballistic Missile Defense System, successfully intercepted a mock enemy missile. This was the first test of an operationally configured RIM-161 Standard missile 3 (SM-3) interceptor and the fifth successful test intercept using this system. On 10 November 2005, the USS Lake Erie detected, tracked, and destroyed a mock two-stage ballistic missile within two minutes of the ballistic missile launch.
On 1 September 2006, the Ground-Based Midcourse Defense System was successfully tested. An interceptor was launched from Vandenberg Air Force Base to hit a target missile launched from Alaska, with ground support provided by a crew at Colorado Springs. This test was described by Missile Defense Agency director Lieutenant General Trey Obering as "about as close as we can come to an end-to-end test of our long-range missile defense system." The target missile carried no decoys or other countermeasures.
Deployment of the Sea-based X-band Radar system was also underway during this period.
On 24 February 2007, The Economist reported that the United States ambassador to NATO, Victoria Nuland, had written to her fellow envoys to advise them regarding the various options for missile-defense sites in Europe. She also confirmed that "The United States has also been discussing with the UK further potential contributions to the system."
On 23 February 2008, the United States successfully shot down a malfunctioning American spy satellite.
The Ustka-Wicko base of the Polish Army was mentioned as a possible site of US missile interceptors. Russia objected; its suspension of the Treaty on Conventional Armed Forces in Europe may be related.
Russia threatened to place short-range nuclear missiles on its border with NATO if the United States refused to abandon plans to deploy 10 interceptor missiles and a radar in Poland and the Czech Republic. In April 2007, Putin warned of a new Cold War if the Americans deployed the shield in Central Europe. Putin also said that Russia was prepared to abandon its obligations under the 1987 Intermediate-Range Nuclear Forces Treaty with the United States. In 2014 Russia announced plans to install more radar and missile defense systems across the country to counter U.S. plans for a missile defense system in Eastern Europe.
As of January 2017, the top 3 candidate sites for a proposed Eastern United States missile defense site are now New York, Michigan, and Ohio.
Missile defense sites in Central Europe
Previously, a controversial initiative existed for placing GMD missile defense installations in Central Europe, namely in Poland and the Czech Republic. As a result of strong Russian opposition, the plan was abandoned in favor of Aegis-class missile defense based in the Black Sea and eventually in Romania.
In February 2007, the US started formal negotiations with Poland and the Czech Republic concerning placement of a site of the Ground-Based Midcourse Defense System. The announced objective was to protect most of Europe from long-range missile strikes from Iran. Public opinion in both countries opposed the plan: 57% of Poles disagreed, while 21% supported it; in the Czech Republic it was 67% versus 15%. More than 130,000 Czechs signed a petition for a referendum about the base, which is by far the largest citizen initiative (Ne základnám – No to Bases) since the Velvet Revolution.
The Ustka-Wicko base of the Polish Army was mentioned as a possible site of 10 American interceptor missiles. Russia objected; its suspension of the Treaty on Conventional Armed Forces in Europe may be related. Putin warned of a possible new Cold War. Russia threatened to place short-range nuclear missiles on its border with NATO if the United States refused to abandon the plan.
A radar and tracking system site placement was agreed with the Czech Republic. After long negotiations, on 20 August 2008, US Secretary of State Condoleezza Rice and Poland's Foreign Minister Radoslaw Sikorski signed in Warsaw the "Agreement Between the Government of the United States of America and the Government of the Republic of Poland Concerning the Deployment of Ground-Based Ballistic Missile Defense Interceptors in the Territory of the Republic of Poland", a deal that would implement the missile defense system in Polish territory. Russia warned Poland that it was exposing itself to attack—even a nuclear one—by accepting U.S. missile interceptors on its soil. Gen. Anatoly Nogovitsyn, the deputy chief of staff of Russia's armed forces, said "Poland, by deploying (the system) is exposing itself to a strike – 100 percent".
In September 2009, President Barack Obama announced that plans for missile defense sites in Central Europe would be scrapped in favor of systems located on US Navy warships. On 18 September 2009, Russian Prime Minister Putin welcomed Obama's plans for stationing American Aegis defense warships in the Black Sea. The deployment occurred the same month, consisting of warships equipped with the Aegis RIM-161 SM-3 missile system, which complemented the Patriot missile systems already deployed by American units.
Once USS Monterey was actually deployed to the Black Sea, the Russian Foreign Ministry issued a statement voicing concern about the deployment.
On 4 February 2010, Romania agreed to host the SM-3 missiles starting in 2015. The missile defense system in Deveselu became operational on 18 December 2015. The BMD component in Romania underwent an upgrade in 2019; in the interim a THAAD unit, B Battery (THAAD), 62nd Air Defense Artillery Regiment, was emplaced at NSF Deveselu, Romania. Aegis Ashore was installed in Redzikowo, Poland, with completion planned by 2022.
Skepticism
There has been controversy among experts about whether it is technically feasible to build an effective missile defense system and, in particular, whether the GMD would work.
An April 2000 study by the Union of Concerned Scientists and the Security Studies Program at the Massachusetts Institute of Technology concluded that "[a]ny country capable of deploying a long-range missile would also be able to deploy countermeasures that would defeat the planned NMD system."
Countermeasures studied in detail were bomblets containing biological or chemical agents, aluminized balloons to serve as decoys and to disguise warheads, and cooling warheads to reduce the kill vehicle's ability to detect them.
In April 2004, a General Accounting Office report concluded that "MDA does not explain some critical assumptions—such as an enemy’s type and number of decoys—underlying its performance goals." It recommended that "DOD carry out independent, operationally realistic testing of each block being fielded" but DOD responded that "formal operational testing is not required before entry into full-rate production."
Proponents did not suggest how to discriminate between empty and warhead-enclosing balloons, for instance, but said that these "simple" countermeasures are actually hard to implement, and that defense technology is rapidly advancing to defeat them. The Missile Defense Agency (MDA) said decoy discrimination techniques were classified, and emphasized its intention to provide future boost and terminal defense to diminish the importance of midcourse decoys. In summer 2002 MDA ceased providing detailed intercept information and declined to answer technical questions about decoys on grounds of national security.
China is developing a hypersonic glide vehicle (HGV), now called the DF-ZF, capable of penetrating US missile defenses. The US Department of Defense designates this HGV as the WU-14. In response, the US Army in 2019 joined a program with the US Navy and US Air Force to develop a hypersonic glide body, with test firings planned every six months beginning in 2021.
Boost-phase defense
Boost-phase defense means engaging a missile as it launches, which is substantially easier because at that point the ballistic missile has not yet deployed penetration aids. This is differentiated from ascent-phase defense, where the missile has already gained substantial speed and altitude.
In theory, this may be accomplished with any weapons system capable of airborne intercept; in practice, however, area-defense surface-to-air missiles are highly desirable, as the window of opportunity is very short. For example, the American Standard Missile 2 has an effective range in excess of 70 km, yet according to a 2004 American Physical Society study (Report of the American Physical Society Study Group on Boost-Phase Intercept System for National Missile Defense: Scientific and Technical Issues, Rev. Mod. Phys. 76, S1, 2004), it must be within 40 kilometers of the launch point. This is acceptable for submarine-launched ballistic missiles (SLBMs), but not likely for land-based intercontinental ballistic missiles (ICBMs).
Boost-phase defense against solid-fueled ICBM
Boost-phase defense is significantly more difficult against the current solid-fuel rocket ICBMs, because their boost phase is shorter. Current solid-fueled ICBMs include the Russian Topol, the Indian Agni-V, and the Chinese DF-31 and DF-41, along with the US Minuteman ICBM and the submarine-launched Trident.
Even in theory, there is no prospect of economically viable boost-phase defense against the latest solid-fueled ICBMs, whether using ground-based missiles, space-based missiles, or an airborne laser (ABL).
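To make the timing problem concrete, the sketch below tallies a rough boost-phase time budget in Python. Every number in it (boost durations, detection and decision delays, interceptor fly-out time) is an illustrative assumption, not a figure from the APS study or any official source; the point is only that trimming the boost phase from roughly four minutes to three can erase the entire engagement margin.

```python
# Rough boost-phase time budget (illustrative sketch; all values are assumed).

def remaining_window_s(boost_duration_s: float,
                       detection_delay_s: float,
                       decision_delay_s: float,
                       interceptor_flyout_s: float) -> float:
    """Seconds of margin left before the target's burnout once detection,
    launch authorization, and interceptor fly-out are accounted for."""
    return boost_duration_s - (detection_delay_s + decision_delay_s + interceptor_flyout_s)

# Assumed values: ~240 s boost for an older liquid-fueled ICBM versus ~180 s for a
# modern solid-fueled one; ~60 s to detect and track; ~30 s to authorize launch;
# ~120 s of interceptor fly-out.
for label, boost_s in (("liquid-fueled", 240.0), ("solid-fueled", 180.0)):
    margin = remaining_window_s(boost_s, 60.0, 30.0, 120.0)
    print(f"{label:13s}: {margin:+.0f} s of margin before burnout")
```

Under these assumptions the liquid-fueled case leaves about thirty seconds of slack while the solid-fueled case comes up short, which is the qualitative point made above; real engagement geometry (the accelerating target, basing constraints, chase trajectories) only tightens the budget further.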
Boost-phase defense against older ICBMs
A ground-based boost-phase defense might be possible, if the goals were somewhat limited: to counter older liquid-fuel propelled ICBMs, and to counter simple solid-propellant missiles launched from less challenging locations (such as North Korea).
Using orbital launchers to provide a reliable boost-phase defense against liquid-fueled ICBMs is not likely, as it was found to require at least 700 large interceptors in orbit. Using two or more interceptors per target, or countering solid fueled missiles, would require many more orbital launchers. The old Brilliant Pebbles project—although it did not apply to the boost phase—estimated the number at 4,000 smaller orbital launchers.
The airborne laser (ABL) is possibly capable of intercepting a liquid-fueled missile if it is sufficiently close to the launch point.
See also
Essentials of Post–Cold War Deterrence
Deterrence theory
Militarisation of space
Missile defense systems by country
Nuclear warfare
Nuclear weapon
Civil defense
X-band radar
Joint Functional Component Command for Integrated Missile Defense
Strategic Defense Initiative, also known as SDI or "Star Wars" missile defense
References
External links
Missile Defense Program Moves Forward, 2006.
Missile Defense Agency site
U.S. to study possible space-based defense (2008)
NMD page on Federation of American Scientists site
Theodore Postol's presentation of his critical report to Congress (August 2007)
"Will the Eagle strangle the Dragon?", assessment of the challenges to China's deterrence by the NMD, (February 2008).
Programs of the United States Space Force
Rocketry
Missile defense
Space warfare
Nuclear warfare
Missile Defense Agency | United States national missile defense | [
"Chemistry",
"Engineering"
] | 7,877 | [
"Aerospace engineering",
"Rocketry",
"Radioactivity",
"Nuclear warfare"
] |
350,573 | https://en.wikipedia.org/wiki/JavaOne |
JavaOne is an annual conference first organized in 1996 by Sun Microsystems to discuss Java technologies, primarily among Java developers. It was held in San Francisco, California, typically running from Monday to Thursday, in the summer in its early years and in early fall later on. Technical sessions and Birds of a Feather (BOF) sessions on a variety of Java-related topics were held throughout the week.
The show was very popular; for the 1999 edition, there were 20,000 attendees at the Moscone Center.
For many years, the conference was hosted by Sun executive and Java evangelist John Gage.
In 1999, the conference played host to an event called the Hackathon, a challenge set by Gage. Attendees were to write a program in Java for the new Palm V using the infrared port to communicate with other Palm users and register the device on the Internet.
During the 2008 conference, seventy Moscone Center staff members and three attendees were sickened by an outbreak of norovirus.
After the acquisition of Sun by Oracle Corporation in 2010, the conference was held concurrently with Oracle OpenWorld. The conference was moved from Moscone Center to hotels on nearby Mason Street. In some years, one block of Mason was closed and covered with a tent, which formed part of the conference venue.
In April 2018, Oracle announced that the JavaOne conference would be discontinued, in favor of a more general programming conference called Oracle Code One. The CodeOne conference ran for two years.
In March 2022, Oracle announced that JavaOne would return in October 2022, reclaiming the position the now-defunct CodeOne conference once occupied. The conference moved to Las Vegas from its original location in San Francisco.
In March 2024, Oracle announced that JavaOne would be held in March 2025, coinciding with Java's 30th birthday and moving back near its original location in San Francisco.
Show device
Several of the conferences highlighted a hardware device, typically made available to attendees before it is sold to the general public, or at a steep discount:
1998: Java ring
1999: Palm V
2002: Sharp Zaurus
2004: Homepod, a wireless MP3 device from Gloolabs
2006: SavaJe Jasper S20 phone
2007: RS Media programmable robot
2008: Sentilla Perk Kit, Pulse Smartpen, Sony Ericsson K850i
2009: HTC Diamond with JavaFX pre-installed
CommunityOne
From 2007 to 2009, an associated one-day event, CommunityOne, was held, for the broader free and open-source developer community.
In 2009, CommunityOne expanded to New York City (CommunityOne East, March 18–19) and to Oslo, Norway (CommunityOne North, April 15). The third annual CommunityOne in San Francisco took place from June 1–3, 2009, at Moscone Center.
Tracks included:
Cloud Platforms – Development and deployment in the cloud
Social and Collaborative Platforms – Social networks and Web 2.0 trends
RIAs and Scripting – Rich Internet Applications, scripting and tools
Web Platforms – Dynamic languages, databases, and Web servers
Server-side Platforms – SOA, tools, application servers, and databases
Mobile Development – Mobile platforms, devices, tools and application development
Operating Systems and Infrastructure – Performance, virtualization, and native development
Free and Open – Open-source projects, business models, and trends
CommunityOne was discontinued after the acquisition of Sun by Oracle.
See also
References
External links
Moscone Center
JavaOne 2009 Blog Coverage
Java platform
Computer conferences
Recurring events established in 1996
Recurring events established in 2007
Sun Microsystems | JavaOne | [
"Technology"
] | 729 | [
"Computing platforms",
"Java platform"
] |
350,581 | https://en.wikipedia.org/wiki/Paddy%20Chayefsky | Sidney Aaron "Paddy" Chayefsky (; January 29, 1923 – August 1, 1981) was an American playwright, screenwriter and novelist. He is the only person to have won three solo Academy Awards for writing both adapted and original screenplays.
He was one of the most renowned dramatists of the Golden Age of Television. His intimate, realistic scripts provided a naturalistic style of television drama for the 1950s, dramatizing the lives of ordinary Americans. Martin Gottfried wrote in All His Jazz that Chayefsky was "the most successful graduate of television's slice of life school of naturalism."
Following his critically acclaimed teleplays, Chayefsky became a noted playwright and novelist. As a screenwriter, he received three Academy Awards for Marty (1955), The Hospital (1971) and Network (1976). The movie Marty was based on his own television drama about two lonely people finding love. Network was a satire of the television industry and The Hospital was also satiric. Film historian David Thomson called The Hospital "years ahead of its time.… Few films capture the disaster of America's self-destructive idealism so well." His screenplay for Network is often regarded as his masterpiece, and has been hailed as "the kind of literate, darkly funny and breathtakingly prescient material that prompts many to claim it as the greatest screenplay of the 20th century."
Chayefsky's early stories were frequently influenced by the author's childhood in The Bronx. Chayefsky was part of the inaugural class of inductees into the Academy of Television Arts & Sciences' Television Hall of Fame. He received this honor three years after his death, in 1984.
Early life
Sidney Aaron Chayefsky was born in the Bronx, New York City, to Russian-Jewish immigrants Harry and Gussie (Stuchevsky) Chayefsky. Harry Chayefsky's father had served for twenty-five years in the Russian army, so the family was allowed to live in Moscow, while Gussie Stuchevsky lived in a village near Odessa. Harry and Gussie immigrated to the United States in 1907 and 1909 respectively.
Harry Chayefsky worked for a New Jersey milk distribution company in which he eventually took a controlling interest and renamed Dellwood Dairies. The family lived in Perth Amboy, New Jersey, and Mount Vernon, New York, moving temporarily to Bailey Avenue in the West Bronx at the time of Sidney Chayefsky's birth while a larger house in Mount Vernon was being completed. He had two older brothers, William and Winn.
As a toddler Chayefsky showed signs of being gifted, and could "speak intelligently" at two and a half. His father suffered a financial reversal during the Wall Street Crash of 1929, and the family moved back to the Bronx. Chayefsky attended a public elementary school. As a boy, Chayefsky was noted for his verbal ability, which won him friends. He attended DeWitt Clinton High School, where he served as editor of the school's literary magazine The Magpie. He graduated from Clinton in 1939 at age 16 and attended the City College of New York, graduating with a degree in social sciences in 1943. While at City College he played for the semi-professional football team Kingsbridge Trojans. He studied languages at Fordham University during his Army service.
Military service
In 1943, two weeks before his graduation from City College, Chayefsky was drafted into the United States Army, and served in combat in Europe. While in the Army he adopted the nickname "Paddy." The nickname was given spontaneously when he was awakened at dawn for kitchen duty. Although actually Jewish, he asked to be excused to attend Mass. "Sure you do, Paddy," said the officer, and the name stuck.
Chayefsky was wounded by a land mine while serving with the 104th Infantry Division in the European Theatre near Aachen, Germany. He was awarded the Purple Heart. The wound left him badly scarred, contributing to his shyness around women. While recovering from his injuries in the Army Hospital near Cirencester, England, he wrote the book and lyrics to a musical comedy, No T.O. for Love. First produced in 1945 by the Special Services Unit, the show toured European Army bases for two years.
The London opening of No T.O. for Love at the Scala Theatre in the West End was the beginning of Chayefsky's theatrical career. During the London production of this musical, Chayefsky encountered Joshua Logan, a future collaborator, and Garson Kanin, who invited Chayefsky to collaborate with him on a documentary of the Allied invasion, The True Glory.
Career
1940s
Returning to the United States, Chayefsky worked in his uncle's print shop, Regal Press, an experience which provided a background for his later teleplay, Printer's Measure (1953), as well as his story for the movie As Young as You Feel (1951). Kanin enabled Chayefsky to spend time working on his second play, Put Them All Together (later known as M is for Mother), but it was never produced. Producers Mike Gordon and Jerry Bressler gave him a junior writer's contract. He wrote a story, The Great American Hoax, which sold to Good Housekeeping but was never published.
Chayefsky went to Hollywood in 1947 with the aim of becoming a screenwriter. His friends Garson Kanin and Ruth Gordon found him a job in the accounting office of Universal Pictures. He studied acting at the Actor's Lab and Kanin got him a bit part in the film A Double Life. He returned to New York, submitted scripts, and was hired as an apprentice scriptwriter by Universal. His script outlines were not accepted and he was fired after six weeks. After returning to New York, Chayefsky wrote the outline for a play that he submitted to the William Morris Agency. The agency, treating it as a novella, submitted it to Good Housekeeping magazine. Movie rights were purchased by Twentieth Century Fox, Chayefsky was hired to write the script, and he returned to Hollywood in 1948. But Chayefsky was discouraged by the studio system, which involved rewrites and relegated writers to inferior roles, so he quit and moved back to New York, vowing not to return.
During the late 1940s, he began working full-time on short stories and radio scripts, and during that period, he was a gagwriter for radio host Robert Q. Lewis. Chayefsky later recalled, "I sold some plays to men who had an uncanny ability not to raise money."
Early 1950s
During 1951–52, Chayefsky wrote adaptations for radio's Theater Guild on the Air: The Meanest Man in the World (with James Stewart), Cavalcade of America, Tommy (with Van Heflin and Ruth Gordon) and Over 21 (with Wally Cox).
His play The Man Who Made the Mountain Shake was noticed by Elia Kazan, and his wife, Molly Kazan, helped Chayefsky with revisions. It was retitled Fifth From Garibaldi but was never produced. In 1951, the movie As Young as You Feel was adapted from a Chayefsky story.
He moved into television with scripts for Danger, The Gulf Playhouse and Manhunt. Philco Television Playhouse producer Fred Coe saw the Danger and Manhunt episodes and enlisted Chayefsky to adapt the story It Happened on the Brooklyn Subway about a photographer on a New York City Subway train who reunites a concentration camp survivor with his long-lost wife. Chayefsky's first script to be telecast was a 1949 adaptation of Budd Schulberg's What Makes Sammy Run? for Philco.
Since he had always wanted to use a synagogue as backdrop, he wrote Holiday Song, telecast in 1952 and also in 1954. He submitted more work to Philco, including Printer's Measure, The Bachelor Party (1953) and The Big Deal (1953).
The seventh season of Philco Television Playhouse began September 19, 1954 with E. G. Marshall and Eva Marie Saint in Chayefsky's Middle of the Night, a play which relocated to Broadway theaters 15 months later. In 1956, Middle of the Night opened on Broadway with Edward G. Robinson and Gena Rowlands, and its success led to a national tour. It was filmed by Columbia Pictures in 1959 with Kim Novak and Fredric March.
Marty and fame
In 1953, Chayefsky wrote Marty, which was premiered on The Philco Television Playhouse, with Rod Steiger and Nancy Marchand. Marty is about a decent, hard-working Bronx butcher, pining for the company of a woman in his life but despairing of ever finding true love in a relationship. Fate pairs him with a plain, shy schoolteacher named Clara whom he rescues from the embarrassment of being abandoned by her blind date in a local dance hall. The production, the actors and Chayefsky's naturalistic dialogue received much critical acclaim and influenced subsequent live television dramas.
Chayefsky was initially uninterested when producer Harold Hecht sought to buy film rights for Marty for Hecht-Hill-Lancaster. Chayefsky, still upset by his treatment years before, demanded creative control, consultation on casting, and the same director as in the TV version, Delbert Mann. Surprisingly, Hecht agreed to all of Chayefsky's demands, and named Chayefsky "associate producer" of the film. Chayefsky then requested and was granted "co-director" status, so that he could take over production if Mann were fired.
The screenplay was little changed from the teleplay, but with Clara's role expanded. Chayefsky was involved in all casting decisions and had a cameo role, playing one of Marty's friends, unseen, in a car. Actress Betsy Blair, playing Clara, faced difficulties because of her affiliation with left-wing causes, and United Artists demanded that she be removed. Chayefsky refused, and her husband Gene Kelly also intervened on her behalf. Blair remained in the cast.
In September 1954, after most of the movie had been filmed, the studio ceased production due to accounting and financial difficulties. Producer Harold Hecht encountered resistance to the Marty project from his partner Burt Lancaster from the beginning, with Lancaster "only tolerating" it. The film had a limited publicity budget. But reviews were glowing, and the film won the Palme d'Or at the 1955 Cannes Film Festival and the Academy Award for Best Picture, greatly boosting Chayefsky's career.
Late 1950s
After his success with Marty, Chayefsky continued to write for TV and theater as well as films. Chayefsky's The Great American Hoax was broadcast May 15, 1957 during the second season of The 20th Century Fox Hour.
His TV play The Bachelor Party was bought by United Artists and The Catered Affair was acquired by Metro-Goldwyn-Mayer. Gore Vidal was hired by MGM to write the screenplay for The Catered Affair, while Chayefsky wrote The Bachelor Party. The Catered Affair did well in Europe but poorly in U.S. theaters, and was not a success.
The Bachelor Party was budgeted at $750,000, twice the budget of Marty, but received far less acclaim and was viewed by United Artists as artistically inferior. The studio chose instead to promote another Hecht-Hill-Lancaster film, Sweet Smell of Success, which it believed to be better. The Bachelor Party was a commercial failure, and never made a profit.
Chayefsky wrote a film adaptation of his Broadway play Middle of the Night, originally writing the female lead role for Marilyn Monroe. She passed on the part, which went to Kim Novak. He also commenced work on The Goddess, the story of the rise and fall of a movie star resembling Monroe. The star of The Goddess, Kim Stanley, despised the film and refused to publicize it. He and Stanley clashed during production of the film, in which Chayefsky served as producer as well as screenwriter. Despite her requests, Chayefsky refused to change any aspect of the script. Monroe's husband, Arthur Miller believed that the film was based on his wife's life and protested to Chayefsky. The film received positive reviews, and Chayefsky received an Academy Award nomination for his script. A New York Herald Tribune reviewer called the film "a substantial advance in the work of Chayefsky."
Chayefsky denied for years that the film was based on Monroe, but Chayefsky's biographer Shaun Considine observes that not only was she the prototype but the film "captured her longing and despair" accurately.
In 1958 Chayefsky began adapting Middle of the Night as a film, and he decided not to use the star of the Broadway version, Edward G. Robinson, with whom he had clashed, choosing instead Fredric March. Elizabeth Taylor initially agreed to appear in the female lead, but dropped out. Kim Novak was ultimately cast in the part. The film was chosen as the American entry at the Cannes Film Festival, but reviews were mixed and the film had only a short run in theaters.
The Tenth Man (1959) marked Chayefsky's second Broadway theatrical success, garnering 1960 Tony Award nominations for Best Play, Best Director (Tyrone Guthrie) and Best Scenic Design. Guthrie received another nomination for Chayefsky's Gideon, as did actor Fredric March. Chayefsky's final Broadway theatrical production, a play based on the life of Joseph Stalin, The Passion of Josef D, received unfavorable reviews and ran for only 15 performances.
Although Chayefsky was an early writer for the television medium, he eventually abandoned it, "decrying the lack of interest the networks demonstrated toward quality programming". As a result, during the course of his career, he constantly toyed with the idea of lampooning the television industry, which he succeeded in doing with Network.
The Americanization of Emily
Although Chayefsky wished only to do original screenplays, he was persuaded by producer Martin Ransohoff to adapt William Bradford Huie's 1959 novel, eventually filmed under the book's title The Americanization of Emily (1964). The novel dealt with interservice rivalries prior to the Normandy landings during World War II, with a love story at the center of the plot. Chayefsky agreed to adapt the novel, but only if he could fundamentally change the story. He made the titular character more sophisticated, one who refuses to be "Americanized" by accepting material goods.
William Wyler was initially brought in as the director, but his relationship with Chayefsky deteriorated when he sought to change the script. William Holden was initially cast in the male lead, but that led to conflict when he asked that Julie Andrews be replaced by his then-girlfriend, Capucine. James Garner, adept at comedy with sophisticated dialogue but originally slated to play a supporting role, replaced Holden and delivered a critically acclaimed performance while James Coburn took over the part originally meant for Garner. Both James Garner and Julie Andrews always maintained that The Americanization of Emily was their favorite film of their own work. The film opened in August 1964 to superlative reviews but was a box office failure, possibly due to its extremely controversial anti-war stance at the dawn of the Vietnam War. The studio changed the title in the middle of its release, calling it Emily...she's super! to avoid confusing part of the public with a seven-syllable word in the title. The film has since been praised as a "vanguard anti-war film."
1960s 'fallow period'
The failure of The Americanization of Emily and of Josef D. on Broadway shook Chayefsky's confidence, and was the beginning of what his biographer Shaun Considine calls a "fallow period." He agreed to do novel adaptations, which he had previously shunned, and was hired to adapt the Richard Jessup novel The Cincinnati Kid. Director Sam Peckinpah rejected the script, and Chayefsky was fired. Peckinpah was replaced by Norman Jewison shortly after the film began production.
Chayefsky worked for a time on adapting Huie's book Three Lives for Mississippi, about the murders of three civil rights workers in 1964, and in 1967 was hired to adapt the Broadway musical Paint Your Wagon. He was fired from the film after producing a script that Alan Jay Lerner, the playwright and producer, felt lacked "a musical structure." Chayefsky had his name removed as screenwriter but remained as adapter.
Comeback with The Hospital
In 1969 and 1970, Chayefsky began to consider a film that would be set amid the civil unrest taking place at the time. When his wife Susan received poor care at a hospital, he pitched to United Artists a story set at a hospital. To ensure that he had the same kind of creative control given to playwrights, he formed Simcha Productions, named after the Hebrew version of his given name, Sidney. He then commenced research, reading medical books and visiting hospitals.
The leading character in the film, Dr. Herbert Bock, included many of Chayefsky's personal traits. Bock had been a "boy genius" who had grown bitter and felt that his life was over. One of the monologues of George C. Scott as Bock in the film, in which Bock says he is miserable and considering suicide, was repeated verbatim from a conversation that Chayefsky had with a business associate during that time.
The long speeches written for Bock and other characters by Chayefsky, later praised by critics, met resistance from United Artists executives during the making of the film. The script was described as "too talky" and containing excessive medical terminology. But Chayefsky, as producer, prevailed. He also vetoed the studio's suggestion that Walter Matthau or Burt Lancaster be hired for the lead role, insisting on Scott. Chayefsky worked on the dialogue with Diana Rigg, the female lead, but Scott rejected his input.
After filming, Chayefsky spoke the opening narration after several actors were rejected for the job. It was supposed to be temporary, but became the one that was used in the film. Although some initial reviews were negative, the film received rave reviews from leading critics, and was a box office hit. Chayefsky won an Academy Award for his script, and his career was revived.
Network
Chayefsky believed that television news desensitized viewers to violence and murder, and he was shocked one day when a respected news anchorman "rattled off inanities." He asked his friend, the NBC News anchor John Chancellor, if it was possible for an anchorman to go crazy on the air, and Chancellor replied "Every day." Within a week of that conversation, Chayefsky had written the rough draft of a script, centering on Howard Beale, an elderly, disillusioned anchor who announces he will commit suicide on the air. In 1974, a local news anchor, Christine Chubbuck, committed suicide during a broadcast.
Chayefsky researched the project by watching hours of television and consulting with NBC executive David Tebet, who allowed Chayefsky to attend programming meetings. He later conducted research at CBS and met with Walter Cronkite. The completed script reflected his research and his personal view, prevalent at the time, that Arabs were "buying up" U.S. corporations. The "mad as hell" speech was a deeply personal statement reflecting the core of Chayefsky's beliefs during the early 1970s. Chayefsky later called it an easy speech to write, reflecting his view that people had a right to get mad.
The script encountered difficulty because of film industry concerns that it was too tough on television. Ultimately it was decided that the film would be a co-production of MGM and United Artists, with Chayefsky having complete creative control. The deal was announced in July 1975. George C. Scott was offered the supporting role of Max Schumacher (Beale's friend and a traditional journalist representing integrity in the media) but rejected it, and the role went to William Holden. Chayefsky refused requests by UA and MGM to give the film a "softer" ending, feeling that the actual ending – with the Howard Beale character assassinated at the order of the network's executives – would alienate audiences.
Aside from the expected negative reviews from critics employed by the television networks, the film was a critical and box office success, receiving ten Academy Award nominations, and Chayefsky won his third Academy Award, making him the only three-time solo recipient of a screenwriting Oscar; all the other three-time winners (Francis Ford Coppola, Charles Brackett, Woody Allen, and Billy Wilder) shared at least one of their awards with co-writers. When Peter Finch posthumously won Best Actor for playing Beale, Chayefsky was to accept on his behalf, but he defied the show's producer, William Friedkin, and called Finch's widow Eletha to the stage to accept the award.
The film is said to have "presaged the advent of reality television by twenty years" and was a "sardonic satire" of the television industry, dealing with the "dehumanization of modern life."
Altered States
After Network Chayefsky explored an offer from Warren Beatty to write a film based on the life of John Reed and his book Ten Days That Shook the World. He agreed to do research, and spent three months exploring the subject of what eventually became the Beatty film Reds. Negotiations with Beatty's lawyers failed.
In the spring of 1977, Chayefsky began work on a project delving into "man's search of his true self." The genesis of the idea was a joke with his friends Bob Fosse and Herb Gardner. The three cooked up a joke project to remake King Kong, in which Kong becomes a movie star. The comic project got Chayefsky interested in exploring the origins of the human spirit. That evolved into a project updating the theme of Dr. Jekyll and Mr. Hyde.
Chayefsky conducted research on genetic regression, speaking to doctors and professors of anthropology and human genetics. He then began a rough outline of a story in which the lead character immerses himself in an isolation tank and, with the aid of hallucinogens, regresses to become a prehuman creature. Chayefsky wrote an eighty-seven-page treatment and, at the suggestion of Columbia executive Daniel Melnick, adapted it into a novel.
Film rights were bought by Columbia Pictures for nearly $1 million, with the same creative control and financial terms as for Network. Chayefsky suffered greatly from stress while working on the novel, and in 1977 he had a heart attack that left him under strict dietary and lifestyle restrictions. The novel, titled Altered States, was published by Harper & Row in June 1978 and received mixed reviews. Chayefsky did not promote the book, which he viewed only as a blueprint for the screenplay.
Since his contract gave him creative control, Chayefsky participated in the selection of William Hurt and Blair Brown as the leads. Arthur Penn was initially hired as director, but left after disagreements with Chayefsky. He was replaced by Ken Russell.
Chayefsky made it clear that he would allow no input into the dialogue or narrative, which Russell felt was too "soppy." Russell was confident that he could get rid of Chayefsky, but found that "the monkey on my back was always there and wouldn't let go." Russell was polite and deferential prior to production but after rehearsals began in 1979 "began to treat Paddy as a nonentity" and was "mean and sarcastic," according to the film's producer Howard Gottfried.
Chayefsky had the power to fire Russell, but was told by Gottfried that he could only do so if he took over direction himself. He left for New York and continued to monitor production. The actors were not permitted to alter the dialogue. Chayefsky later said that in retaliation the actors were instructed to speak their lines while eating or talking too fast. Russell stated that the fast pace and overlapping dialogue was Chayefsky's idea.
Upset by the filming of his screenplay, Chayefsky withdrew from the production of Altered States and took his name off the credits, substituting the pseudonym Sidney Aaron.
Personality and characteristics
In his book Mad as Hell: The Making of Network and the Fateful Vision of the Angriest Man in Movies, journalist Dave Itzkoff wrote that the Howard Beale character in Network was a product of Chayefsky's many frustrations. Itzkoff wrote: "Where others avoided conflict, he cultivated it and embraced it. His fury nourished him, making him intense and unpredictable, but also keeping him focused and productive." Itzkoff describes Chayefsky as "intensely troubled, a huge egomaniac and control freak, dispirited about the world, wryly comic, and a both present and absent family man."
In his biography of Chayefsky's friend Bob Fosse, drama critic Martin Gottfried said Chayefsky was compact and burly in the bulky way of a schoolyard athlete, with thick dark hair and a bent nose that could pass for a streetfighter's. He was a grown-up with one foot in the boys' clubs of his city youth, a street snob who would not allow the loss of his nostalgia. He was an intellectual competitor, always spoiling for a political argument or a philosophical argument, or any exchange over any issue, changing sides for the fun of the fray. A liberal, he was annoyed by liberals; a proud Jew, he wouldn't let anyone call him a "Jewish writer".

In his biography Mad as Hell, author Shaun Considine says that Chayefsky had a "dual personality". Chayefsky's "Paddy" persona had "character, caprice; it appealed to his sense of swagger" and gave him confidence to stand up for his rights. "Sidney" was the "silent creator" who had the talent and genius.
Chayefsky was under psychoanalysis for years, beginning in the late 1950s, to deal with his volatile behavior and rage, which at times was difficult to control.
Political activism
Opposition to McCarthyism
Early in his career, Chayefsky was an opponent of McCarthyism. Along with other writers and performers, he signed a telegram protesting federal inaction after a concert featuring Paul Robeson in Peekskill, New York, prompted violence in which 150 people were injured. As a result, his name appeared in the anti-Communist vigilante publication The Firing Line, published by the American Legion. Although Chayefsky feared being subpoenaed and having his career ruined, that never happened. Actress Betsy Blair described Chayefsky as a Social Democrat and an anti-Marxist.
He opposed the Vietnam War as a "stupid and utterly unnecessary war whose principal victim would be the United States" and sent a letter to President Richard Nixon decrying the My Lai Massacre, saying Americans were in danger of turning into "a nation of bad Germans."
Soviet Jews and Israel
In the 1970s Chayefsky worked for the cause of Soviet Jews, and in 1971 went to Brussels as part of a U.S. delegation to the International Conference on Soviet Jewry. Believing that the conference was insufficiently aggressive, he founded a new activist organization in New York, Writers and Artists for Peace in the Middle East. Co-founders included Colleen Dewhurst, Frank Gervasi, Leon Uris, Gerold Frank and Elie Wiesel. Chayefsky believed that "Zionists" was used as a code word for "Jews" by Marxist anti-Semites.
Chayefsky was increasingly interested in Israel at that time. In an interview with Women's Wear Daily in 1971, he said that he believed that Jews around the world were in imminent danger of genocide. Journalist Dave Itzkoff writes that in the 1970s his views on Israel possessed a "more aggressive and admittedly paranoid streak." He believed that anti-Semitism was rife in the U.S., especially in the New Left, and once physically confronted a heckler who used an anti-Semitic slur during a David Steinberg performance. While filming The Hospital, Chayefsky commenced work on a film project called "The Habbakuk Conspiracy," which he described as a "study of life within an Arab guerrilla cell on the West Bank of the Jordan." The project was sold to United Artists but never filmed, which resulted in lingering resentment toward the studio.
Chayefsky composed, without credit, pro-Israel ads for the Anti-Defamation League at the time of the Yom Kippur War in 1973. In the late 1970s Writers and Artists for Peace in the Middle East placed full-page newspaper ads written by Chayefsky attacking the Palestine Liberation Organization for the massacre of Israeli athletes at the 1972 Summer Olympics.
He rejected Jane Fonda and Vanessa Redgrave for the female lead in Network because of what he alleged were their "anti-Israel leanings," even though Redgrave was director Sidney Lumet's first choice. Accepting the Best Supporting Actress Academy Award for Julia at the 1978 Academy Awards, Redgrave denounced protestors from the Jewish Defense League (JDL), led by Rabbi Meir Kahane, who had burned an effigy of her outside the Awards site, picketed the ceremony, and earlier called on 20th Century Fox to denounce her and promise never to hire her again. In her speech she said, "You should be very proud that in the last few weeks you have stood firm and you have refused to be intimidated by the threats of a small bunch of Zionist hoodlums whose behavior is an insult to the stature of Jews all over the world, and to their great and heroic record of struggle against fascism and oppression." Chayefsky, appearing later in the ceremony, upbraided Redgrave and said "a simple 'Thank you' would have sufficed." The remarks by Redgrave and Chayefsky prompted controversy.
Family
Chayefsky met his future wife Susan Sackler during his 1940s stay in Hollywood. The couple married in February 1949. Their son Dan was born in 1955.
Chayefsky's relationship with his wife was strained for much of their marriage, and she became withdrawn and unwilling to appear with him as he became more prominent. Gwen Verdon, wife of his friend Bob Fosse, only saw Susan Chayefsky five times in her life.
Susan Chayefsky suffered from muscular dystrophy, and Dan Chayefsky described himself to author Dave Itzkoff as "a self-destructive teen who brought more pressure to the family home." Despite an alleged affair with Kim Novak, which resulted in his asking his wife for a divorce, Paddy Chayefsky remained married to Susan Chayefsky until his death, and sought her opinion on his screenplays, including Network. She died in 2000.
Death
Chayefsky contracted pleurisy in 1980 and again in 1981. Tests revealed cancer, but he refused surgery out of fear that surgeons would "cut me up because of that movie I wrote about them," referring to The Hospital. He opted for chemotherapy. He died in a New York hospital on August 1, 1981, aged 58, and was interred in the Sharon Gardens Division of Kensico Cemetery in Valhalla, Westchester County, New York.
Longtime friend Bob Fosse performed a tap dance at the funeral, fulfilling a pact the two had made when Fosse was hospitalized for open-heart surgery: if Fosse died first, Chayefsky would deliver a tedious eulogy, and if Chayefsky died first, Fosse would dance at his memorial. Fosse dedicated his final film Star 80 to Chayefsky in 1983. Chayefsky's personal papers are held at the Wisconsin Historical Society and the New York Public Library for the Performing Arts, Billy Rose Theatre Division.
Filmography
The True Glory (1945) (uncredited)
As Young as You Feel (1951) (story)
Marty (1955)
The Catered Affair (1956)
The Bachelor Party (1957)
The Goddess (1958)
Middle of the Night (1959)
The Americanization of Emily (1964)
Paint Your Wagon (1969) (adaptation)
The Hospital (1971)
Network (1976)
Altered States (1980) (as "Sidney Aaron")
Television and stage plays
Television (selection)
1950–1955 Danger
1951–1952 Manhunt
1951–1960 Goodyear Playhouse
1952–1954 Philco Television Playhouse
1952 Holiday Song
1952 The Reluctant Citizen
1953 Printer's Measure
1953 Marty
1953 The Big Deal
1953 The Bachelor Party
1953 The Sixth Year
1953 Catch My Boy On Sunday
1954 The Mother
1954 Middle of the Night
1955 The Catered Affair
1956 The Great American Hoax
Stage
No T.O. for Love (1945)
Middle of the Night (1956)
The Tenth Man (1959)
Gideon (1961)
The Passion of Josef D. (1964)
The Latent Heterosexual (originally titled The Accountant's Tale or The Case of the Latent Heterosexual) (1968)
Novels
Altered States: A Novel (1978)
Academy Awards
References
Bibliography
External links
The Angry Man WNYC: On The Media audio profile of Paddy Chayefsky, October 27, 2006
Paddy Chayefsky papers at the New York Public Library for the Performing Arts
Paddy Chayefsky Papers at the Wisconsin Center for Film and Theater Research.
Museum of Broadcast Communications: Paddy Chayefsky
Paddy Chayefsky, on Enciclopedia Britannica, Encyclopædia Britannica, Inc
Paddy Chayefsky, on The Encyclopedia of Science Fiction
Paddy Chayefsky, on Open Library, Internet Archive
Paddy Chayefsky, on Internet Speculative Fiction Database, Al von Ruff
Paddy Chayefsky, on MusicBrainz, MetaBrainz Foundation
1923 births
1981 deaths
United States Army personnel of World War II
American male screenwriters
Best Original Screenplay Academy Award winners
City College of New York alumni
DeWitt Clinton High School alumni
Fordham University alumni
Writers from the Bronx
Novelists from New York City
Screenwriters from New York City
Jewish American dramatists and playwrights
Burials at Kensico Cemetery
Jewish American military personnel
Jewish American screenwriters
American people of Ukrainian-Jewish descent
Best Screenplay Golden Globe winners
Best Screenplay BAFTA Award winners
Best Adapted Screenplay Academy Award winners
20th-century American dramatists and playwrights
American male dramatists and playwrights
20th-century American male writers
20th-century American screenwriters
Landmine victims
Military personnel from New York City
United States Army soldiers
20th-century American Jews
Writers Guild of America Award winners
American Zionists
American satirists | Paddy Chayefsky | [
"Chemistry"
] | 7,255 | [
"Explosion survivors",
"Explosions"
] |
350,672 | https://en.wikipedia.org/wiki/Logicism | In the philosophy of mathematics, logicism is a programme comprising one or more of the theses that – for some coherent meaning of 'logic' – mathematics is an extension of logic, some or all of mathematics is reducible to logic, or some or all of mathematics may be modelled in logic. Bertrand Russell and Alfred North Whitehead championed this programme, initiated by Gottlob Frege and subsequently developed by Richard Dedekind and Giuseppe Peano.
Overview
Dedekind's path to logicism had a turning point when he was able to construct a model satisfying the axioms characterizing the real numbers using certain sets of rational numbers. This and related ideas convinced him that arithmetic, algebra and analysis were reducible to the natural numbers plus a "logic" of classes. Furthermore by 1872 he had concluded that the naturals themselves were reducible to sets and mappings. It is likely that other logicists, most importantly Frege, were also guided by the new theories of the real numbers published in the year 1872.
The philosophical impetus behind Frege's logicist programme from the Grundlagen der Arithmetik onwards was in part his dissatisfaction with the epistemological and ontological commitments of then-extant accounts of the natural numbers, and his conviction that Kant's use of truths about the natural numbers as examples of synthetic a priori truth was incorrect.
This started a period of expansion for logicism, with Dedekind and Frege as its main exponents. However, this initial phase of the logicist programme was brought into crisis with the discovery of the classical paradoxes of set theory (Cantor's 1896, Zermelo and Russell's 1900–1901). Frege gave up on the project after Russell recognized and communicated his paradox identifying an inconsistency in Frege's system set out in the Grundgesetze der Arithmetik. Note that naive set theory also suffers from this difficulty.
On the other hand, Russell wrote The Principles of Mathematics in 1903 using the paradox and developments of Giuseppe Peano's school of geometry. Since he treated the subject of primitive notions in geometry and set theory, as well as the calculus of relations, this text is a watershed in the development of logicism. Evidence of the assertion of logicism was collected by Russell and Whitehead in their Principia Mathematica.
Today, the bulk of extant mathematics is believed to be derivable logically from a small number of extralogical axioms, such as the axioms of Zermelo–Fraenkel set theory (or its extension ZFC), from which no inconsistencies have as yet been derived. Thus, elements of the logicist programmes have proved viable, but in the process theories of classes, sets and mappings, and higher-order logics other than with Henkin semantics have come to be regarded as extralogical in nature, in part under the influence of Quine's later thought.
Kurt Gödel's incompleteness theorems show that no formal system from which the Peano axioms for the natural numbers may be derived – such as Russell's systems in PM – can decide all the well-formed sentences of that system. This result damaged David Hilbert's programme for foundations of mathematics whereby 'infinitary' theories – such as that of PM – were to be proved consistent from finitary theories, with the aim that those uneasy about 'infinitary methods' could be reassured that their use should provably not result in the derivation of a contradiction. Gödel's result suggests that in order to maintain a logicist position, while still retaining as much as possible of classical mathematics, one must accept some axiom of infinity as part of logic. On the face of it, this damages the logicist programme also, albeit only for those already doubtful concerning 'infinitary methods'. Nonetheless, positions deriving from both logicism and from Hilbertian finitism have continued to be propounded since the publication of Gödel's result.
One argument that programmes derived from logicism remain valid might be that the incompleteness theorems are 'proved with logic just like any other theorems'. However, that argument appears not to acknowledge the distinction between theorems of first-order logic and theorems of higher-order logic. The former can be proven using finitistic methods, while the latter – in general – cannot. Tarski's undefinability theorem shows that Gödel numbering can be used to prove syntactical constructs, but not semantic assertions. Therefore, the claim that logicism remains a valid programme may commit one to holding that a system of proof based on the existence and properties of the natural numbers is less convincing than one based on some particular formal system.
Logicism – especially through the influence of Frege on Russell and Wittgenstein and later Dummett – was a significant contributor to the development of analytic philosophy during the twentieth century.
Origin of the name 'logicism'
Ivor Grattan-Guinness states that the French word 'Logistique' was "introduced by Couturat and others at the 1904 International Congress of Philosophy, and was used by Russell and others from then on, in versions appropriate for various languages." (G-G 2000:501).
Apparently the first (and only) usage by Russell appeared in his 1919: "Russell referred several time [sic] to Frege, introducing him as one 'who first succeeded in "logicising" mathematics' (p. 7). Apart from the misrepresentation (which Russell partly rectified by explaining his own view of the role of arithmetic in mathematics), the passage is notable for the word which he put in quotation marks, but their presence suggests nervousness, and he never used the word again, so that 'logicism' did not emerge until the later 1920s" (G-G 2002:434).
About the same time as Rudolf Carnap (1929), but apparently independently, Fraenkel (1928) used the word: "Without comment he used the name 'logicism' to characterise the Whitehead/Russell position (in the title of the section on p. 244, explanation on p. 263)" (G-G 2002:269). Carnap used a slightly different word 'Logistik'; Behmann complained about its use in Carnap's manuscript so Carnap proposed the word 'Logizismus', but he finally stuck to his word-choice 'Logistik' (G-G 2002:501). Ultimately "the spread was mainly due to Carnap, from 1930 onwards." (G-G 2000:502).
Intent, or goal, of logicism
The overt intent of logicism is to derive all of mathematics from symbolic logic (Frege, Dedekind, Peano, Russell.) As contrasted with algebraic logic (Boolean logic) that employs arithmetic concepts, symbolic logic begins with a very reduced set of marks (non-arithmetic symbols), a few "logical" axioms that embody the "laws of thought", and rules of inference that dictate how the marks are to be assembled and manipulated – for instance substitution and modus ponens (i.e. from [1] A materially implies B and [2] A, one may derive B). Logicism also adopts from Frege's groundwork the reduction of natural language statements from "subject|predicate" into either propositional "atoms" or the "argument|function" of "generalization"—the notions "all", "some", "class" (collection, aggregate) and "relation".
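As a minimal illustration of the last point – purely symbolic manipulation of marks by rules such as substitution and modus ponens, with no arithmetic content – the following sketch is offered; it is not from the article, and the representation of formulas and the function names are invented for illustration only.

```python
# A minimal sketch (invented for illustration): formulas are nested tuples,
# and new theorems are derived only by modus ponens, with no arithmetic notions.

def implies(a, b):
    """Build the formula 'a materially implies b'."""
    return ("->", a, b)

def modus_ponens(theorems, conditional, antecedent):
    """From 'A -> B' and 'A', both already derived, derive 'B'."""
    assert conditional in theorems and antecedent in theorems
    op, a, b = conditional
    assert op == "->" and a == antecedent
    theorems.add(b)
    return b

# Example: from the 'axioms' {P, P -> Q} one may derive Q.
P, Q = ("P",), ("Q",)
derived = {P, implies(P, Q)}
print(modus_ponens(derived, implies(P, Q), P))   # ('Q',)
```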
In a logicist derivation of the natural numbers and their properties, no "intuition" of number should "sneak in" either as an axiom or by accident. The goal is to derive all of mathematics, starting with the counting numbers and then the real numbers, from some chosen "laws of thought" alone, without any tacit assumptions of "before" and "after" or "less" and "more" or to the point: "successor" and "predecessor". Gödel 1944 summarized Russell's logicistic "constructions", when compared to "constructions" in the foundational systems of Intuitionism and Formalism ("the Hilbert School") as follows: "Both of these schools base their constructions on a mathematical intuition whose avoidance is exactly one of the principal aims of Russell's constructivism" (Gödel 1944 in Collected Works 1990:119).
History
Gödel 1944 summarized the historical background from Leibniz's in Characteristica universalis, through Frege and Peano to Russell: "Frege was chiefly interested in the analysis of thought and used his calculus in the first place for deriving arithmetic from pure logic", whereas Peano "was more interested in its applications within mathematics". But "It was only [Russell's] Principia Mathematica that full use was made of the new method for actually deriving large parts of mathematics from a very few logical concepts and axioms. In addition, the young science was enriched by a new instrument, the abstract theory of relations" (p. 120-121).
Kleene 1952 states it this way: "Leibniz (1666) first conceived of logic as a science containing the ideas and principles underlying all other sciences. Dedekind (1888) and Frege (1884, 1893, 1903) were engaged in defining mathematical notions in terms of logical ones, and Peano (1889, 1894–1908) in expressing mathematical theorems in a logical symbolism" (p. 43); in the previous paragraph he includes Russell and Whitehead as exemplars of the "logicistic school", the other two "foundational" schools being the intuitionistic and the "formalistic or axiomatic school" (p. 43).
Frege 1879 describes his intent in the Preface to his 1879 Begriffsschrift: He started with a consideration of arithmetic: did it derive from "logic" or from "facts of experience"?
"I first had to ascertain how far one could proceed in arithmetic by means of inferences alone, with the sole support of those laws of thought that transcend all particulars. My initial step was to attempt to reduce the concept of ordering in a sequence to that of logical consequence, so as to proceed from there to the concept of number. To prevent anything intuitive from penetrating here unnoticed I had to bend every effort to keep the chain of inferences free of gaps . . . I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed" (Frege 1879 in van Heijenoort 1967:5).
Dedekind 1887 describes his intent in the 1887 Preface to the First Edition of his The Nature and Meaning of Numbers. He believed that the "foundations of the simplest science; viz., that part of logic which deals with the theory of numbers" had not been properly argued – "nothing capable of proof ought to be accepted without proof":
In speaking of arithmetic (algebra, analysis) as a part of logic I mean to imply that I consider the number-concept entirely independent of the notions of intuitions of space and time, that I consider it an immediate result from the laws of thought . . . numbers are free creations of the human mind . . . [and] only through the purely logical process of building up the science of numbers . . . are we prepared accurately to investigate our notions of space and time by bringing them into relation with this number-domain created in our mind" (Dedekind 1887 Dover republication 1963 :31).
Peano 1889 states his intent in his Preface to his 1889 Principles of Arithmetic:
Questions that pertain to the foundations of mathematics, although treated by many in recent times, still lack a satisfactory solution. The difficulty has its main source in the ambiguity of language. ¶ That is why it is of the utmost importance to examine attentively the very words we use. My goal has been to undertake this examination" (Peano 1889 in van Heijenoort 1967:85).
Russell 1903 describes his intent in the Preface to his 1903 Principles of Mathematics:
"THE present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all its propositions are deducible from a very small number of fundamental logical principles" (Preface 1903:vi).
"A few words as to the origin of the present work may serve to show the importance of the questions discussed. About six years ago, I began an investigation into the philosophy of Dynamics. . . . [From two questions – acceleration and absolute motion in a "relational theory of space"] I was led to a re-examination of the principles of Geometry, thence to the philosophy of continuity and infinity, and then, with a view to discovering the meaning of the word any, to Symbolic Logic" (Preface 1903:vi-vii).
Epistemology, ontology and logicism
The epistemologies of Dedekind and of Frege seem less well-defined than that of Russell, but both seem accepting as a priori the customary "laws of thought" concerning simple propositional statements (usually of belief); these laws would be sufficient in themselves if augmented with theory of classes and relations (e.g. x R y) between individuals x and y linked by the generalization R.
Dedekind's argument begins with "1. In what follows I understand by thing every object of our thought"; we humans use symbols to discuss these "things" of our minds; "A thing is completely determined by all that can be affirmed or thought concerning it" (p. 44). In a subsequent paragraph Dedekind discusses what a "system S is: it is an aggregate, a manifold, a totality of associated elements (things) a, b, c"; he asserts that "such a system S . . . as an object of our thought is likewise a thing (1); it is completely determined when with respect to every thing it is determined whether it is an element of S or not.*" (p. 45, italics added). The * indicates a footnote where he states that:
"Kronecker not long ago (Crelle's Journal, Vol. 99, pp. 334-336) has endeavored to impose certain limitations upon the free formation of concepts in mathematics which I do not believe to be justified" (p. 45).
Indeed he awaits Kronecker's "publishing his reasons for the necessity or merely the expediency of these limitations" (p. 45).
Kronecker, famous for his assertion that "God made the integers, all else is the work of man" had his foes, among them Hilbert. Hilbert called Kronecker a "dogmatist, to the extent that he accepts the integer with its essential properties as a dogma and does not look back" and equated his extreme constructivist stance with that of Brouwer's intuitionism, accusing both of "subjectivism": "It is part of the task of science to liberate us from arbitrariness, sentiment and habit and to protect us from the subjectivism that already made itself felt in Kronecker's views and, it seems to me, finds its culmination in intuitionism". Hilbert then states that "mathematics is a presuppositionless science. To found it I do not need God, as does Kronecker . . ." (p. 479).
Russell's realism served him as an antidote to British idealism, with portions borrowed from European rationalism and British empiricism. To begin with, "Russell was a realist about two key issues: universals and material objects" (Russell 1912:xi). For Russell, tables are real things that exist independent of Russell the observer. Rationalism would contribute the notion of a priori knowledge, while empiricism would contribute the role of experiential knowledge (induction from experience). Russell would credit Kant with the idea of "a priori" knowledge, but he offers an objection to Kant he deems "fatal": "The facts [of the world] must always conform to logic and arithmetic. To say that logic and arithmetic are contributed by us does not account for this" (1912:87); Russell concludes that the a priori knowledge that we possess is "about things, and not merely about thoughts" (1912:89). And in this Russell's epistemology seems different from that of Dedekind's belief that "numbers are free creations of the human mind" (Dedekind 1887:31)
But his epistemology about the innate (he prefers the word a priori when applied to logical principles, cf. 1912:74) is intricate. He would strongly, unambiguously express support for the Platonic "universals" (cf. 1912:91-118) and he would conclude that truth and falsity are "out there"; minds create beliefs and what makes a belief true is a fact, "and this fact does not (except in exceptional cases) involve the mind of the person who has the belief" (1912:130).
Where did Russell derive these epistemic notions? He tells us in the Preface to his 1903 Principles of Mathematics. Note that he asserts that the belief: "Emily is a rabbit" is non-existent, and yet the truth of this non-existent proposition is independent of any knowing mind; if Emily really is a rabbit, the fact of this truth exists whether or not Russell or any other mind is alive or dead, and the relation of Emily to rabbit-hood is "ultimate":
"On fundamental questions of philosophy, my position, in all its chief features, is derived from Mr G. E. Moore. I have accepted from him the nature of propositions (except such as happen to assert existence) and their independence of any knowing mind; also the pluralism which regards the world, both that of existents and that of entities, as composed of an infinite number of mutually independent entities, with relations which are ultimate, and not reducible to adjectives of their terms or of the whole which these compose. . . . The doctrines just mentioned are, in my opinion, quite indispensable to any even tolerably satisfactory philosophy of mathematics, as I hope the following pages will show. . . . Formally, my premisses are simply assumed; but the fact that they allow mathematics to be true, which most current philosophies do not, is surely a powerful argument in their favour." (Preface 1903:viii)
In 1902 Russell discovered a "vicious circle" (Russell's paradox) in Frege's Grundgesetze der Arithmetik, derived from Frege's Basic Law V and he was determined not to repeat it in his 1903 Principles of Mathematics. In two Appendices added at the last minute he devoted 28 pages to both a detailed analysis of Frege's theory contrasted against his own, and a fix for the paradox. But he was not optimistic about the outcome:
"In the case of classes, I must confess, I have failed to perceive any concept fulfilling the conditions requisite for the notion of class. And the contradiction discussed in Chapter x. proves that something is amiss, but what this is I have hitherto failed to discover. (Preface to Russell 1903:vi)"
Gödel in his 1944 would disagree with the young Russell of 1903 ("[my premisses] allow mathematics to be true") but would probably agree with Russell's statement quoted above ("something is amiss"); Russell's theory had failed to arrive at a satisfactory foundation of mathematics: the result was "essentially negative; i.e. the classes and concepts introduced this way do not have all the properties required for the use of mathematics" (Gödel 1944:132).
How did Russell arrive in this situation? Gödel observes that Russell is a surprising "realist" with a twist: he cites Russell's 1919:169 "Logic is concerned with the real world just as truly as zoology" (Gödel 1944:120). But he observes that "when he started on a concrete problem, the objects to be analyzed (e.g. the classes or propositions) soon for the most part turned into "logical fictions" . . . [meaning] only that we have no direct perception of them." (Gödel 1944:120)
In an observation pertinent to Russell's brand of logicism, Perry remarks that Russell went through three phases of realism: extreme, moderate and constructive (Perry 1997:xxv). In 1903 he was in his extreme phase; by 1905 he would be in his moderate phase. In a few years he would "dispense with physical or material objects as basic bits of the furniture of the world. He would attempt to construct them out of sense-data" in his next book Our knowledge of the External World [1914]" (Perry 1997:xxvi).
These constructions in what Gödel 1944 would call "nominalistic constructivism ... which might better be called fictionalism" derived from Russell's "more radical idea, the no-class theory" (p. 125):
"according to which classes or concepts never exist as real objects, and sentences containing these terms are meaningful only as they can be interpreted as ... a manner of speaking about other things" (p. 125).
See more in the Criticism sections, below.
An example of a logicist construction of the natural numbers: Russell's construction in the Principia
The logicism of Frege and Dedekind is similar to that of Russell, but with differences in the particulars (see Criticisms, below). Overall, the logicist derivations of the natural numbers are different from derivations from, for example, Zermelo's axioms for set theory ('Z'). Whereas, in derivations from Z, one definition of "number" uses an axiom of that system – the axiom of pairing – that leads to the definition of "ordered pair" – no overt number axiom exists in the various logicist axiom systems allowing the derivation of the natural numbers. Note that the axioms needed to derive the definition of a number may differ between axiom systems for set theory in any case. For instance, in ZF and ZFC, the axiom of pairing, and hence ultimately the notion of an ordered pair is derivable from the Axiom of Infinity and the Axiom of Replacement and is required in the definition of the von Neumann numerals (but not the Zermelo numerals), whereas in NFU the Frege numerals may be derived in an analogous way to their derivation in the Grundgesetze.
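For readers unfamiliar with the numerals mentioned above, the following is a brief sketch in conventional set notation – not drawn from Principia or the Grundgesetze themselves – of how the three constructions differ.

```latex
% Von Neumann numerals: each number is the set of all smaller numbers.
0 = \varnothing, \qquad n+1 = n \cup \{n\}
\;\Rightarrow\; 2 = \{\varnothing, \{\varnothing\}\}

% Zermelo numerals: each number is the singleton of its predecessor.
0 = \varnothing, \qquad n+1 = \{n\}
\;\Rightarrow\; 2 = \{\{\varnothing\}\}

% Frege--Russell numerals: a number is the class of all classes
% equinumerous with a given class.
2 = \{\, x \mid \exists a\, \exists b\, (a \neq b \wedge x = \{a, b\}) \,\}
```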
The Principia, like its forerunner the Grundgesetze, begins its construction of the numbers from primitive propositions such as "class", "propositional function", and in particular, relations of "similarity" ("equinumerosity": placing the elements of collections in one-to-one correspondence) and "ordering" (using "the successor of" relation to order the collections of the equinumerous classes)". The logicistic derivation equates the cardinal numbers constructed this way to the natural numbers, and these numbers end up all of the same "type" – as classes of classes – whereas in some set theoretical constructions – for instance the von Neumann and the Zermelo numerals – each number has its predecessor as a subset. Kleene observes the following. (Kleene's assumptions (1) and (2) state that 0 has property P and n+1 has property P whenever n has property P.)
"The viewpoint here is very different from that of [Kronecker]'s maxim that 'God made the integers' plus Peano's axioms of number and mathematical induction], where we presupposed an intuitive conception of the natural number sequence, and elicited from it the principle that, whenever a particular property P of natural numbers is given such that (1) and (2), then any given natural number must have the property P." (Kleene 1952:44).
The importance to the logicist programme of the construction of the natural numbers derives from Russell's contention "That all traditional pure mathematics can be derived from the natural numbers is a fairly recent discovery, though it had long been suspected" (1919:4). One derivation of the real numbers derives from the theory of Dedekind cuts on the rational numbers, rational numbers in turn being derived from the naturals. While an example of how this is done is useful, it relies first on the derivation of the natural numbers. So, if philosophical difficulties appear in a logicist derivation of the natural numbers, these problems should be sufficient to stop the program until these are resolved (see Criticisms, below).
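As a rough illustration of the chain of reductions described here – not part of Russell's own presentation – a real number may be identified with a Dedekind cut, a downward-closed set of rationals with no greatest element; for instance:

```latex
% The real number \sqrt{2} identified with the lower half of its cut in \mathbb{Q}.
\sqrt{2} \;=\; \{\, q \in \mathbb{Q} \mid q < 0 \ \vee\ q^2 < 2 \,\}
```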
One attempt to construct the natural numbers is summarized by Bernays 1930–1931. But rather than use Bernays' précis, which is incomplete in some details, an attempt at a paraphrase of Russell's construction, incorporating some finite illustrations, is set out below:
Preliminaries
For Russell, collections (classes) are aggregates of "things" specified by proper names, that come about as the result of propositions (assertions of fact about a thing or things). Russell analysed this general notion. He begins with "terms" in sentences, which he analysed as follows:
For Russell, "terms" are either "things" or "concepts": "Whatever may be an object of thought, or may occur in any true or false proposition, or can be counted as one, I call a term. This, then, is the widest word in the philosophical vocabulary. I shall use as synonymous with it the words, unit, individual, and entity. The first two emphasize the fact that every term is one, while the third is derived from the fact that every term has being, i.e. is in some sense. A man, a moment, a number, a class, a relation, a chimaera, or anything else that can be mentioned, is sure to be a term; and to deny that such and such a thing is a term must always be false" (Russell 1903:43)
"Among terms, it is possible to distinguish two kinds, which I shall call respectively things and concepts; the former are the terms indicated by proper names, the latter those indicated by all other words . . . Among concepts, again, two kinds at least must be distinguished, namely those indicated by adjectives and those indicated by verbs" (1903:44).
"The former kind will often be called predicates or class-concepts; the latter are always or almost always relations." (1903:44)
"I shall speak of the terms of a proposition as those terms, however numerous, which occur in a proposition and may be regarded as subjects about which the proposition is. It is a characteristic of the terms of a proposition that anyone of them may be replaced by any other entity without our ceasing to have a proposition. Thus we shall say that "Socrates is human" is a proposition having only one term; of the remaining component of the proposition, one is the verb, the other is a predicate.. . . Predicates, then, are concepts, other than verbs, which occur in propositions having only one term or subject." (1903:45)
Suppose one were to point to an object and say: "This object in front of me named 'Emily' is a woman." This is a proposition, an assertion of the speaker's belief, which is to be tested against the "facts" of the outer world: "Minds do not create truth or falsehood. They create beliefs . . . what makes a belief true is a fact, and this fact does not (except in exceptional cases) in any way involve the mind of the person who has the belief" (1912:130). If by investigation of the utterance and correspondence with "fact", Russell discovers that Emily is a rabbit, then his utterance is considered "false"; if Emily is a female human (a female "featherless biped" as Russell likes to call humans, following Diogenes Laërtius's anecdote about Plato), then his utterance is considered "true".
"The class, as opposed to the class-concept, is the sum or conjunction of all the terms which have the given predicate" (1903 p. 55). Classes can be specified by extension (listing their members) or by intension, i.e. by a "propositional function" such as "x is a u" or "x is v". But "if we take extension pure, our class is defined by enumeration of its terms, and this method will not allow us to deal, as Symbolic Logic does, with infinite classes. Thus our classes must in general be regarded as objects denoted by concepts, and to this extent the point of view of intension is essential." (1909 p. 66)
"The characteristic of a class concept, as distinguished from terms in general, is that "x is a u" is a propositional function when, and only when, u is a class-concept." (1903:56)
"71. Class may be defined either extensionally or intensionally. That is to say, we may define the kind of object which is a class, or the kind of concept which denotes a class: this is the precise meaning of the opposition of extension and intension in this connection. But although the general notion can be defined in this two-fold manner, particular classes, except when they happen to be finite, can only be defined intensionally, i.e. as the objects denoted by such and such concepts. . . logically; the extensional definition appears to be equally applicable to infinite classes, but practically, if we were to attempt it, Death would cut short our laudable endeavour before it had attained its goal."(1903:69)
The definition of the natural numbers
In the Principia, the natural numbers derive from all propositions that can be asserted about any collection of entities. Russell makes this clear in the second sentence (italicized in the original) of the passage quoted below.
"In the first place, numbers themselves form an infinite collection, and cannot therefore be defined by enumeration. In the second place, the collections having a given number of terms themselves presumably form an infinite collection: it is to be presumed, for example, that there are an infinite collection of trios in the world, for if this were not the case the total number of things in the world would be finite, which, though possible, seems unlikely. In the third place, we wish to define "number" in such a way that infinite numbers may be possible; thus we must be able to speak of the number of terms in an infinite collection, and such a collection must be defined by intension, i.e. by a property common to all its members and peculiar to them." (1919:13)
To illustrate, consider the following finite example: Suppose there are 12 families on a street. Some have children, some do not. To discuss the names of the children in these households requires 12 propositions asserting "childname is the name of a child in family Fn" applied to this collection of households on the particular street of families with names F1, F2, . . . F12. Each of the 12 propositions regards whether or not the "argument" childname applies to a child in a particular household. The children's names (childname) can be thought of as the x in a propositional function f(x), where the function is "name of a child in the family with name Fn".
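The finite example can be made concrete with a small sketch; the family and child names below are invented for illustration, with the propositional function modelled as a predicate and the resulting "class" as the collection of arguments that satisfy it.

```python
# Hypothetical data, for the illustration only.
families = {
    "F1": ["Annie", "Barbie", "Charles"],
    "F2": [],
    "F3": ["Dora"],
}

def is_child_of(family_name):
    """The propositional function 'childname is the name of a child in family family_name'."""
    def f(childname):
        return childname in families.get(family_name, [])
    return f

# The 'class' determined by the propositional function for family F1:
candidates = ["Annie", "Barbie", "Charles", "Dora", "Edward"]
f1_class = [x for x in candidates if is_child_of("F1")(x)]
print(f1_class)  # ['Annie', 'Barbie', 'Charles']
```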
Whereas the preceding example is finite over the finite propositional function "childnames of the children in family Fn" on the finite street of a finite number of families, Russell apparently intended the following to extend to all propositional functions extending over an infinite domain so as to allow the creation of all the numbers.
Kleene considers that Russell has set out an impredicative definition that he will have to resolve, or risk deriving something like the Russell paradox. "Here instead we presuppose the totality of all properties of cardinal numbers, as existing in logic, prior to the definition of the natural number sequence" (Kleene 1952:44). The problem will appear, even in the finite example presented here, when Russell deals with the unit class (cf. Russell 1903:517).
The question arises what precisely a "class" is or should be. For Dedekind and Frege, a class is a distinct entity in its own right, a 'unity' that can be identified with all those entities x that satisfy some propositional function F. (This symbolism appears in Russell, attributed there to Frege: "The essence of a function is what is left when the x is taken away, i.e in the above instance, 2( )3 + ( ). The argument x does not belong to the function, but the two together make a whole (ib. p. 6 [i.e. Frege's 1891 Function und Begriff]" (Russell 1903:505).) For example, a particular "unity" could be given a name; suppose a family Fα has the children with the names Annie, Barbie and Charles:
{ a, b, c }Fα
This notion of collection or class as object, when used without restriction, results in Russell's paradox; see more below about impredicative definitions. Russell's solution was to define the notion of a class to be only those elements that satisfy the proposition, his argument being that, indeed, the arguments x do not belong to the propositional function aka "class" created by the function. The class itself is not to be regarded as a unitary object in its own right, it exists only as a kind of useful fiction: "We have avoided the decision as to whether a class of things has in any sense an existence as one object. A decision of this question in either way is indifferent to our logic" (First edition of Principia Mathematica 1927:24).
Russell continues to hold this opinion in his 1919; observe the words "symbolic fictions":
"When we have decided that classes cannot be things of the same sort as their members, that they cannot be just heaps or aggregates, and also that they cannot be identified with propositional functions, it becomes very difficult to see what they can be, if they are to be more than symbolic fictions. And if we can find any way of dealing with them as symbolic fictions, we increase the logical security of our position, since we avoid the need of assuming that there are classes without being compelled to make the opposite assumption that there are no classes. We merely abstain from both assumptions. . . . But when we refuse to assert that there are classes, we must not be supposed to be asserting dogmatically that there are none. We are merely agnostic as regards them . . .." (1919:184)
And in the second edition of PM (1927) Russell holds that "functions occur only through their values, . . . all functions of functions are extensional, . . . [and] consequently there is no reason to distinguish between functions and classes . . . Thus classes, as distinct from functions, lose even that shadowy being which they retain in *20" (p. xxxix). In other words, classes as a separate notion have vanished altogether.

Step 2: Collect "similar" classes into 'bundles': These above collections can be put into a "binary relation" (comparing for) similarity by "equinumerosity", symbolized here by ≈, i.e. one-one correspondence of the elements, and thereby create Russellian classes of classes or what Russell called "bundles". "We can suppose all couples in one bundle, all trios in another, and so on. In this way we obtain various bundles of collections, each bundle consisting of all the collections that have a certain number of terms. Each bundle is a class whose members are collections, i.e. classes; thus each is a class of classes" (Russell 1919:14).

Step 3: Define the null class: Notice that a certain class of classes is special because its classes contain no elements, i.e. no elements satisfy the predicates whose assertion defined this particular class/collection.
The resulting entity may be called "the null class" or "the empty class". Russell symbolized the null/empty class with Λ. So what exactly is the Russellian null class? In PM Russell says that "A class is said to exist when it has at least one member . . . the class which has no members is called the "null class" . . . "α is the null-class" is equivalent to "α does not exist". The question naturally arises whether the null class itself 'exists'? Difficulties related to this question occur in Russell's 1903 work. After he discovered the paradox in Frege's Grundgesetze he added Appendix A to his 1903 where, through the analysis of the nature of the null and unit classes, he discovered the need for a "doctrine of types"; see more about the unit class, the problem of impredicative definitions and Russell's "vicious circle principle" below.

Step 4: Assign a "numeral" to each bundle: For purposes of abbreviation and identification, to each bundle assign a unique symbol (aka a "numeral"). These symbols are arbitrary.

Step 5: Define "0": Following Frege, Russell picked the empty or null class of classes as the appropriate class to fill this role, this being the class of classes having no members. This null class of classes may be labelled "0".

Step 6: Define the notion of "successor": Russell defined a new characteristic "hereditary" (cf. Frege's 'ancestral'), a property of certain classes with the ability to "inherit" a characteristic from another class (which may be a class of classes), i.e. "A property is said to be "hereditary" in the natural-number series if, whenever it belongs to a number n, it also belongs to n+1, the successor of n" (1903:21). He asserts that "the natural numbers are the posterity – the "children", the inheritors of the "successor" – of 0 with respect to the relation "the immediate predecessor of" (which is the converse of "successor") (1919:23).
Note Russell has used a few words here without definition, in particular "number series", "number n", and "successor". He will define these in due course. Observe in particular that Russell does not use the unit class of classes "1" to construct the successor. The reason is that, in Russell's detailed analysis, if a unit class becomes an entity in its own right, then it too can be an element in its own proposition; this causes the proposition to become "impredicative" and result in a "vicious circle". Rather, he states: "We saw in Chapter II that a cardinal number is to be defined as a class of classes, and in Chapter III that the number 1 is to be defined as the class of all unit classes, of all that have just one member, as we should say but for the vicious circle. Of course, when the number 1 is defined as the class of all unit classes, unit classes must be defined so as not to assume that we know what is meant by one (1919:181).
For his definition of successor, Russell will use for his "unit" a single entity or "term" as follows:
"It remains to define "successor". Given any number n let α be a class which has n members, and let x be a term which is not a member of α. Then the class consisting of α with x added on will have +1 members. Thus we have the following definition:
the successor of the number of terms in the class α is the number of terms in the class consisting of α together with x where x is not any term belonging to the class." (1919:23)
Russell's definition requires a new "term" which is "added into" the collections inside the bundles.

Step 7: Construct the successor of the null class.

Step 8: For every class of equinumerous classes, create its successor.

Step 9: Order the numbers: The process of creating a successor requires the relation " . . . is the successor of . . .", which may be denoted "S", between the various "numerals". "We must now consider the serial character of the natural numbers in the order 0, 1, 2, 3, . . . We ordinarily think of the numbers as in this order, and it is an essential part of the work of analysing our data to seek a definition of "order" or "series" in logical terms. . . . The order lies, not in the class of terms, but in a relation among the members of the class, in respect of which some appear as earlier and some as later." (1919:31)
Russell applies to the notion of "ordering relation" three criteria: First, he defines the notion of asymmetry i.e. given the relation such as S (" . . . is the successor of . . . ") between two terms x and y: x S y ≠ y S x. Second, he defines the notion of transitivity for three numerals x, y and z: if x S y and y S z then x S z. Third, he defines the notion of connected: "Given any two terms of the class which is to be ordered, there must be one which precedes and the other which follows. . . . A relation is connected when, given any two different terms of its field [both domain and converse domain of a relation e.g. husbands versus wives in the relation of married] the relation holds between the first and the second or between the second and the first (not excluding the possibility that both may happen, though both cannot happen if the relation is asymmetrical).(1919:32)
He concludes: ". . . [natural] number m is said to be less than another number n when n possesses every hereditary property possessed by the successor of m. It is easy to see, and not difficult to prove, that the relation "less than", so defined, is asymmetrical, transitive, and connected, and has the [natural] numbers for its field [i.e. both domain and converse domain are the numbers]." (1919:35)
Criticism

The presumption of an 'extralogical' notion of iteration: Kleene notes that "the logicistic thesis can be questioned finally on the ground that logic already presupposes mathematical ideas in its formulation. In the Intuitionistic view, an essential mathematical kernel is contained in the idea of iteration" (Kleene 1952:46).
Bernays 1930–1931 observes that this notion "two things" already presupposes something, even without the claim of existence of two things, and also without reference to a predicate, which applies to the two things; it means, simply, "a thing and one more thing. . . . With respect to this simple definition, the Number concept turns out to be an elementary structural concept . . . the claim of the logicists that mathematics is purely logical knowledge turns out to be blurred and misleading upon closer observation of theoretical logic. . . . [one can extend the definition of "logical"] however, through this definition what is epistemologically essential is concealed, and what is peculiar to mathematics is overlooked" (in Mancosu 1998:243).
Hilbert 1931:266-7, like Bernays, considers there is "something extra-logical" in mathematics: "Besides experience and thought, there is yet a third source of knowledge. Even if today we can no longer agree with Kant in the details, nevertheless the most general and fundamental idea of the Kantian epistemology retains its significance: to ascertain the intuitive a priori mode of thought, and thereby to investigate the condition of the possibility of all knowledge. In my opinion this is essentially what happens in my investigations of the principles of mathematics. The a priori is here nothing more and nothing less than a fundamental mode of thought, which I also call the finite mode of thought: something is already given to us in advance in our faculty of representation: certain extra-logical concrete objects that exist intuitively as an immediate experience before all thought. If logical inference is to be certain, then these objects must be completely surveyable in all their parts, and their presentation, their differences, their succeeding one another or their being arrayed next to one another is immediately and intuitively given to us, along with the objects, as something that neither can be reduced to anything else, nor needs such a reduction." (Hilbert 1931 in Mancosu 1998: 266, 267).
In brief, according to Hilbert and Bernays, the notion of "sequence" or "successor" is an a priori notion that lies outside symbolic logic.
Hilbert dismissed logicism as a "false path": "Some tried to define the numbers purely logically; others simply took the usual number-theoretic modes of inference to be self-evident. On both paths they encountered obstacles that proved to be insuperable." (Hilbert 1931 in Mancosu 1998:267). The incompleteness theorems arguably constitute a similar obstacle for Hilbertian finitism.
Mancosu states that Brouwer concluded that: "the classical laws or principles of logic are part of [the] perceived regularity [in the symbolic representation]; they are derived from the post factum record of mathematical constructions . . . Theoretical logic . . . [is] an empirical science and an application of mathematics" (Brouwer quoted by Mancosu 1998:9).
With respect to the technical aspects of Russellian logicism as it appears in Principia Mathematica (either edition), Gödel in 1944 was disappointed:
"It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is?] so greatly lacking in formal precision in the foundations (contained in *1–*21 of Principia) that it presents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism" (cf. footnote 1 in Gödel 1944 Collected Works 1990:120).
In particular he pointed out that "The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens" (Gödel 1944:120).
With respect to the philosophy that might underlie these foundations, Gödel considered Russell's "no-class theory" as embodying a "nominalistic kind of constructivism . . . which might better be called fictionalism" (cf. footnote 1 in Gödel 1944:119) – to be faulty. See more in "Gödel's criticism and suggestions" below.
A complicated theory of relations continued to strangle Russell's explanatory 1919 Introduction to Mathematical Philosophy and his 1927 second edition of Principia. Set theory, meanwhile, had moved on with its reduction of the relation to the ordered pair of sets. Grattan-Guinness observes that in the second edition of Principia Russell ignored this reduction, which had been achieved by his own student Norbert Wiener (1914). Perhaps because of "residual annoyance", Russell did not react at all. By 1914 Hausdorff would provide another, equivalent definition, and Kuratowski in 1921 would provide the one in use today.
The unit class, impredicativity, and the vicious circle principle
Suppose a librarian wants to index her collection into a single book (call it I for "index"). Her index will list all the books and their locations in the library. As it turns out, there are only three books, and these have titles Ά, β, and Γ. To form her index I, she goes out and buys a book of 200 blank pages and labels it "I". Now she has four books: I, Ά, β, and Γ. Her task is not difficult. When completed, the contents of her index I are 4 pages, each with a unique title and unique location (each entry abbreviated as Title.L_Title):
I = { I.L_I, Ά.L_Ά, β.L_β, Γ.L_Γ }.
This sort of definition of I was deemed by Poincaré to be "impredicative". He seems to have considered that only predicative definitions can be allowed in mathematics:
"a definition is 'predicative' and logically admissible only if it excludes all objects that are dependent upon the notion defined, that is, that can in any way be determined by it".
By Poincaré's definition, the librarian's index book is "impredicative" because the definition of I is dependent upon the definition of the totality I, Ά, β, and Γ. As noted below, some commentators insist that impredicativity in commonsense versions is harmless, but as the examples show below there are versions which are not harmless. In response to these difficulties, Russell advocated a strong prohibition, his "vicious circle principle":
"No totality can contain members definable only in terms of this totality, or members involving or presupposing this totality" (vicious circle principle)" (Gödel 1944 appearing in Collected Works Vol. II 1990:125).
To illustrate what a pernicious instance of impredicativity might be, consider the consequence of inputting argument α into the function f with output ω = 1−α. This may be seen as the equivalent 'algebraic-logic' expression to the 'symbolic-logic' expression ω = NOT-α, with truth values 1 and 0. When input α = 0, output ω = 1; when input α = 1, output ω = 0.
To make the function "impredicative", identify the input with the output, yielding α = 1−α
Within the algebra of, say, rational numbers the equation is satisfied when α = 0.5. But within, for instance, a Boolean algebra, where only "truth values" 0 and 1 are permitted, then the equality cannot be satisfied.
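The contrast can be checked mechanically; the following short Python sketch (purely illustrative, and no part of Russell's apparatus) confirms that α = 1 − α has a solution among the rationals but none in the two-valued Boolean domain:

```python
# Illustrative check: alpha = 1 - alpha is satisfiable over the rationals
# (alpha = 1/2) but has no solution in the Boolean domain {0, 1}.
from fractions import Fraction

alpha = Fraction(1, 2)
print(alpha == 1 - alpha)                 # True: 1/2 works over the rationals

print([a for a in (0, 1) if a == 1 - a])  # []: no Boolean value satisfies it
```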
Some of the difficulties in the logicist programme may derive from the α = NOT-α paradox Russell discovered in Frege's 1879 Begriffsschrift that Frege had allowed a function to derive its input "functional" (value of its variable) not only from an object (thing, term), but also from the function's own output.
As described above, both Frege's and Russell's constructions of the natural numbers begin with the formation of equinumerous classes of classes ("bundles"), followed by an assignment of a unique "numeral" to each bundle, and then by the placing of the bundles into an order via a relation S that is asymmetric: x S y ≠ y S x. But Frege, unlike Russell, allowed the class of unit classes to be identified as a unit itself:
But, since the class with numeral 1 is a single object or unit in its own right, it too must be included in the class of unit classes. This inclusion results in an infinite regress of increasing type and increasing content.
Russell avoided this problem by declaring a class to be more or less a "fiction". By this he meant that a class could designate only those elements that satisfied its propositional function and nothing else. As a "fiction" a class cannot be considered to be a thing: an entity, a "term", a singularity, a "unit". It is an assemblage but is not in Russell's view "worthy of thing-hood":
"The class as many . . . is unobjectionable, but is many and not one. We may, if we choose, represent this by a single symbol: thus x ε u will mean "x is one of the us." This must not be taken as a relation of two terms, x and u, because u as the numerical conjunction is not a single term . . . Thus a class of classes will be many many's; its constituents will each be only many, and cannot therefore in any sense, one might suppose, be single constituents.[etc]" (1903:516).
This supposes that "at the bottom" every single solitary "term" can be listed (specified by a "predicative" predicate) for any class, for any class of classes, for any class of classes of classes, etc., but it introduces a new problem—a hierarchy of "types" of classes.
A solution to impredicativity: a hierarchy of types
Gödel 1944:131 observes that "Russell adduces two reasons against the extensional view of classes, namely the existence of (1) the null class, which cannot very well be a collection, and (2) the unit classes, which would have to be identical with their single elements." He suggests that Russell should have regarded these as fictitious, but not derive the further conclusion that all classes (such as the class-of-classes that define the numbers 2, 3, etc) are fictions.
But Russell did not do this. After a detailed analysis in Appendix A: The Logical and Arithmetical Doctrines of Frege in his 1903, Russell concludes:
"The logical doctrine which is thus forced upon us is this: The subject of a proposition may be not a single term, but essentially many terms; this is the case with all propositions asserting numbers other than 0 and 1" (1903:516).
In the following notice the wording "the class as many"—a class is an aggregate of those terms (things) that satisfy the propositional function, but a class is not a thing-in-itself:
"Thus the final conclusion is, that the correct theory of classes is even more extensional than that of Chapter VI; that the class as many is the only object always defined by a propositional function, and that this is adequate for formal purposes" (1903:518).
It is as if a rancher were to round up all his livestock (sheep, cows and horses) into three fictitious corrals (one for the sheep, one for the cows, and one for the horses) that are located in his fictitious ranch. What actually exist are the sheep, the cows and the horses (the extensions), but not the fictitious "concepts" corrals and ranch.
When Russell proclaimed all classes are useful fictions he solved the problem of the "unit" class, but the overall problem did not go away; rather, it arrived in a new form: "It will now be necessary to distinguish (1) terms, (2) classes, (3) classes of classes, and so on ad infinitum; we shall have to hold that no member of one set is a member of any other set, and that x ε u requires that x should be of a set of a degree lower by one than the set to which u belongs. Thus x ε x will become a meaningless proposition; and in this way the contradiction is avoided" (1903:517).
This is Russell's "doctrine of types". To guarantee that impredicative expressions such as x ε x can be treated in his logic, Russell proposed, as a kind of working hypothesis, that all such impredicative definitions have predicative definitions. This supposition requires the notions of function-"orders" and argument-"types". First, functions (and their classes-as-extensions, i.e. "matrices") are to be classified by their "order", where functions of individuals are of order 1, functions of functions (classes of classes) are of order 2, and so forth. Next, he defines the "type" of a function's arguments (the function's "inputs") to be their "range of significance", i.e. what are those inputs α (individuals? classes? classes-of-classes? etc.) that, when plugged into f(x), yield a meaningful output ω. Note that this means that a "type" can be of mixed order, as the following example shows:
"Joe DiMaggio and the Yankees won the 1947 World Series".
This sentence can be decomposed into two clauses: "x won the 1947 World Series" + "y won the 1947 World Series". The first sentence takes for x an individual "Joe DiMaggio" as its input, the other takes for y an aggregate "Yankees" as its input. Thus the composite-sentence has a (mixed) type of 2, mixed as to order (1 and 2).
By "predicative", Russell meant that the function must be of an order higher than the "type" of its variable(s). Thus a function (of order 2) that creates a class of classes can only entertain arguments for its variable(s) that are classes (type 1) and individuals (type 0), as these are lower types. Type 3 can only entertain types 2, 1 or 0, and so forth. But these types can be mixed (for example, for this sentence to be (sort of) true: "z won the 1947 World Series" could accept the individual (type 0) "Joe DiMaggio" and/or the names of his other teammates, and it could accept the class (type 1) of individual players "The Yankees".
The axiom of reducibility is the hypothesis that any function of any order can be reduced to (or replaced by) an equivalent predicative function of the appropriate order. A careful reading of the first edition indicates that an nth order predicative function need not be expressed "all the way down" as a huge "matrix" or aggregate of individual atomic propositions. "For in practice only the relative types of variables are relevant; thus the lowest type occurring in a given context may be called that of individuals" (p. 161). But the axiom of reducibility proposes that in theory a reduction "all the way down" is possible.
By the 2nd edition of PM of 1927, though, Russell had given up on the axiom of reducibility and concluded he would indeed force any order of function "all the way down" to its elementary propositions, linked together with logical operators:
"All propositions, of whatever order, are derived from a matrix composed of elementary propositions combined by means of the stroke" (PM 1927 Appendix A, p. 385)
(The "stroke" is Sheffer's stroke – adopted for the 2nd edition of PM – a single two argument logical function from which all other logical functions may be defined.)
The net result, though, was a collapse of his theory. Russell arrived at this disheartening conclusion: that "the theory of ordinals and cardinals survives . . . but irrationals, and real numbers generally, can no longer be adequately dealt with. . . . Perhaps some further axiom, less objectionable than the axiom of reducibility, might give these results, but we have not succeeded in finding such an axiom" (PM 1927:xiv).
Gödel 1944 agrees that Russell's logicist project was stymied; he seems to disagree that even the integers survived:
"[In the second edition] The axiom of reducibility is dropped, and it is stated explicitly that all primitive predicates belong to the lowest type and that the only purpose of variables (and evidently also of constants) of higher orders and types is to make it possible to assert more complicated truth-functions of atomic propositions" (Gödel 1944 in Collected Works:134).
Gödel asserts, however, that this procedure seems to presuppose arithmetic in some form or other (p. 134). He deduces that "one obtains integers of different orders" (p. 134-135); the proof in Russell 1927 PM Appendix B that "the integers of any order higher than 5 are the same as those of order 5" is "not conclusive" and "the question whether (or to what extent) the theory of integers can be obtained on the basis of the ramified hierarchy [classes plus types] must be considered as unsolved at the present time". Gödel concluded that it wouldn't matter anyway because propositional functions of order n (any n) must be described by finite combinations of symbols (all quotes and content derived from page 135).
Gödel's criticism and suggestions
Gödel, in his 1944 work, identifies the place where he considers Russell's logicism to fail and offers suggestions to rectify the problems. He submits the "vicious circle principle" to re-examination, splitting it into three parts: "definable only in terms of", "involving" and "presupposing". It is the first part that "makes impredicative definitions impossible and thereby destroys the derivation of mathematics from logic, effected by Dedekind and Frege, and a good deal of mathematics itself". Since, he argues, mathematics seems to rely on its inherent impredicativities (e.g. "real numbers defined by reference to all real numbers"), he concludes that what he has offered is "a proof that the vicious circle principle is false [rather] than that classical mathematics is false" (all quotes Gödel 1944:127).

Russell's no-class theory is the root of the problem: Gödel believes that impredicativity is not "absurd", as it appears throughout mathematics. Russell's problem derives from his "constructivistic (or nominalistic) standpoint toward the objects of logic and mathematics, in particular toward propositions, classes, and notions . . . a notion being a symbol . . . so that a separate object denoted by the symbol appears as a mere fiction" (p. 128).
Indeed, Russell's "no class" theory, Gödel concludes:
"is of great interest as one of the few examples, carried out in detail, of the tendency to eliminate assumptions about the existence of objects outside the "data" and to replace them by constructions on the basis of these data33. The "data" are to understand in a relative sense here; i.e. in our case as logic without the assumption of the existence of classes and concepts]. The result has been in this case essentially negative; i.e. the classes and concepts introduced in this way do not have all the properties required from their use in mathematics. . . . All this is only a verification of the view defended above that logic and mathematics (just as physics) are built up on axioms with a real content which cannot be explained away" (p. 132)
He concludes his essay with the following suggestions and observations:
"One should take a more conservative course, such as would consist in trying to make the meaning of terms "class" and "concept" clearer, and to set up a consistent theory of classes and concepts as objectively existing entities. This is the course which the actual development of mathematical logic has been taking and which Russell himself has been forced to enter upon in the more constructive parts of his work. Major among the attempts in this direction . . . are the simple theory of types . . . and axiomatic set theory, both of which have been successful at least to this extent, that they permit the derivation of modern mathematics and at the same time avoid all known paradoxes . . . ¶ It seems reasonable to suspect that it is this incomplete understanding of the foundations which is responsible for the fact that mathematical logic has up to now remained so far behind the high expectations of Peano and others . . .." (p. 140)
Neo-logicism
Neo-logicism describes a range of views considered by their proponents to be successors of the original logicist program. More narrowly, neo-logicism may be seen as the attempt to salvage some or all elements of Frege's programme through the use of a modified version of Frege's system in the Grundgesetze (which may be seen as a kind of second-order logic).
For instance, one might replace Basic Law V (analogous to the axiom schema of unrestricted comprehension in naive set theory) with some 'safer' axiom so as to prevent the derivation of the known paradoxes. The most cited candidate to replace BLV is Hume's principle, the contextual definition of '#' given by '#F = #G if and only if there is a bijection between F and G'. This kind of neo-logicism is often referred to as neo-Fregeanism. Proponents of neo-Fregeanism include Crispin Wright and Bob Hale, sometimes also called the Scottish School or abstractionist Platonism, who espouse a form of epistemic foundationalism.
Other major proponents of neo-logicism include Bernard Linsky and Edward N. Zalta, sometimes called the Stanford–Edmonton School, abstract structuralism or modal neo-logicism, who espouse a form of axiomatic metaphysics. Modal neo-logicism derives the Peano axioms within second-order modal object theory (Edward N. Zalta, "Neo-Logicism? An Ontological Reduction of Mathematics to Metaphysics", Erkenntnis, 53(1–2) (2000), 219–265).
Another quasi-neo-logicist approach has been suggested by M. Randall Holmes. In this kind of amendment to the Grundgesetze, BLV remains intact, save for a restriction to stratifiable formulae in the manner of Quine's NF and related systems. Essentially all of the Grundgesetze then 'goes through'. The resulting system has the same consistency strength as Jensen's NFU + Rosser's Axiom of Counting.
See also
Aristotelian realist philosophy of mathematics
References
Bibliography
Richard Dedekind, 1858, 1878, Essays on the Theory of Numbers, English translation published by Open Court Publishing Company 1901, Dover publication 1963, Mineola, NY, . Contains two essays—I. "Continuity and Irrational Numbers" with original Preface, II. "The Nature and Meaning of Numbers" with two Prefaces (1887, 1893).
Howard Eves, 1990, Foundations and Fundamental Concepts of Mathematics Third Edition, Dover Publications, Inc, Mineola, NY, .
I. Grattan-Guinness, 2000, The Search for Mathematical Roots, 1870–1940: Logics, Set Theories and The Foundations of Mathematics from Cantor Through Russell to Gödel, Princeton University Press, Princeton NJ, .
Jean van Heijenoort, 1967, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, 3rd printing 1976, Harvard University Press, Cambridge, MA, . Includes Frege's 1879 Begriffsschrift with commentary by van Heijenoort, Russell's 1908 Mathematical logic as based on the theory of types with commentary by Willard V. Quine, Zermelo's 1908 A new proof of the possibility of a well-ordering with commentary by van Heijenoort, letters to Frege from Russell and from Russell to Frege, etc.
Stephen C. Kleene, 1971, 1952, Introduction To Metamathematics, 1991 10th impression, North-Holland Publishing Company, Amsterdam, NY.
Mario Livio, 2011 "Why Math Works: Is math invented or discovered? A leading astrophysicist suggests that the answer to the millennia-old question is both", Scientific American (ISSN 0036-8733), Volume 305, Number 2, August 2011, Scientific American division of Nature America, Inc, New York, NY.
Bertrand Russell, 1903, The Principles of Mathematics Vol. I, Cambridge: at the University Press, Cambridge, UK.
Paolo Mancosu, 1998, From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s, Oxford University Press, New York, NY, .
Bertrand Russell, 1912, The Problems of Philosophy (with Introduction by John Perry 1997), Oxford University Press, New York, NY, .
Bertrand Russell, 1919, Introduction to Mathematical Philosophy, Barnes & Noble, Inc, New York, NY, . This is a non-mathematical companion to Principia Mathematica.
Amit Hagar 2005 Introduction to Bertrand Russell, 1919, Introduction to Mathematical Philosophy, Barnes & Noble, Inc, New York, NY, .
Alfred North Whitehead and Bertrand Russell, 1927 2nd edition (first edition 1910–1913), Principia Mathematica to *56, 1962 edition, Cambridge at the University Press, Cambridge UK, no ISBN. Second edition, abridged to *56, with Introduction to the Second Edition pages xiii–xlvi, and new Appendix A (*8 Propositions Containing Apparent Variables) to replace *9 Theory of Apparent Variables, and Appendix C Truth-Functions and Others.
External links
"Logicism" at the Encyclopaedia of Mathematics
Abstract object theory
Philosophy of mathematics
Theories of deduction | Logicism | [
"Mathematics"
] | 14,637 | [
"Theories of deduction",
"nan"
] |
350,680 | https://en.wikipedia.org/wiki/Multiplayer%20video%20game | A multiplayer video game is a video game in which more than one person can play in the same game environment at the same time, either locally on the same computing system (couch co-op), on different computing systems via a local area network, or via a wide area network, most commonly the Internet (e.g. World of Warcraft, Call of Duty, DayZ). Multiplayer games usually require players to share a single game system or use networking technology to play together over a greater distance; players may compete against one or more human contestants, work cooperatively with a human partner to achieve a common goal, or supervise other players' activity. Due to multiplayer games allowing players to interact with other individuals, they provide an element of social communication absent from single-player games.
The history of multiplayer video games extends over several decades, tracing back to the emergence of electronic gaming in the mid-20th century. One of the earliest instances of multiplayer interaction came with the development of Spacewar! in 1962 for the DEC PDP-1 computer by Steve Russell and colleagues at MIT. During the late 1970s and early 1980s, multiplayer gaming gained momentum within the arcade scene with classics like Pong and Tank. The transition to home gaming consoles in the 1980s further popularized multiplayer gaming. Titles like Super Mario Bros. for the NES and Golden Axe for the Sega Genesis introduced cooperative and competitive gameplay. Additionally, LAN gaming emerged in the late 1980s, enabling players to connect multiple computers for multiplayer gameplay, popularized by titles like Doom and Warcraft: Orcs & Humans. Players can also play together in the same room using split screen.
Non-networked
Some of the earliest video games were two-player games, including early sports games (such as 1958's Tennis For Two and 1972's Pong), early shooter games such as Spacewar! (1962) and early racing video games such as Astro Race (1973). The first examples of multiplayer real-time games were developed on the PLATO system about 1973. Multi-user games developed on this system included 1973's Empire and 1974's Spasim; the latter was an early first-person shooter. Other early video games included turn-based multiplayer modes, popular in tabletop arcade machines. In such games, play is alternated at some point (often after the loss of a life). All players' scores are often displayed onscreen so players can see their relative standing. Danielle Bunten Berry created some of the first multiplayer video games, such as her debut, Wheeler Dealers (1978) and her most notable work, M.U.L.E. (1983).
Gauntlet (1985) and Quartet (1986) introduced co-operative 4-player gaming to the arcades. The games had broader consoles to allow for four sets of controls.
Networked
Ken Wasserman and Tim Stryker identified three factors which make networked computer games appealing:
Multiple humans competing with each other instead of a computer
Incomplete information resulting in suspense and risk-taking
Real-time play requiring quick reaction
John G. Kemeny wrote in 1972 that software running on the Dartmouth Time-Sharing System (DTSS) had recently gained the ability to support multiple simultaneous users, and that games were the first use of the functionality. DTSS's popular American football game, he said, now supported head-to-head play by two humans.
The first large-scale serial sessions using a single computer were STAR (based on Star Trek), OCEAN (a battle using ships, submarines and helicopters, with players divided between two combating cities) and 1975's CAVE (based on Dungeons & Dragons), created by Christopher Caldwell (with artwork and suggestions by Roger Long and assembly coding by Robert Kenney) on the University of New Hampshire's DECsystem-1090. The university's computer system had hundreds of terminals, connected (via serial lines) through cluster PDP-11s for student, teacher, and staff access. The games had a program running on each terminal (for each player), sharing a segment of shared memory (known as the "high segment" in the OS TOPS-10). The games became popular, and the university often banned them because of their RAM use. STAR was based on 1974's single-user, turn-oriented BASIC program STAR, written by Michael O'Shaughnessy at UNH.
Wasserman and Stryker in 1980 described in BYTE how to network two Commodore PET computers with a cable. Their article includes a type-in, two-player Hangman, and describes the authors' more-sophisticated Flash Attack. SuperSet Software's Snipes (1981) uses networking technology that would become Novell NetWare. Digital Equipment Corporation distributed another multi-user version of Star Trek, Decwar, without real-time screen updating; it was widely distributed to universities with DECsystem-10s. In 1981 Cliff Zimmerman wrote an homage to Star Trek in MACRO-10 for DECsystem-10s and -20s using VT100-series graphics. "VTtrek" pitted four Federation players against four Klingons in a three-dimensional universe.
Flight Simulator II, released in 1986 for the Atari ST and Commodore Amiga, allowed two players to connect via modem or serial cable and fly together in a shared environment.
MIDI Maze, an early first-person shooter released in 1987 for the Atari ST, featured network multiplay through a MIDI interface before Ethernet and Internet play became common. It is considered the first multiplayer 3D shooter on a mainstream system, and the first network multiplayer action-game (with support for up to 16 players). There followed ports to a number of platforms (including Game Boy and Super NES) in 1991 under the title Faceball 2000, making it one of the first handheld, multi-platform first-person shooters and an early console example of the genre.
Networked multiplayer gaming modes are known as "netplay". The first popular video-game title with a local area network (LAN) version, 1991's Spectre for the Apple Macintosh, featured AppleTalk support for up to eight players. Spectre's popularity was partially attributed to the display of a player's name above their cybertank. There followed 1993's Doom, whose first network version allowed four simultaneous players.
Play-by-email multiplayer games use email to communicate between computers. Other turn-based variations not requiring players to be online simultaneously are Play-by-post gaming and Play-by-Internet. Some online games are "massively multiplayer", with many players participating simultaneously. Two massively multiplayer genres are MMORPG (such as World of Warcraft or EverQuest) and MMORTS.
First-person shooters have become popular multiplayer games; Battlefield 1942 and Counter-Strike have little (or no) single-player gameplay. Developer and gaming site OMGPOP's library included multiplayer Flash games for the casual player until it was shut down in 2013. Some networked multiplayer games, including MUDs and massively multiplayer online games (MMOs) such as RuneScape, omit a single-player mode. The largest MMO in 2008 was World of Warcraft, with over 10 million registered players worldwide. World of Warcraft would hit its peak at 12 million players two years later in 2010, and in 2023 earned the Guinness World Record for best selling MMO video game. This category of games requires multiple machines to connect via the Internet; before the Internet became popular, MUDs were played on time-sharing computer systems and games like Doom were played on a LAN.
Beginning with the Sega NetLink in 1996, Game.com in 1997 and Dreamcast in 2000, game consoles support network gaming over LANs and the Internet. Many mobile phones and handheld consoles also offer wireless gaming with Bluetooth (or similar) technology. By the early 2010s online gaming had become a mainstay of console platforms such as Xbox and PlayStation. During the 2010s, as the number of Internet users increased, two new video game genres rapidly gained worldwide popularity: multiplayer online battle arena and battle royale game, both designed exclusively for multiplayer gameplay over the Internet.
Over time the number of people playing video games has increased. In 2020, the majority of households in the United States have an occupant that plays video games, and 65% of gamers play multiplayer games with others either online or in person.
Local multiplayer
For some games, "multiplayer" implies that players are playing on the same gaming system or network. This applies to all arcade games, but also to a number of console, and personal computer games too. Local multiplayer games played on a singular system sometimes use split screen, so each player has an individual view of the action (important in first-person shooters and in racing video games) Nearly all multiplayer modes on beat 'em up games have a single-system option, but racing games have started to abandon split-screen in favor of a multiple-system, multiplayer mode. Turn-based games such as chess also lend themselves to single system single screen and even to a single controller.
Multiple types of games allow players to use local multiplayer. The term "local co-op" or "couch co-op" refers to local multiplayer games played in a cooperative manner on the same system; these may use split-screen or some other display method. Another option is hot-seat games. Hot-seat games are typically turn-based games with only one controller or input set, such as a single keyboard/mouse on the system. Players rotate using the input device to perform their turn such that each is taking a turn on the "hot-seat".
Not all local multiplayer games are played on the same console or personal computer. Some local multiplayer games are played over a LAN. This involves multiple devices using one local network to play together. Networked multiplayer games on LAN eliminate common problems faced when playing online such as lag and anonymity. Games played on a LAN network are the focus of LAN parties. While local co-op and LAN parties still take place, there has been a decrease in both due to an increasing number of players and games utilizing online multiplayer gaming.
Online multiplayer
Online multiplayer games connect players over a wide area network (a common example being the Internet). Unlike local multiplayer, players playing online multiplayer are not restricted to the same local network. This allows players to interact with others from a much greater distance.
Playing multiplayer online offers the benefits of distance, but it also comes with its own unique challenges. Gamers refer to latency using the term "ping", after a utility which measures round-trip network communication delays (by the use of ICMP packets). A player on a DSL connection with a 50-ms ping can react faster than a modem user with a 350-ms average latency. Other problems include packet loss and choke, which can prevent a player from "registering" their actions with a server. In first-person shooters, this problem appears when bullets hit the enemy without damage. The player's connection is not the only factor; some servers are slower than others.
A server that is geographically closer to the player will often provide a lower ping, since data packets take less time to reach a nearer destination. How far the device is from the local internet connection (router) can also affect latency.
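As a rough illustration of measuring round-trip latency from a client, the following Python sketch times TCP connection setup to a server; the hostname and port are placeholders, and a TCP handshake is used instead of ICMP so it can run without special privileges.

```python
# Rough latency probe: time TCP connection setup to a server several times.
# "game.example.org" and port 443 are placeholders, not a real game service.
import socket
import time

def measure_ping(host: str, port: int, samples: int = 5) -> float:
    """Return the average connection setup time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass                      # handshake completed; close immediately
        total += time.perf_counter() - start
    return 1000.0 * total / samples

print(f"average ping: {measure_ping('game.example.org', 443):.1f} ms")
```

A TCP handshake takes roughly one round trip, so the printed figure is only an approximation of the "ping" a game client would report.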
Asymmetrical gameplay
Asymmetrical multiplayer is a type of gameplay in which players can have significantly different roles or abilities from each other, enough to provide a significantly different experience of the game. In games with light asymmetry, the players share some of the same basic mechanics (such as movement and death), yet have different roles in the game; this is a common feature of the multiplayer online battle arena (MOBA) genre such as League of Legends and Dota 2, and in hero shooters such as Overwatch and Apex Legends. A first-person shooter that adopts the asymmetrical multiplayer system is Tom Clancy's Rainbow Six Siege. Giving players their own special operator changes every player's experience. This puts an emphasis on players improvising their own game plan given the abilities their character has. In games with stronger elements of asymmetry, one player/team may have one gameplay experience (or be in softly asymmetric roles) while the other player or team play in a drastically different way, with different mechanics, a different type of objective, or both. Examples of games with strong asymmetry include Dead by Daylight, Evolve, and Left 4 Dead.
Asynchronous multiplayer
Asynchronous multiplayer is a form of multiplayer gameplay where players do not have to be playing at the same time. This form of multiplayer game has its origins in play-by-mail games, where players would send their moves through postal mail to a game master, who then would compile and send out results for the next turn. Play-by-mail games transitioned to electronic form as play-by-email games. Similar games were developed for bulletin board systems, such as Trade Wars, where the turn structure may not be as rigorous, allowing players to take actions at any time in a persistent space alongside all other players, a concept known as sporadic play.
These types of asynchronous multiplayer games waned with the widespread availability of the Internet which allowed players to play against each other simultaneously, but remains an option in many strategy-related games, such as the Civilization series. Coordination of turns are subsequently managed by one computer or a centralized server. Further, many mobile games are based on sporadic play and use social interactions with other players, lacking direct player versus player game modes but allowing players to influence other players' games, coordinated through central game servers, another facet of asynchronous play.
Online cheating
Online cheating (in gaming) usually refers to modifying the game experience to give one player an advantage over others, such as using an "aimbot" (a program which automatically locks the player's crosshairs onto a target) in shooting games. This is also known as "hacking" or "glitching" ("glitching" refers to using a glitch, or a mistake in the code of a game, whereas "hacking" is manipulating the code of a game). Cheating in video games is often done via a third-party program that modifies the game's code at runtime to give one or more players an advantage. In other situations, it is frequently done by changing the game's files to change the game's mechanics.
See also
Game server
LAN gaming center
Massively multiplayer online game
Massively multiplayer online role-playing game
Matchmaking (video games)
Online game
Spawn installation
References
Video game terminology | Multiplayer video game | [
"Technology"
] | 3,004 | [
"Computing terminology",
"Video game terminology"
] |
350,705 | https://en.wikipedia.org/wiki/Layer%202%20Tunneling%20Protocol | In computer networking, Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. It uses encryption ('hiding') only for its own control messages (using an optional pre-shared secret), and does not provide any encryption or confidentiality of content by itself. Rather, it provides a tunnel for Layer 2 (which may be encrypted), and the tunnel itself may be passed over a Layer 3 encryption protocol such as IPsec.
History
Published in August 1999 as proposed standard RFC 2661, L2TP has its origins primarily in two older tunneling protocols for point-to-point communication: Cisco's Layer 2 Forwarding Protocol (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP). A new version of this protocol, L2TPv3, appeared as proposed standard RFC 3931 in 2005. L2TPv3 provides additional security features, improved encapsulation, and the ability to carry data links other than simply Point-to-Point Protocol (PPP) over an IP network (for example: Frame Relay, Ethernet, ATM, etc.).
Description
The entire L2TP packet, including payload and L2TP header, is sent within a User Datagram Protocol (UDP) datagram. A virtue of transmission over UDP (rather than TCP) is that it avoids the TCP meltdown problem. It is common to carry PPP sessions within an L2TP tunnel. L2TP does not provide confidentiality or strong authentication by itself. IPsec is often used to secure L2TP packets by providing confidentiality, authentication and integrity. The combination of these two protocols is generally known as L2TP/IPsec (discussed below).
The two endpoints of an L2TP tunnel are called the L2TP access concentrator (LAC) and the L2TP network server (LNS). The LNS waits for new tunnels. Once a tunnel is established, the network traffic between the peers is bidirectional. To be useful for networking, higher-level protocols are then run through the L2TP tunnel. To facilitate this, an L2TP session is established within the tunnel for each higher-level protocol such as PPP. Either the LAC or LNS may initiate sessions. The traffic for each session is isolated by L2TP, so it is possible to set up multiple virtual networks across a single tunnel.
The packets exchanged within an L2TP tunnel are categorized as either control packets or data packets. L2TP provides reliability features for the control packets, but no reliability for data packets. Reliability, if desired, must be provided by the nested protocols running within each session of the L2TP tunnel.
L2TP allows the creation of a virtual private dialup network (VPDN) to connect a remote client to its corporate network by using a shared infrastructure, which could be the Internet or a service provider's network.
Tunneling models
An L2TP tunnel can extend across an entire PPP session or only across one segment of a two-segment session. This can be represented by four different tunneling models, namely:
voluntary tunnel
compulsory tunnel — incoming call
compulsory tunnel — remote dial
L2TP multihop connection
L2TP packet structure
An L2TP packet consists of a header (flags and version, an optional length field, tunnel ID, session ID, optional Ns/Nr sequence numbers, and optional offset size and padding) followed by the payload data.
Field meanings (a short parsing sketch follows this list):
Flags and version: control flags indicating data/control packet and presence of the length, sequence, and offset fields.
Length (optional): total length of the message in bytes, present only when the length flag is set.
Tunnel ID: indicates the identifier for the control connection.
Session ID: indicates the identifier for a session within a tunnel.
Ns (optional): sequence number for this data or control message, beginning at zero and incrementing by one (modulo 2^16) for each message sent. Present only when the sequence flag is set.
Nr (optional): sequence number for the expected message to be received. Nr is set to the Ns of the last in-order message received plus one (modulo 2^16). In data messages, Nr is reserved and, if present (as indicated by the S bit), MUST be ignored upon receipt.
Offset Size (optional): specifies where payload data is located past the L2TP header. If the offset field is present, the L2TP header ends after the last byte of the offset padding. This field exists if the offset flag is set.
Offset Pad (optional): variable length, as specified by the offset size. Contents of this field are undefined.
Payload data: variable length (maximum payload size = maximum size of the UDP packet − size of the L2TP header)
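To make the field layout concrete, here is a minimal Python sketch that decodes the fixed part of an L2TP (version 2) header; the flag-bit positions follow RFC 2661, while the function name and the returned dictionary layout are invented for the example.

```python
# Minimal sketch: decode the fixed part of an L2TPv2 header (RFC 2661 layout).
# Optional fields (Length, Ns/Nr, Offset) are only present when the matching
# flag bits are set in the first 16 bits.
import struct

def parse_l2tp_header(data: bytes) -> dict:
    (flags_ver,) = struct.unpack_from("!H", data, 0)
    hdr = {
        "is_control": bool(flags_ver & 0x8000),   # T bit
        "has_length": bool(flags_ver & 0x4000),   # L bit
        "has_seq":    bool(flags_ver & 0x0800),   # S bit
        "has_offset": bool(flags_ver & 0x0200),   # O bit
        "version":    flags_ver & 0x000F,
    }
    pos = 2
    if hdr["has_length"]:
        (hdr["length"],) = struct.unpack_from("!H", data, pos); pos += 2
    hdr["tunnel_id"], hdr["session_id"] = struct.unpack_from("!HH", data, pos); pos += 4
    if hdr["has_seq"]:
        hdr["ns"], hdr["nr"] = struct.unpack_from("!HH", data, pos); pos += 4
    if hdr["has_offset"]:
        (offset_size,) = struct.unpack_from("!H", data, pos); pos += 2
        pos += offset_size                         # skip the offset padding
    hdr["payload"] = data[pos:]
    return hdr
```

A real implementation would also validate the version field and honor the priority (P) bit; the sketch only illustrates how the optional fields hinge on the flag bits.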
L2TP packet exchange
At the time of setup of L2TP connection, many control packets are exchanged between server and client to establish tunnel and session for each direction. One peer requests the other peer to assign a specific tunnel and session id through these control packets. Then using this tunnel and session id, data packets are exchanged with the compressed PPP frames as payload.
A number of L2TP control messages are exchanged between the LAC and LNS for handshaking before a tunnel and session are established in the voluntary tunneling method.
L2TP/IPsec
Because of the lack of confidentiality inherent in the L2TP, it is often implemented along with IPsec. This is referred to as L2TP/IPsec, and is standardized in IETF RFC 3193. The process of setting up an L2TP/IPsec VPN is as follows:
Negotiation of IPsec security association (SA), typically through Internet key exchange (IKE). This is carried out over UDP port 500, and commonly uses either a shared password (so-called "pre-shared keys"), public keys, or X.509 certificates on both ends, although other keying methods exist.
Establishment of Encapsulating Security Payload (ESP) communication in transport mode. The IP protocol number for ESP is 50 (compare TCP's 6 and UDP's 17). At this point, a secure channel has been established, but no tunneling is taking place.
Negotiation and establishment of L2TP tunnel between the SA endpoints. The actual negotiation of parameters takes place over the SA's secure channel, within the IPsec encryption. L2TP uses UDP port 1701.
When the process is complete, L2TP packets between the endpoints are encapsulated by IPsec. Since the L2TP packet itself is wrapped and hidden within the IPsec packet, the original source and destination IP address is encrypted within the packet. Also, it is not necessary to open UDP port 1701 on firewalls between the endpoints, since the inner packets are not acted upon until after IPsec data has been decrypted and stripped, which only takes place at the endpoints.
A potential point of confusion in L2TP/IPsec is the use of the terms tunnel and secure channel. The term tunnel-mode refers to a channel which allows untouched packets of one network to be transported over another network. In the case of L2TP/PPP, it allows L2TP/PPP packets to be transported over IP. A secure channel refers to a connection within which the confidentiality of all data is guaranteed. In L2TP/IPsec, first IPsec provides a secure channel, then L2TP provides a tunnel. IPsec also specifies a tunnel protocol: this is not used when a L2TP tunnel is used.
Windows implementation
Windows has had native support (configurable in control panel) for L2TP since Windows 2000. Windows Vista added 2 alternative tools, an MMC snap-in called "Windows Firewall with Advanced Security" (WFwAS) and the "netsh advfirewall" command-line tool. One limitation with both of the WFwAS and netsh commands is that servers must be specified by IP address. Windows 10 added the "Add-VpnConnection" and "Set-VpnConnectionIPsecConfiguration" PowerShell commands. A registry key must be created on the client and server if the server is behind a NAT-T device.
L2TP in ISPs' networks
L2TP is often used by ISPs when internet service over for example ADSL or cable is being resold. From the end user, packets travel over a wholesale network service provider's network to a server called a Broadband Remote Access Server (BRAS), a protocol converter and router combined. On legacy networks the path from end user customer premises' equipment to the BRAS may be over an ATM network.
From there on, over an IP network, an L2TP tunnel runs from the BRAS (acting as LAC) to an LNS which is an edge router at the boundary of the ultimate destination ISP's IP network.
RFC references
Cisco Layer Two Forwarding (Protocol) "L2F" (a predecessor to L2TP)
Point-to-Point Tunneling Protocol (PPTP)
Layer Two Tunneling Protocol "L2TP"
Implementation of L2TP Compulsory Tunneling via RADIUS
Secure Remote Access with L2TP
Layer Two Tunneling Protocol (L2TP) over Frame Relay
L2TP Disconnect Cause Information
Securing L2TP using IPsec
Layer Two Tunneling Protocol (L2TP): ATM access network
Layer Two Tunneling Protocol (L2TP) Differentiated Services
Layer Two Tunneling Protocol (L2TP) Over ATM Adaptation Layer 5 (AAL5)
Layer Two Tunneling Protocol "L2TP" Management Information Base
Layer Two Tunneling Protocol Extensions for PPP Link Control Protocol Negotiation
Layer Two Tunneling Protocol (L2TP) Internet Assigned Numbers: Internet Assigned Numbers Authority (IANA) Considerations Update
Signaling of Modem-On-Hold status in Layer 2 Tunneling Protocol (L2TP)
Layer 2 Tunneling Protocol (L2TP) Active Discovery Relay for PPP over Ethernet (PPPoE)
Layer Two Tunneling Protocol - Version 3 (L2TPv3)
Extensions to Support Efficient Carrying of Multicast Traffic in Layer-2 Tunneling Protocol (L2TP)
Fail Over Extensions for Layer 2 Tunneling Protocol (L2TP) "failover"
See also
IPsec
Layer 2 Forwarding Protocol
Point-to-Point Tunneling Protocol
Point-to-Point Protocol
Virtual Extensible LAN
References
External links
Implementations
Cisco: Cisco L2TP documentation, also read Technology brief from Cisco
Open source and Linux: xl2tpd, Linux RP-L2TP, OpenL2TP, l2tpns, l2tpd (inactive), Linux L2TP/IPsec server, FreeBSD multi-link PPP daemon, OpenBSD npppd(8), ACCEL-PPP - PPTP/L2TP/PPPoE server for Linux
Microsoft: built-in client included with Windows 2000 and higher; Microsoft L2TP/IPsec VPN Client for Windows 98/Windows Me/Windows NT 4.0
Apple: built-in client included with Mac OS X 10.3 and higher.
VPDN on Cisco.com
Other
IANA assigned numbers for L2TP
L2TP Extensions Working Group (l2tpext) - (where future standardization work is being coordinated)
Using Linux as an L2TP/IPsec VPN client
L2TP/IPSec with OpenBSD and npppd
Comparison of L2TP, PPTP and OpenVPN
Internet protocols
Internet Standards
Tunneling protocols
Virtual private networks | Layer 2 Tunneling Protocol | [
"Engineering"
] | 2,424 | [
"Computer networks engineering",
"Tunneling protocols"
] |
350,721 | https://en.wikipedia.org/wiki/Autofahrer-Rundfunk-Informationssystem | Autofahrer-Rundfunk-Informationssystem (ARI, German for: Automotive-Driver's-Broadcasting-Information) was a system for indicating the presence of traffic information in FM broadcasts used by the German ARD network of FM radio stations from 1974. Developed jointly by IRT and Blaupunkt, it indicated the presence of traffic announcements through manipulation of the 57kHz subcarrier of the station's FM signal.
ARI was rendered obsolete by the more modern Radio Data System and the ARD stopped broadcasting ARI signals on March 1, 2005.
Functionality description
SK signal
The SK signal is actually the 57 kHz subcarrier that is transmitted by the ARI-compliant FM station for this functionality. This frequency, like the RDS subcarrier frequency, is chosen because it is the third harmonic of the 19 kHz pilot tone used in the FM-stereo transmission standard; that is, 57 kHz is simply the 19 kHz pilot tone multiplied by three.
An ARI-equipped radio would illuminate an indicator lamp to show that this function was in force. Most such radios would use this function further to help users search for ARI broadcasts. In the Radio Data System environment, the TP signal is equivalent to this basic function.
The basic method implemented on an analog receiver would be a switch usually marked SDK or VF. Radios that used the "classic" mechanical push-button preset system would have one of these buttons set aside as the VF switch. If this switch was on, the radio would mute unless it was tuned into a station that transmitted this signal.
If the radio was a digitally-tuned receiver, this switch usually engaged an "ARI-seek" mode which had the radio seek for any ARI station if it was out of range of the currently-tuned ARI station.
DK signal
This function, which is superseded by the RDS TA function, was tied in with the broadcasting studio and would be triggered whenever the traffic-announcement jingle was played. A 125 Hz tone would be modulated on the 57 kHz ARI subcarrier tone while this was happening.
A radio that used a "DK" switch, often part of the "SDK" or "VF" switch, was placed into "traffic-priority" mode. It would pick up on this signal and come out of a muted state or cut over a tape or CD that was playing and play the announcement at a fixed volume level.
These sets allowed the driver to switch off such announcements if a particular announcement was irrelevant or ran on for too long, but this was not easily explained to people new to the system. It was also confusing because many cheaper implementations used a mechanical toggle switch to engage/disengage ARI mode, and it was hard to use this switch simply to reset the system.
BK signal
This function was based on one of six tones that was in this same subcarrier and was reserved for high-end car radios. These were referred to as A, B, C, D, E and F; and they worked as a crude way of machine-based geocoding for Germany's broadcast areas.
The set would indicate the current zone that it was in rather than using an "SK" indicator whenever it was on an ARI station. As well, the user could control ARI search behavior based on the current zone, a user-nominated zone such as the neighboring zone or any ARI station in any zone.
Attempts to deploy ARI in the U.S.
Blaupunkt attempted to roll ARI out in the US market beginning in 1982 by gaining support from selected FM broadcasters in the big US cities, but it did not catch on. Moreover, Blaupunkt was the only company to put ARI-equipped sets on the U.S. marketplace, as a way of differentiating its product from others. There was talk of encouraging other manufacturers to sell ARI-equipped car radios in the U.S., but nothing came of it, even though other manufacturers did roll out ARI-equipped radios in Germany.
Attempts to deploy ARI in Canada
ARI was introduced in Toronto, Canada, around the same time as the U.S. CHFI was the station designated for such broadcasts, and ads for new Blaupunkt car stereos announced it, but just like in the U.S., ARI did not seem to catch on.
Notes
Further reading
Circuitry improvement for traffic-priority in car radios.
Road transport in Germany
Broadcast engineering
Radio technology
German inventions
1974 introductions
Products and services discontinued in 2005
1974 establishments in West Germany
2005 disestablishments in Germany | Autofahrer-Rundfunk-Informationssystem | [
"Technology",
"Engineering"
] | 941 | [
"Information and communications technology",
"Broadcast engineering",
"Telecommunications engineering",
"Radio technology",
"Electronic engineering"
] |
350,829 | https://en.wikipedia.org/wiki/List%20of%20dynamical%20systems%20and%20differential%20equations%20topics | This is a list of dynamical system and differential equation topics, by Wikipedia page. See also list of partial differential equation topics, list of equations.
Dynamical systems, in general
Deterministic system (mathematics)
Linear system
Partial differential equation
Dynamical systems and chaos theory
Chaos theory
Chaos argument
Butterfly effect
0-1 test for chaos
Bifurcation diagram
Feigenbaum constant
Sharkovskii's theorem
Attractor
Strange nonchaotic attractor
Stability theory
Mechanical equilibrium
Astable
Monostable
Bistability
Metastability
Feedback
Negative feedback
Positive feedback
Homeostasis
Damping ratio
Dissipative system
Spontaneous symmetry breaking
Turbulence
Perturbation theory
Control theory
Non-linear control
Adaptive control
Hierarchical control
Intelligent control
Optimal control
Dynamic programming
Robust control
Stochastic control
System dynamics, system analysis
Takens' theorem
Exponential dichotomy
Liénard's theorem
Krylov–Bogolyubov theorem
Krylov-Bogoliubov averaging method
Abstract dynamical systems
Measure-preserving dynamical system
Ergodic theory
Mixing (mathematics)
Almost periodic function
Symbolic dynamics
Time scale calculus
Arithmetic dynamics
Sequential dynamical system
Graph dynamical system
Topological dynamical system
Dynamical systems, examples
List of chaotic maps
Logistic map
Lorenz attractor
Lorenz-96
Iterated function system
Tetration
Ackermann function
Horseshoe map
Hénon map
Arnold's cat map
Population dynamics
Complex dynamics
Fatou set
Julia set
Mandelbrot set
Difference equations
Recurrence relation
Matrix difference equation
Rational difference equation
Ordinary differential equations: general
Examples of differential equations
Autonomous system (mathematics)
Picard–Lindelöf theorem
Peano existence theorem
Carathéodory existence theorem
Numerical ordinary differential equations
Bendixson–Dulac theorem
Gradient conjecture
Recurrence plot
Limit cycle
Initial value problem
Clairaut's equation
Singular solution
Poincaré–Bendixson theorem
Riccati equations
Functional differential equation
Linear differential equations
Exponential growth
Malthusian catastrophe
Exponential response formula
Simple harmonic motion
Phasor (physics)
RLC circuit
Resonance
Impedance
Reactance
Musical tuning
Orbital resonance
Tidal resonance
Oscillator
Harmonic oscillator
Electronic oscillator
Floquet theory
Fundamental frequency
Oscillation (Vibration)
Fundamental matrix (linear differential equation)
Laplace transform applied to differential equations
Sturm–Liouville theory
Wronskian
Loewy decomposition
Mechanics
Pendulum
Inverted pendulum
Double pendulum
Foucault pendulum
Spherical pendulum
Kinematics
Equation of motion
Dynamics (mechanics)
Classical mechanics
Isolated physical system
Lagrangian mechanics
Hamiltonian mechanics
Routhian mechanics
Hamilton-Jacobi theory
Appell's equation of motion
Udwadia–Kalaba equation
Celestial mechanics
Orbit
Lagrange point
Kolmogorov-Arnold-Moser theorem
N-body problem, many-body problem
Ballistics
Functions defined via an ODE
Airy function
Bessel function
Legendre polynomials
Hypergeometric function
Rotating systems
Angular velocity
Angular momentum
Angular acceleration
Angular displacement
Rotational invariance
Rotational inertia
Torque
Rotational energy
Centripetal force
Centrifugal force
Centrifugal governor
Coriolis force
Axis of rotation
Flywheel
Flywheel energy storage
Momentum wheel
Spinning top
Gyroscope
Gyrocompass
Precession
Nutation
Swarms
Particle swarm optimization
Self-propelled particles
Swarm intelligence
Stochastic dynamic equations
Random walk
Autoregressive process
Unit root
Moving average process
Autoregressive–moving-average model
Autoregressive integrated moving average
Vector autoregressive model
Stochastic differential equation
Stochastic partial differential equation
Mathematics-related lists
Outlines of mathematics and logic
Outlines
Lists of topics | List of dynamical systems and differential equations topics | [
"Physics",
"Mathematics"
] | 714 | [
"Mechanics",
"nan",
"Dynamical systems"
] |
350,830 | https://en.wikipedia.org/wiki/Laplace%20transform%20applied%20to%20differential%20equations | In mathematics, the Laplace transform is a powerful integral transform used to switch a function from the time domain to the s-domain. The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions.
First consider the following property of the Laplace transform:
\mathcal{L}\{f'\} = s\,\mathcal{L}\{f\} - f(0)
One can prove by induction that
\mathcal{L}\{f^{(n)}\} = s^n\,\mathcal{L}\{f\} - \sum_{i=1}^{n} s^{n-i} f^{(i-1)}(0)
Now we consider the following differential equation:
\sum_{i=0}^{n} a_i f^{(i)}(t) = \phi(t)
with given initial conditions
f^{(i)}(0) = c_i, \quad 0 \le i < n
Using the linearity of the Laplace transform it is equivalent to rewrite the equation as
\sum_{i=0}^{n} a_i \mathcal{L}\{f^{(i)}(t)\} = \mathcal{L}\{\phi(t)\}
obtaining
\mathcal{L}\{f(t)\} \sum_{i=0}^{n} a_i s^i - \sum_{i=1}^{n} \sum_{j=1}^{i} a_i s^{i-j} f^{(j-1)}(0) = \mathcal{L}\{\phi(t)\}
Solving the equation for \mathcal{L}\{f(t)\} and substituting each f^{(i)}(0) with c_i, one obtains
\mathcal{L}\{f(t)\} = \frac{\mathcal{L}\{\phi(t)\} + \sum_{i=1}^{n} \sum_{j=1}^{i} a_i s^{i-j} c_{j-1}}{\sum_{i=0}^{n} a_i s^i}
The solution for f(t) is obtained by applying the inverse Laplace transform to \mathcal{L}\{f(t)\}.
Note that if the initial conditions are all zero, i.e.
c_i = 0 \quad \text{for all } 0 \le i < n,
then the formula simplifies to
\mathcal{L}\{f(t)\} = \frac{\mathcal{L}\{\phi(t)\}}{\sum_{i=0}^{n} a_i s^i}
An example
We want to solve
$$f''(t) + 4 f(t) = \sin(2t)$$
with initial conditions f(0) = 0 and f′(0) = 0.
We note that
$$\mathcal{L}\{\sin(2t)\} = \frac{2}{s^2 + 4},$$
and we get
$$s^2 \mathcal{L}\{f(t)\} - s f(0) - f'(0) + 4\,\mathcal{L}\{f(t)\} = \frac{2}{s^2 + 4}.$$
The equation is then equivalent to
$$\left(s^2 + 4\right)\mathcal{L}\{f(t)\} = \frac{2}{s^2 + 4}.$$
We deduce
$$\mathcal{L}\{f(t)\} = \frac{2}{\left(s^2 + 4\right)^2}.$$
Now we apply the Laplace inverse transform to get
$$f(t) = \frac{1}{8}\left(\sin(2t) - 2t\cos(2t)\right).$$
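The computation can be double-checked with a computer algebra system. The sketch below is illustrative only (it is not part of the original article) and assumes the example equation as reconstructed above; it uses SymPy's dsolve together with its Laplace-transform helpers.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.Function('f')

# The example ODE: f''(t) + 4 f(t) = sin(2t), with f(0) = 0 and f'(0) = 0.
ode = sp.Eq(f(t).diff(t, 2) + 4 * f(t), sp.sin(2 * t))
sol = sp.dsolve(ode, f(t), ics={f(0): 0, f(t).diff(t).subs(t, 0): 0})
print(sol)  # expected: f(t) = sin(2*t)/8 - t*cos(2*t)/4

# The transform route: F(s) = L{sin(2t)} / (s**2 + 4), then invert.
F = sp.laplace_transform(sp.sin(2 * t), t, s, noconds=True) / (s**2 + 4)
f_t = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f_t))  # same expression, possibly times Heaviside(t) for t >= 0
```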
Bibliography
A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002.
Integral transforms
Differential equations
Differential calculus
Ordinary differential equations
Laplace transforms | Laplace transform applied to differential equations | [
"Mathematics"
] | 242 | [
"Calculus",
"Mathematical objects",
"Differential equations",
"Equations",
"Differential calculus"
] |
350,878 | https://en.wikipedia.org/wiki/Ogive | An ogive ( ) is the roundly tapered end of a two- or three-dimensional object. Ogive curves and surfaces are used in engineering, architecture, woodworking, and ballistics.
Etymology
The French Orientalist Georges Séraphin Colin gives as the term's origin the Arabic al-ġubb 'cistern', pronounced al-ġibb in vernacular Iberian Arabic, through the Spanish aljibe or archaically algibe.
The earliest use of the word ogive is found in the 13th-century sketchbook of Villard de Honnecourt, from Picardy in northern France. The Oxford English Dictionary considers the French term's origin obscure; it might come from the Late Latin obviata, the feminine perfect passive participle of obviare, meaning the one who has met or encountered the other. However, Merriam-Webster's dictionary says it is from the "Middle English stone comprising an arch, from Middle French diagonal arch". According to Wiktionary, the French term comes "from Vulgar Latin augīvus, from Latin augēre, as the ogive goes on increasing, and the arch it forms increases the strength of the vault. In Old French we find the phrase arc ogif, itself from Latin arcus augivus. The word was also written as augive in the 17th century."
Types and use in applied physical science and engineering
In ballistics or aerodynamics, an ogive is a pointed, curved surface mainly used to form the approximately streamlined nose of a bullet or other projectile, reducing air resistance or the drag of air. The French word ogive can be translated as "nose cone" or "warhead".
The traditional or secant ogive is a surface of revolution of the same curve that forms a Gothic arch; that is, a circular arc, of greater radius than the diameter of the cylindrical section ("shank"), is drawn from the edge of the shank until it intercepts the axis.
If this arc is drawn so that it meets the shank at zero angle (that is, the distance of the centre of the arc from the axis, plus the radius of the shank, equals the radius of the arc), then it is called a tangent or spitzer ogive. This is a very common ogive for high velocity (supersonic) rifle bullets.
The sharpness of this ogive is expressed by the ratio of its radius to the diameter of the cylinder; a value of one half being a hemispherical dome, and larger values being progressively more pointed. Values of 4 to 10 are commonly used in rifle bullets, with 6 being the most common.
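As a hedged illustration (not part of the article), the construction just described can be turned into numbers using the standard tangent-ogive relations from nose-cone geometry; the function and parameter names below are assumptions for the sketch, with the arc radius expressed in calibers (multiples of the shank diameter).

```python
import math

def tangent_ogive_profile(shank_radius, calibers, n_points=6):
    """Sample the profile of a tangent ogive whose arc radius is
    `calibers` times the shank diameter (the sharpness ratio above).
    x is measured from the tip toward the shank; y is the distance
    from the axis of revolution."""
    rho = calibers * 2 * shank_radius                      # arc (ogive) radius
    length = math.sqrt(rho**2 - (rho - shank_radius)**2)   # nose length
    xs = [length * i / (n_points - 1) for i in range(n_points)]
    return [(x, math.sqrt(rho**2 - (length - x)**2) + shank_radius - rho)
            for x in xs]

# Example: a 6-caliber tangent ogive on a 10 mm diameter shank.
for x, y in tangent_ogive_profile(shank_radius=5.0, calibers=6):
    print(f"x = {x:6.2f} mm  y = {y:5.2f} mm")
```

The sketch confirms the geometric condition in the text: the profile starts at y = 0 at the tip and meets the shank at y equal to the shank radius with zero angle.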
Another common ogive for bullets is the elliptical ogive. This is a curve very similar to the spitzer ogive, except that the circular arc is replaced by an ellipse defined in such a way that it meets the axis at exactly 90°. This gives a somewhat rounded nose regardless of the sharpness ratio. An elliptical ogive is normally described in terms of the ratio of the length of the ogive to the diameter of the shank. A ratio of one half would be, once again, a hemisphere. Values close to 1 are common in practice. Elliptical ogives are mainly used in pistol bullets.
More complex ogives can be derived from minimum turbulence calculations rather than geometric forms, such as the von Kármán ogive used for supersonic missiles, aircraft and ordnance.
Architecture
One of the defining characteristics of Gothic architecture is the pointed arch.
History
Pointed arches may have originated in the Sitamarhi caves in the 3rd century BCE. The free-standing temple of Trivikrama at Ter in Maharashtra (India) (dated to the Satavahana period of the 2nd century BCE to the 3rd century CE) also contains an ogive arch, but it is constructed using corbel principles.
Excavations conducted by the Archaeological Survey of India (ASI) at Kausambi revealed a palace with foundations from the 8th century BCE until the 2nd century CE, built in six phases. The last phase, dated to the 1st–2nd century CE, includes an extensive structure which features four-centered pointed arches used to span narrow passageways and segmental arches for wider areas. Pointed arches with load-bearing functions were also employed in Gandhara. A vault system of two pointed arches was built inside the Bhitargaon temple (as noted by Alexander Cunningham), which is dated to the early Gupta period of the 4th–5th centuries CE. Pointed arches also appeared in the Mahabodhi temple with relieving arches and vaults between the 6th and 7th centuries CE.
The 5th- or 6th-century CE Romano-Byzantine Karamagara Bridge in Cappadocia (in present-day Turkish Central Anatolia) features an early pointed arch as part of an apparent Romano-Greco-Syrian architectural tradition.
Several scholars see the pointed arch as first established as an architectonic principle in the Middle East in Islamic architecture during the Abbasid Caliphate in the middle of the 8th century CE. Pointed arches appeared in Christian Europe by the 11th century CE.
Debate
Some scholars have refused to accept an Indian origin of the pointed arch, including Hill (1993); some scholars have argued that pointed arches were used in the Near East in pre-Islamic architecture, but others have stated that these arches were, in fact, parabolic and not pointed arches.
Usage
Gothic architecture features ogives as the intersecting transverse ribs of arches which establish the surface of a Gothic vault. An ogive or ogival arch is a pointed, "Gothic" arch, drawn with compasses as outlined above, or with arcs of an ellipse as described. A very narrow, steeply pointed ogive arch is sometimes called a "lancet arch". The most common form is an equilateral arch, where the radius is the same as the width. In the later Flamboyant Gothic style, an "ogee arch", an arch with a pointed head formed by two S-shaped curves, became prevalent.
Glaciology
In glaciology, the term ogives refers to alternating bands of light and dark coloured ice that occur as a result of glaciers moving through an icefall.
See also
Catenary arch
Lancet window
Nose cone design
Ogee
References
Further reading
Verde, Tom, "The Point of the Arch", Aramco World, Volume 63, Number 3, 2012
Curves
Arches and vaults
Ballistics | Ogive | [
"Physics"
] | 1,314 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
350,945 | https://en.wikipedia.org/wiki/Eliminative%20materialism | Eliminative materialism (also called eliminativism) is a materialist position in the philosophy of mind. It is the idea that the majority of mental states in folk psychology do not exist. Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. The argument is that psychological concepts of behavior and experience should be judged by how well they reduce to the biological level. Other versions entail the nonexistence of conscious mental states such as pain and visual perceptions.
Eliminativism about a class of entities is the view that the class of entities does not exist. For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; modern biologists are eliminativist about élan vital; and modern physicists are eliminativist about luminiferous ether. Eliminative materialism is the relatively new (1960s–70s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist. The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland, and eliminativism about qualia (subjective interpretations about particular instances of subjective experience), as expressed by Daniel Dennett, Georges Rey, and Jacy Reese Anthis.
In the context of materialist understandings of psychology, eliminativism is the opposite of reductive materialism, which argues that mental states as conventionally understood do exist and directly correspond to physical states of the nervous system. An intermediate position, revisionary materialism, often argues the mental state in question will prove to be somewhat reducible to physical phenomena—with some changes needed to the commonsense concept.
Since eliminative materialism arguably claims that future research will fail to find a neuronal basis for various mental phenomena, it may need to wait for science to progress further. One might question the position on these grounds, but philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations. Views closely related to eliminativism include illusionism and quietism.
Overview
Various arguments have been made for and against eliminative materialism over the last 50 years. The view's history can be traced to David Hume, who rejected the idea of the "self" on the grounds that it was not based on any impression. Most arguments for the view are based on the assumption that people's commonsense view of the mind is actually an implicit theory. It is to be compared and contrasted with other scientific theories in its explanatory success, accuracy, and ability to predict the future. Eliminativists argue that commonsense "folk" psychology has failed and will eventually need to be replaced by explanations derived from neuroscience. These philosophers therefore tend to emphasize the importance of neuroscientific research as well as developments in artificial intelligence.
Philosophers who argue against eliminativism may take several approaches. Simulation theorists, like Robert Gordon and Alvin Goldman, argue that folk psychology is not a theory, but depends on internal simulation of others, and therefore is not subject to falsification in the same way that theories are. Jerry Fodor, among others, argues that folk psychology is, in fact, a successful (even indispensable) theory. Another view is that eliminativism assumes the existence of the beliefs and other entities it seeks to "eliminate" and is thus self-refuting.
Eliminativism maintains that the commonsense understanding of the mind is mistaken, and that neuroscience will one day reveal that mental states talked about in everyday discourse, using words such as "intend", "believe", "desire", and "love", do not refer to anything real. Because of the inadequacy of natural languages, people mistakenly think that they have such beliefs and desires. Some eliminativists, such as Frank Jackson, claim that consciousness does not exist except as an epiphenomenon of brain function; others, such as Georges Rey, claim that the concept will eventually be eliminated as neuroscience progresses. Consciousness and folk psychology are separate issues, and it is possible to take an eliminative stance on one but not the other. The roots of eliminativism go back to the writings of Wilfrid Sellars, W.V.O. Quine, Paul Feyerabend, and Richard Rorty. The term "eliminative materialism" was first introduced by James Cornman in 1968 while describing a version of physicalism endorsed by Rorty. The later Ludwig Wittgenstein was also an important inspiration for eliminativism, particularly with his attack on "private objects" as "grammatical fictions".
Early eliminativists such as Rorty and Feyerabend often confused two different notions of the sort of elimination that the term "eliminative materialism" entailed. On the one hand, they claimed, the cognitive sciences that will ultimately give people a correct account of the mind's workings will not employ terms that refer to commonsense mental states like beliefs and desires; these states will not be part of the ontology of a mature cognitive science. But critics immediately countered that this view was indistinguishable from the identity theory of mind. Quine himself wondered what exactly was so eliminative about eliminative materialism:
On the other hand, the same philosophers claimed that commonsense mental states simply do not exist. But critics pointed out that eliminativists could not have it both ways: either mental states exist and will ultimately be explained in terms of lower-level neurophysiological processes, or they do not. Modern eliminativists have much more clearly expressed the view that mental phenomena simply do not exist and will eventually be eliminated from people's thinking about the brain in the same way that demons have been eliminated from people's thinking about mental illness and psychopathology.
While it was a minority view in the 1960s, eliminative materialism gained prominence and acceptance during the 1980s. Proponents of this view, such as B.F. Skinner, often made parallels to previous superseded scientific theories (such as that of the four humours, the phlogiston theory of combustion, and the vital force theory of life) that have all been successfully eliminated in attempting to establish their thesis about the nature of the mental. In these cases, science has not produced more detailed versions or reductions of these theories, but rejected them altogether as obsolete. Radical behaviorists, such as Skinner, argued that folk psychology is already obsolete and should be replaced by descriptions of histories of reinforcement and punishment. Such views were eventually abandoned. Patricia and Paul Churchland argued that folk psychology will be gradually replaced as neuroscience matures.
Eliminativism is not only motivated by philosophical considerations, but is also a prediction about what form future scientific theories will take. Eliminativist philosophers therefore tend to be concerned with data from the relevant brain and cognitive sciences. In addition, because eliminativism is essentially predictive in nature, different theorists can and often do predict which aspects of folk psychology will be eliminated from folk psychological vocabulary. None of these philosophers are eliminativists tout court.
Today, the eliminativist view is most closely associated with the Churchlands, who deny the existence of propositional attitudes (a subclass of intentional states), and with Daniel Dennett, who is generally considered an eliminativist about qualia and phenomenal aspects of consciousness. One way to summarize the difference between the Churchlands' view and Dennett's is that the Churchlands are eliminativists about propositional attitudes, but reductionists about qualia, while Dennett is an anti-reductionist about propositional attitudes and an eliminativist about qualia.
More recently, Brian Tomasik and Jacy Reese Anthis have made various arguments for eliminativism. Elizabeth Irvine has argued that both science and folk psychology do not treat mental states as having phenomenal properties so the hard problem "may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers), and questions about consciousness may well 'shatter' into more specific questions about particular capacities." In 2022, Anthis published Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness, which asserts that "formal argumentation from precise semantics" dissolves the hard problem because of the contradiction between precision implied in philosophical theory and the vagueness in its definition, which implies there is no fact of the matter for phenomenological consciousness.
Arguments for eliminativism
Problems with folk theories
Eliminativists such as Paul and Patricia Churchland argue that folk psychology is a fully developed but non-formalized theory of human behavior. It is used to explain and make predictions about human mental states and behavior. This view is often referred to as the theory of mind or just simply theory-theory, for it theorizes the existence of an unacknowledged theory. As a theory in the scientific sense, eliminativists maintain, folk psychology must be evaluated on the basis of its predictive power and explanatory success as a research program for the investigation of the mind/brain.
Such eliminativists have developed different arguments to show that folk psychology is a seriously mistaken theory and should be abolished. They argue that folk psychology excludes from its purview or has traditionally been mistaken about many important mental phenomena that can and are being examined and explained by modern neuroscience. Some examples are dreaming, consciousness, mental disorders, learning processes, and memory abilities. Furthermore, they argue, folk psychology's development in the last 2,500 years has not been significant and it is therefore stagnant. The ancient Greeks already had a folk psychology comparable to modern views. But in contrast to this lack of development, neuroscience is rapidly progressing and, in their view, can explain many cognitive processes that folk psychology cannot.
Folk psychology retains characteristics of now obsolete theories or legends from the past. Ancient societies tried to explain the physical mysteries of nature by ascribing mental conditions to them in such statements as "the sea is angry". Gradually, these everyday folk psychological explanations were replaced by more efficient scientific descriptions. Today, eliminativists argue, there is no reason not to accept an effective scientific account of cognition. If such an explanation existed, then there would be no need for folk-psychological explanations of behavior, and the latter would be eliminated the same way as the mythological explanations the ancients used.
Another line of argument is the meta-induction based on what eliminativists view as the disastrous historical record of folk theories in general. Ancient pre-scientific "theories" of folk biology, folk physics, and folk cosmology have all proven radically wrong. Eliminativists argue the same in the case of folk psychology. There seems no logical basis, to the eliminativist, to make an exception just because folk psychology has lasted longer and is more intuitive or instinctively plausible than other folk theories. Indeed, the eliminativists warn, considerations of intuitive plausibility may be precisely the result of the deeply entrenched nature in society of folk psychology itself. It may be that people's beliefs and other such states are as theory-laden as external perceptions and hence that intuitions will tend to be biased in their favor.
Specific problems with folk psychology
Much of folk psychology involves the attribution of intentional states (or more specifically as a subclass, propositional attitudes). Eliminativists point out that these states are generally ascribed syntactic and semantic properties. An example of this is the language of thought hypothesis, which attributes a discrete, combinatorial syntax and other linguistic properties to these mental phenomena. Eliminativists argue that such discrete, combinatorial characteristics have no place in neuroscience, which speaks of action potentials, spiking frequencies, and other continuous and distributed effects. Hence, the syntactic structures assumed by folk psychology have no place in such a structure as the brain. To this there have been two responses. On the one hand, some philosophers deny that mental states are linguistic and see this as a straw man argument. The other view is represented by those who subscribe to "a language of thought". They assert that mental states can be multiply realized and that functional characterizations are just higher-level characterizations of what happens at the physical level.
It has also been argued against folk psychology that the intentionality of mental states like belief implies that they have semantic qualities. Specifically, their meaning is determined by the things they are about in the external world. This makes it difficult to explain how they can play the causal roles they are supposed to in cognitive processes.
In recent years, this latter argument has been fortified by the theory of connectionism. Many connectionist models of the brain have been developed in which the processes of language learning and other forms of representation are highly distributed and parallel. This tends to indicate that such discrete and semantically endowed entities as beliefs and desires are unnecessary.
Physics eliminates intentionality
The problem of intentionality poses a significant challenge to materialist accounts of cognition. If thoughts are neural processes, we must explain how specific neural networks can be "about" external objects or concepts. We can think about Paris, for instance, but there is no clear mechanism by which neurons can represent a city.
Traditional analogies fail to explain this phenomenon. Unlike a photograph, neurons do not physically resemble Paris. Nor can we appeal to conventional symbolism, as we might with a stop sign representing the action of stopping. Such symbols derive their meaning from social agreement and interpretation, which are not applicable to a brain's workings. Attempts to posit a separate neural process that assigns meaning to the "Paris neurons" merely shift the problem without resolving it, as we then need to explain how this secondary process can assign meaning, initiating an infinite regress.
The only way to break this regress is to postulate matter with intrinsic meaning, independent of external interpretation. But our current understanding of physics precludes the existence of such matter. The fundamental particles and forces physics describes have no inherent semantic properties that could ground intentionality. This physical limitation presents a formidable obstacle to materialist theories of mind that rely on neural representations. It suggests that intentionality, as commonly understood, may be incompatible with a purely physicalist worldview. This suggests that our folk psychological concepts of intentional states will be eliminated in light of scientific understanding.
Evolution eliminates intentionality
Another argument for eliminative materialism stems from evolutionary theory. This argument suggests that natural selection, the process shaping our neural architecture, cannot solve the "disjunction problem", which challenges the idea that neural states can store specific, determinate propositional content. Natural selection, as Darwin described it, is primarily a process of selection against rather than selection for traits. It passively filters out traits below a certain fitness threshold rather than actively choosing beneficial ones. This lack of foresight or purpose in evolution becomes problematic when considering how neural states could represent unique propositions.
The disjunction problem arises from the fact that natural selection cannot discriminate between coextensive properties. For example, consider two genes close together on a chromosome. One gene might code for a beneficial trait, while the other codes for a neutral or even harmful trait. Due to their proximity, these genes are often inherited together, a phenomenon known as genetic linkage. Natural selection cannot distinguish between these linked traits; it can only act on their combined effect on the organism's fitness. Only random processes like genetic crossover—where chromosomes exchange genetic material during reproduction—can break these linkages. Until such a break occurs, natural selection remains "blind" to the linked genes' individual effects.
Eliminativists argue that if natural selection—the process responsible for shaping our neural architecture—cannot solve the disjunction problem, then our brains cannot store unique, non-disjunctive propositions, as required by folk psychology. Instead, they suggest that neural states contain inherently disjunctive or indeterminate content. This argument leads eliminativists to reject the notion that neural states have specific, determinate informational content corresponding to the discrete, non-disjunctive propositions of folk psychology. This evolutionary argument adds to the eliminativist case that our commonsense understanding of beliefs, desires, and other propositional attitudes is flawed and should be replaced by a neuroscientific account that acknowledges the indeterminate nature of neural representations.
Arguments against eliminativism
Intentionality and consciousness are identical
Some eliminativists reject intentionality while accepting the existence of qualia. Other eliminativists reject qualia while accepting intentionality. Many philosophers argue that intentionality cannot exist without consciousness and vice versa, and so any philosopher who accepts one while rejecting the other is being inconsistent. They argue that, to be consistent, one must accept both qualia and intentionality or reject them both. Philosophers who argue for such a position include Philip Goff, Terence Horgan, Uriah Kriegel, and John Tienson. The philosopher Keith Frankish accepts the existence of intentionality but holds to illusionism about consciousness because he rejects qualia. Goff notes that beliefs are a kind of propositional thought.
Intuitive reservations
The thesis of eliminativism seems so obviously wrong to many critics, who find it undeniable that people know immediately and indubitably that they have minds, that argumentation seems unnecessary. This sort of intuition-pumping is illustrated by asking what happens when one asks oneself honestly if one has mental states. Eliminativists object to such a rebuttal of their position by claiming that intuitions often are mistaken. Analogies from the history of science are frequently invoked to buttress this observation: it may appear obvious that the sun travels around the earth, for example, but this was nevertheless proved wrong. Similarly, it may appear obvious that apart from neural events there are also mental conditions, but that could be false.
But even if one accepts the susceptibility to error of people's intuitions, the objection can be reformulated: if the existence of mental conditions seems perfectly obvious and is central to our conception of the world, then enormously strong arguments are needed to deny their existence. Furthermore, these arguments, to be consistent, must be formulated in a way that does not presuppose the existence of entities like "mental states", "logical arguments", and "ideas", lest they be self-contradictory. Those who accept this objection say that the arguments for eliminativism are far too weak to establish such a radical claim and that there is thus no reason to accept eliminativism.
Self-refutation
Some philosophers, such as Paul Boghossian, have attempted to show that eliminativism is in some sense self-refuting, since the theory presupposes the existence of mental phenomena. If eliminativism is true, then eliminativists must accept an intentional property like truth, supposing that in order to assert something one must believe it. Hence, for eliminativism to be asserted as a thesis, the eliminativist must believe that it is true; if so, there are beliefs, and eliminativism is false.
Georges Rey and Michael Devitt reply to this objection by invoking deflationary semantic theories that avoid analyzing predicates like "x is true" as expressing a real property. They are instead construed as logical devices, so that asserting that a sentence is true is just a quoted way of asserting the sentence itself. To say "'God exists' is true" is just to say "God exists". This way, Rey and Devitt argue, insofar as dispositional replacements of "claims" and deflationary accounts of "true" are coherent, eliminativism is not self-refuting.
Correspondence theory of truth
Several philosophers, such as the Churchlands and Alex Rosenberg, have developed a theory of structural resemblance or physical isomorphism that could explain how neural states can instantiate truth within the correspondence theory of truth. Neuroscientists use the word "representation" to identify the neural circuits' encoding of inputs from the peripheral nervous system in, for example, the visual cortex. But they use the word without according it any commitment to intentional content. In fact, there is an explicit commitment to describing neural representations in terms of structures of neural axonal discharges that are physically isomorphic to the inputs that cause them. Suppose that this way of understanding representation in the brain is preserved in the long-term course of research providing an understanding of how the brain processes and stores information. Then there will be considerable evidence that the brain is a neural network whose physical structure is identical to the aspects of its environment it tracks and whose representations of these features consist in this physical isomorphism.
Experiments in the 1980s with macaques isolated the structural resemblance between input vibrations the finger feels, measured in cycles per second, and representations of them in neural circuits, measured in action-potential spikes per second. This resemblance between two easily measured variables makes it unsurprising that they would be among the first such structural resemblances to be discovered. Macaques and humans have the same peripheral nervous system sensitivities and can make the same tactile discriminations. Subsequent research into neural processing has increasingly vindicated a structural resemblance or physical isomorphism approach to how information enters the brain and is stored and deployed.
This isomorphism between brain and world is not a matter of some relationship between reality and a map of reality stored in the brain. Maps require interpretation if they are to be about what they map, and eliminativism and neuroscience share a commitment to explaining the appearance of aboutness by purely physical relationships between informational states in the brain and what they "represent". The brain-to-world relationship must be a matter of physical isomorphism—sameness of form, outline, structure—that does not require interpretation.
This machinery can be applied to make "sense" of eliminativism in terms of the sentences eliminativists say or write. When we say that eliminativism is true, that the brain does not store information in the form of unique sentences, statements, expressing propositions or anything like them, there is a set of neural circuits that has no trouble coherently carrying this information. There is a possible translation manual that will guide us back from the vocalization or inscription eliminativists express to these circuits. These neural structures will differ from the neural circuits of those who explicitly reject eliminativism in ways that our translation manual will presumably shed some light on, giving us a neurological handle on disagreement and on the structural differences in neural circuitry, if any, between asserting p and asserting not-p when p expresses the eliminativist thesis.
Criticism
The physical isomorphism approach faces indeterminacy problems. Any given structure in the brain will be causally related to, and isomorphic in various respects to, many different structures in external reality. But we cannot discriminate the one it is intended to represent or that it is supposed to be true "of". These locutions are heavy with just the intentionality that eliminativism denies. Here is a problem of underdetermination or holism that eliminativism shares with intentionality-dependent theories of mind. Here, we can only invoke pragmatic criteria for discriminating successful structural representations—the substitution of true ones for unsuccessful ones—the ones we used to call false.
Dennett notes that it is possible that such indeterminacy problems remain only hypothetical, not occurring in reality. He constructs a 4x4 "Quinian crossword puzzle" with words that must satisfy both the across and down definitions. Since there are multiple constraints on this puzzle, there is one solution. Thus we can think of the brain and its relation to the external world as a very large crossword puzzle that must satisfy exceedingly many constraints to which there is only one possible solution. Therefore, in reality we may end up with only one physical isomorphism between the brain and the external world.
Pragmatic theory of truth
When indeterminacy problems arose because the brain is physically isomorphic to multiple structures of the external world, it was urged that a pragmatic approach be used to resolve the problem. Another approach argues that the pragmatic theory of truth should be used from the start to decide whether certain neural circuits store true information about the external world. Pragmatism was founded by Charles Sanders Peirce and William James, and later refined by our understanding of the philosophy of science. According to pragmatism, to say that general relativity is true is to say that it makes more accurate predictions than other theories (Newtonian mechanics, Aristotle's physics, etc.). If computer circuits lack intentionality and do not store information using propositions, then in what sense can computer A have true information about the world while computer B lacks it? If the computers were instantiated in autonomous cars, we could test whether A or B successfully complete a cross-country road trip. If A succeeds while B fails, the pragmatist can say that A holds true information about the world, because A's information allows it to make more accurate predictions (relative to B) about the world and to move around its environment more successfully. Similarly, if brain A has information that enables the biological organism to make more accurate predictions about the world and helps the organism successfully move around in the environment, then A has true information about the world. Although not advocates of eliminativism, John Shook and Tibor Solymosi argue that pragmatism is a promising program for understanding advancements in neuroscience and integrating them into a philosophical picture of the world.
Criticism
The reason naturalism cannot be pragmatic in its epistemology starts with its metaphysics. Science tells us that we are components of the natural realm, indeed latecomers in the 13.8-billion-year-old universe. The universe was not organized around our needs and abilities, and what works for us is just a set of contingent facts that could have been otherwise. Once we have begun discovering things about the universe that work for us, science sets out to explain why they do. It is clear that one explanation for why things work for us that we must rule out as unilluminating, indeed question-begging, is that they work for us because they work for us. If something works for us, enables us to meet our needs and wants, there must be an explanation reflecting facts about us and the world that produce the needs and the means to satisfy them.
The explanation of why scientific methods work for us must be a causal explanation. It must show what facts about reality make the methods we employ to acquire knowledge suitable for doing so. The explanation must show that our methods work — for example, have reliable technological application — not by coincidence, still less miracle or accident. That means there must be some facts, events, processes that operate in reality and brought about our pragmatic success. The demand that success be explained is a consequence of science's epistemology. If the truth of such explanations consists in the fact that they work for us (as pragmatism requires), then the explanation of why our scientific methods work is that they work. That is not a satisfying explanation.
Efficacy of folk psychology
Some philosophers argue that folk psychology is quite successful. Simulation theorists doubt that people's understanding of the mental can be explained in terms of a theory at all. Rather they argue that people's understanding of others is based on internal simulations of how they would act and respond in similar situations. Jerry Fodor believes in folk psychology's success as a theory, because it makes for an effective way of communication in everyday life that can be implemented with few words. Such effectiveness could not be achieved with complex neuroscientific terminology.
Qualia
Another problem for the eliminativist is the consideration that human beings undergo subjective experiences and hence their conscious mental states have qualia. Since qualia are generally regarded as characteristics of mental states, their existence does not seem compatible with eliminativism. Eliminativists such as Dennett and Rey respond by rejecting qualia. Opponents of eliminativism see this response as problematic, since many claim that existence of qualia is perfectly obvious. Many philosophers consider the "elimination" of qualia implausible, if not incomprehensible. They assert that, for instance, the existence of pain is simply beyond denial.
Admitting that the existence of qualia seems obvious, Dennett nevertheless holds that "qualia" is a theoretical term from an outdated metaphysics stemming from Cartesian intuitions. He argues that a precise analysis shows that the term is in the long run empty and full of contradictions. Eliminativism's claim about qualia is that there is no unbiased evidence for such experiences when regarded as something more than propositional attitudes. In other words, it does not deny that pain exists, but holds that it exists independently of its effect on behavior. Influenced by Wittgenstein's Philosophical Investigations, Dennett and Rey have defended eliminativism about qualia even when other aspects of the mental are accepted.
Quining qualia
Dennett offers philosophical thought experiments to argue that qualia do not exist. First he lists five properties of qualia:
They are "directly" or "immediately" graspable during our conscious experiences.
We are infallible about them.
They are "private": no one can directly access anyone else's qualia.
They are ineffable.
They are "intrinsic" and "simple" or "unanalyzable."
Inverted qualia
The first thought experiment Dennett uses to demonstrate that qualia lack the listed necessary properties to exist involves inverted qualia: consider two people who have different qualia but the same external physical behavior. But now the qualia supporter can present an "intrapersonal" variation. Suppose a neurosurgeon works on your brain and you discover that grass now looks red. Would this not be a case where we could confirm the reality of qualia—by noticing how the qualia have changed while every other aspect of our conscious experience remains the same? Not quite, Dennett replies via the next "intuition pump" (his term for an intuition-based thought experiment), "alternative neurosurgery". There are two different ways the neurosurgeon might have accomplished the inversion. First, they might have tinkered with something "early on", so that signals from the eye when you look at grass contain the information "red" rather than "green". This would result in genuine qualia inversion. But they might instead have tinkered with your memory. Here your qualia would remain the same, but your memory would be altered so that your current green experience would contradict your earlier memories of grass. You would still feel that the color of grass had changed, but here the qualia have not changed, but your memories have. Would you be able to tell which of these scenarios is correct? No: your perceptual experience tells you that something has changed but not whether your qualia have changed. Dennett concludes, since (by hypothesis) the two surgical procedures can yield exactly the same introspective effects while only one inverts the qualia, nothing in the subject's experience can favor one hypothesis over the other. So unless he seeks outside help, the state of his own qualia must be as unknowable to him as the state of anyone else's. It is questionable, in short, that we have direct, infallible access to our conscious experience.
The experienced beer drinker
Dennett's second thought experiment involves beer. Many people think of beer as an acquired taste: one's first sip is often unpleasant, but one gradually comes to enjoy it. But wait, Dennett asks—what is the "it" here? Compare the flavor of that first taste with the flavor now. Does the beer taste exactly the same both then and now, only now you like that taste whereas before you disliked it? Or is it that the way beer tastes gradually shifts—so that the taste you did not like at the beginning is not the same taste you now like? In fact most people simply cannot tell which is the correct analysis. But that is to give up again on the idea that we have special and infallible access to our qualia. Further, when forced to choose, many people feel that the second analysis is more plausible. But then if one's reactions to an experience are in any way constitutive of it, the experience is not so "intrinsic" after all—and another qualia property falls.
Inverted goggles
Dennett's third thought experiment involves inverted goggles. Scientists have devised special eyeglasses that invert up and down for the wearer. When you put them on, everything looks upside down. When subjects first put them on, they can barely walk around without stumbling. But after subjects wear them for a while, something surprising occurs. They adapt and become able to walk around as easily as before. When you ask them whether they adapted by re-inverting their visual field or simply got used to walking around in an upside-down world, they cannot say. So as in our beer-drinking case, either we simply do not have the special, infallible access to our qualia that would allow us to distinguish the two cases or the way the world looks to us is actually a function of how we respond to the world—in which case qualia are not "intrinsic" properties of experience.
Criticism
Edward Feser objects to Dennett's position as follows. That you need to appeal to third-person neurological evidence to determine whether your memory of your qualia has been tampered with does not seem to show that your qualia themselves—past or present—can be known only by appealing to that evidence. You might still be directly aware of your qualia from the first-person, subjective point of view even if you do not know whether they are the same as the qualia you had yesterday—just as you might really be aware of the article in front of you even if you do not know whether it is the same as the article you saw yesterday. Questions about memory do not necessarily bear on the nature of your awareness of objects present here and now (even if they bear on what you can justifiably claim to know about such objects), whatever those objects happen to be. Dennett's assertion that scientific objectivity requires appealing exclusively to third-person evidence appears mistaken. What scientific objectivity requires is not denial of the first-person subjective point of view but rather a means of communicating inter-subjectively about what one can grasp only from that point of view. Given the relational structure first-person phenomena like qualia appear to exhibit—a structure that Carnap devoted great effort to elucidating—such a means seems available: we can communicate what we know about qualia in terms of their structural relations to one another. Dennett fails to see that qualia can be essentially subjective and still relational or non-intrinsic, and thus communicable. This communicability ensures that claims about qualia are epistemologically objective; that is, they can in principle be grasped and evaluated by all competent observers even though they are claims about phenomena that are arguably not metaphysically objective, i.e., about entities that exist only as grasped by a subject of experience. It is only the former sort of objectivity that science requires. It does not require the latter, and cannot plausibly require it if the first-person realm of qualia is what we know better than anything else.
Illusionism
Illusionism is an active program within eliminative materialism to explain phenomenal consciousness as an illusion. It is promoted by the philosophers Daniel Dennett, Keith Frankish, and Jay Garfield, and the neuroscientist Michael Graziano. Graziano has advanced the attention schema theory of consciousness and postulates that consciousness is an illusion. According to David Chalmers, proponents argue that once we can explain consciousness as an illusion without the need for a realist view of consciousness, we can construct a debunking argument against realist views of consciousness. This line of argument draws from other debunking arguments like the evolutionary debunking argument in the field of metaethics. Such arguments note that morality is explained by evolution without positing moral realism, so there is a sufficient basis to debunk moral realism.
Criticism
Illusionists generally hold that once it is explained why people believe and say they are conscious, the hard problem of consciousness will dissolve. Chalmers agrees that a mechanism for these beliefs and reports can and should be identified using the standard methods of physical science, but disagrees that this would support illusionism, saying that the datum illusionism fails to account for is not reports of consciousness but rather first-person consciousness itself. He separates consciousness from beliefs and reports about consciousness, but holds that a fully satisfactory theory of consciousness should explain how the two are "inextricably intertwined" so that their alignment does not require an inexplicable coincidence. Illusionism has also been criticized by the philosopher Jesse Prinz.
See also
Attention schema theory
Blindsight
Constructivist epistemology
Cotard delusion
Deconstructivism
Epiphenomenalism
Functionalism
Mind–body problem
Monism
New mysterianism
Nihilism
Phenomenology
Physicalism
Principle of locality
Property dualism
Reductionism
Scientism
Substance dualism
Vertiginous question
References
Further reading
Baker, L. (1987). Saving Belief: A Critique of Physicalism. Princeton, NJ: Princeton University Press.
Broad, C. D. (1925). The Mind and its Place in Nature. London: Routledge & Kegan Paul. (2001 Reprint Ed.).
Churchland, P.M. (1979). Scientific Realism and the Plasticity of Mind. New York: Press Syndicate of the University of Cambridge.
Churchland, P.M. (1988). Matter and Consciousness, revised Ed. Cambridge, Massachusetts: The MIT Press.
Rorty, Richard. "Mind-body Identity, Privacy and Categories" in The Review of Metaphysics XIX:24–54. Reprinted in Rosenthal, D.M. (ed.) 1971.
Stich, S. (1996). Deconstructing the Mind. New York: Oxford University Press.
External links
Bibliography on Eliminative Materialism at Contemporary Philosophy of Mind: An Annotated
Eliminative and Multiplicative Materialism by Albert P. Carpenter
Eliminative Materialism at the Stanford Encyclopedia of Philosophy
What Is Left of the Mind at 3 Quarks Daily
Materialism
Metaphysics of mind
Physicalism
Qualia | Eliminative materialism | [
"Physics"
] | 8,038 | [
"Materialism",
"Matter"
] |
350,966 | https://en.wikipedia.org/wiki/Emergent%20materialism | In the philosophy of mind, emergent (or emergentist) materialism is a theory which asserts that the mind is irreducibly existent in some sense. However, the mind does not exist in the sense of being an ontological simple. Further, the study of mental phenomena is independent of other sciences. The theory primarily maintains that the human mind's evolution is a product of material nature and that it cannot exist without a material basis.
Overview
The view holds that mental properties emerge as novel properties of complex material systems. These properties are conceptually irreducible to the physical properties of the complexes that have them. The theory, however, states that the mind is independent due to the causal influences between body and mind. This is described as a "primitive relation" that is grounded in or dependent on the physical, but with metaphysical necessity.
Emergent materialism can be divided into emergence which denies mental causation and emergence which allows for causal effect. A version of the latter type has been advocated by John R. Searle, called biological naturalism.
The other main group of materialist views in the philosophy of mind can be labeled non-emergent (or non-emergentist) materialism, and includes pure physicalism (eliminative materialism), identity theory (reductive materialism), philosophical behaviorism, and functionalism.
See also
Cartesian dualism
Emergentism
Emergence
Epiphenomenalism
Materialism
Mind–body problem
Monism
Physicalism
References
External links
M.D. Robertson, Dualism vs. Materialism: A Response to Paul Churchland
Materialism
Metaphysics of mind
Materialism | Emergent materialism | [
"Physics"
] | 332 | [
"Materialism",
"Matter"
] |
350,995 | https://en.wikipedia.org/wiki/Orthonormal%20frame | In Riemannian geometry and relativity theory, an orthonormal frame is a tool for studying the structure of a differentiable manifold equipped with a metric. If M is a manifold equipped with a metric g, then an orthonormal frame at a point P of M is an ordered basis of the tangent space at P consisting of vectors which are orthonormal with respect to the bilinear form gP.
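To make the definition concrete, here is a short sketch in LaTeX notation (not part of the original stub); the polar-coordinate example assumes the standard Euclidean metric on the plane minus the origin.

```latex
% Orthonormality condition for a frame (e_1, ..., e_n) of the tangent space T_P M:
\[
  g_P(e_i, e_j) = \delta_{ij}, \qquad 1 \le i, j \le n.
\]
% Illustrative example: on R^2 minus the origin, with the Euclidean metric
% written in polar coordinates as g = dr^2 + r^2 d\theta^2, an orthonormal
% frame at every point is
\[
  e_1 = \frac{\partial}{\partial r}, \qquad
  e_2 = \frac{1}{r}\,\frac{\partial}{\partial \theta}.
\]
```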
See also
Frame (linear algebra)
Frame bundle
k-frame
Moving frame
Frame fields in general relativity
References
Riemannian geometry | Orthonormal frame | [
"Physics"
] | 114 | [
"Relativity stubs",
"Theory of relativity"
] |
351,036 | https://en.wikipedia.org/wiki/Staple%20%28fastener%29 | A staple is a type of two-pronged fastener, usually metal, used for joining, gathering, or binding materials together. Large staples might be used with a hammer or staple gun for masonry, roofing, corrugated boxes and other heavy-duty uses. Smaller staples are used with a stapler to attach pieces of paper together; such staples are a more permanent and durable fastener for paper documents than the paper clip.
Etymology
The word "staple" originated in the late thirteenth Century, from Old English stapol, meaning "post, pillar". The word's first usage in the paper-fastening sense is attested from 1895.
History
In ancient times, the staple had several different functions.
Large metal staples dating from the 6th century BC have been found in the masonry works of the Persian empire (ancient Iran). For the construction of the Pasargadae and later Ka'ba-ye Zartosht, these staples, which are known as "dovetail" or "swallowtail" staples, were used for tightening stones together.
The home stapling machine was developed by Henry Heyl in 1877 and registered under US Patent No. 195,603. Heyl's companies, American Paper-Box Machine Company, Novelty Paper Box Company, and Standard Box Company, all of Philadelphia, manufactured machinery using staples in paper packaging and for saddle stitching.
Advantages
Most kinds of staples are easier to produce than nails or screws.
The crown of the staple can be used to bridge materials butted together.
The crown can bridge a piece and fasten it without puncturing, with a leg on either side, e.g. fastening electrical cables to wood framing.
The crown provides greater surface area than other comparable fasteners. This is generally more helpful with thinner materials.
Disadvantages
Staples generally have lower holding power compared to nails or screws. This can make them unsuitable for heavy-duty applications where strong connections are required.
Once a staple has been driven, it is difficult to remove without causing damage to the surrounding material. This contrasts with screws, which can often be removed and reused.
When used to hold paper together, staples create a more or less permanent attachment. Removing them without damaging the paper can be challenging, whereas paperclips can be easily added and removed without harming the paper.
While it's possible to remove and reuse staples, doing so can be difficult and often renders the staple unusable for future use. Paperclips, in contrast, are designed to be reusable.
Paper staples
The term "stapling" is used for both fastening sheets of paper together with bent legs or fastening sheets of paper to something solid with straight legs; however, when differentiating between the two, the term "tacking" is used for straight-leg stapling, while the term "stapling" is used for bent-leg stapling.
Specifications
Modern staples for paper staplers are made from zinc-plated steel wires glued together and bent to form a long strip of staples. Staple strips are commonly available as "full strips" with 210 staples per strip. Both copper plated and more expensive stainless steel staples which do not rust are also available, but uncommon.
Some staple sizes are used more commonly than others, depending on the application required. Some companies have unique staples just for their products. Staples from one manufacturer may or may not fit another manufacturer's unit even if they look similar and serve the same purpose.
Staples are often described as X/Y (e.g. 24/6 or 26/6), where the first number X is the gauge of the wire (AWG), and the second number Y is the length of the shank (leg) in millimeters. Some exceptions to this rule include staple sizes like No. 10.
Common sizes for the home and office include: 26/6, 24/6, 24/8, 13/6, 13/8 and No. 10 for mini staplers. Common sizes for heavy duty staplers include: 23/8, 23/12, 23/15, 23/20, 23/24, 13/10, and 13/14.
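As a small, purely illustrative sketch (not from the article), the X/Y convention described above can be parsed mechanically; the function name and return keys below are hypothetical.

```python
def parse_staple_designation(designation):
    """Split an X/Y staple designation (e.g. '24/6') into its parts:
    X is the wire gauge (AWG) and Y is the shank (leg) length in mm.
    Sizes such as 'No. 10' do not follow this rule and are rejected."""
    try:
        gauge, leg = designation.split("/")
        return {"wire_gauge_awg": int(gauge), "leg_length_mm": int(leg)}
    except ValueError:
        raise ValueError(f"{designation!r} is not an X/Y designation")

print(parse_staple_designation("24/6"))   # {'wire_gauge_awg': 24, 'leg_length_mm': 6}
print(parse_staple_designation("23/12"))  # {'wire_gauge_awg': 23, 'leg_length_mm': 12}
```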
Stapleless staplers cut and bend paper without using metal fasteners.
Standards
There are few standards for staple size, length and thickness. This has led to many different incompatible staples and staplers systems, all serving the same purpose or applications.
24/6 staples are described by the German DIN 7405 standard.
In the United States, the specifications for non-medical industrial staples are described in ASTM F1667-15, Standard Specification for Driven Fasteners: Nails, Spikes, and Staples. A heavy duty office staple might be designated as F1667 STFCC-04: ST indicates staple, FC indicates flat top crown, C indicates cohered (joined into a strip), and 04 is the dash number for a staple with a length of 0.250 inch (6 mm), a leg thickness of 0.020 inch (500 μm), a leg width of 0.030 inch (800 μm), and a crown width of 0.500 inch (13 mm).
In the home
Staples are most commonly used to bind a stack of individual paper pages. A mechanical or electrical stapler may apply them by passing them through the paper pages and then clinching the staple legs that protrude from the bottom of the page stack.
When using a stapler, the papers to be fastened are placed between the main body and the anvil. The papers are pinched between the body and the anvil, then a drive blade pushes on the crown of the staple on the end of the staple strip. The staple breaks from the end of the strip and the legs of the staple are forced through the paper. As the legs hit the grooves in the anvil they are bent to hold the pages together. Many staplers have an anvil in the form of a "pinning" or "stapling" switch. This allows a choice between bending in or out. The outward bent staples are easier to remove and are for temporary fastening or "pinning".
Most staplers are capable of stapling without the anvil to drive straight leg staples for tacking.
There are various types of staples for paper, including heavy-duty staples, designed for use on documents 20, 50, or over 100 pages thick. There are also speedpoint staples, which have slightly sharper teeth so they can go through paper more easily.
In business
Staples are commonly considered a neat and efficient method of binding paperwork because they are relatively unobtrusive, low cost, and readily available.
Large staples found on corrugated cardboard boxes have folded legs. They are applied from the outside and do not use an anvil; jaw-like appendages push through the cardboard alongside the legs and bend them from the outside.
Saddle stitch staplers, also known as "booklet staplers," feature a longer reach from the pivot point than general-purpose staplers and bind pages into a booklet or "signature". Some can use "loop-staples" that enable the user to integrate folded matter into ring books and binders.
Outward clinch staples are blind staples. There is no anvil, and they are applied with a staple gun. When applied, each staple leg forms a curve bending outwards. This is in part caused by the shape of the crown, which is like an inverted "V", and not flat as in ordinary staples. Also, the legs are sharpened with an inside bevel point, causing them to tend to go outwards when forced into the base material. These staples are used for upholstery work, especially in vehicles, where they are used for fastening fabric or leather to a foam base. These staples are also used when installing fiberglass insulation batts around air ducts: the FSK paper sheathing is overlapped, and the two layers are stapled together before sealing with tape.
In packaging
Staples are used in various types of packaging.
Staples can attach items to paperboard for carded packaging
Staples or stitches can be used to attach the manufacturer's joint of corrugated boxes
Staples are used to close corrugated boxes. Small (nominally -inch crown) staples can be applied to a box with a post stapler. Wider crown (nominally -inch) staples can be applied with a blind clincher
Staples can help fabricate and attach paperwork to wooden boxes and crates.
In construction
Construction staples are commonly larger, have a more varied use, and are delivered by a staple gun or hammer tacker. Staple guns do not have backing anvils and are exclusively used for tacking (with the exception of outward-clinch staplers used for fastening duct insulation). They typically have staples made from thicker metal. Some staple guns use arched staples for fastening small cables, e.g. phone or cable TV, without damaging the cable. Devices known as hammer tackers or staple hammers operate without complex mechanics as a simple head loaded with a strip of staples drives them directly; this method requires a measure of skill. Powered electric staplers or pneumatic staplers drive staples easily and accurately; they are the simplest manner of applying staples, but are hindered by a cord or hose. Cordless electric staplers use a battery, typically rechargeable and sometimes replaceable.
In medicine
Surgical staples are used for the closing of incisions and wounds, a function also performed by sutures.
See also
Stapler
Staple gun
Staple remover
Hammer tacker
Paper clip
References
External links
Fasteners
Stationery
Woodworking
Packaging
Metallic objects
Office equipment | Staple (fastener) | [
"Physics",
"Engineering"
] | 1,980 | [
"Metallic objects",
"Fasteners",
"Construction",
"Physical objects",
"Matter"
] |
351,039 | https://en.wikipedia.org/wiki/Viral%20marketing | Viral marketing is a business strategy that uses existing social networks to promote a product mainly on various social media platforms. Its name refers to how consumers spread information about a product with other people, much in the same way that a virus spreads from one person to another. It can be delivered by word of mouth, or enhanced by the network effects of the Internet and mobile networks.
The concept is often misused or misunderstood, as people apply it to any sufficiently successful story without considering whether it actually spread "virally", i.e., from person to person.
Viral advertising is personal and, while coming from an identified sponsor, it does not mean businesses pay for its distribution. Most of the well-known viral ads circulating online are ads paid by a sponsor company, launched either on their own platform (company web page or social media profile) or on social media websites such as YouTube. Consumers receive the page link from a social media network or copy the entire ad from a website and pass it along through e-mail or posting it on a blog, web page or social media profile. Viral marketing may take the form of video clips, interactive Flash games, advergames, ebooks, brandable software, images, text messages, email messages, or web pages. The most commonly utilized transmission vehicles for viral messages include pass-along based, incentive based, trendy based, and undercover based. However, the creative nature of viral marketing enables an "endless amount of potential forms and vehicles the messages can utilize for transmission", including mobile devices.
The ultimate goal of marketers interested in creating successful viral marketing programs is to create viral messages that appeal to individuals with high social networking potential (SNP) and that have a high probability of being presented and spread by these individuals and their competitors in their communications with others in a short period.
The term "viral marketing" has also been used pejoratively to refer to stealth marketing campaigns—marketing strategies that advertise a product to people without them knowing they are being marketed to.
History
The emergence of "viral marketing", as an approach to advertisement, has been tied to the popularization of the notion that ideas spread like viruses. The field that developed around this notion, memetics, peaked in popularity in the 1990s. As this then began to influence marketing gurus, it took on a life of its own in that new context.
The brief career of Australian pop singer Marcus Montana is largely remembered as an early example of viral marketing. In early 1989, thousands of posters declaring "Marcus is Coming" were placed around Sydney, generating discussion and interest within the media and the community about the meaning of the mysterious advertisements. The campaign successfully made Montana's musical debut a talking point, but his subsequent music career was a failure.
The term viral strategy was first used in marketing in 1995, in a pre-digital marketing era, by a strategy team at Chiat / Day advertising in LA (now TBWA LA), led by Lorraine Ketch and Fred Sattler, for the launch of the first PlayStation for Sony Computer Entertainment. Born from a need to combat huge target cynicism the insight was that people reject things pushed at them but seek out things that elude them. Chiat / Day created a 'stealth' campaign to go after influencers and opinion leaders, using street teams for the first time in brand marketing and layered an intricate omni-channel web of info and intrigue. Insiders picked up on it and spread the word. Within 6 months, PlayStation was number one in its category—Sony's most successful launch in history.
There is debate on the origin and the popularization of the specific term viral marketing. The term is found in PC User magazine in 1989 with a somewhat differing meaning. It was later used by Jeffrey Rayport in the 1996 Fast Company article "The Virus of Marketing", and Tim Draper and Steve Jurvetson of the venture capital firm Draper Fisher Jurvetson in 1997 to describe Hotmail's practice of appending advertising to outgoing mail from their users.
Doug Rushkoff, a media critic, wrote about viral marketing on the Internet in 1996. The assumption is that if such an advertisement reaches a "susceptible" user, that user becomes "infected" (i.e., accepts the idea) and shares the idea with others "infecting them", in the viral analogy's terms. As long as each infected user shares the idea with more than one susceptible user on average (i.e., the basic reproductive rate is greater than one—the standard in epidemiology for qualifying something as an epidemic), the number of infected users grows according to an exponential curve. Of course, the marketing campaign may be successful even if the message spreads more slowly, if this user-to-user sharing is sustained by other forms of marketing communications, such as public relations or advertising.
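The epidemiological analogy above can be made concrete with a toy branching model in which each newly "infected" user passes the message to a fixed average number of new users per cycle. The function name and the numbers below are invented purely for illustration and are not drawn from any real campaign.

```python
def viral_reach(initial_users: int, reproduction_rate: float, cycles: int):
    """Toy branching model: each newly reached user passes the message on to
    `reproduction_rate` new users in the next cycle. Growth is exponential
    when the rate exceeds 1 and fizzles out when it is below 1."""
    total = float(initial_users)
    newly_reached = float(initial_users)
    history = [round(total)]
    for _ in range(cycles):
        newly_reached *= reproduction_rate
        total += newly_reached
        history.append(round(total))
    return history

print(viral_reach(100, 1.5, 10))  # rate > 1: cumulative reach grows exponentially
print(viral_reach(100, 0.8, 10))  # rate < 1: cumulative reach levels off
```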
Bob Gerstley wrote about algorithms designed to identify people with high "social networking potential." Gerstley employed SNP algorithms in quantitative marketing research. In 2004, the concept of the alpha user was coined to indicate that it had now become possible to identify the focal members of any viral campaign, the "hubs" who were most influential. Alpha users could be targeted for advertising purposes most accurately in mobile phone networks, due to their personal nature.
In early 2013 the first ever Viral Summit was held in Las Vegas. The summit attempted to identify similar trends in viral marketing methods for various media.
What makes things go viral
According to the book Contagious: Why Things Catch On, there are six key factors that drive virality. They are organized in an acronym called STEPPS which stands for:
Social Currency – the better something makes people look, the more likely they will be to share it
Triggers – things that are top of mind are more likely to be tip of tongue
Emotion – when we care, we share
Public – the easier something is to see, the more likely people are to imitate it
Practical Value – people share useful information to help others
Stories – Trojan Horse stories carry messages and ideas along for the ride
The goal of a viral marketing campaign is to widely disseminate marketing content through sharing & liking.
Another important factor that drives virality is the propagativity of the content, referring to the ease with which consumers can redistribute it. This includes the effort required to share the content, the network size and type of the chosen distribution medium, and the proximity of shareable content with its means of redistribution (i.e. a 'Share' button).
Methods and metrics
According to marketing professors Andreas Kaplan and Michael Haenlein, to make viral marketing work, three basic criteria must be met, i.e., giving the right message to the right messengers in the right environment:
Messenger: Three specific types of messengers are required to ensure the transformation of an ordinary message into a viral one: market mavens, social hubs, and salespeople. Market mavens are individuals who are continuously 'on the pulse' of things (information specialists); they are usually among the first to get exposed to the message and who transmit it to their immediate social network. Social hubs are people with an exceptionally large number of social connections; they often know hundreds of different people and have the ability to serve as connectors or bridges between different subcultures. Salespeople might be needed who receive the message from the market maven, amplify it by making it more relevant and persuasive, and then transmit it to the social hub for further distribution. Market mavens may not be particularly convincing in transmitting the information.
Message: Only messages that are both memorable and sufficiently interesting to be passed on to others have the potential to spur a viral marketing phenomenon. Making a message more memorable and interesting or simply more infectious, is often not a matter of major changes but minor adjustments. It should be unique and engaging with a main idea that motivates the recipient to share it widely with friends – a "must-see" element.
Environment: The environment is crucial in the rise of successful viral marketing – small changes in the environment lead to huge results, and people are much more sensitive to environment. The timing and context of the campaign launch must be right.
Whereas Kaplan, Haenlein and others reduce the role of marketers to crafting the initial viral message and seeding it, futurist and sales and marketing analyst Marc Feldman, who conducted IMT Strategies' viral marketing study in 2001, carves a different role for marketers which pushes the 'art' of viral marketing much closer to 'science'.
Metrics
To clarify and organize the information related to potential measures of viral campaigns, the key measurement possibilities should be considered in relation to the objectives formulated for the viral campaign. In this sense, some of the key cognitive outcomes of viral marketing activities can include measures such as the number of views, clicks, and hits for specific content, as well as the number of shares in social media, such as likes on Facebook or retweets on Twitter, which demonstrate that consumers processed the information received through the marketing message. Measures such as the number of reviews for a product or the number of members for a campaign web page quantify the number of individuals who have acknowledged the information provided by marketers. Besides statistics that are related to online traffic, surveys can assess the degree of product or brand knowledge, though this type of measurement is more complicated and requires more resources.
Related to consumers' attitudes toward a brand or even toward the marketing communication, different online and social media statistics, including the number of likes and shares within a social network, can be used. The number of reviews for a certain brand or product and the quality assessed by users are indicators of attitudes. Classical measures of consumer attitude toward the brand can be gathered through surveys of consumers.
Behavioral measures are very important because changes in consumers' behavior and buying decisions are what marketers hope to see through viral campaigns. There are numerous indicators that can be used in this context as a function of marketers' objectives. Some of them include the most known online and social media statistics such as number and quality of shares, views, product reviews, and comments. Consumers' brand engagement can be measured through the K-factor, the number of followers, friends, registered users, and time spent on the website. Indicators that are more bottom-line oriented focus on consumers' actions after acknowledging the marketing content, including the number of requests for information, samples, or test-drives. Nevertheless, responses to actual call-to-action messages are important, including the conversion rate.
Consumers' behavior is expected to lead to contributions to the bottom line of the company, meaning increase in sales, both in quantity and financial amount. However, when quantifying changes in sales, managers need to consider other factors that could potentially affect sales besides the viral marketing activities. Besides positive effects on sales, the use of viral marketing is expected to bring significant reductions in marketing costs and expenses.
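One widely used formulation of the K-factor mentioned above multiplies the average number of invitations each user sends by the rate at which those invitations convert into new users. The sketch below applies that definition to hypothetical figures; the function names and numeric inputs are assumptions for illustration, not data from any real campaign.

```python
def k_factor(invites_per_user: float, conversion_rate: float) -> float:
    """K = i * c: average invitations sent per user times the fraction of
    invitations that convert into new users."""
    return invites_per_user * conversion_rate

def projected_users(seed_users: float, k: float, cycles: int) -> float:
    """Only newly acquired users invite in the following cycle, so with
    K < 1 each cycle adds fewer users and growth levels off; with K > 1
    the user base grows exponentially."""
    total = new = float(seed_users)
    for _ in range(cycles):
        new *= k
        total += new
    return total

k = k_factor(invites_per_user=5, conversion_rate=0.12)  # hypothetical inputs
print(f"K-factor: {k:.2f}")                        # 0.60: below the viral threshold of 1
print(round(projected_users(1_000, k, cycles=6)))  # growth tapers off toward about 2,500
```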
Methods
Viral marketing often involves and utilizes:
Customer participation and polling services
Industry-specific organization contributions
Web search engines and blogs
Mobile smartphone integration
Multiple forms of print and direct marketing
Target marketing web services
Search engine optimization (SEO)
Social media optimization (SMO)
Television and radio
Influencer marketing
Viral target marketing is based on three important principles:
Social profile gathering
Proximity market analysis
Real-time key word density analysis
By applying these three important disciplines to an advertising model, a VMS company is able to match a client with their targeted customers at a cost-effective advantage.
The Internet makes it possible for a campaign to go viral very fast; it can, so to speak, make a brand famous overnight. However, the Internet and social media technologies themselves do not make a brand viral; they just enable people to share content to other people faster. Therefore, it is generally agreed that a campaign must typically follow a certain set of guidelines in order to potentially be successful:
It must be appealing to most of the audience.
It must be worth sharing with friends and family.
A large platform, e.g. YouTube or Facebook must be used.
An initial boost to gain attention is used, e.g. seeding, buying views, or sharing to Facebook fans.
The content is of good quality.
Demographics - It must be correlated with the Region & Society.
Drivers of success
Wilert Puriwat and Suchart Tripopsakul, who reviewed a wide range of academic journals on viral marketing, gathered their knowledge to propose what they called the "7I's of effective word-of-mouth marketing campaigns." These seven I's can be used to highlight where the success of a viral marketing campaign comes from. While Puriwat and Tripopsakul outline what makes an effective campaign, they also warn that negative word-of-mouth messages about a brand or product have more power over a consumer's purchasing decision. With that said, the 7I's are as follows:
Invisibility: The more distant an advertisement feels from being an advertisement, the more likely it is that the population will share the message with the people they are connected to. Puriwat and Tripopsakul suggest in their paper that marketers should be "balancing the branding element with the quality of content" to create an effective campaign.
Identity: Identity has to do with the marketing's ability to positively promote the personal traits that the sharer wishes to exemplify. To get consumers themselves to share a marketing campaign, it must match the image they wish to project to the people they are sharing with, especially over social media.
Innovation: Marketing campaigns that venture off from what has been done before raises the likelihood that word will spread. Deviating from the norm through means such as surprising the audience or using humor can create the innovation needed to engage the audience and distance themselves from being just another advert.
Insight: Insight, in this context, means that the marketing team should look for ways their campaign can affect the consumer on an emotional level. Instead of simply telling the viewer how great the product or service is, the campaign should elicit a positive feeling in the viewer that confirms how they feel. This increases the likelihood that the advertisement will be shared.
Instantaneity: The instantaneity of a given campaign has to do with how well the initial deliverable can be intertwined with recent trends and current popular topics. This all-encompassing leg of the 7I's covers topics such as the timing of the campaign's initial launch, how trendy it is to share the viral advert with the people around you, and the use of the right celebrity for the moment. Using celebrities in ads is not a new idea; however, choosing the right celebrity to elicit the intended feeling in the viewer goes a long way toward increasing success.
Integration: The integration of the viral marketing scheme is how well the viral moment feeds consumers into the company's other marketing avenues. While a successful viral campaign benefits the company by increasing the total number of impressions relative to the financial investment, consumers still need to interact with the brand beyond sharing. This can be done by channeling consumers to other marketing avenues, such as websites and social media pages, that increase brand familiarity.
Interactivity: Interactivity matters because a conventional marketing campaign simply pushes consumers to buy or agree with what is being promoted. Viral marketing differs in that the gap between the brand and the viewer must be closed to the point that there is public interaction between the two. Without viral marketing, this public interaction is often negative, since warnings about products are commonly shared between consumers. An interactive avenue through viral marketing can therefore help correct the balance between negative and positive discussion of a brand across consumer-to-consumer networks.
Using these seven aspects of viral marketing, the two researchers ran a statistical test based on a survey of 286 people about their thoughts on recent viral marketing efforts. The survey used Likert-scale questions to gauge whether each point of the 7I's was met in a campaign, and ended with questions on brand preference and brand recognition. While many conclusions were drawn from the statistical analysis, the most prominent concerned age groups and interaction results. Puriwat and Tripopsakul found that viral marketing is more effective at targeting a younger demographic than an older audience. They also found that consumers who interacted in any way with a brand's viral marketing campaign more often than not developed a more positive perception of that brand.
Social networking
The growth of social networks significantly contributed to the effectiveness of viral marketing. As of 2009, two-thirds of the world's Internet population visits a social networking service or blog site at least every week. Facebook alone has over 1 billion active users. In 2009, time spent visiting social media sites began to exceed time spent emailing. A 2010 study found that 52% of people who view news online forward it on through social networks, email, or posts.
Social media
The introduction of social media has caused a change in how viral marketing is used and the speed at which information is spread and users interact. This has prompted many companies to use social media as a way to market themselves and their products, with Elsamari Botha and Mignon Reyneke stating that viral messages are "playing an increasingly important role in influencing and shifting public opinion on corporate reputations, brands, and products as well as political parties and public personalities to name but a few."
Influencers
In business, it is indicated that people prefer interaction with humans to a logo. Influencers build up a relationship between a brand and their customers. Companies would be left behind if they neglected the trend of influencers in viral marketing, as over 60% of global brands have used influencers in marketing in 2016.
Influencers correspond to different levels of customer involvement in a company's marketing. First, unintentional influencers, acting out of brand satisfaction and with low involvement, simply deliver a company's message to a potential user. Secondly, with incentives, users can become salespeople or promoters for a particular company. For example, ICQ offered its users benefits for promoting a product to their friends. A recent trend in business is to offer incentives to individual users for re-posting an advertisement's message to their own profiles.
Marketers and agencies commonly consider celebrities to be good influencers for endorsement work. This conception is similar to celebrity marketing. According to one survey, 69% of company marketing departments and 74% of agencies are currently working with celebrities in the UK. The celebrity types vary with their working environment. Traditional celebrities are singers, dancers, actors or models, and these public figures continue to be the most commonly used by company marketers. The survey found that 4 in 10 companies had worked with these traditional celebrities in the prior year. However, people now spend more time on social media than on traditional media such as TV. The researchers also claim that customers do not firmly believe that celebrities are effective influencers.
Social media stars such as YouTuber Zoella or Instagrammer Aimee Song are followed by millions of people online. Online celebrities have connections with, and influence over, their followers because they frequently and realistically converse and interact with them on the Internet through comments or likes.
This trend has been seized on by marketers, who use it to reach new potential customers. Agencies place social media stars alongside singers and musicians at the top of the list of celebrity types they have worked with, and more than 28% of company marketers had worked with a social media celebrity in the previous year.
Benefits
For companies
Using influencers in viral marketing provides companies with several benefits. It enables companies to spend little time and budget on their marketing communication and brand awareness promotion. For example, during the 2006 FIFA World Cup, Alberto Zanot shared Zinedine Zidane's headbutt in the final against Italy and engaged more than 1.5 million viewers within the first hour. Secondly, it enhances the credibility of messages. These trust-based relationships grab the audience's attention, create customer demand, increase sales and loyalty, or simply drive customers' attitudes and behavior. In the case of Coke, Millennials changed their minds about the product, from a parents' drink to a beverage for teens. It addressed Millennials' social needs through 'sharing a Coke' with their friends. This created a deep connection with Gen Y, dramatically increased sales (+11% compared with the previous year) and market share (+1.6%).
Benefits for influencers
Harnessing influencers can be a lucrative business for both companies and influencers. The concept of 'influencer' is no longer limited to an 'expert' but extends to anyone who delivers and influences the credibility of a message (e.g. a blogger). In 2014, BritMums, a network for sharing family daily life, had 6,000 bloggers and 11,300 views per month on average, and its members became endorsers for particular brands such as Coca-Cola and Morrisons. In another case, Aimee Song, who had over 3.6 million followers on Instagram, became a social media influencer for Laura Mercier, earning $500,000 monthly.
For consumers
The decision-making process seems to be hard for customers these days. Miller (1956) argued that people are limited by short-term memory. This contributes to difficulties in customers' decision-making and to the paradox of choice, as they face numerous adverts and newspapers daily.
Influencers serve as a credible source in customers' decision-making process. Nielsen reported that 80% of consumers appreciate a recommendation from their acquaintances, as they have reason to trust friends who deliver messages without any benefit to themselves, helping them reduce the perceived risks behind choices.
Risks of using the wrong influencer
Risks for the company
The main risk coming from the company is targeting the wrong influencer or segment. Once the content is online, the sender can no longer control it. It is therefore vital to aim at a particular segment when releasing the message. This is what happened to the company Blendtec, which released videos showing its blender could blend anything and encouraged users to share videos. This mainly caught the attention of teenage boys who thought it funny to blend and destroy anything they could; even though the videos went viral, they did not target potential buyers of the product. This is considered to be one of the major factors that affect the success of online promotion. It is critical for organisations to target the right audience. Another risk with the Internet is that a company's video could end up going viral on the other side of the planet, where its products are not even for sale.
Risks emanating from the influencers
According to a paper by Duncan Watts and colleagues entitled: "Everyone's an influencer", the most common risk in viral marketing is that of the influencer not passing on the message, which can lead to the failure of the viral marketing campaign. A second risk is that the influencer modifies the content of the message. A third risk is that influencers pass on the wrong message. This can result from a misunderstanding or as a deliberate move.
Notable examples
Hotmail Tagline
Between 1996 and 1997, Hotmail was one of the first internet businesses to become extremely successful utilizing viral marketing techniques by inserting the tagline "Get your free e-mail at Hotmail" at the bottom of every e-mail sent out by its users. Hotmail was able to sign up 12 million users in 18 months. At the time, this was historically the fastest growth of any user based media company. By the time Hotmail reached 66 million users, the company was establishing 270,000 new accounts each day.
Dollar Shave Club YouTube Video
On March 6, 2012, Dollar Shave Club launched their online video campaign. In the first 48 hours of their video debuting on YouTube they had over 12,000 people signing up for the service. The video cost just $4500 to make and as of November 2015 has had more than 21 million views. The video was considered one of the best viral marketing campaigns of 2012 and won "Best Out-of-Nowhere Video Campaign" at the 2012 AdAge Viral Video Awards.
Oreo Power Outage
During the 2013 Super Bowl, the Mercedes-Benz Superdome suffered a massive power outage. Oreo took advantage of the power outage and created a viral marketing campaign, incorporating a black and white image of an Oreo. The image included text that stated, “You can still dunk in the dark.” A caption was also included that read “No Power? No problem.” Oreo's quick thinking and clever marketing gained traction and generated thousands of tweets and retweets. The marketing tactic Oreo used to bring attention to its brand is referred to as newsjacking, in which companies use clever marketing to piggyback on breaking news and attract more customers.
Spotify Wrapped
Spotify Wrapped is a viral marketing campaign by Spotify released annually since 2016 between November 29 and December 6, allowing users to view a compilation of data about their activity on the platform over the preceding year, and inviting them to share a colorful pictorial representation of it on social media. Other brands started releasing similar features, like Apple with Apple Music Replay. In 2021, 120 million users accessed Spotify Wrapped.
Elf Original Song
Since Elf is an older brand, it has to get creative in how it markets its products to a newer audience. In October 2019 the company created a 15-second original song under the hashtag #EyesLipsFace for its customers to use in whatever form they chose on social media; Elf stands for eyes, lips and face. Social media can bring traction and awareness to such brands, reaching as many people as possible and ultimately generating more revenue. Elf created the campaign to raise awareness of its brand and made history with this marketing tactic, becoming the first makeup company to use a song, an original song at that, as a promotional tool. “Elf collaborated with Grammy-winning producer iLL Wayno and rising artist Holla FyeSixWun to create a catchy 15-second clip” (House of Marketers). These artists strategically handpicked specific influencers with big platforms to help further the song, and the chosen influencers made videos with Elf's song in the background, bringing further awareness to the makeup brand.
McDonald's Grimace Shake
In June 2023, McDonald's inadvertently took advantage of viral marketing with the rollout of Grimace's Birthday Meal, and more specifically, the Grimace Shake. During its release, a popular trend emerged where people would take videos of themselves drinking the Grimace Shake and then would be found in disturbing positions with purple goo (assumed to be from the shake) splattered across them. McDonald's, while not responsible for the trend themselves, did eventually go on to recognize it in a Twitter post that read (as Grimace): "meee pretending i don't see the grimace shake trendd". While the Grimace's Birthday campaign was already a success for McDonald's, the trend boosted sales even higher and kept them high all the way until the end of the promotion on June 29th.
Ghostface Real Estate Listing
In Autumn 2019, a real estate listing for a century-old home in Lansing, Michigan went viral when the listing agent (James Pyle) used the Ghostface character from the Scream movie in marketing photos that showcased the home on Realtor.com and Zillow. The listing went live on September 27, 2019, and quickly began trending on Facebook, garnering 300,000 views in 2 days, at which point a story on the unusual popularity of the listing appeared in a local newspaper. Pyle stated that he wanted to do something fun and novel for the Halloween season while keeping the photos professional, and hired photographer Bradley Johnson to take several pictures of him dressed as Ghostface raking leaves in the backyard, preparing to carve a pumpkin in the kitchen, standing on the front and back porches, and peeking out behind curtains and doors. The following day, the story was picked up by several radio stations, including K102.5 in Kalamazoo, WCRZ in Burton, WOMC and ALT97 in Detroit, as well as the Metro Times newspaper in Detroit. Following the increased attention on the Zillow listing, over the next few days the story appeared on major news networks.
Pyle stated that a normal listing typically received under 150 views, and his goal was to get between 500 and 1,000 views of the home. However, the Zillow listing ended up receiving over 20,000 views by October 1, one million views by October 2 and exceeded 1.2 million views by October 3. It was estimated that the combined views of the listings on both sites (Zillow and Realtor.com) exceeded 5 million in 5 days. The listing received a cash offer within 4 days and the immense popularity resulted in the home becoming overbooked during the open house and subsequent viewings. Due to the success of the listing, Pyle was scheduled to appear on “Good Morning America” on October 2, 2019. He was quoted as saying that he didn't think he would ever be able to duplicate the success of the listing, but he planned to try some additional variations for future listings. The listing continued to be popular even after the house was off the market. This approach was so successful that it became a recommended practice on Realtor.com.
See also
Advertising campaign
Clickbait
Content marketing
Growth hacking
Guerrilla marketing
Internet marketing
K-factor (marketing)
Mainstream media
Mobile marketing
Reply marketing
Social media marketing
Social video marketing
Spotify Wrapped
Viral phenomenon
Visual marketing
References
House of Marketers (HOM). “How Elf Cosmetics Conquered TikTok: A Case Study on Beauty Brand Success.” 30 May 2023. houseofmarketers.com.
Social media
Cultural trends
Advertising techniques
Memetics
1990s neologisms
Promotion and marketing communications
Publicity stunts
Social influence | Viral marketing | [
"Technology"
] | 6,141 | [
"Computing and society",
"Social media"
] |
351,077 | https://en.wikipedia.org/wiki/Transparency%20and%20translucency | In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without appreciable scattering of light. On a macroscopic scale (one in which the dimensions are much larger than the wavelengths of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) allows light to pass through but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces, or internally, where there is a change in the index of refraction. In other words, a translucent material is made up of components with different indices of refraction. A transparent material is made up of components with a uniform index of refraction. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including transparency, translucency and opacity among the involved aspects.
When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission.
Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. Absence of structural defects (voids, cracks, etc.) and molecular structure of most liquids are mostly responsible for excellent optical transmission.
Materials that do not transmit light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies. They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering.
Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly-lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent.
Etymology
Transparent: late Middle English: from Old French, from medieval Latin transparent- 'visible through', from Latin transparere, from trans- 'through' + parere 'be visible'.
Translucent: late 16th century (in the Latin sense): from Latin translucent- 'shining through', from the verb translucere, from trans- 'through' + lucere 'to shine'.
Opaque: late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form.
Introduction
With regard to the absorption of light, primary material considerations include:
At the electronic level, absorption in the ultraviolet and visible (UV-Vis) portions of the spectrum depends on whether the electron orbitals are spaced (or "quantized") such that electrons can absorb a quantum of light (or photon) of a specific frequency. For example, in most glasses, electrons have no available energy levels above them in the range of that associated with visible light, or if they do, the transition to them would violate selection rules, meaning there is no appreciable absorption in pure (undoped) glasses, making them ideal transparent materials for windows in buildings.
At the atomic or molecular level, physical absorption in the infrared portion of the spectrum depends on the frequencies of atomic or molecular vibrations or chemical bonds, and on selection rules. Nitrogen and oxygen are not greenhouse gases because, as symmetric diatomic molecules, they have no molecular dipole moment and their vibrations do not create one.
With regard to the scattering of light, the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include:
Crystalline structure: whether the atoms or molecules exhibit the 'long-range order' evidenced in crystalline solids.
Glassy structure: Scattering centers include fluctuations in density or composition.
Microstructure: Scattering centers include internal surfaces such as grain boundaries, crystallographic defects, and microscopic pores.
Organic materials: Scattering centers include fiber and cell structures and boundaries.
Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation.
Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of 0.5 μm. Scattering centers (or particles) as small as 1 μm have been observed directly in the light microscope (e.g., Brownian motion).
Transparent ceramics
Optical transparency in polycrystalline materials is limited by the amount of light scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometre, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries, which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent.
In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus, a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength, or roughly 600 nm / 15 = 40 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material.
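The rule of thumb quoted above (particle or grain-boundary size below roughly one-fifteenth of the wavelength) can be turned into a quick estimate across the visible band. The 1/15 factor is taken from the text; the wavelengths below are nominal values for violet, green and red light, and the function name is illustrative.

```python
def max_scattering_center_nm(wavelength_nm: float, fraction: float = 1 / 15) -> float:
    """Largest feature size (nm) expected to avoid significant scattering,
    using the rule-of-thumb fraction quoted in the text."""
    return wavelength_nm * fraction

for wavelength in (400, 550, 700):  # nominal violet, green and red wavelengths in nm
    print(f"{wavelength} nm light: features below ~{max_scattering_center_nm(wavelength):.0f} nm")
```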
Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by the methods of sol-gel chemistry and nanotechnology.
Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers.
The development of transparent panel products will have other potential advanced applications including high strength, impact-resistant materials that can be used for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits seen on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall.
Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3–5 μm mid-infrared range. Yttria is fully transparent from 3–5 μm, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. A combination of these two materials in the form of the yttrium aluminium garnet (YAG) is one of the top performers in the field.
Absorption of light in solids
When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect, or transmit light of certain frequencies. That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. The manner in which visible light interacts with an object is dependent upon the frequency of the light, the nature of the atoms in the object, and often, the nature of the electrons in the atoms of the object.
Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this.
Materials that do not allow the transmission of any light wave frequencies are called opaque. Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are composed of materials that are selective in their absorption of light frequencies. Thus they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color.
Absorption centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 μm) to shorter (0.4 μm) wavelengths: Red, orange, yellow, green, and blue (ROYGB) can all be identified by our senses in the appearance of color by the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include:
Electronic: Transitions in electron energy levels within the atom (e.g., pigments). These transitions are typically in the ultraviolet (UV) and/or visible portions of the spectrum.
Vibrational: Resonance in atomic/molecular vibrational modes. These transitions are typically in the infrared portion of the spectrum.
UV-Vis: electronic transitions
In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms that compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital.
The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of an atom, one of several things can and will occur:
A molecule absorbs the photon, some of the energy may be lost via luminescence, fluorescence and phosphorescence.
A molecule absorbs the photon, which results in reflection or scattering.
A molecule cannot absorb the energy of the photon and the photon continues on its path. This results in transmission (provided no other absorption mechanisms are active).
Most of the time, it is a combination of the above that happens to the light that hits an object. The states in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light. What happens is the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. But there are also existing special glass types, like special types of borosilicate glass or quartz that are UV-permeable and thus allow a high transmission of ultraviolet light.
Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can happen, then, to the absorbed energy: It may be re-emitted by the electron as radiant energy (in this case, the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e., transformed into heat), or the electron can be freed from the atom (as in the photoelectric effects and Compton effects).
Infrared: bond stretching
The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat, or thermal energy. Thermal energy manifests itself as energy of motion. Thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration. Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum. It swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10¹² cycles per second (terahertz radiation).
When a light wave of a given frequency strikes a material with particles having the same or (resonant) vibrational frequencies, those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted.
If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted.
Transparency in insulators
An object may be not transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light.
When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms. In metals, most of these are non-bonding electrons (or free electrons) as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface.
Most insulators (or dielectric materials) are held together by ionic bonds. Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses.
If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced.
Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. Absence of structural defects (voids, cracks, etc.) and molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. The liquid fills up numerous voids making the material more structurally homogeneous.
Light scattering in an ideal defect-free crystalline (non-metallic) solid that provides no scattering centers for incoming light will be due primarily to any effects of anharmonicity within the ordered lattice. Light transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven different crystalline forms of quartz silica (silicon dioxide, SiO2) are all clear, transparent materials.
Optical waveguides
Optically transparent materials focus on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless.
An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively.
When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core. Light travels along the fiber bouncing back and forth off of the boundary. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g., combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems.
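Using the nominal core and cladding indices quoted above (1.48 and 1.46), the critical angle for total internal reflection and the numerical aperture that determines the acceptance cone follow from standard relations. The sketch below evaluates both; the function names are illustrative, and the result describes this nominal index pair rather than any particular fiber.

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Angle from the normal above which light is totally internally
    reflected at the core/cladding boundary: sin(theta_c) = n_clad / n_core."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n_core^2 - n_clad^2); asin(NA) gives the acceptance
    half-angle for light entering the fiber from air."""
    return math.sqrt(n_core**2 - n_clad**2)

n_core, n_clad = 1.48, 1.46  # typical values quoted in the text
na = numerical_aperture(n_core, n_clad)
print(f"critical angle: {critical_angle_deg(n_core, n_clad):.1f} degrees")
print(f"numerical aperture: {na:.3f} "
      f"(acceptance half-angle about {math.degrees(math.asin(na)):.1f} degrees)")
```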
Mechanisms of attenuation
Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. It is an important factor limiting the transmission of a signal across large distances. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the very high quality of transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside.
In optical fibers, the main source of attenuation is scattering from molecular level irregularities, called Rayleigh scattering, due to structural disorder and compositional fluctuations of the glass structure. This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes. Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces are other factors resulting in attenuation. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber.
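Because attenuation is quoted in dB/km, the fraction of optical power remaining after a given length follows directly from the definition of the decibel. In the sketch below, the 0.2 dB/km coefficient is an assumed, illustrative figure typical of modern silica fiber rather than a value taken from the text, and the function name is invented for the example.

```python
def power_fraction_remaining(attenuation_db_per_km: float, length_km: float) -> float:
    """Convert a dB/km attenuation coefficient into the linear fraction of
    input power remaining after length_km: 10 ** (-(alpha * L) / 10)."""
    total_loss_db = attenuation_db_per_km * length_km
    return 10 ** (-total_loss_db / 10)

alpha = 0.2  # dB/km; assumed illustrative value for modern silica fiber
for km in (1, 10, 50, 100):
    print(f"{km:>4} km: {power_fraction_remaining(alpha, km):.1%} of input power remains")
```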
As camouflage
Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at depth; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. For the same reason, transparency in air is even harder to achieve, but a partial example is found in the
glass frogs of the South American rain forest, which have translucent skin and pale greenish limbs. Several Central American species of clearwing (ithomiine) butterflies and many dragonflies and allied insects also have wings which are mostly transparent, a form of crypsis that provides some protection from predators.
See also
Brillouin scattering
Clarity meter
Colloidal crystal
Haze (optics)
Light scattering
Pellicle mirror
Photonic crystal
Transparent metals
Turbidity
References
Further reading
Electrodynamics of continuous media, Landau, L. D., Lifshits. E.M. and Pitaevskii, L.P., (Pergamon Press, Oxford, 1984)
Laser Light Scattering: Basic Principles and Practice Chu, B., 2nd Edn. (Academic Press, New York 1992)
Solid State Laser Engineering, W. Koechner (Springer-Verlag, New York, 1999)
Introduction to Chemical Physics, J.C. Slater (McGraw-Hill, New York, 1939)
Modern Theory of Solids, F. Seitz, (McGraw-Hill, New York, 1940)
Modern Aspects of the Vitreous State, J.D.MacKenzie, Ed. (Butterworths, London, 1960)
External links
UV stability
Properties of Light
UV-Vis Absorption
Infrared Spectroscopy
Brillouin Scattering
Transparent Ceramics
Bulletproof Glass
Transparent ALON Armor
Properties of Optical Materials
What makes glass transparent ?
Brillouin scattering in optical fiber
Thermal IR Radiation and Missile Guidance
Optical phenomena
Physical properties
Glass engineering and science
Dimensionless numbers of physics | Transparency and translucency | [
"Physics",
"Materials_science",
"Engineering"
] | 4,880 | [
"Glass engineering and science",
"Physical phenomena",
"Materials science",
"Optical phenomena",
"Materials",
"Transparent materials",
"Physical properties",
"Matter"
] |