| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,747,934 | https://en.wikipedia.org/wiki/2%20B%20R%200%202%20B | "2 B R 0 2 B" is a science fiction short story by Kurt Vonnegut, originally published in the digest magazine If: Worlds of Science Fiction in January 1962, and later collected in Vonnegut's Bagombo Snuff Box (1999). The title is pronounced "2 B R naught 2 B" and references the famous phrase "to be, or not to be" from William Shakespeare's Hamlet. The story explores themes of overpopulation, government control, and the value of human life, showcasing Vonnegut’s characteristic blend of dark humor and social commentary.
Publication History
"2 B R 0 2 B" was first published in the January 1962 issue of If: Worlds of Science Fiction. It was later included in Vonnegut’s 1999 collection Bagombo Snuff Box. The story has been widely reprinted and is often included in anthologies of classic science fiction.
Plot Summary
The setting is a society in which aging has been cured, individuals have indefinite lifespans, and population control is used to limit the population of the United States to forty million, a number which is maintained through a combination of infanticide and government-assisted suicide. In short, for someone to be born, someone else must first volunteer to die. As a result, births are few and far between, and deaths occur primarily by accident.
The scene is a waiting room at the Chicago Lying-In Hospital, where Edward K. Wehling Jr. is faced with the situation that his wife is about to give birth to triplets, but he has found only one person, his maternal grandfather, who will volunteer to die. A painter on a stepladder is redecorating the room with a mural depicting employees who work at the hospital, including Dr. Benjamin Hitz, the hospital's Chief Obstetrician. Leora Duncan, from the Service Division of the Federal Bureau of Termination, arrives to pose for the mural. The mural depicts a well-tended garden, a metaphor for the United States of the story. Later, Dr. Hitz enters the scene and converses with everyone but the painter of the mural.
It becomes apparent to all that Wehling is in a state of despair since he does not want to send his grandfather and two of his children to death. Dr. Hitz questions Wehling's belief in the system and tries to make Wehling feel better by explaining how the surviving child will "live on a happy, roomy, clean, rich planet." Suddenly, Wehling draws a revolver and kills Dr. Hitz, Leora Duncan, and himself, "making room for all three children."
The painter, who is about 200 years old, is left to reflect on the scene and thinks about life, war, plague, and starvation. Descending the stepladder, he initially takes the revolver and intends to kill himself with it but is unable to do so. Instead, he calls the Bureau of Termination to make an appointment. The story ends with a line spoken by the receptionist at the Bureau.
Themes and Analysis
"2 B R 0 2 B" presents a dystopian future where population control is strictly enforced, reflecting Vonnegut’s concerns about overpopulation and the potential dehumanization within bureaucratic systems. The title, a play on Shakespeare’s "To be, or not to be," underscores the existential questions at the heart of the story. Vonnegut’s use of dark humor and irony serves to critique societal norms and question the morality of government intervention in life and death.
Cultural Impact and Reception
Since its publication, "2 B R 0 2 B" has been recognized as one of Vonnegut’s most impactful short stories, frequently cited in discussions of dystopian literature. Critics have praised its incisive social commentary and the chilling plausibility of its future vision. The story has been studied in academic settings for its exploration of ethics, population control, and the role of government.
Adaptations
Vonnegut's story was the basis for the 2016 Canadian short film 2BR02B: To Be or Naught to Be, directed by Marco Checa Garcia, which premiered at the Sci-Fi-London festival in April 2016. The story has also inspired other adaptations, including an audiobook version narrated by Matt Montanez in 2024, a dramatic reading available on multiple platforms, and a graphic novel adaptation by Jim Tierney published in 2012. Additionally, the story has been performed as a stage adaptation by the Portland Stage Company in 2011.
Influence on Other Works
Vonnegut’s story has influenced a range of other writers and filmmakers who explore themes of dystopia and population control. The concept of government-regulated life and death in 2 B R 0 2 B can be seen echoed in works such as the film Logan's Run by Michael Anderson, which similarly examines a society that imposes strict population control measures.
References
External Links
Audio Book 2BR02B at Verkaro.org (Archive)
2BR02B: To Be or Naught to Be (2016) at the Internet Movie Database
Short stories by Kurt Vonnegut
1962 short stories
Telephone numbers
Science fiction short stories
Works originally published in If (magazine)
Hospitals in fiction
Short stories adapted into films
Overpopulation fiction | 2 B R 0 2 B | [
"Mathematics"
] | 1,094 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
14,748,020 | https://en.wikipedia.org/wiki/Jane%20%28software%29 | Jane is a discontinued GUI-based integrated software package for the Apple II, Commodore 64 and Commodore 128 personal computers. It was developed by Arktronics in 1984, and the Commodore version was published by Commodore in 1985. The same year, it was also published for the French computer Thomson MO5. Like Commodore's earlier Magic Desk software, it used a literal desktop metaphor with the interface consisting of an onscreen graphic of a desktop with icons representing associated business tools: a typewriter represented the word processor component (JaneWrite), a filing cabinet for the database (JaneList), a calculator for the spreadsheet (JaneCalc) and so on. It was designed to be controlled by either a joystick, a mouse or a light pen. Like most of the other examples of integrated software for home computers, Jane's components were criticized for being slow and limited. It was not a success in the marketplace but represented an early example of a graphical interface on an 8-bit computer.
Arktronics was a software development company in Ann Arbor, Michigan, founded by Howard Marks and Bobby Kotick. Jane was originally intended to be a package not only for the Apple and Commodore lines, but also for the Atari 8-bit family and others. This transportability was engineered by a combination of higher-level systems written in the C language and machine-specific drivers written in the assembly language for each machine (6502 assembly for the Apple II and Commodore 64). For the C64, the DOS manager was written by Howard K. Weiner, and the font manager/window manager was written by Daniel J. Weiner. The Weiner brothers both went on to attend the University of Michigan Integrated Pre-medical-Medical (Inteflex) Program. Other programmers included Andrew Marcheff (z”l) and Thomas Naughton.
References
Apple II software
Commodore 64 software
1984 software
1985 software | Jane (software) | [
"Technology"
] | 388 | [
"Computing stubs",
"Software stubs"
] |
14,748,456 | https://en.wikipedia.org/wiki/NGC%201265 | NGC 1265 is a Fanaroff and Riley class 1 radio galaxy located in the constellation Perseus, a member of the Perseus Cluster.
References
External links
Simbad NGC 1265
3CRR Atlas 3C 83.1B
Radio galaxies
Elliptical galaxies
083.1B
1265
02651
12287
Perseus (constellation)
Perseus Cluster
012287 | NGC 1265 | [
"Astronomy"
] | 80 | [
"Perseus (constellation)",
"Constellations"
] |
14,748,503 | https://en.wikipedia.org/wiki/Mageu | Mageu (Setswana spelling), Mahewu (Shona/Chewa/Nyanja spelling), Mahleu (Sesotho spelling), Magau (xau-Namibia) (Khoikhoi spelling), Madleke (Tsonga spelling), Mabundu (Tshivenda spelling), maHewu, amaRhewu (Xhosa spelling) or amaHewu (Zulu and Northern Ndebele spelling) is a traditional Southern African non-alcoholic drink among many of the Chewa/Nyanja, Shona, Ndebele, Nama Khoikhoi and Damara people, Sotho people, Tswana people and Nguni people made from fermented mealie pap. Home production is still widely practised, but the drink is also available at many supermarkets, being produced at factories.
Its taste is derived predominantly from the lactic acid that is produced during fermentation, but commercial mageu is often flavoured and sweetened, much in the way commercially-available yogurt is. Similar beverages are also made in other parts of Africa.
Fermentation process
Thin mealie pap (maize meal) is prepared, to which wheat flour is added, providing the inoculum of lactate-producing bacteria. The mixture is left to ferment, typically in a warm area. Pasteurisation is done in commercial operations to extend shelf-life.
Nutrition
Nutritionally, it is similar to its parent mealie meal, but with the glucose metabolized to lactate during fermentation. Commercial preparations are often enriched with vitamins and minerals (in South Africa, the term 'fortification' is legally reserved for specific, government-sanctioned nutrition programs, such as that for bread). Although typically considered non-alcoholic, very small amounts (less than 1%) of ethanol have been reported.
See also
Ogi
Tejuino
Boza
References
Steinkraus, Keith H. "Industrialization of indigenous fermented foods". Google books. Accessed May 2010.
South African cuisine
Fermented drinks
Maize-based drinks | Mageu | [
"Biology"
] | 439 | [
"Fermented drinks",
"Biotechnology products"
] |
14,748,589 | https://en.wikipedia.org/wiki/Emergency%20Architects%20Foundation | The Emergency Architects Foundation is a French non-governmental organization,
reconnue d'utilité publique. It is organised as a not-for-profit foundation with the French Order of Architects as supporters and is accredited by the United Nations and the European Union.
The Emergency Architects Foundation is also present in Australia (Emergency Architects Australia) and in Canada (Emergency Architects Canada).
The aim of Emergency Architects is to bring help and technical aid to the victims of natural, technological and human disasters, not only in safety and security evaluations of the populations but also in post-disaster reconstruction programs focused on long-term development and risk mitigation.
History
The Emergency Architects Foundation was created in April 2001 by Patrick Coulombel (architect) in Amiens in Picardy, France as a result of the flooding of the River Somme in 2001. A group of architects formed the organisation so as to mobilize technical assistance to the disaster victims and protect the rich cultural heritage of the region.
Emergency Architects Australia (EAA) was started in 2005 in response to the Indonesian tsunami and since that has worked in Indonesia, Pakistan, the Solomon Islands, Sri Lanka, Timor Leste, Papua New Guinea, Cook Islands, India and Australia. The president of EAA is Andrea Nield. EAA has over 300 members and is supported by the Union of International Architects, Australian Institute of Architects National, the Victorian and NSW RAIA Chapters, and The Council of the City of Sydney. Among the important collaborators are The European Union, UNHCR, UNICEF, NZAID, Australian Red Cross, World Vision, and Caritas.
Emergency Architects Canada was founded in 2007. Its president is Bernard Mac Namara.
Emergency Architects has led 28 operations in 24 countries, carried out more than 39,600 assessments, and worked on about 8,500 buildings.
At the moment, the foundation is undertaking 12 programs in the following countries: Afghanistan, the Solomon Islands, Indonesia, Lebanon, Pakistan, Peru, Sri Lanka, Chad, East Timor and Australia.
Aims
The main objectives of the Emergency Architects Foundation are:
to support and develop architects' humanitarian engagement in France and worldwide and thus to contribute to the development of architecture,
to train architects with skills to help populations affected by natural, technological or human disasters,
to encourage the training of architects in France and worldwide,
to preserve and promote architectural, historical and cultural world heritage.
Functioning
The architects, engineers and planners have used their professional expertise (knowledge of risk prevention and of building) to provide appropriate and sustainable assistance to the populations affected by natural (tsunami, earthquake), technological (chemical factory explosion) or human (civil) disasters.
They always work with local populations and use local materials in their buildings.
The foundation employs 533 persons from 23 countries. Overall, more than 1,200 architects and engineers have been involved in Emergency Architects achievements in 21 countries.
Fields of operation
The foundation takes part in the following operations:
Emergency
Cartographic evaluation missions and fieldwork evaluation, in order to allow the architects to quickly understand the disaster and its effects on the population, to estimate the extent of damage, and to identify and define the human and logistical means for ensuring the immediate safety of populations and their quick rehousing.
Facilitate ethical partnerships between Emergency Architects, local communities, local governments and aid organisations.
Securing populations by installing safety perimeters around damaged structures that could present a danger, surveying public services and housing, organising strategies for the stabilisation or evacuation of inhabitants in case of danger, and identifying strategic methods for keeping a population safe.
Missions in Refugee camps to improve living conditions of refugee populations in these Refugee camps.
Rebuilding
Help to rebuild sustainable and decent housing and restore basic economic and education infrastructure.
Re-house displaced populations.
Risk prevention: Analyse relevant environmental, urban, technological and architectural factors with regards to rebuilding safely.
Capacity building: Preserve traditional know-how while adding features to make them resistant to future disasters.
Promote the training of locals, from masons to architects.
Actions and interventions
France
Floods of the Somme Region (2001)
AZF chemical factory explosion (2001)
Floods of the Gard Region (2002)
Earthquake in Martinique (2007)
Africa
Algeria, Boumerdès earthquake (2003)
Morocco, Al Hoceima earthquake (2004)
Chad, Refugee camps in the east of the Chad (2007)
Sudan, displacement refugee report (2007)
DRC (Democratic Republic of the Congo), displacement study
Mauritius, slum remediation (2008)
Madagascar, earthquake (2004)
Senegal, schools building program
Asia
Timor Leste, School Sanitation project, building CVTL Headquarters and Maliana Gymnasium (2010-2011)
Bangladesh, floods (2004)
Iran, Earthquake in Bam (2003)
Afghanistan, Training Workshop in Kabul (2004)
Indonesia, 2004 tsunami (2004)
Sri Lanka, 2004 tsunami (2004)
Thailand, Khao Lak Island (June 2004)
South Asia, 2004 tsunami (2005)
Pakistan, 2005 Kashmir earthquake (2005)
Indonesia, 2006 Yogyakarta earthquake (2006)
Palestinian territories, reconstruction in Palestinian camps (2009)
Europe
2002 European floods (2002)
America
Grenada Island and Haiti, Hurricane Ivan and Hurricane Jeanne (2004)
Peru 2007 Peru earthquake (2007)
Canada, Contribution to the housing of underprivileged people (2007)
Oceania
Cook Islands Aitutaki, Cyclone Pat (2010)
Solomon Islands, building school programs (2008) (2009) (2010) and latrine programs (2011)
Solomon Islands, rebuilding post tsunami/earthquake (2007)
Australia, construction of a temporary village after bushfires (2009)
International accreditations and awards
The Emergency Architects Foundation has received several awards:
The French Ministry of Foreign Affairs: contract for actions of international solidarity as an accredited French Foundation. It is now a foundation reconnue d'utilité publique.
The European Commission: A signed accord with ECHO
The United Nations: Consultative status with ECOSOC and collaborations with agencies UN-Habitat, UNDP, UNHCR, UNICEF, WWF, UNOSAT
Union International Architects: 2008 signed partnership agreement
EAA is a signatory to the ACFID Code of Conduct and became a full member in 2008. EAA has OADG status.
Two international prizes for the Sigli, Indonesia Project:
2005 IFI AWARD, for Quality of Project and services to Humanity
2006 Sustainable Development Prize from Imerys International
and:
2008 Marion Mahony Griffin Prize to Andrea Nield
2009 AMO prize: Habitat, Architecture, Environnement, mention initiative
A citation in the 4th cycle of the World Architecture Community Awards.
See also
Architects Assist
Engineers Without Borders
References
External links
Official Website
Australian Chapter Website
Non-profit organizations based in France
Architecture organizations
Disaster management | Emergency Architects Foundation | [
"Engineering"
] | 1,353 | [
"Architecture organizations",
"Architecture"
] |
14,748,610 | https://en.wikipedia.org/wiki/3C%20305 | 3C 305, also known as IC 1065, is a lenticular galaxy located in the constellation Draco. The galaxy is located 577 million light-years away from Earth. It has an active galactic nucleus and is classified as a Seyfert 2 galaxy. This galaxy was discovered by American astronomer Lewis Swift on April 7, 1888.
3C 305 is also a radio galaxy. It shows an extended X-ray halo previously detected by Chandra X-ray and Very Large Array observations and a hydrogen outflow with a jet power of ∼10⁴³ erg s⁻¹.
In addition, 3C 305 shows broad HI absorption, which researchers have interpreted as a jet-cloud interaction. There are also signs that 3C 305 might have been involved in a recent merger with another gas-rich galaxy.
One supernova has been observed in the galaxy so far: SN 2003jb (type Ia, mag. 16.5), discovered in December 2003.
References
External links
Radio galaxies
Third Cambridge Survey 305
305
9553
Draco (constellation)
52924
+11-18-008
Seyfert galaxies | 3C 305 | [
"Astronomy"
] | 227 | [
"Constellations",
"Draco (constellation)"
] |
14,748,809 | https://en.wikipedia.org/wiki/3C%20390.3 | 3C 390.3 is a broad-line radio galaxy located in the constellation Draco. It is also a Seyfert 1 galaxy which is an X-ray source.
References
External links
Simbad
3CRR atlas: 3C390.3
Radio galaxies
390.3
Draco (constellation) | 3C 390.3 | [
"Astronomy"
] | 65 | [
"Galaxy stubs",
"Astronomy stubs",
"Constellations",
"Draco (constellation)"
] |
14,749,290 | https://en.wikipedia.org/wiki/NGC%206251 | NGC 6251 is an active supergiant elliptical radio galaxy in the constellation Ursa Minor, and is more than 340 million light-years away from Earth. The galaxy has a Seyfert 2 active galactic nucleus, and is one of the most extreme examples of a Seyfert galaxy. This galaxy may be associated with gamma-ray source 3EG J1621+8203, which has high-energy gamma-ray emission. It is also noted for its one-sided radio jet—one of the brightest known—discovered in 1977. The supermassive black hole at the core has a mass of .
References
External links
www.jb.man.ac.uk/atlas/
Wikisky image of NGC 6251
Hubble Finds a Bare Black Hole Pouring Out Light (Probing the heart of the active galaxy NGC 6251—September 10, 1997)
Seyfert galaxies
Radio galaxies
Ursa Minor
Elliptical galaxies
6251
10501
58472 | NGC 6251 | [
"Astronomy"
] | 198 | [
"Ursa Minor",
"Constellations"
] |
14,749,308 | https://en.wikipedia.org/wiki/Iodine%20oxide | Iodine oxides are chemical compounds of oxygen and iodine. Iodine has only two stable oxides which are isolatable in bulk, iodine tetroxide and iodine pentoxide, but a number of other oxides are formed in trace quantities or have been hypothesized to exist.
The chemistry of these compounds is complicated with only a few having been well characterized. Many have been detected in the atmosphere and are believed to be particularly important in the marine boundary layer.
Molecular compounds
Diiodine monoxide has largely been the subject of theoretical study, but there is some evidence that it may be prepared in a similar manner to dichlorine monoxide, via a reaction between HgO and I2. The compound appears to be highly unstable but can react with alkenes to give halogenated products.
The radicals iodine monoxide (IO) and iodine dioxide (IO2), collectively referred to as IOx, and iodine tetroxide (I2O4) all possess significant and interconnected atmospheric chemistry. They are formed, in very small quantities, in the marine boundary layer by the photooxidation of diiodomethane, which is produced by macroalgae such as seaweed, or through the oxidation of molecular iodine, produced by the reaction of gaseous ozone and iodide present at the sea surface. Despite the small quantities produced (typically below parts-per-trillion levels), they are thought to be powerful ozone depletion agents.
Diiodine pentoxide (I2O5) is the anhydride of iodic acid and the only stable anhydride of an iodine oxoacid.
Tetraiodine nonoxide (I4O9) has been prepared by the gas-phase reaction of I2 with O3 but has not been extensively studied.
Iodate anions
Iodine oxides also form negatively charged anions, which (associated with complementary cations) are components of acids or salts. These include the iodates and periodates.
Their conjugate acids are the corresponding iodine oxoacids. The −1 oxidation state, hydrogen iodide, is not an oxide, but it is often included alongside them for completeness.
The periodates include two variants: metaperiodate (IO₄⁻) and orthoperiodate (IO₆⁵⁻).
See also
Oxygen fluoride
Chlorine oxide
Bromine oxide
References
Oxides
Iodides
Iodine compounds | Iodine oxide | [
"Chemistry"
] | 489 | [
"Oxides",
"Salts"
] |
14,749,434 | https://en.wikipedia.org/wiki/Water%20immersion%20objective | In light microscopy, a water immersion objective is a specially designed objective lens used to increase the resolution of the microscope. This is achieved by immersing both the lens and the specimen in water which has a higher refractive index than air, thereby increasing the numerical aperture of the objective lens.
Applications
Water immersion objectives are used not only at very high magnifications that require high resolving power, but also at moderate magnifications; water immersion objectives are available with magnifications as low as 4X. Objectives with high magnification have short focal lengths, facilitating the use of water. The water is applied to the specimen (conventional microscope), and the stage is raised, immersing the objective in water. With water dipping objectives, the objective is sometimes immersed directly in the aqueous solution containing the specimens to be examined. Electrophoretic preparations used in the case of the comet assay can benefit from the use of water objectives.
The refractive index of water (1.33) is closer to those of imaged materials and to the glass of the cover slip, so more light is collected and focused by this type of objective compared with air (dry) objectives, leading to a range of higher numerical apertures (NA).
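As a brief illustration (the relationship below is standard optics rather than a statement from this article, and the numbers are approximate), the numerical aperture of an objective is

\mathrm{NA} = n \sin\theta

where n is the refractive index of the immersion medium and θ is the half-angle of the widest cone of light the objective can accept. With a practical limit of roughly sin θ ≈ 0.95, a dry objective (n ≈ 1.00) cannot exceed NA ≈ 0.95, whereas a water immersion objective (n ≈ 1.33) can reach an NA of about 1.2.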
Correction collar
Unlike oil, water does not have the same or a nearly identical refractive index to the cover slip glass, so a correction collar is needed to compensate for variations in cover slip thickness. Lenses without a correction collar are generally made for use with a 0.17 mm cover slip or for use without a cover slip (dipping lenses).
See also
Oil immersion objective
Microscopy
Optical microscope
Index-matching material
References
Microscopy | Water immersion objective | [
"Chemistry"
] | 335 | [
"Microscopy"
] |
14,750,034 | https://en.wikipedia.org/wiki/Energy%20elasticity | Energy elasticity is a term used with reference to the energy intensity of Gross Domestic Product. It is "the percentage change in energy consumption to achieve one per cent change in national GDP".
This term has been used when describing sustainable growth in the developing world, while being aware of the need to maintain the security of energy supply and constrain the emission of additional greenhouse gases. Energy elasticity is a top-line measure, as the commercial energy sources used by the country in question are normally further itemised as fossil, renewable, etc.
For example, India's national Integrated Energy Policy of 2005 noted current elasticity at 0.80, while planning for 7-8% GDP growth. It expected to be able to reduce this to 0.75 from 2011 and to 0.67 from 2021-22. By 2007, India's Ambassador was able to inform the United Nations Security Council that its GDP was growing by 8%, with only 3.7% growth in its total primary energy consumption, suggesting it had effectively de-linked energy consumption from economic growth.
China has shown the opposite relationship, as, after 2000, it has consumed proportionately more energy to achieve its high double-digit growth rate. Although there are problems with the quality of the estimates of both GDP and energy consumption, by 2003-4 observers placed Chinese energy elasticity at approximately 1.5. For every one percent increase in GDP, energy demand grew by 1.5 percent. Much of this extra demand has been sourced internationally from fossil fuels, such as coal and petroleum.
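As a rough illustration of the definition above (a minimal sketch, not part of the source article; the function name and the reuse of the Indian and Chinese figures as inputs are assumptions for demonstration only):

```python
# Minimal sketch: energy elasticity as the ratio of percentage growth in
# energy consumption to percentage growth in GDP.

def energy_elasticity(gdp_growth_pct: float, energy_growth_pct: float) -> float:
    """Return energy elasticity: % change in energy use per 1% change in GDP."""
    return energy_growth_pct / gdp_growth_pct

# India's 2007 figures cited above: ~8% GDP growth with ~3.7% growth in
# primary energy consumption.
print(energy_elasticity(8.0, 3.7))    # ~0.46

# An elasticity above 1, as estimated for China around 2003-4 (~1.5), means
# energy demand grows faster than GDP (illustrative numbers below).
print(energy_elasticity(10.0, 15.0))  # 1.5
```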
References
Energy economics
Economic indicators | Energy elasticity | [
"Environmental_science"
] | 321 | [
"Energy economics",
"Environmental social science"
] |
7,410,076 | https://en.wikipedia.org/wiki/Mark%20I%20NAAK | In the United States military, the Mark I NAAK, or MARK I Kit, ("Nerve Agent Antidote Kit") is a dual-chamber autoinjector: Two anti-nerve agent drugs—atropine sulfate and pralidoxime chloride—each in injectable form, constitute the kit. The kits are only effective against the nerve agents tabun (GA), sarin (GB), soman (GD) and VX.
Typically, U.S. servicemembers are issued three MARK I Kits when operating in circumstances where chemical weapons are considered a potential hazard. Along with the three kits are issued one CANA (Convulsive Antidote, Nerve Agent) for simultaneous use. (CANA is the drug diazepam or Valium, an anticonvulsant.) Both of these kits are intended for use in "buddy aid" or "self aid" administration of the drugs prior to decontamination and delivery of the patient to definitive medical care for the condition.
A newer model, the ATNAA (Antidote Treatment Nerve Agent Auto-Injector), has both the atropine and the pralidoxime in one syringe, allowing for simplified administration.
The use of a Mark I or ATNAA kit counteracts the effects of the nerve agents, thereby reducing the number of fatal casualties in the event of chemical warfare. The kits should only be administered if nerve agents have been absorbed or inhaled.
References
U.S. Army Medical Research Institute of Chemical Defense, Medical Management of Chemical Casualties Handbook, Third Edition (June 2000), Aberdeen Proving Ground, MD, pp 118-126.
Military medicine in the United States
Antidotes
Drug delivery devices | Mark I NAAK | [
"Chemistry"
] | 358 | [
"Pharmacology",
"Drug delivery devices",
"Nerve agents",
"Chemical weapons"
] |
7,410,115 | https://en.wikipedia.org/wiki/Sesamol | Sesamol is a natural organic compound which is a component of sesame seeds and sesame oil, with anti-inflammatory, antioxidant, antidepressant and neuroprotective properties. It is a white crystalline solid that is a derivative of phenol. It is sparingly soluble in water, but miscible with most oils. It can be produced by organic synthesis from heliotropine.
Sesamol has been found to be an antioxidant that may prevent the spoilage of oils; it may also act as an antifungal, further protecting oils from spoilage. It can be used in the synthesis of paroxetine.
Sesamol's molecular targets and mechanism of action, at least for its antidepressant-like effects, have been found to involve brain nerve growth factor (NGF) and endocannabinoid signalling under the regulatory drive of the CB1 receptors.
Alexander Shulgin used sesamol in his book PiHKAL to make MMDA-2.
See also
Sesamin and sesamolin, two lignans found in sesame oil
References
Natural phenols
Phenol antioxidants
Benzodioxoles
Sesame | Sesamol | [
"Chemistry"
] | 253 | [
"Biomolecules by chemical classification",
"Natural phenols"
] |
7,410,826 | https://en.wikipedia.org/wiki/Centre%20for%20Renewable%20Energy%20Systems%20Technology | The Centre for Renewable Energy Systems Technology (CREST) is a research centre into renewable energy based in the Department of Mechanical, Electrical and Manufacturing Engineering, Loughborough University in England.
Profile
Established in 1993, it is recognised internationally as a centre of excellence in its field particularly in photovoltaic systems, materials and devices, wind power and integration of renewable energy into electricity grids. About fifty researchers, academics and associated staff are involved with CREST's work.
The MSc course in Renewable Energy Systems Technology, developed at CREST, is one of the longest established renewable energy masters courses globally. It is producing a stream of graduates who are working internationally in all aspects of the renewables industry. This course can be studied full-time or part-time distance learning. As an advanced technology course, the modules in the CREST MSc include biomass, wind, solar, water/marine and electrical integration. There is a strong emphasis on electrical generation throughout.
History
The centre was initially set up through the funding of Professor Tony Marmont of Beacon Energy, who remains a mentor and advisory committee member. Other advisory members include Sir Jonathon Porritt and Dr Andrew Garrad of Garrad Hassan & Partners Ltd. Professor Phil Eames is the director of CREST and Leon Freris is a visiting professor. Professor Freris, and Dr David Sharpe, a leading British wind turbine aerodynamicist who also worked at the centre, were founding members of the British Wind Energy Association (now RenewableUK). David Sharpe has since become known for his work as the inventor of the Aerogenerator.
Associations
The Masters course is one of only sixteen programmes in the UK admitted to the Panasonic Fellowship programme (run by the Royal Academy of Engineering). The Panasonic Fellowship programme is aimed at recent graduates who wish to embark on a full-time master's degree course in environmental studies or sustainable development.
CREST is a participating university in the EUREC European Masters Program in Renewable Energy, an initiative supported by the European Commission to expand the European renewable energy industry.
The group is located in the Holywell Park area of the Loughborough campus, adjacent to the offices of the Energies Technology Institute (ETI).
References
External links
Official site of CREST
Official site of Department of Mechanical, Electrical and Manufacturing Engineering
European Masters in Renewable Energy
Loughborough University
Research institutes in Leicestershire
Renewable energy organizations
Organisations based in Leicestershire
Organizations established in 1993
Buildings and structures in Leicestershire
Renewable energy in England
1993 establishments in England | Centre for Renewable Energy Systems Technology | [
"Engineering"
] | 500 | [
"Renewable energy organizations",
"Energy organizations"
] |
7,411,006 | https://en.wikipedia.org/wiki/Frankie%20Trull | Frankie Trull is an American science advocate and lobbyist. She is founder and president of the Foundation for Biomedical Research, a non-profit organization that educates the public about animal research in the quest for medical advancements, treatments and cures for both humans and animals. Trull is also president of the National Association for Biomedical Research (NABR), which aims to provide a unified voice for the scientific community on legislative and regulatory matters affecting humane laboratory animal research.
Trull received her undergraduate degree from Boston University and her master's degree from Tufts University. She is founder and president of Policy Directions Inc., a Washington, D.C.–based government relations/strategic government communications firm, which specializes in health, medical research and advocacy, medical education and biotechnology, pharmaceutical and agriculture issues and assists large and small companies and nonprofits in addressing legislative and regulatory initiatives and policy development. Trull also serves on the board of overseers of the Tufts University Cummings School of Veterinary Medicine.
Trull played an instrumental role in coordinating Congressional consensus for the passage of the Animal Enterprise Terrorism Act (AETA), signed into law by President George W. Bush in 2006, to provide greater protection for researchers from animal rights extremists. Also in 2006, Trull coordinated the effort for successful passage of legislation to confer the Congressional Gold Medal on heart surgeon Michael E. DeBakey. She has also written numerous articles on the importance of biomedical research and the threat posed to the American research community by extremism.
Awards
In 1991, Trull was the recipient of the Distinguished Leadership Award from The Endocrine Society and the Presidential Award from the Society for Neuroscience. In 2003, she was given a Special Recognition Award from the American College of Laboratory Animal Medicine (ACLAM). In 2005, Trull received the Public Service Award from the Association of Allergy and Immunology, the Society of Toxicology's Contribution to the Public Awareness of Animal Welfare Award, and the award for Education in Neuroscience from the Association of Neuroscience Departments and Programs (ANDP). The Association of American Medical Colleges awarded Trull their Special Recognition Award in 2007.
References
"Terrorism in the name of animals", Los Angeles Times, August 18, 2008
Living people
American lobbyists
Animal testing
Boston University alumni
Tufts University alumni
Year of birth missing (living people) | Frankie Trull | [
"Chemistry"
] | 473 | [
"Animal testing"
] |
7,411,518 | https://en.wikipedia.org/wiki/Nital | Nital is a solution of nitric acid and alcohol commonly used for etching of metals. It is especially suitable for revealing the microstructure of carbon steels. The alcohol can be methanol or ethanol.
Mixtures of ethanol and nitric acid are potentially explosive. This commonly occurs by gas evolution, although ethyl nitrate can also be formed. Methanol is not liable to explosion but it is toxic.
A solution of ethanol and nitric acid will become explosive if the concentration of nitric acid reaches over 10% (by weight). Solutions above 5% should not be stored in closed containers. Nitric acid will continue to act as an oxidant in dilute and cold conditions.
In popular culture
Nital is a critical plot element in the Japanese manga series Dr. Stone, whose story revolves around the mysterious petrification of all mankind. Dubbed the "revival fluid", it has the unique property of undoing the petrification and freeing the petrified people. In the story it is made from nitric acid, initially obtained from bat guano found in a cave and later produced using the Ostwald process (with platinum as a catalyst and urine as an ingredient), mixed with highly distilled alcohol in a ratio of 3:7.
References
Etching
Solutions | Nital | [
"Chemistry"
] | 258 | [
"Homogeneous chemical mixtures",
"Solutions"
] |
7,411,939 | https://en.wikipedia.org/wiki/Cyanometer | A cyanometer (from cyan and -meter) is an instrument for measuring "blueness", specifically the colour intensity of blue sky. It is attributed to Horace-Bénédict de Saussure and Alexander von Humboldt. It consists of squares of paper dyed in graduated shades of blue and arranged in a color circle or square that can be held up and compared to the color of the sky.
History
Horace-Bénédict de Saussure, a Swiss physicist and mountain climber, is credited with inventing the cyanometer in the 1760s. De Saussure's cyanometer was divided into colored, numbered sections, ranging from white to gradually darker shades of blue, dyed with Prussian blue and arranged in a circle. The cyanometers were manually produced with a predefined recipe of watercolor concentration for each section, and then distributed to friends and fellow naturalists to gather more observations.
In an article from 1790, de Saussure presents an illustration of a wheel with 40 stops, though he clarifies that it serves merely to give the reader "an idea of its form"; the actual cyanometer had 53 stops (or "degrees"), starting with white as 0 and ending with black as 52.
De Saussure believed that the color of the sky was dependent on the amount of particles suspended in the atmosphere, and that these particles had an opaque color blue (thought to be 34 degrees on the scale). If this were true, then one could estimate the concentration of such particles using the cyanometer.
The tool was meant to be used outside, by holding it up to the sky and finding the closest color to the sky's. In an attempt to standardize testing, de Saussure also gave a few pointers on how observations should be made.
De Saussure used the device to measure the color of the sky at Geneva, Chamonix, and Mont Blanc (Col du Géant).
Alexander von Humboldt (1769–1859) was an eager user of the cyanometer on his voyages and explorations: during his trip across the Atlantic Ocean, he observed 23.5 degrees at noon; at the summit of Teide, a record 41 degrees; and, while climbing toward the summit of Chimborazo on 23 June 1802, Humboldt broke both the record for the highest altitude ever reached by humans and the record for the darkest sky yet observed, with 46 degrees on the cyanometer.
In his satirical verse epic Don Juan (Canto IV, 112), Lord Byron alludes to this device as an ironical means of measuring the blue of bluestocking ladies, crediting Humboldt for its invention.
Theory
The blueness of clear air in Earth's atmosphere is due to Rayleigh scattering by nitrogen and oxygen molecules. Dry air is 78% nitrogen and 21% oxygen. Atmospheric water content ranges from 0% to 5%.
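A short worked illustration of why Rayleigh scattering favors blue (this is standard physics rather than a statement from the article, and the wavelengths are approximate): the scattered intensity varies as

I \propto \lambda^{-4}

so blue light at about 450 nm is scattered roughly (650/450)^4 ≈ 4.4 times more strongly than red light at about 650 nm, which is why the clear sky away from the Sun appears blue.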
When looking through clear air toward the horizon, distant sunlight of all wavelengths (colors) will generally undergo Mie scattering from spherical suspended particles. In an unpolluted sky, these spherical particles will primarily be liquid water condensed onto natural atmospheric dust grains. This is known as "wet haze". Therefore, in an unpolluted clear sky, wet haze adds white sunlight to blue Rayleigh-scattered light. More wet haze in the observer's line of sight results in a brighter and paler blue sky color.
When looking toward the horizon, an observer looks through up to 40 times as much atmosphere compared to looking overhead. Therefore, more Mie scattering is seen when viewing parts of the sky closer to the horizon. A darker blue sky will be observed if less wet haze is in the observer's line of sight. This occurs when looking directly overhead and at a higher altitude.
See also
Diffuse sky radiation
Notes
References
Bibliography
External links
The Cyanometer Is a 225-Year-Old Tool for Measuring the Blueness of the Sky (9 May 2014), an article by Christopher Jobson for Colossal.
Atmospheric optical phenomena
Meteorological instrumentation and equipment
1789 introductions
1789 in science
Shades of blue
Historical scientific instruments | Cyanometer | [
"Physics",
"Technology",
"Engineering"
] | 836 | [
"Physical phenomena",
"Earth phenomena",
"Meteorological instrumentation and equipment",
"Measuring instruments",
"Optical phenomena",
"Atmospheric optical phenomena"
] |
7,412,236 | https://en.wikipedia.org/wiki/Steve%20Jobs | Steven Paul Jobs (February 24, 1955 – October 5, 2011) was an American businessman, inventor, and investor best known for co-founding the technology company Apple Inc. Jobs was also the founder of NeXT and chairman and majority shareholder of Pixar. He was a pioneer of the personal computer revolution of the 1970s and 1980s, along with his early business partner and fellow Apple co-founder Steve Wozniak.
Jobs was born in San Francisco in 1955 and adopted shortly afterwards. He attended Reed College in 1972 before withdrawing that same year. In 1974, he traveled through India, seeking enlightenment before later studying Zen Buddhism. He and Wozniak co-founded Apple in 1976 to further develop and sell Wozniak's Apple I personal computer. Together, the duo gained fame and wealth a year later with production and sale of the Apple II, one of the first highly successful mass-produced microcomputers.
Jobs saw the commercial potential of the Xerox Alto in 1979, which was mouse-driven and had a graphical user interface (GUI). This led to the development of the largely unsuccessful Apple Lisa in 1983, followed by the breakthrough Macintosh in 1984, the first mass-produced computer with a GUI. The Macintosh launched the desktop publishing industry in 1985 (for example, the Aldus Pagemaker) with the addition of the Apple LaserWriter, the first laser printer to feature vector graphics and PostScript.
In 1985, Jobs departed Apple after a long power struggle with the company's board and its then-CEO, John Sculley. That same year, Jobs took some Apple employees with him to found NeXT, a computer platform development company that specialized in computers for higher-education and business markets, serving as its CEO. In 1986, he bought the computer graphics division of Lucasfilm, which was spun off independently as Pixar. Pixar produced the first computer-animated feature film, Toy Story (1995), and became a leading animation studio, producing dozens of commercially successful and critically acclaimed films.
In 1997, Jobs returned to Apple as CEO after the company's acquisition of NeXT. He was largely responsible for reviving Apple, which was on the verge of bankruptcy. He worked closely with British designer Jony Ive to develop a line of products and services that had larger cultural ramifications, beginning with the "Think different" advertising campaign, and leading to the iMac, iTunes, Mac OS X, Apple Store, iPod, iTunes Store, iPhone, App Store, and iPad. Jobs was also a board member at Gap Inc. from 1999 to 2002. In 2003, Jobs was diagnosed with a pancreatic neuroendocrine tumor. He died of tumor-related respiratory arrest in 2011; in 2022, he was posthumously awarded the Presidential Medal of Freedom. Since his death, he has been granted 141 patents; Jobs holds over 450 patents in total.
Early life
Family
Steven Paul Jobs was born in San Francisco, California, on February 24, 1955, to Joanne Carole Schieble and Abdulfattah "John" Jandali. Abdulfattah Jandali was born in a Muslim household to wealthy Syrian parents, the youngest of nine siblings. After obtaining his undergraduate degree at the American University of Beirut, Jandali pursued a PhD in political science at the University of Wisconsin. There, he met Joanne Schieble, an American Catholic of Swiss-German descent whose parents owned a mink farm and real estate in Green Bay. The two fell in love but faced opposition from Schieble's father due to Jandali's Muslim faith. When Schieble became pregnant, she arranged for a closed adoption, and travelled to San Francisco to give birth.
Schieble requested that her son be adopted by college graduates. A lawyer and his wife were selected, but they withdrew after discovering that the baby was a boy, so Jobs was instead adopted by Paul Reinhold and Clara (née Hagopian) Jobs. Paul Jobs, an American of German descent, was the son of a dairy farmer from Washington County, Wisconsin. After dropping out of high school, he worked as a mechanic, then joined the US Coast Guard. When his ship was decommissioned at San Francisco, he bet he could find a wife within two weeks. He then met Clara Hagopian, an American of Armenian descent, and the two were engaged ten days later, in March 1946, and married that same year. The couple moved to Wisconsin, then Indiana, where Paul Jobs worked as a machinist and later as a car salesman. Since Clara missed San Francisco, she convinced Paul to move back. There, Paul worked as a repossession agent, and Clara became a bookkeeper. In 1955, after having an ectopic pregnancy, the couple looked to adopt a child. Since they lacked a college education, Schieble initially refused to sign the adoption papers, and went to court to request that her son be removed from the Jobs household and placed with a different family, but changed her mind after Paul and Clara promised to pay for their son's college tuition.
Infancy
In his youth, Jobs's parents took him to a Lutheran church. When Steve was in high school, Clara admitted to his girlfriend, Chrisann Brennan, that she "was too frightened to love [Steve] for the first six months of his life ... I was scared they were going to take him away from me. Even after we won the case, Steve was so difficult a child that by the time he was two I felt we had made a mistake. I wanted to return him." When Chrisann shared this comment with Steve, he stated that he was already aware, and later said that he had been deeply loved and indulged by Paul and Clara. Jobs would "bristle" when Paul and Clara were referred to as his "adoptive parents", and he regarded them as his parents "1,000%". Jobs referred to his biological parents as "my sperm and egg bank. That's not harsh, it's just the way it was, a sperm bank thing, nothing more."
Childhood
Paul Jobs held several jobs, including a stint as a machinist, several other positions, and then a return "back to work as a machinist". Paul and Clara adopted Jobs's sister Patricia in 1957, and by 1959 the family had moved to the Monta Loma neighborhood in Mountain View, California. Paul built a workbench in his garage for his son in order to "pass along his love of mechanics". Jobs, meanwhile, admired his father's craftsmanship "because he knew how to build anything. If we needed a cabinet, he would build it. When he built our fence, he gave me a hammer so I could work with him ... I wasn't that into fixing cars ... but I was eager to hang out with my dad."
Jobs had difficulty functioning in a traditional classroom, tended to resist authority figures, frequently misbehaved, and was suspended a few times. He frequently played pranks on others at Monta Loma Elementary School in Mountain View. His father Paul (who was abused as a child) never reprimanded him, however, and instead blamed the school for not challenging his brilliant son. Jobs skipped the 5th grade and transferred to the 6th grade at Crittenden Middle School in Mountain View, where he became a "socially awkward loner". Jobs was often "bullied" at Crittenden Middle, and in the middle of 7th grade, he gave his parents an ultimatum: either they would take him out of Crittenden or he would drop out of school.
The Jobs family was not affluent, and only by expending all their savings were they able to buy a new home in 1967, allowing Steve to change schools. The new house (a three-bedroom home on Crist Drive in Los Altos, California) was in the better Cupertino School District, in Cupertino, California. The house was declared a historic site in 2013, as the first site of Apple Computer. It has since been owned by Jobs's sister, Patty, and occupied by his stepmother, Marilyn. When he was 13, in 1968, Jobs was given a summer job by Bill Hewlett (of Hewlett-Packard) after Jobs cold-called him to ask for parts for an electronics project.
Homestead High
The location of the Los Altos home meant that Jobs would be able to attend nearby Homestead High School, which had strong ties to Silicon Valley. He began his first year there in late 1968 along with Bill Fernandez, who introduced Jobs to Steve Wozniak, and would become Apple's first employee. Neither Jobs nor Fernandez (whose father was a lawyer) came from engineering households and thus decided to enroll in John McCollum's Electronics I class. Jobs had grown his hair long and become involved in the growing counterculture, and the rebellious youth eventually clashed with McCollum and lost interest in the class.
Jobs underwent a change during mid-1970. He later noted to his official biographer that "I started to listen to music a whole lot, and I started to read more outside of just science and technology — Shakespeare, Plato. I loved King Lear ... when I was a senior I had this phenomenal AP English class. The teacher was this guy who looked like Ernest Hemingway. He took a bunch of us snowshoeing in Yosemite." During his last two years at Homestead High, Jobs developed two different interests: electronics and literature. These dual interests were particularly reflected during Jobs's senior year, as his best friends were Wozniak and his first girlfriend, the artistic Homestead junior Chrisann Brennan.
In 1971, after Wozniak began attending University of California, Berkeley, Jobs would visit him there a few times a week. This experience led him to study in nearby Stanford University's student union. Instead of joining the electronics club, Jobs put on light shows with a friend for Homestead's avant-garde jazz program. He was described by a Homestead classmate as "kind of brain and kind of hippie ... but he never fit into either group. He was smart enough to be a nerd, but wasn't nerdy. And he was too intellectual for the hippies, who just wanted to get wasted all the time. He was kind of an outsider. In high school everything revolved around what group you were in, and if you weren't in a carefully defined group, you weren't anybody. He was an individual, in a world where individuality was suspect." By his senior year in late 1971, he was taking a freshman English class at Stanford and working on a Homestead underground film project with Chrisann Brennan.
Around that time, Wozniak designed a low-cost digital "blue box" to generate the necessary tones to manipulate the telephone network, allowing free long-distance calls. He was inspired by an article titled "Secrets of the Little Blue Box" from the October 1971 issue of Esquire. Jobs decided then to sell them and split the profit with Wozniak. The clandestine sales of the illegal blue boxes went well and perhaps planted the seed in Jobs's mind that electronics could be both fun and profitable. In a 1994 interview, he recalled that it took six months for him and Wozniak to design the blue boxes. Jobs later reflected that had it not been for Wozniak's blue boxes, "there wouldn't have been an Apple". He states it showed them that they could take on large companies and beat them.
By his senior year of high school, Jobs began using LSD. He later recalled that on one occasion he consumed it in a wheat field outside Sunnyvale, and experienced "the most wonderful feeling of my life up to that point". In mid-1972, after graduation and before leaving for Reed College, Jobs and Brennan rented a house from their other roommate, Al.
Reed College
In September 1972, Jobs enrolled at Reed College in Portland, Oregon. He insisted on applying only to Reed, although it was an expensive school that Paul and Clara could ill afford. Jobs soon befriended Robert Friedland, who was Reed's student body president at that time. Brennan remained involved with Jobs while he was at Reed.
After just one semester, Jobs dropped out of Reed College without telling his parents. Jobs later explained this was because he did not want to spend his parents' money on an education that seemed meaningless to him. He continued to attend by auditing his classes, including a course on calligraphy that was taught by Robert Palladino. In a 2005 commencement speech at Stanford University, Jobs stated that during this period, he slept on the floor in friends' dorm rooms, returned Coke bottles for food money, and got weekly free meals at the local Hare Krishna temple. In that same speech, Jobs said: "If I had never dropped in on that single calligraphy course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts".
1974–1985
Pre-Apple
In February 1974, Jobs returned to his parents' home in Los Altos and began looking for a job. He was soon hired by Atari, Inc. in Los Gatos, California, as a computer technician. Back in 1973, Steve Wozniak designed his own version of the classic video game Pong and gave its electronics board to Jobs. According to Wozniak, Atari only hired Jobs because he took the board down to the company, and they thought that he had built it himself. Atari's cofounder Nolan Bushnell later described him as "difficult but valuable", pointing out that "he was very often the smartest guy in the room, and he would let people know that".
Jobs traveled to India in mid-1974 to visit Neem Karoli Baba at his Kainchi ashram with his Reed College friend and eventual Apple employee Daniel Kottke, searching for spiritual teachings. When they got to the Neem Karoli ashram, it was almost deserted because Neem Karoli Baba had died in September 1973. Then they made a long trek up a dry riverbed to an ashram of Haidakhan Babaji.
After seven months, Jobs left India and returned to the US ahead of Daniel Kottke. Jobs had changed his appearance; his head was shaved, and he wore traditional Indian clothing. During this time, Jobs experimented with psychedelics, later calling his LSD experiences "one of the two or three most important things [he had] done in [his] life". He spent a period at the All One Farm, a commune in Oregon that was owned by Robert Friedland.
During this time period, Jobs and Brennan both became practitioners of Zen Buddhism through the Zen master Kōbun Chino Otogawa. Jobs engaged in lengthy meditation retreats at the Tassajara Zen Mountain Center, the oldest Sōtō Zen monastery in the US. He considered taking up monastic residence at Eihei-ji in Japan, and maintained a lifelong appreciation for Zen, Japanese cuisine, and artists such as Hasui Kawase.
Jobs returned to Atari in early 1975, and that summer, Bushnell assigned him to create a circuit board for the arcade video game Breakout in as few chips as possible, knowing that Jobs would recruit Wozniak for help. During his day job at HP, Wozniak drew sketches of the circuit design; at night, he joined Jobs at Atari and continued to refine the design, which Jobs implemented on a breadboard. According to Bushnell, Atari offered a bonus for each TTL chip that was eliminated in the machine. Jobs made a deal with Wozniak to split the fee evenly between them if Wozniak could minimize the number of chips. Much to the amazement of Atari engineers, within four days Wozniak reduced the TTL count to 45, far below the usual 100, though Atari later re-engineered it to make it easier to test and add a few missing features. According to Wozniak, Jobs told him that Atari paid them only $750 (instead of the actual $5,000), and that Wozniak's share was thus $375. Wozniak did not learn about the actual bonus until ten years later but said that if Jobs had told him about it and explained that he needed the money, Wozniak would have given it to him.
Jobs and Wozniak attended meetings of the Homebrew Computer Club in 1975, which was a stepping stone to the development and marketing of the first Apple computer. According to a document released by the United States Department of Defense, Jobs claimed that in 1975, he was arrested in Eugene, Oregon, after being questioned for being a minor in possession of alcohol. Jobs alleged that he "didn't have any alcohol", but police questioned him, and subsequently determined that he had an outstanding arrest warrant for an unpaid speeding ticket. Jobs claimed he then paid the $50 fine. The arrest allegedly occurred "behind a store".
Apple (1976–1985)
By March 1976, Wozniak completed the basic design of the Apple I computer and showed it to Jobs, who suggested that they sell it; Wozniak was at first skeptical of the idea but later agreed. In April of that same year, Jobs, Wozniak, and administrative overseer Ronald Wayne founded Apple Computer Company (now called "Apple Inc.") as a business partnership in Jobs's parents' Crist Drive home on April 1, 1976. The operation originally started in Jobs's bedroom and later moved to the garage. Wayne stayed briefly, leaving Jobs and Wozniak as the active primary cofounders of the company.
The two decided on the name "Apple" after Jobs returned from the All One Farm commune in Oregon and told Wozniak about his time in the farm's apple orchard. Jobs originally planned to produce bare printed circuit boards of the Apple I and sell them to computer hobbyists. To fund the first batch, Wozniak sold his HP scientific calculator and Jobs sold his Volkswagen van. Later that year, computer retailer Paul Terrell purchased 50 fully assembled Apple I units for $500 each. Eventually about 200 Apple I computers were produced in total.
A neighbor on Crist Drive recalled Jobs as an odd individual who would greet his clients "with his underwear hanging out, barefoot and hippie-like". Another neighbor, Larry Waterland, who had just earned his PhD in chemical engineering at Stanford, recalled dismissing Jobs's budding business compared to the established industry of giant mainframe computers with big decks of punch cards: "Steve took me over to the garage. He had a circuit board with a chip on it, a DuMont TV set, a Panasonic cassette tape deck and a keyboard. He said, 'This is an Apple computer.' I said, 'You've got to be joking.' I dismissed the whole idea." Jobs's friend from Reed College and India, Daniel Kottke, recalled that as an early Apple employee, he "was the only person who worked in the garage ... Woz would show up once a week with his latest code. Steve Jobs didn't get his hands dirty in that sense." Kottke also stated that much of the early work took place in Jobs's kitchen, where he spent hours on the phone trying to find investors for the company.
They received funding from a then-semi-retired Intel product marketing manager and engineer named Mike Markkula. Scott McNealy, one of the cofounders of Sun Microsystems, said that Jobs broke a "glass age ceiling" in Silicon Valley because he'd created a very successful company at a young age. Markkula brought Apple to the attention of Arthur Rock, who, after looking at the crowded Apple booth at the Home Brew Computer Show, started with a $60,000 investment and joined the Apple board. Jobs was not pleased when Markkula recruited Mike Scott from National Semiconductor in February 1977 to serve as the first president and CEO of Apple.
After Brennan returned from her own journey to India, she and Jobs fell in love again, as Brennan noted changes in him that she attributes to Kobun (whom she was also still following). It was also at this time that Jobs displayed a prototype Apple II computer for Brennan and his parents in their living room. Brennan notes a shift in this time period, where the two main influences on Jobs were Apple Inc. and Kobun.
In April 1977, Jobs and Wozniak introduced the Apple II at the West Coast Computer Faire. It was the first consumer product sold by Apple Computer. The machine was primarily designed by Wozniak, while Jobs oversaw the development of its unusual case and Rod Holt developed the unique power supply. During the design stage, Jobs argued that the Apple II should have two expansion slots, while Wozniak wanted eight. After a heated argument, during which Wozniak told Jobs to "go get himself another computer", they agreed on eight slots. The Apple II became one of the first highly successful mass-produced microcomputer products in the world.
As Jobs became more successful with his new company, his relationship with Brennan grew more complex. By 1977, the success of Apple had become part of their relationship, and Brennan, Daniel Kottke, and Jobs moved into a house near the Apple office in Cupertino. Brennan eventually took a position in the shipping department at Apple. Brennan's relationship with Jobs deteriorated as his position with Apple grew, and she began to consider ending the relationship. In October 1977, Brennan was approached by Rod Holt, who asked her to take "a paid apprenticeship designing blueprints for the Apples". Both Holt and Jobs believed that it would be a good position for her, given her artistic abilities. Holt was particularly eager that she take the position and was puzzled by her ambivalence toward it. Brennan's decision, however, was overshadowed by the fact that she realized she was pregnant and that Jobs was the father. It took her a few days to tell Jobs, whose face, according to Brennan, "turned ugly" at the news. According to Brennan, at the beginning of her third trimester, Jobs said to her: "I never wanted to ask that you get an abortion. I just didn't want to do that." He also refused to discuss the pregnancy with her.
Brennan turned down the apprenticeship and decided to leave Apple. A few weeks before she was due to give birth, Brennan was invited to deliver her baby at the All One Farm, and she accepted the offer. When Jobs was 23 (the same age as his biological parents when they had him), Brennan gave birth to her baby, Lisa Brennan, on May 17, 1978. Jobs went there for the birth after he was contacted by Robert Friedland, their mutual friend and the farm owner. While distant, Jobs worked with her on a name for the baby, which they discussed while sitting in the fields on a blanket. Brennan suggested the name "Lisa", which Jobs also liked, and she noted that Jobs was very attached to the name "Lisa" while he "was also publicly denying paternity". She would discover later that during this time, Jobs was preparing to unveil a new kind of computer that he wanted to give a female name (his first choice was "Claire" after St. Clare). She stated that she never gave him permission to use the baby's name for a computer and that he hid the plans from her. Jobs worked with his team to come up with the phrase "Local Integrated Software Architecture" as an alternative explanation for the name of the Apple Lisa. Decades later, however, Jobs admitted to his biographer Walter Isaacson that "obviously, it was named for my daughter".
After Jobs denied paternity, a DNA test established him as Lisa's father, and he was required to pay Brennan monthly child support and to return the welfare money she had received; he was still making these monthly payments when Apple went public and made him a millionaire. Later, Brennan agreed to an interview with Michael Moritz for Time magazine for its Time Person of the Year special, released on January 3, 1983, in which she discussed her relationship with Jobs. Rather than name Jobs the Person of the Year, the magazine named the generic personal computer the "Machine of the Year". In the issue, Jobs questioned the reliability of the paternity test, which stated that the "probability of paternity for Jobs, Steven... is 94.1%". He responded by arguing that "28% of the male population of the United States could be the father". Time also noted that "the baby girl and the machine on which Apple has placed so much hope for the future share the same name: Lisa".
In 1978, at age 23, Jobs was worth over $1 million, and by age 25 his net worth had grown to over $100 million. He was also one of the youngest "people ever to make the Forbes list of the nation's richest people—and one of only a handful to have done it themselves, without inherited wealth". In 1982, Jobs bought an apartment on the top two floors of The San Remo, a Manhattan building with a politically progressive reputation. Although he never lived there, he spent years renovating it with the help of I. M. Pei. In 1983, Jobs lured John Sculley away from Pepsi-Cola to serve as Apple's CEO, asking, "Do you want to spend the rest of your life selling sugared water, or do you want a chance to change the world?".
In 1984, Jobs bought the Jackling House and estate and resided there for a decade. Thereafter, he leased it out for several years until 2000 when he stopped maintaining the house, allowing weathering to degrade it. In 2004, Jobs received permission from the town of Woodside to demolish the house to build a smaller, contemporary styled one. After a few years in court, the house was finally demolished in 2011, a few months before he died.
Jobs took over development of the Macintosh in 1981, from early Apple employee Jef Raskin, who had conceived the project. Wozniak and Raskin had heavily influenced the early program, and Wozniak was on leave during this time due to an airplane crash earlier that year, making it easier for Jobs to take over the project. On January 22, 1984, Apple aired a Super Bowl television commercial titled "1984", which ended with the words: "On January 24th, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like 1984." On January 24, 1984, an emotional Jobs introduced the Macintosh to a wildly enthusiastic audience at Apple's annual shareholders meeting held in the Flint Auditorium at De Anza College. Macintosh engineer Andy Hertzfeld described the scene as "pandemonium". The Macintosh was inspired by the Lisa (in turn inspired by Xerox PARC's mouse-driven graphical user interface), and it was widely acclaimed by the media with strong initial sales. However, its low performance and limited range of available software led to a rapid sales decline in the second half of 1984.
Sculley's and Jobs's respective visions for the company greatly differed. Sculley favored open architecture computers like the Apple II, targeting education, small business, and home markets less vulnerable to IBM. Jobs wanted the company to focus on the closed architecture Macintosh as a business alternative to the IBM PC. President and CEO Sculley had little control over chairman of the board Jobs's Macintosh division; it and the Apple II division operated like separate companies, duplicating services. Although its products provided 85% of Apple's sales in early 1985, the company's January 1985 annual meeting did not mention the Apple II division or employees. Many left, including Wozniak, who stated that the company had "been going in the wrong direction for the last five years" and sold most of his stock. Though frustrated with the company's and Jobs's dismissal of the Apple II in favor of the Macintosh, Wozniak left amicably and remained an honorary employee of Apple, maintaining a lifelong friendship with Jobs.
By early 1985, the Macintosh's failure to defeat the IBM PC became clear, and it strengthened Sculley's position in the company. In May 1985, Sculley—encouraged by Arthur Rock—decided to reorganize Apple, and proposed a plan to the board that would remove Jobs from the Macintosh group and put him in charge of "New Product Development". This move would effectively render Jobs powerless within Apple. In response, Jobs then developed a plan to get rid of Sculley and take over Apple. However, Jobs was confronted after the plan was leaked, and he said that he would leave Apple. The Board declined his resignation and asked him to reconsider. Sculley also told Jobs that he had all of the votes needed to go ahead with the reorganization. A few months later, on September 17, 1985, Jobs submitted a letter of resignation to the Apple Board. Five additional senior Apple employees also resigned and joined Jobs in his new venture, NeXT.
The Macintosh's struggle continued after Jobs left Apple. Though marketed and received with fanfare, the expensive Macintosh was hard to sell. In 1985, Bill Gates's then-developing company, Microsoft, threatened to stop developing Mac applications unless it was granted "a license for the Mac operating system software. Microsoft was developing its graphical user interface ... for DOS, which it was calling Windows and didn't want Apple to sue over the similarities between the Windows GUI and the Mac interface." Sculley granted Microsoft the license, which later led to problems for Apple. In addition, cheap IBM PC clones that ran Microsoft software and had a graphical user interface began to appear. Although the Macintosh preceded the clones, it was far more expensive, so "through the late 1980s, the Windows user interface was getting better and better and was thus taking increasingly more share from Apple". Windows-based IBM-PC clones also led to the development of additional GUIs such as IBM's TopView or Digital Research's GEM, and thus "the graphical user interface was beginning to be taken for granted, undermining the most apparent advantage of the Mac...it seemed clear as the 1980s wound down that Apple couldn't go it alone indefinitely against the whole IBM-clone market".
1985–1997
NeXT computer
Following his resignation from Apple in 1985, Jobs founded NeXT Inc. with $7 million. A year later he was running out of money, and he sought venture capital with no product on the horizon. Eventually, Jobs attracted the attention of billionaire Ross Perot, who invested heavily in the company. The NeXT computer was shown to the world in what was considered Jobs's comeback event, a lavish invitation-only gala launch event that was described as a multimedia extravaganza. The celebration was held at the Louise M. Davies Symphony Hall, San Francisco, California, on Wednesday, October 12, 1988. Steve Wozniak said in a 2013 interview that while Jobs was at NeXT he was "really getting his head together".
NeXT workstations were first released in 1990. Like the Apple Lisa, the NeXT workstation was technologically advanced and designed for the education sector, but it was largely dismissed as cost-prohibitive. The NeXT workstation was known for its technical strengths, chief among them its object-oriented software development system. Jobs marketed NeXT products to the financial, scientific, and academic community, highlighting its innovative, experimental new technologies, such as the Mach kernel, the digital signal processor chip, and the built-in Ethernet port. Making use of a NeXT computer, English computer scientist Tim Berners-Lee invented the World Wide Web in 1990 at CERN in Switzerland.
The revised, second generation NeXTcube was released in 1990. Jobs touted it as the first "interpersonal" computer that would replace the personal computer. With its innovative NeXTMail multimedia email system, NeXTcube could share voice, image, graphics, and video in email for the first time. "Interpersonal computing is going to revolutionize human communications and groupwork", Jobs told reporters. Jobs ran NeXT with an obsession for aesthetic perfection, as evidenced by the development of and attention to NeXTcube's magnesium case. This put considerable strain on NeXT's hardware division, and in 1993, after having sold only 50,000 machines, NeXT transitioned fully to software development with the release of NeXTSTEP/Intel. The company reported its first yearly profit of $1.03 million in 1994. In 1996, NeXT Software, Inc. released WebObjects, a framework for Web application development. After NeXT was acquired by Apple Inc. in 1997, WebObjects was used to build and run the Apple Store, MobileMe services, and the iTunes Store.
Pixar and Disney
In 1986, Jobs funded the spinout of The Graphics Group (later renamed Pixar) from Lucasfilm's computer graphics division for the price of $10 million, $5 million of which was given to the company as capital and $5 million of which was paid to Lucasfilm for technology rights.
The first film produced by Pixar with its Disney partnership, Toy Story (1995), with Jobs credited as executive producer, brought financial success and critical acclaim to the studio when it was released. Over the course of Jobs's life, under Pixar's creative chief John Lasseter, the company produced box-office hits A Bug's Life (1998), Toy Story 2 (1999), Monsters, Inc. (2001), Finding Nemo (2003), The Incredibles (2004), Cars (2006), Ratatouille (2007), WALL-E (2008), Up (2009), Toy Story 3 (2010), and Cars 2 (2011). Brave (2012), Pixar's first film to be produced since Jobs's death, honored him with a tribute for his contributions to the studio. Finding Nemo, The Incredibles, Ratatouille, WALL-E, Up, Toy Story 3, and Brave each received the Academy Award for Best Animated Feature, an award introduced in 2001.
In 2003 and 2004, as Pixar's contract with Disney was running out, Jobs and Disney chief executive Michael Eisner tried but failed to negotiate a new partnership, and in January 2004, Jobs announced that he would never deal with Disney again.
In October 2005, Bob Iger replaced Eisner at Disney, and Iger quickly worked to mend relations with Jobs and Pixar. On January 24, 2006, Jobs and Iger announced that Disney had agreed to purchase Pixar in an all-stock transaction worth $7.4 billion. When the deal closed, Jobs received approximately seven percent of Disney's shares, becoming The Walt Disney Company's largest individual shareholder and joining its board of directors. His holdings in Disney far exceeded those of Eisner, who held 1.7%, and of Disney family member Roy E. Disney, who until his 2009 death held about 1% of the company's stock and whose criticisms of Eisner—especially that he soured Disney's relationship with Pixar—accelerated Eisner's ousting. Upon Jobs's death, his shares in Disney were transferred to the Steven P. Jobs Trust, led by Laurene Jobs.
After Jobs's death, Iger recalled in 2019 that many warned him about Jobs, "that he would bully me and everyone else". Iger wrote, "Who wouldn't want Steve Jobs to have influence over how a company is run?", and that as an active Disney board member "he rarely created trouble for me. Not never but rarely." He speculated that they would have seriously considered merging Disney and Apple had Jobs lived. Floyd Norman, of Pixar, described Jobs as a "mature, mellow individual" who never interfered with the creative process of the filmmakers. In early June 2014, Pixar cofounder and Walt Disney Animation Studios President Edwin Catmull revealed that Jobs once advised him to "just explain it to them until they understand" in disagreements. Catmull released the book Creativity, Inc. in 2014, in which he recounts numerous experiences of working with Jobs and describes his own manner of dealing with him.
1997–2011
Return to Apple
In 1996, Jobs's former company Apple was struggling and its survival depended on completing its next operating system. After failed negotiations to purchase Be Inc., Apple eventually came to a deal with NeXT in December for $400 million; the deal was finalized in February 1997, bringing Jobs back to the company he had cofounded. Jobs became de facto chief after then-CEO Gil Amelio was ousted in July 1997. He was formally named interim chief executive on September 16. In March 1998, to concentrate Apple's efforts on returning to profitability, Jobs terminated several projects, such as Newton, Cyberdog, and OpenDoc. In the coming months, many employees developed a fear of encountering Jobs while riding in the elevator, "afraid that they might not have a job when the doors opened. The reality was that Jobs's summary executions were rare, but a handful of victims was enough to terrorize a whole company." Jobs changed the licensing program for Macintosh clones, making it too costly for the manufacturers to continue making machines.
With the purchase of NeXT, much of the company's technology found its way into Apple products, most notably NeXTSTEP, which evolved into Mac OS X. Under Jobs's guidance, the company increased sales significantly with the introduction of the iMac and other new products; since then, appealing designs and powerful branding have worked well for Apple. At the 2000 Macworld Expo, Jobs officially dropped the "interim" modifier from his title at Apple and became permanent CEO. Jobs quipped at the time that he would be using the title "iCEO".
The company subsequently branched out, introducing and improving upon other digital appliances. With the introduction of the iPod portable music player, iTunes digital music software, and the iTunes Store, the company made forays into consumer electronics and music distribution. On June 29, 2007, Apple entered the cellular phone business with the introduction of the iPhone, a multi-touch display cell phone, which also included the features of an iPod and, with its own mobile browser, revolutionized the mobile browsing scene. While nurturing open-ended innovation, Jobs also reminded his employees that "real artists ship".
Jobs had a public war of words with Dell Computer CEO Michael Dell, starting in 1987, when Jobs first criticized Dell for making "un-innovative beige boxes". On October 6, 1997, at a Gartner Symposium, when Dell was asked what he would do if he ran the then-troubled Apple Computer company, he said: "I'd shut it down and give the money back to the shareholders". Then, in 2006, when Apple's market capitalization rose above Dell's, Jobs emailed all employees to note the milestone.
Jobs was both admired and criticized for his consummate skill at persuasion and salesmanship, which has been dubbed the "reality distortion field" and was particularly evident during his keynote speeches (colloquially known as "Stevenotes") at Macworld Expos and at Apple Worldwide Developers Conferences.
Jobs usually went to work wearing a black long-sleeved mock turtleneck made by Issey Miyake, Levi's 501 blue jeans, and New Balance 991 sneakers. Jobs told his biographer Walter Isaacson "...he came to like the idea of having a uniform for himself, both because of its daily convenience (the rationale he claimed) and its ability to convey a signature style".
In 2001, Jobs was granted stock options in the amount of 7.5 million shares of Apple with an exercise price of $18.30. It was alleged that the options had been backdated, and that the exercise price should have been $21.10. It was further alleged that Jobs had thereby incurred taxable income of $20,000,000 that he did not report, and that Apple overstated its earnings by that same amount. As a result, Jobs potentially faced a number of criminal charges and civil penalties. The case was the subject of active criminal and civil government investigations, though an independent internal Apple investigation completed on December 29, 2006, found that Jobs was unaware of these issues and that the options granted to him were returned without being exercised in 2003.
In 2005, Jobs responded to criticism of Apple's poor recycling programs for e-waste in the US by lashing out at environmental and other advocates at Apple's annual meeting in Cupertino in April. A few weeks later, Apple announced it would take back iPods for free at its retail stores. The Computer TakeBack Campaign responded by flying a banner from a plane over the Stanford University graduation at which Jobs was the commencement speaker. The banner read "Steve, don't be a mini-player—recycle all e-waste".
In 2006, he further expanded Apple's recycling programs to any US customer who bought a new Mac. The program included shipping and "environmentally friendly disposal" of their old systems. The success of Apple's unique products and services provided several years of stable financial returns, propelling Apple to become the world's most valuable publicly traded company in 2011.
Jobs was perceived as a demanding perfectionist who always aspired to position his businesses and their products at the forefront of the information technology industry by foreseeing and setting innovation and style trends. He summed up this self-concept at the end of his keynote speech at the Macworld Conference and Expo in January 2007 by quoting ice hockey player Wayne Gretzky's line, "I skate to where the puck is going to be, not where it has been", adding that Apple had always tried to do the same.
On July 1, 2008, a class action suit was filed against several members of the Apple board of directors for revenue lost because of alleged securities fraud. In a 2011 interview with biographer Walter Isaacson, Jobs revealed that he had met with US President Barack Obama, complained about the nation's shortage of software engineers, and told Obama that he was "headed for a one-term presidency". Jobs proposed that any foreign student who got an engineering degree at a US university should automatically be offered a green card. After the meeting, Jobs commented, "The president is very smart, but he kept explaining to us reasons why things can't get done... It infuriates me".
Health problems
In October 2003, Jobs was diagnosed with cancer. In mid-2004, he announced to his employees that he had a cancerous tumor in his pancreas. The prognosis for pancreatic cancer is usually very poor; Jobs stated that he had a rare, less aggressive type, known as an islet cell neuroendocrine tumor.
Jobs resisted his doctors' recommendations for medical intervention for nine months, instead favoring alternative medicine. Other doctors agreed that Jobs's diet was insufficient to address his disease. However, cancer researcher and alternative medicine critic David Gorski wrote that "it's impossible to know whether and by how much he might have decreased his chances of surviving his cancer through his flirtation with woo. My best guess was that Jobs probably only modestly decreased his chances of survival, if that." Barrie R. Cassileth, the chief of Memorial Sloan Kettering Cancer Center's integrative medicine department, on the other hand, said, "Jobs's faith in alternative medicine likely cost him his life ... He had the only kind of pancreatic cancer that is treatable and curable ... He essentially committed suicide."
According to biographer Walter Isaacson, "for nine months he refused to undergo surgery for his pancreatic cancer – a decision he later regretted as his health declined". "Instead, he tried a vegan diet, acupuncture, herbal remedies, and other treatments he found online, and even consulted a psychic. He was also influenced by a doctor who ran a clinic that advised juice fasts, bowel cleansings and other unproven approaches, before finally having surgery in July 2004." He underwent a pancreaticoduodenectomy (or "Whipple procedure") that appeared to remove the tumor successfully. Jobs did not receive chemotherapy or radiation therapy. During Jobs's absence, Tim Cook, head of worldwide sales and operations at Apple, ran the company.
In January 2006, only Jobs's wife, his doctors, and Iger knew that his cancer had returned. Jobs told Iger privately that he hoped to live to see his own son Reed's high school graduation in 2010. In early August 2006, Jobs delivered the keynote for Apple's annual Worldwide Developers Conference. His "thin, almost gaunt" appearance and unusually "listless" delivery, together with his choice to delegate significant portions of his keynote to other presenters, inspired a flurry of media and internet speculation about the state of his health. In contrast, according to an Ars Technica journal report, Worldwide Developers Conference (WWDC) attendees who saw Jobs in person said he "looked fine". Following the keynote, an Apple spokesperson said that "Steve's health is robust".
Two years later, similar concerns followed Jobs's 2008 WWDC keynote address. Apple officials stated that Jobs was victim to a "common bug" and was taking antibiotics, while others surmised his cachectic appearance was due to the Whipple procedure. During a July conference call discussing Apple earnings, participants responded to repeated questions about Jobs's health by insisting that it was a "private matter". Others said that shareholders had a right to know more, given Jobs's hands-on approach to running his company. Based on an off-the-record phone conversation with Jobs, The New York Times reported, "While his health problems amounted to a good deal more than 'a common bug', they weren't life-threatening and he doesn't have a recurrence of cancer".
On August 28, 2008, Bloomberg mistakenly published a 2500-word obituary of Jobs in its corporate news service, containing blank spaces for his age and cause of death. News carriers customarily stockpile up-to-date obituaries to facilitate news delivery in the event of a well-known figure's death. Although the error was promptly rectified, many news carriers and blogs reported on it, intensifying rumors concerning Jobs's health. Jobs responded at Apple's September 2008 Let's Rock keynote by paraphrasing Mark Twain: "The reports of my death are greatly exaggerated." At a subsequent media event, Jobs concluded his presentation with a slide reading "110/70", referring to his blood pressure, stating he would not address further questions about his health.
On December 16, 2008, Apple announced that marketing vice-president Phil Schiller would deliver the company's final keynote address at the Macworld Conference and Expo 2009, again reviving questions about Jobs's health. In a statement given on January 5, 2009, on Apple.com, Jobs said that he had been suffering from a "hormone imbalance" for several months.
On January 14, 2009, Jobs wrote in an internal Apple memo that in the previous week he had "learned that my health-related issues are more complex than I originally thought". He announced a six-month leave of absence until the end of June 2009, to allow him to better focus on his health. Tim Cook, who previously acted as CEO in Jobs's 2004 absence, became acting CEO of Apple, with Jobs still involved with "major strategic decisions".
In 2009, Tim Cook offered a portion of his liver to Jobs, since both shared a rare blood type and the donor's liver can regenerate tissue after such an operation. Jobs yelled, "I'll never let you do that. I'll never do that." In April 2009, Jobs underwent a liver transplant at Methodist University Hospital Transplant Institute in Memphis, Tennessee. Jobs's prognosis was described as "excellent".
Resignation
On January 17, 2011, a year and a half after Jobs returned to work following the liver transplant, Apple announced that he had been granted another leave of absence. Jobs announced his leave in a letter to employees, stating his decision was made "so he could focus on his health". As it did at the time of his 2009 medical leave, Apple announced that Tim Cook would run day-to-day operations and that Jobs would continue to be involved in major strategic decisions at the company. While on leave, Jobs appeared at the iPad 2 launch event on March 2, the WWDC keynote introducing iCloud on June 6, and before the Cupertino City Council on June 7.
On August 24, 2011, Jobs announced his resignation as Apple's CEO, writing to the board, "I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple's CEO, I would be the first to let you know. Unfortunately, that day has come." Jobs became chairman of the board and named Tim Cook as his successor as CEO. Jobs continued to work for Apple until the day before his death six weeks later.
Death
Jobs died at his home in Palo Alto, California, around 3 p.m. (PDT) on October 5, 2011, due to complications from a relapse of his previously treated islet-cell pancreatic neuroendocrine tumor, which resulted in respiratory arrest. He died with his wife, children, and sisters at his side. His sister, Mona Simpson, described his death thus: "Steve's final words, hours earlier, were monosyllables, repeated three times. Before embarking, he'd looked at his sister Patty, then for a long time at his children, then at his life's partner, Laurene, and then over their shoulders past them. Steve's final words were: 'Oh wow. Oh wow. Oh wow.' " After speaking his final words, he lost consciousness and died several hours later. A small private funeral was held on October 7, 2011, the details of which, out of respect for Jobs's family, were not made public.
Both Apple and Pixar issued announcements of his death. Apple announced on the same day that they had no plans for a public service, but were encouraging "well-wishers" to send their remembrance messages to an email address created to receive such messages. Apple and Microsoft both flew their flags at half-staff throughout their respective headquarters and campuses.
Bob Iger ordered all Disney properties, including Walt Disney World and Disneyland, to fly their flags at half-staff from October 6 to 12, 2011. For two weeks following his death, Apple displayed on its corporate Web site a simple page that showed Jobs's name and lifespan next to his portrait in grayscale. On October 19, 2011, Apple employees held a private memorial service for Jobs on the Apple campus in Cupertino. It was attended by Jobs's widow, Laurene, and by Tim Cook, Bill Campbell, Norah Jones, Al Gore, and Coldplay. Some of Apple's retail stores closed briefly so employees could attend the memorial. A video of the service was uploaded to Apple's website.
California Governor Jerry Brown declared Sunday, October 16, 2011, to be "Steve Jobs Day". On that day, an invitation-only memorial was held at Stanford University. Those in attendance included Apple and other tech company executives, members of the media, celebrities, politicians, and family and close friends of Jobs. Bono, Yo-Yo Ma, and Joan Baez performed at the service, which lasted longer than an hour. Security was tight, with guards at all of the university's gates and a helicopter from an area news station flying overhead. Each attendee was given a small brown box as a "farewell gift" from Jobs, containing a copy of the Autobiography of a Yogi (1946) by Paramahansa Yogananda.
Childhood friend and fellow Apple co-founder Steve Wozniak, George Lucas (former owner of what would become Pixar), longtime competitor and Microsoft co-founder Bill Gates, and President Barack Obama all made statements in response to his death. At his request, Jobs was buried in an unmarked grave at Alta Mesa Memorial Park, the only nonsectarian cemetery in Palo Alto.
Innovations and designs
Jobs's design aesthetic was influenced by philosophies of Zen and Buddhism. In India, he experienced Buddhism while on his seven-month spiritual journey, and his sense of intuition was influenced by the spiritual people with whom he studied. Jobs gained insights regarding industrial designs from Richard Sapper. According to Apple co-founder Wozniak, "Steve didn't ever code. He wasn't an engineer and he didn't do any original design...". Daniel Kottke, one of Apple's earliest employees and a college friend of Jobs, stated: "Between Woz and Jobs, Woz was the innovator, the inventor. Steve Jobs was the marketing person."
He is listed as either primary inventor or co-inventor in 346 United States patents or patent applications related to a range of technologies, from actual computer and portable devices to user interfaces (including touch-based), speakers, keyboards, power adapters, staircases, clasps, sleeves, lanyards, and packages. His contributions to most of his patents were to "the look and feel of the product". He and his industrial design chief Jonathan Ive are jointly named on 200 of the patents. Most of these are design patents as opposed to utility patents or inventions; they cover specific product designs such as the original and lamp-style iMacs and the Titanium PowerBook G4. He holds 43 issued US patents on inventions. The patent on the Mac OS X Dock user interface with its "magnification" feature was issued the day before he died. Although Jobs had little involvement in the engineering and technical side of the original Apple computers, he later used his CEO position to involve himself directly with product design.
Involved in many projects throughout his career was his long-time marketing executive and confidant Joanna Hoffman, known as one of the few employees at Apple and NeXT who could successfully stand up to Jobs while also engaging with him. Even while terminally ill in the hospital, Jobs sketched new devices that would hold the iPad in a hospital bed. He despised the oxygen monitor on his finger, and suggested ways to revise the design for simplicity.
Apple I
The Apple I was designed entirely by Wozniak, but Jobs had the idea of selling the computer, which led to the founding of Apple Computer in 1976. Jobs and Wozniak constructed several Apple I prototypes by hand, funding the work by selling some of their belongings. Eventually, about 200 units were produced. One of the main innovations of the Apple I was that it included video display terminal circuitry on its circuit board, allowing it to connect to a low-cost composite video monitor or television rather than the expensive computer terminal that most existing computers of the time required.
Apple II
The Apple II is an 8-bit home computer, one of the world's first highly successful mass-produced microcomputer products, designed primarily by Wozniak. Jobs oversaw the development of the Apple II's unusual case and Rod Holt developed the unique power supply. It was introduced in 1977 at the West Coast Computer Faire by Jobs and Wozniak as the first consumer product sold by Apple. The Apple II was first sold on June 10, 1977.
Lisa
The Lisa is a personal computer developed by Apple from 1978 and sold in the early 1980s to business users. It is one of the first personal computers with a graphical user interface. The Lisa sold poorly, at about 100,000 units, but despite being considered a commercial failure, it received technical acclaim, introducing several advanced features that reappeared on the Macintosh and eventually IBM PC compatibles. In 1982, after Jobs was forced out of the Lisa project, he took over the Macintosh project, drawing inspiration from the Lisa. The final Lisa 2/10 was modified and sold as the Macintosh XL.
Macintosh
Once he joined the Macintosh team, Jobs took over the project after Wozniak had experienced a traumatic airplane accident and temporarily left the company. Jobs launched the Macintosh on January 24, 1984, as the first mass-market personal computer featuring an integral graphical user interface and mouse. This first model was later renamed the Macintosh 128K as the product line expanded. Since 1998, Apple has phased out the Macintosh name in favor of "Mac", though the product family has been nicknamed "Mac" or "the Mac" since inception. The Macintosh was introduced by a Ridley Scott television commercial, "1984". It aired during the third quarter of Super Bowl XVIII on January 22, 1984, and was received as a "watershed event" and a "masterpiece". Regis McKenna called the ad "more successful than the Mac itself". It uses an unnamed heroine to represent the coming of the Macintosh (indicated by a Picasso-style picture of the computer on her white tank top) to save humanity from the conformity of IBM's domination of the computer industry. The ad alludes to George Orwell's novel Nineteen Eighty-Four, which describes a dystopian future ruled by a televised "Big Brother".
The Macintosh, however, was expensive, which hindered its ability to be competitive in a market already dominated by the Commodore 64 for consumers, and the IBM Personal Computer and its accompanying clone market for businesses. Macintosh systems still found success in education and desktop publishing and kept Apple as the second-largest PC manufacturer for the next decade.
NeXT Computer
After Jobs was forced out of Apple in 1985, he started NeXT, a workstation computer company. The NeXT Computer was introduced in 1988 at a lavish launch event. Using the NeXT Computer, Tim Berners-Lee created the world's first web browser, the WorldWideWeb. The NeXT Computer's operating system, named NeXTSTEP, begat Darwin, which is now the foundation of most of Apple's operating systems such as Macintosh's macOS and iPhone's iOS.
iMac
Apple's iMac G3 was introduced in 1998, and its innovative design was a direct result of Jobs's return to Apple. Apple boasted that "the back of our computer looks better than the front of anyone else's". Described as "cartoonlike", the first iMac, clad in Bondi Blue plastic, was unlike any personal computer that came before. In 1999, Apple introduced the Graphite gray Apple iMac and has since varied the shape, color, and size considerably while maintaining the all-in-one design. Design touches such as the handle and a "breathing" light effect when the computer went to sleep were intended to create a connection with the user. The Apple iMac sold for $1,299 at that time. The iMac's forward-thinking changes included eschewing the floppy disk drive and moving exclusively to USB for connecting peripherals. Through the iMac's success, USB was popularized among third-party peripheral makers—as evidenced by the fact that many early USB peripherals were made of translucent plastic to match the iMac design.
iTunes
iTunes is a media player, media library, online radio broadcaster, and mobile device management application developed by Apple. It is used to play, download, and organize digital audio and video on personal computers running the macOS and Microsoft Windows operating systems. The iTunes Store is also available on the iPod Touch, iPhone, and iPad.
Through the iTunes Store, users can purchase and download music, music videos, television shows, audiobooks, podcasts, movies, and movie rentals in some countries, and ringtones, available on the iPhone and iPod Touch (fourth generation onward). Application software for the iPhone, iPad and iPod Touch can be downloaded from the App Store.
iPod
The first generation of iPod was released October 23, 2001. The major innovation of the iPod was its small size achieved by using a 1.8" hard drive compared to the 2.5" drives common to players at that time. The capacity of the first-generation iPod ranged from 5 GB to 10 GB. The iPod sold for US$399 and more than 100,000 iPods were sold before the end of 2001. The introduction of the iPod resulted in Apple becoming a major player in the music industry. Also, the iPod's success prepared the way for the iTunes music store and the iPhone. After the first few generations of iPod, Apple released the touchscreen iPod Touch, the reduced-size iPod Mini and iPod Nano, and the screenless iPod Shuffle in the following years.
iPhone
Apple began work on the first iPhone in 2005 and the first iPhone was released on June 29, 2007. The iPhone created such a sensation that a survey indicated six out of ten Americans were aware of its release. Time declared it "Invention of the Year" for 2007 and included it in the All-TIME 100 Gadgets list in 2010, in the category of Communication. The completed iPhone had multimedia capabilities and functioned as a quad-band touch screen smartphone. A year later, the iPhone 3G was released in July 2008 with three key features: support for GPS, 3G data and tri-band UMTS/HSDPA. In June 2009, the iPhone 3GS, whose improvements included voice control, a better camera, and a faster processor, was introduced by Phil Schiller. The iPhone 4 was thinner than previous models, had a five megapixel camera capable of recording video in 720p HD, and added a secondary front-facing camera for video calls. A major feature of the iPhone 4s, introduced in October 2011, was Siri, a virtual assistant capable of voice recognition.
iPad
The iPad is an iOS-based line of tablet computers designed and marketed by Apple. The first iPad was released on April 3, 2010. The user interface is built around the device's multi-touch screen, including a virtual keyboard. The iPad includes built-in Wi-Fi and cellular connectivity on select models. More than 250 million iPads have been sold.
Personal life
Marriage
In 1989, Jobs first met his future wife, Laurene Powell, when he gave a lecture at the Stanford Graduate School of Business, where she was a student. Soon after the event, he stated that Laurene "was right there in the front row in the lecture hall, and I couldn't take my eyes off of her ... kept losing my train of thought, and started feeling a little giddy". After the lecture, he met her in the parking lot and invited her out to dinner. From that point forward, they were together, with a few minor exceptions, for the rest of his life.
Jobs proposed on New Year's Day 1990; they married on March 18, 1991, in a Buddhist ceremony at the Ahwahnee Hotel in Yosemite National Park. Fifty people, including Jobs's father, Paul, and his sister Mona, attended. The ceremony was conducted by Jobs's guru, Kobun Chino Otogawa. The vegan wedding cake was in the shape of Yosemite's Half Dome, and the wedding ended with a hike and Laurene's brothers' snowball fight. Jobs reportedly said to Mona: "You see, Mona [...], Laurene is descended from Joe Namath, and we're descended from John Muir".
Jobs's and Powell's first child, a son named Reed, was born in 1991. Jobs's father, Paul, died a year and a half later, on March 5, 1993. Jobs's childhood home remains a tourist attraction and is currently owned by his stepmother (Paul's second wife), Marilyn Jobs. Jobs and Powell had two more children, daughters Erin (b. 1995) and Eve Jobs (b. 1998), who is a fashion model. The family lived in Palo Alto, California. Although a billionaire, Jobs made it known that, like Gates, he had stipulated that most of his monetary fortune would not be left to his children.
Family
Chrisann Brennan notes that after Jobs was forced out of Apple, "he apologized many times over for his behavior" towards her and Lisa. She said Jobs "said that he never took responsibility when he should have, and that he was sorry". By this time, Jobs had developed a strong relationship with Lisa and when she was nine, Jobs had her name on her birth certificate changed from "Lisa Brennan" to "Lisa Brennan-Jobs". Jobs and Brennan developed a working relationship to co-parent Lisa, a change which Brennan credits to the influence of his newly found biological sister, Mona Simpson, who worked to repair the relationship between Lisa and Jobs. Jobs had found Mona after first finding his birth mother, Joanne Schieble Simpson, shortly after he left Apple.
Jobs did not contact his birth family during his adoptive mother Clara's lifetime, however. He later told his official biographer Walter Isaacson: "I never wanted [Paul and Clara] to feel like I didn't consider them my parents, because they were totally my parents [...] I loved them so much that I never wanted them to know of my search, and I even had reporters keep it quiet when any of them found out". In 1986, when Jobs was 31, Clara was diagnosed with lung cancer. He began to spend a great deal of time with her and learned more details about her background and his adoption, information that motivated him to find his biological mother. Jobs found on his birth certificate the name of the San Francisco doctor to whom Schieble had turned when she was pregnant. Although the doctor did not help Jobs while he was alive, he left a letter for Jobs to be opened upon his death. When the doctor died soon afterwards, Jobs was given the letter, which stated that "his mother had been an unmarried graduate student from Wisconsin named Joanne Schieble".
Jobs only contacted Schieble after Clara died in early 1986 and after he received permission from his father, Paul. In addition, out of respect for Paul, he asked the media not to report on his search. Jobs stated that he was motivated to find his birth mother out of both curiosity and a need "to see if she was okay and to thank her, because I'm glad I didn't end up as an abortion. She was twenty-three and she went through a lot to have me." Schieble was emotional during their first meeting (though she wasn't familiar with the history of Apple or Jobs's role in it) and told him that she had been pressured into signing the adoption papers. She said that she regretted giving him up and repeatedly apologized to him for it. Jobs and Schieble developed a friendly relationship throughout the rest of his life and spent Christmas together.
During this first visit, Schieble told Jobs that he had a sister, Mona, who was not aware that she had a brother. Schieble then arranged for them to meet in New York where Mona worked. Her first impression of Jobs was that "he was totally straightforward and lovely, just a normal and sweet guy". Simpson and Jobs then went for a long walk to get to know each other. Jobs later told his biographer that "Mona was not completely thrilled at first to have me in her life and have her mother so emotionally affectionate toward me... As we got to know each other, we became really good friends, and she is my family. I don't know what I'd do without her. I can't imagine a better sister. My adopted sister, Patty, and I were never close."
Jobs then learned his family history. Six months after he was given up for adoption, Schieble's father died, she wed Jandali, and they had a daughter, Mona. Jandali stated that after finishing his PhD he returned to Syria to work, and that Schieble then left him. They divorced in 1962, and Jandali said he then lost contact with Mona for a time.
A few years later, Schieble married an ice-skating teacher, George Simpson. Mona Jandali took her stepfather's last name, as Mona Simpson. In 1970, after divorcing her second husband, Schieble took Mona to Los Angeles and raised her alone.
When Simpson found that their father, Abdulfattah Jandali, was living in Sacramento, California, Jobs had no interest in meeting him, as he believed Jandali had not treated his children well; according to the San Francisco Chronicle, this belief stemmed from a Seattle Times article Jobs had found about Jandali abandoning his students on a trip to Egypt in 1974. Simpson went to Sacramento alone and met Jandali, who worked in a small restaurant. They spoke for several hours, and he told her that he had left teaching for the restaurant business. He said he and Schieble had given another child away for adoption but that "we'll never see that baby again. That baby's gone." He said he once managed a Mediterranean restaurant near San Jose and that "all of the successful technology people used to come there. Even Steve Jobs ... oh yeah, he used to come in, and he was a sweet guy and a big tipper". At the request of Jobs, Simpson did not reveal to Jandali that his own story meant that he had actually already met his son.
After hearing about the visit, Jobs recalled that "it was amazing ... I had been to that restaurant a few times, and I remember meeting the owner. He was Syrian. Balding. We shook hands." However, Jobs still did not want to meet Jandali because "I was a wealthy man by then, and I didn't trust him not to try to blackmail me or go to the press about it ... I asked Mona not to tell him about me". Jandali later discovered his relationship to Jobs through an online blog. He then contacted Simpson and asked, "what is this thing about Steve Jobs?". Simpson told him that it was true and later commented, "My father is thoughtful and a beautiful storyteller, but he is very, very passive ... He never contacted Steve". Because Simpson herself researched her Syrian roots and began to meet the family, she assumed that Jobs would eventually want to meet their father, but he never did. Jobs also never showed an interest in his Syrian heritage or the Middle East. Simpson fictionalized the search for their father in her 1992 novel The Lost Father. Malek Jandali is their cousin.
Philanthropy
Jobs's views and actions on philanthropy and charity remain largely a public mystery; he kept private even the few charitable actions that became publicly known. He was nevertheless a key figure in public discussions about the societal obligations of the wealthy and powerful. Throughout his career, the media scrutinized and criticized him and Apple for being unusually and inexplicably secretive about, or absent from, visible giving compared with other powerful leaders and especially billionaires. His name is absent from the Million Dollar List of large global philanthropic gifts, and some have speculated about a possible secret role in large anonymous donations.
Mark Vermilion, who had led charitable efforts for Joan Baez, Apple, and Jobs, attributed Jobs's lifelong minimization of direct charity to his perfectionism and limited time. Over the years, Jobs, Vermilion, and supporters argued that Jobs's products, rather than direct charity, were his greater contribution to culture and society. In 1985, Jobs said, "You know, my main reaction to this money thing is that it's humorous, all the attention to it, because it's hardly the most insightful or valuable thing that's happened to me."
Shortly after leaving Apple, Jobs formed the charitable Steven P. Jobs Foundation, led by Mark Vermilion, whom he hired away from his community affairs role at Apple. Jobs wanted the foundation to focus on nutrition and vegetarianism, while Vermilion favored social entrepreneurship. Jobs launched NeXT that same year and closed the foundation, which had produced no results. Upon his 1997 return to Apple, Jobs cut the failing company to the core, eliminating all of its philanthropic programs, which were never restored. In 2007, Stanford Social Innovation Review magazine listed Apple among "America's least philanthropic companies". A few months after another unflattering news report, Apple started a program to match employees' charitable gifts. Jobs declined to sign The Giving Pledge, launched in 2010 by Warren Buffett and Bill Gates for fellow billionaires. He donated $50 million to Stanford hospital and contributed to efforts to cure AIDS. Bono reported that Apple gave "tens of millions of dollars" to AIDS and HIV relief programs in Africa while Jobs was CEO, which inspired other companies to join.
Honors and awards
1985: awarded National Medal of Technology (with Steve Wozniak) by US President Ronald Reagan, the country's highest honor for technological achievements
1987: Jefferson Award for Public Service
1989: Entrepreneur of the Decade by Inc.
1991: Howard Vollum Award from Reed College
2004–2010: listed among the Time 100 Most Influential People in the World on five separate occasions
2007: named the most powerful person in business by Fortune magazine
2007: inducted into the California Hall of Fame, located at The California Museum for History, Women and the Arts
2012: Grammy Trustees Award, an award for those who have influenced the music industry in areas unrelated to performance
2012: posthumously honored with an Edison Achievement Award for his commitment to innovation throughout his career
2013: posthumously inducted as a Disney Legend
2017: Steve Jobs Theater opens at Apple Park
2022: posthumously awarded the Presidential Medal of Freedom by US President Joe Biden, the country's highest civilian honor
In popular culture
See also
Seva Foundation
Timeline of Steve Jobs media
References
Bibliography
External links
official memorial page at Apple
Steve Jobs profile at Forbes
Steven Paul Jobs The Vault at FBI Records
Steve Jobs at Andy Hertzfeld's The Original Macintosh (folklore.org)
Steve Jobs at Steve Wozniak's woz.org
2011: "Steve Jobs: From Garage to World's Most Valuable Company." Computer History Museum
2005: Steve Jobs commencement speech at Stanford University
1995: Steve Jobs, Founder, NeXT Computer, excerpts from an Oral History Interview at Smithsonian Institution, April 20, 1995
1994: Steve Jobs in 1994: The Rolling Stone Interview in Rolling Stone
1990: Steve Jobs – memory and imagination "What a computer is to me is it's the most remarkable tool that we've ever come up with, and it's the equivalent of a bicycle for our minds"
1983: The "Lost" Steve Jobs Speech from 1983; Foreshadowing Wireless Networking, the iPad, and the App Store (audio clip)
1955 births
2011 deaths
20th-century American Buddhists
20th-century American businesspeople
20th-century American inventors
21st-century American Buddhists
21st-century American businesspeople
21st-century American inventors
American adoptees
American animated film producers
American billionaires
American chairpersons of corporations
American computer businesspeople
American film studio executives
American financiers
American industrial designers
American investors
American mass media owners
American people of German descent
American people of Swiss descent
American people of Syrian descent
American psychedelic drug advocates
American technology chief executives
American technology company founders
American Zen Buddhists
Atari people
Businesspeople from Palo Alto, California
Businesspeople from San Francisco
Businesspeople in software
Computer designers
Deaths from pancreatic cancer in California
Directors of Apple Inc.
Directors of The Walt Disney Company
Disney executives
Film producers from California
Gap Inc. people
Homestead High School (California) alumni
Internet pioneers
Inventors from California
Liver transplant recipients
Mass media people from San Francisco
National Medal of Technology recipients
NeXT people
People from Cupertino, California
People from Los Altos, California
People from Mountain View, California
Personal computing
Philanthropists from California
Pixar people
Presidential Medal of Freedom recipients
Spokespersons
Technicians | Steve Jobs | [
"Technology"
] | 15,714 | [
"Lists of people in STEM fields",
"Computing and society",
"Proprietary technology salespersons",
"Personal computing"
] |
7,412,739 | https://en.wikipedia.org/wiki/Mind%20games | Mind games (also power games or head games) are actions performed for reasons of psychological one-upmanship, often employing passive–aggressive behavior to specifically demoralize or dis-empower the thinking subject, making the aggressor look superior. It also describes the unconscious games played by people engaged in ulterior transactions of which they are not fully aware, and which transactional analysis considers to form a central element of social life all over the world.
The first known use of the term "mind game" dates from 1963, and "head game" from 1977.
Conscious one-upmanship
In intimate relationships, mind games can be used to undermine one partner's belief in the validity of their own perceptions. Personal experience may be denied and driven from memory, and such abusive mind games may extend to denial of the victim's reality, social undermining, and downplaying the importance of the other partner's concerns or perceptions. Both sexes have equal opportunities for such verbal coercion, which may be carried out unconsciously as a result of the need to maintain one's own self-deception.
Mind games in the struggle for prestige appear in everyday life in the fields of office politics, sport, and relationships. Office mind games are often hard to identify clearly, as strong management blurs with over-direction, and healthy rivalry with manipulative head games and sabotage. The wary salesman will be consciously and unconsciously prepared to meet a variety of challenging mind games and put-downs in the course of their work. The serious sportsman will also be prepared to meet a variety of gambits and head games from their rivals, attempting to tread the fine line between competitive psychology and paranoia.
Unconscious games
Eric Berne described a psychological game as an organized series of ulterior transactions taking place on twin levels: social and psychological, and resulting in a dramatic outcome when the two levels finally came to coincide. He described the opening of a typical game like flirtation as follows: "Cowboy: 'Come and see the barn'. Visitor: 'I've loved barns ever since I was a little girl'". At the social level a conversation about barns, at the psychological level one about sex play, the outcome of the game – which may be comic or tragic, heavy or light – will become apparent when a switch takes place and the ulterior motives of each become clear.
Between thirty and forty such games (as well as variations of each) were described and tabulated in Berne's best seller on the subject titled "Games People Play: The Psychology of Human Relationships". According to one transactional analyst, "Games are so predominant and deep-rooted in society that they tend to become institutionalized, that is, played according to rules that everybody knows about and more or less agrees to. The game of Alcoholic, a five-handed game, illustrates this...so popular that social institutions have developed to bring the various players together" such as Alcoholics Anonymous and Al-anon.
Psychological games vary widely in degrees of consequence, ranging from first-degree games where losing involves embarrassment or frustration, to third-degree games where consequences are life-threatening. Berne recognized however that "since by definition games are based on ulterior transactions, they must all have some element of exploitation", and the therapeutic ideal he offered was to stop playing games altogether.
See also
References
Sources
R.D. Laing, Self and Others (Penguin 1969)
External links
Sarah Strudwick (Nov 16, 2010) Dark Souls – Mind Games, Manipulation and Gaslighting
Mind control
Harassment and bullying
Psychological abuse
Transactional analysis
Psychological manipulation | Mind games | [
"Biology"
] | 743 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
7,413,289 | https://en.wikipedia.org/wiki/Beer%20engine | A beer engine is a device for pumping beer from a cask, usually located in a pub's cellar.
The beer engine was invented by John Lofting, a Dutch inventor, merchant and manufacturer who moved from Amsterdam to London in about 1688 and patented a number of inventions including a fire hose and engine for extinguishing fires and a thimble knurling machine. The London Gazette of 17 March 1691 stated "the patentee hath also projected a very useful engine for starting of beers and other liquors which will deliver from 20 to 30 barrels an hour which are completely fixed with brass joints and screws at reasonable rates."
The locksmith and hydraulic engineer Joseph Bramah developed beer pumping further in 1797.
The beer engine is normally manually operated, although electrically powered and gas powered pumps are occasionally used; when manually powered, the term handpump is often used to refer to both the pump and the associated handle.
The beer engine is normally located below the bar with the visible handle being used to draw the beer through a flexible tube to the spout, below which the glass is placed. Modern hand pumps may clamp onto the edge of the bar or be mounted on the top of the bar.
A pump clip is usually attached to the handle giving the name and sometimes the brewery, beer type and alcoholic strength of the beer being served through that handpump.
The handle of a handpump is often used as a symbol of cask ale. This style of beer has continued fermentation and uses porous and non-porous pegs, called spiles, to respectively release and retain the gases generated by fermentation and thus achieve the optimum level of carbonation in the beer.
In the 1970s many breweries were keen to replace cask conditioned ale with keg versions for financial benefit, and started to disguise keg taps by adorning them with cosmetic hand pump handles. This practice was opposed as fraudulent by the Campaign for Real Ale and was discontinued.
Swan neck
A swan neck is a curved spout. This is often used in conjunction with a sparkler - a nozzle containing small holes - fitted to the spout to aerate the beer as it enters the glass, giving a frothier head; this presentation style is more popular in the north of England than in the south.
Sparkler
A sparkler is a device that can be attached to the nozzle of a beer engine. Designed rather like a shower-head, beer dispensed through a sparkler becomes aerated and frothy which results in a noticeable head.
The sparkler works via the venturi effect. As the beer flows through the nozzle, air is drawn into the beer. Consequently, the beer will have a head, whether or not the beer is alive (fresh).
Real ale only produces a head whilst the yeast is alive and producing carbon dioxide. Typically, around three days after a barrel of beer is opened, the yeast will die and the beer will go flat. A sparkler will disguise flat beer, replacing the missing carbon dioxide with nitrogen and oxygen.
Whether or not the beer is alive (fresh), whisking the beer in this way changes its texture and gaseous composition, which can change the taste.
There is an argument that the sparkler can reduce the flavour and aroma, especially of the hops, in some beers. The counter argument is that the sparkler takes away harshness and produces a smoother, creamier beer that is easier to quaff.
Breweries may state whether or not a sparkler is preferred when serving their beers. Generally, breweries in northern England serve their beers with a sparkler attached and breweries in the south without, but this is by no means definitive.
Pump clips
Pump clips are badges that are attached to handpumps in pubs to show which cask ales are available.
In addition to the name of the beer served through the pump, they may give other details such as the brewer's name and alcoholic strength of the beer and serve as advertising.
Pump clips can be made of various materials. For beers that are brewed regularly by the big breweries, high quality plastic, metal or ceramic pump clips are used. Smaller breweries would use a printed plastic pump clip and for one-off beers laminated paper is used. There are variations on the material used, and the gaudiness or tastefulness of the decoration depending on how much the brewery wants to market their beers at the point of sale. Novelty pump clips have also been made of wood, slate and compact discs. Some even incorporate electronic flashing lights. Older pump clips were made of enamel.
The term pump clip originates from the clip that attaches it to the pump handle. These consist of a two-piece plastic ring which clamps to the handle with two screws. Plastic and laminated paper pump clips usually have a white plastic clip fixed with a sticky double-sided pad that pushes onto the handle.
See also
Beer tap
References
External links
DeeCee's Beer Pump Clips
National pump clip museum in Nottingham
Beer vessels and serving
Pumps
Bartending equipment | Beer engine | [
"Physics",
"Chemistry"
] | 1,042 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
7,414,202 | https://en.wikipedia.org/wiki/Fassbrause | Fassbrause (keg soda) is a non-alcoholic or alcoholic (depending on the brand) German drink made from fruit and malt extract, traditionally stored in a keg. The original Fassbrause also includes spices and is a speciality of Berlin, where it is sometimes called Sportmolle. (Molle used to be a term for beer in the Berlin dialect.)
Fassbrause is about the same color as some beers, and usually has an apple flavour. The taste is strongly reminiscent of the Austrian drink Almdudler, except that Fassbrause is less sweet, and not quite as spicy.
A variant of Fassbrause, the so-called Rote Fassbrause (red keg soda), which is available in some of the new eastern German states but not in Berlin itself, appeared in the 1950s. This variant was available in the German Democratic Republic (GDR) prior to German Reunification and has a raspberry flavour.
Another non-alcoholic variant has been produced in the United States since the 1960s under the name "Apple Beer".
As the term Fassbrause is not protected, completely altered variants with no direct link to the original Berlin recipe have been created and marketed starting in the 2010s.
Cologne brewery Gaffel Becker & Co was the first to start with Gaffel Fassbrause in April 2010, and many big breweries followed.
Since then, the term Fassbrause has been understood differently in Western and Eastern Germany, because many people in Western Germany were not aware of the original specialty from Berlin.
History
The chemist Ludwig Scholvien invented Fassbrause in 1908 in Berlin for his son, in order to offer a non-alcoholic beer substitute of similar color and taste. Scholvien's original recipe included a natural concentrate of apple and licorice, intended to approximate the beer taste, along with the main ingredients of water and malt. A drink based on Scholvien's recipe, known as Apple Beer, was introduced in the US in the 1960s. Wild GmbH & Co. KG began producing the Fassbrause concentrate in Spandau after acquiring a factory in 1985. It later sold the production to Dr. August Oetker KG. Today the drink is available on tap throughout Berlin as a specialty drink. It is also occasionally served mixed with beer; this mixture is known in Berlin and Brandenburg as Gespritztes.
The brand Rixdorfer, which produces its Fassbrause with water sourced from Bad Liebenwerda, produces a significant amount of the total market share for the drink. It distributes the drink in 0.33-liter bottles for the Berliner Kindl Brauerei. Another popular brand is the Berliner Fassbrause, distributed by Spreequell. Since August 2012 a caffeinated Fassbrause drink has also been distributed under the name Kreuzbär.
Market availability
Barre Fassbrause – Produced by Privatbrauerei Ernst Barre GmbH
Fassbrause – Produced by Hansa-Brunnen AG
Faßbrause – Produced by Einsiedler Brauhaus GmbH
Gaffels Fassbrause – Produced by Privatbrauerei Gaffel Becker & Co OHG
Rixdorfer Fassbrause – Produced by Berliner Kindl Brauerei AG
Rote Brause from a keg using the original GDR recipe – Produced by Biercontor Wildberg
Zille's Fassbrause – Produced by Neue Torgauer Brauhaus GmbH
Krombacher's Fassbrause – Produced by Krombacher Brauerei
Flens Fassbrause – Produced by Flensburger Brauerei
Hartmannsdorfer Fassbrause – Produced by Brauhaus Hartmannsdorf GmbH
See also
Apple Beer
Cider
Hard soda
List of soft drink flavors
List of soft drinks by country
Queen Mary (cocktail)
External links
Gaffels Fassbrause website
Taz article on Fassbrause from 25 July 2005
Apple Beer website
References
Fermented drinks
German drinks
German inventions | Fassbrause | [
"Biology"
] | 848 | [
"Fermented drinks",
"Biotechnology products"
] |
7,414,275 | https://en.wikipedia.org/wiki/Kopp%27s%20law | Kopp's law can refer to either of two relationships discovered by the German chemist Hermann Franz Moritz Kopp (1817–1892).
Kopp found "that the molecular heat capacity of a solid compound is the sum of the atomic heat capacities of the elements composing it; the elements having atomic heat capacities lower than those required by the Dulong–Petit law retain these lower values in their compounds."
In studying organic compounds, Kopp found a regular relationship between boiling points and the number of CH2 groups present.
Kopp–Neumann law
The Kopp–Neumann law, named for Kopp and Franz Ernst Neumann, is a common approach for determining the specific heat C (in J·kg−1·K−1) of compounds using the following equation:
C = C1f1 + C2f2 + … + CNfN,
where N is the total number of compound constituents, and Ci and fi denote the specific heat and mass fraction of the i-th constituent. This law works surprisingly well at room-temperature conditions, but poorly at elevated temperatures.
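A minimal sketch of this mass-fraction-weighted sum in Python; the constituent specific heats and mass fractions below are illustrative assumptions, not data from the text:

```python
def kopp_neumann_specific_heat(constituents):
    """Estimate a compound's specific heat (J/(kg*K)) as the
    mass-fraction-weighted sum of its constituents' specific heats."""
    total_fraction = sum(fraction for _, fraction in constituents)
    if abs(total_fraction - 1.0) > 1e-6:
        raise ValueError("mass fractions must sum to 1")
    return sum(c_i * f_i for c_i, f_i in constituents)

# Illustrative (hypothetical) constituents as (specific heat, mass fraction) pairs
alloy = [(450.0, 0.70), (900.0, 0.30)]
print(kopp_neumann_specific_heat(alloy))  # 585.0
```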
See also
Rule of mixtures
References
Frederick Seitz, The Modern Theory of Solids, McGraw-Hill, New York, USA, 1940, ASIN: B000OLCK08
Further reading
Laws of thermodynamics | Kopp's law | [
"Physics",
"Chemistry"
] | 246 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Laws of thermodynamics"
] |
7,414,931 | https://en.wikipedia.org/wiki/Crittercam | Crittercam is a small package of instruments including a camera that can be attached to a wild animal to study its behavior in the wild. National Geographic's Crittercam is a research tool designed to be worn by wild animals. It combines video and audio recording with collection of environmental data such as depth, temperature, and acceleration. The live feeds help scientists experience an animal's daily routines.
Crittercam was invented by National Geographic marine biologist Greg Marshall in 1986. Since then it has been employed in studies on over 40 marine and terrestrial animals.
History
The introduction of the terrestrial Crittercam made it possible for researchers to monitor animals and their activity as it occurred. Previously, the cameras could only record data and images for playback once the camera was retrieved from the animal. When introduced in 2001, the camera was about half an inch in size, had a resolution of 340 lines, and was sensitive down to 3 lux. At that time it used a nine-volt battery for short-term documentation of an animal's activities and a 1-pound battery to monitor for one week; the size of the battery increased with the desired duration of documentation. Footage obtained from its use has appeared in programs including "Great White Shark", "Sea Monsters", and "Tiger Shark".
The first depth gauge was invented in the late 1800s. However, it wasn't until 1964 that the first depth recorder was actually placed on an animal, a Weddell seal in Antarctica. The next advancement in recording animal-borne imagery was made possible by a microprocessor that attached a video camera in a submersible case to a loggerhead turtle. This case came to be known as the Crittercam. Marshall first conceived his idea of the Crittercam on a diving trip in Belize. During one dive he encountered a shark with a sucker fish clinging to its body. He then realized that if a camera could be utilized to replace the sucker fish, researchers could explore the environment and behavior of sharks without having to dive deep. He immediately began work on this idea, receiving small grants from the American Museum of Natural History to support his funding. He later secured a grant from the National Geographic Society and began to develop highly improved prototypes of his initial device that was strapped to the loggerhead turtle. These prototypes were successfully utilized on sharks and sea turtles. Since its production the Crittercam has been used to study the underwater behaviors of green turtles, humpback whales, blue whales, monk seals, reef sharks, and many other marine animals.
Attaching Crittercam
Methods for attaching the device vary with different species. To place it onto dolphins, whales and leatherback turtles, special suction cups are used. Adhesive patches are used for seals and hardshell turtles. Sharks are fitted with a fin clamp so that the device remains in place while the animals are swimming. Backpack-like harnesses are placed on penguins for attachment. Land animals like lions and bears are given Crittercam collars. Research and development are constantly being conducted in hopes of devising more advanced attachment methods. Marshall has stated that he was surprised to see how quickly animals adapted to having the device strapped to their backs. While initial statements from Greg Marshall claimed the camera did not negatively affect or disturb animals' natural behaviors in their natural habitats, he did admit that the dives of 40- to 50-pound penguins are reduced by 20 percent in distance while wearing the harness. When employed on emperor penguins, the camera proves its usefulness by capturing their behavior below the ice of Antarctica's waters, where no human would be able to dive and manually record because of the freezing temperatures. To ensure the safety of the animals, in case something goes wrong with the camera, scientists are able to remove the device by remote control.
Crittercam influence in media and popular culture
In 2011 the Mystic Aquarium & Institute for Exploration opened a traveling exhibit, funded by National Geographic, called "Crittercam: The World Through Animal Eyes".
The periodical, Insight on the News, published an article stating that a team of scientists, led by Clyde Roper, wanted to use the Crittercam to film and study Architeuthis dux, the giant squid.
It was stated in 2003 that Crittercam had been attached to 41 tiger sharks, 3 dugongs, 3 whale sharks, and 34 turtles, all residing within Western Australia's Shark Bay and Ningaloo Reef. The camera can dive with sperm whales 200 meters deep and even remain intact within a pack of killer whales.
A 13-part TV series premiered on National Geographic's cable channel on January 17, 2004 that showed actual footage received from animals equipped with the Crittercam.
Kitty Cam
Inspired by the discoveries made as a result of the Crittercam on the behavior of various species, National Geographic and the University of Georgia have begun work on a new study that monitors the behavior of domestic cats called "Kitty Cam". Kitty Cams provide a creative solution to answer widespread and controversial questions about the interactions and behaviors of cats in the environment. Discoveries have been made from their collaborative efforts that identify common factors that threaten the health of owned free-roaming cats, such as exposure to infectious disease. The Kitty Cams are fixed on a collar that is placed on the cats, like Crittercams on land animals. The cameras are very lightweight and waterproof and can even capture activity at night through LED lights. In Athens-Clarke County, Georgia sixty cats were equipped with the cameras and monitored while roaming freely outdoors for 7–10 days. The experiment has been repeated many times and has produced many results from differing areas and seasons. After the initial experiment, 55 cats produced usable results with an average of 37 hours of footage per cat. In reference to their hunting behaviors, the footage showed that 44% of cats in Athens hunt wildlife. The majority of animals hunted were mammals, reptiles and invertebrates. They concluded that free-roaming cats showed hunting behavior during warmer seasons. Common risk factors concluded as a result of the study were crossing roads, coming into contact with other cats, eating/drinking substances outside of the house, exploring drain systems, and entering entrapping crawlspaces.
The smallest animal yet to carry Crittercam is the emperor penguin. Information and footage from Crittercam was used in the Oscar-winning documentary March of the Penguins.
At the Museum of Science (Boston), there is an exhibit on Crittercam. The exhibit, which allows people to participate in interactive displays and models, will soon travel to other museums.
References
Crittercam, additional feature on the March of the Penguins DVD.
External links
About Crittercam at the National Geographic website.
Crittercam on cat
Crittercam on turtle
Crittercam on whale
Crittercam on Emperor Penguin
Crittercam News Stories
Making a CritterCam for a Shark
Cameras
Articles containing video clips | Crittercam | [
"Technology"
] | 1,437 | [
"Recording devices",
"Cameras"
] |
7,415,870 | https://en.wikipedia.org/wiki/Motion%20analysis | Motion analysis is used in computer vision, image processing, high-speed photography and machine vision. It studies methods and applications in which two or more consecutive images from an image sequence, e.g. produced by a video camera or high-speed camera, are processed to produce information based on the apparent motion in the images. In some applications the camera is fixed relative to the scene and objects are moving around in the scene; in some applications the scene is more or less fixed and the camera is moving; and in some cases both the camera and the scene are moving.
The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images. This means that motion analysis can produce time-dependent information about motion.
Applications of motion analysis can be found in rather diverse areas, such as surveillance, medicine, film industry, automotive crash safety, ballistic firearm studies, biological science, flame propagation, and navigation of autonomous vehicles to name a few examples.
Background
A video camera can be seen as an approximation of a pinhole camera, which means that each point in the image is illuminated by some (normally one) point in the scene in front of the camera, usually by means of light that the scene point reflects from a light source. Each visible point in the scene is projected along a straight line that passes through the camera aperture and intersects the image plane. This means that at a specific point in time, each point in the image refers to a specific point in the scene. This scene point has a position relative to the camera, and if this relative position changes, it corresponds to a relative motion in 3D. It is a relative motion since it does not matter if it is the scene point, or the camera, or both, that are moving. It is only when there is a change in the relative position that the camera is able to detect that some motion has happened. By projecting the relative 3D motion of all visible points back into the image, the result is the motion field, describing the apparent motion of each image point in terms of a magnitude and direction of velocity of that point in the image plane. A consequence of this observation is that if the relative 3D motion of some scene points are along their projection lines, the corresponding apparent motion is zero.
The camera measures the intensity of light at each image point, a light field. In practice, a digital camera measures this light field at discrete points, pixels, but given that the pixels are sufficiently dense, the pixel intensities can be used to represent most characteristics of the light field that falls onto the image plane. A common assumption of motion analysis is that the light reflected from the scene points does not vary over time. As a consequence, if an intensity I has been observed at some point in the image, at some point in time, the same intensity I will be observed at a position that is displaced relative to the first one as a consequence of the apparent motion. Another common assumption is that there is a fair amount of variation in the detected intensity over the pixels in an image. A consequence of this assumption is that if the scene point that corresponds to a certain pixel in the image has a relative 3D motion, then the pixel intensity is likely to change over time.
Methods
Motion detection
One of the simplest types of motion analysis is to detect image points that refer to moving points in the scene. The typical result of this processing is a binary image where all image points (pixels) that relate to moving points in the scene are set to 1 and all other points are set to 0. This binary image is then further processed, e.g., to remove noise, group neighboring pixels, and label objects. Motion detection can be done using several methods; the two main groups are differential methods and methods based on background segmentation.
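As a rough sketch of the differential approach to motion detection described above (the threshold value and frame contents are illustrative assumptions, not from the original text):

```python
import numpy as np

def detect_motion(frame_prev, frame_curr, threshold=25):
    """Return a binary mask: 1 where the absolute intensity change between
    two consecutive grayscale frames exceeds the threshold, 0 elsewhere."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Illustrative usage with tiny synthetic 8-bit frames
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # a bright "moving object" appears in the second frame
print(detect_motion(prev, curr))
```

In practice the resulting binary image would then be cleaned of noise and the remaining pixels grouped and labelled, as the text describes.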
Applications
Human motion analysis
In the areas of medicine, sports, video surveillance, physical therapy, and kinesiology, human motion analysis has become an investigative and diagnostic tool. See the section on motion capture for more detail on the technologies. Human motion analysis can be divided into three categories: human activity recognition, human motion tracking, and analysis of body and body part movement.
Human activity recognition is most commonly used for video surveillance, specifically automatic motion monitoring for security purposes. Most efforts in this area rely on state-space approaches, in which sequences of static postures are statistically analyzed and compared to modeled movements. Template-matching is an alternative method whereby static shape patterns are compared to pre-existing prototypes.
Human motion tracking can be performed in two or three dimensions. Depending on the complexity of analysis, representations of the human body range from basic stick figures to volumetric models. Tracking relies on the correspondence of image features between consecutive frames of video, taking into consideration information such as position, color, shape, and texture. Edge detection can be performed by comparing the color and/or contrast of adjacent pixels, looking specifically for discontinuities or rapid changes. Three-dimensional tracking is fundamentally identical to two-dimensional tracking, with the added factor of spatial calibration.
Motion analysis of body parts is critical in the medical field. In postural and gait analysis, joint angles are used to track the location and orientation of body parts. Gait analysis is also used in sports to optimize athletic performance or to identify motions that may cause injury or strain. Tracking software that does not require the use of optical markers is especially important in these fields, where the use of markers may impede natural movement.
Motion analysis in manufacturing
Motion analysis is also applicable in the manufacturing process. Using high speed video cameras and motion analysis software, one can monitor and analyze assembly lines and production machines to detect inefficiencies or malfunctions. Manufacturers of sports equipment, such as baseball bats and hockey sticks, also use high speed video analysis to study the impact of projectiles. An experimental setup for this type of study typically uses a triggering device, external sensors (e.g., accelerometers, strain gauges), data acquisition modules, a high-speed camera, and a computer for storing the synchronized video and data. Motion analysis software calculates parameters such as distance, velocity, acceleration, and deformation angles as functions of time. This data is then used to design equipment for optimal performance.
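The distance, velocity and acceleration calculations mentioned above can be sketched by numerically differentiating a tracked position signal; the frame rate and the free-fall motion model here are hypothetical stand-ins for real tracking data:

```python
import numpy as np

fps = 1000.0                          # assumed high-speed camera frame rate
t = np.arange(10) / fps               # time stamps in seconds
position = 0.5 * 9.81 * t**2          # hypothetical tracked position (metres)

velocity = np.gradient(position, t)           # first derivative: m/s
acceleration = np.gradient(velocity, t)       # second derivative: m/s^2
distance = np.sum(np.abs(np.diff(position)))  # total path length travelled

print(velocity[-1], acceleration[-1], distance)
```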
Additional applications for motion analysis
The object and feature detecting capabilities of motion analysis software can be applied to count and track particles, such as bacteria, viruses, "ionic polymer-metal composites", micron-sized polystyrene beads, aphids, and projectiles.
See also
Mechanography
Structure from motion
Video motion analysis
X-ray motion analysis
References
Research methods
Motion in computer vision | Motion analysis | [
"Physics"
] | 1,445 | [
"Physical phenomena",
"Motion (physics)",
"Motion in computer vision"
] |
7,416,129 | https://en.wikipedia.org/wiki/The%20Skull%20%28film%29 | The Skull is a 1965 British horror film directed by Freddie Francis for Amicus Productions, and starring Peter Cushing and Christopher Lee, Patrick Wymark, Jill Bennett, Nigel Green, Patrick Magee and Peter Woodthorpe. The script was written by Milton Subotsky from a short story by Robert Bloch, "The Skull of the Marquis de Sade".
It was one of a number of British horror films of the sixties to be scored by avant-garde composer Elisabeth Lutyens, including several others for Amicus.
Plot
In the 19th century, a phrenologist, Pierre, robs the grave of the recently buried Marquis de Sade. He takes the Marquis's severed head and sets about boiling it to remove its flesh, leaving the skull. Before the task is done, Pierre meets an unseen and horrific death.
In modern-day London, Christopher Maitland, a collector and writer on the occult, is offered the skull by Marco, an unscrupulous dealer in antiques and curiosities. Maitland learns that the skull has been stolen from Sir Matthew Phillips, a friend and fellow collector. Sir Matthew, however, does not want to recover it, having escaped its evil influence. He warns Maitland of its powers. At his sleazy lodgings, Marco dies in mysterious circumstances. Maitland finds his body and takes possession of the skull. He in turn falls victim as the skull drives him to hallucinations, madness and death.
Cast
Peter Cushing as Dr. Christopher Maitland
Patrick Wymark as Anthony Marco
Christopher Lee as Sir Matthew Phillips
Jill Bennett as Jane Maitland
Nigel Green as Inspector Wilson
Patrick Magee as Police Surgeon
Peter Woodthorpe as Bert Travers, Marco's Landlord
Michael Gough as auctioneer
George Coulouris as Dr. Londe
April Olrich as French girl
Maurice Good as Pierre, phrenologist
Production
The film was an attempt by Amicus to challenge Hammer Film Productions by making a full length colour movie. Once filming started, Freddie Francis rewrote much of Subotsky's script.
Christopher Lee is billed as "guest star" in the film's credits; he plays a supporting role, and, unusually, is not a villain.
The film's final twenty-five minutes contain almost no dialogue.
In real life the Marquis de Sade's body was exhumed from its grave in the grounds of the lunatic asylum at Charenton, where he died in 1814, and his skull was removed for phrenological analysis. It was subsequently lost, and its fate remains unknown.
Release
When it was released in France, promotional materials had to be changed at the last minute by pasting a new title, Le crâne maléfique ("The Evil Skull"), over the original French title Les Forfaits du Marquis de Sade ("Infamies of the Marquis de Sade") on posters and lobby cards, after legal action by the present-day Sade family.
Reception
The Monthly Film Bulletin wrote: "A graveyard opening, followed by the cleansing of the skull in a decor which establishes a nice line in drawing-room laboratories, promises a 19th century piece of macabre skullduggery in familiar idiom; but the opening is merely a resumé of the skull's history, and the main action takes place in contemporary decor which is unusually vivid and imaginative. The film is pictorially effective throughout, and is directed by Freddie Francis with an individual flair which far outstrips the standard gimmicks of the genre. Francis has perhaps an over-fondness for camera motion (pans, tracks and tilts galore, which tend to become irksome after a time); the trick shots, with the camera, as it were, inside the skull so that we look out through the eye-sockets, are over-used; and it is a pity that the idea was not reserved for a single presentation during the climax when the skull establishes itself on a pentacular table on which it teleports one of the statuettes. But except for one shot towards the end when, through boldness in bringing the thing into close-up, suspension wires are too clearly visible, the mobility of the skull is very well contrived; and such blemishes are small price to pay for an unusually deft piece of macabre supernatural, the impact of which is given extra distinction in Bill Constable's art direction and Elisabeth Lutyens' score."
References
External links
The Skull at the Internet Movie Database
1965 films
1965 horror films
British historical horror films
Amicus Productions films
Films based on short fiction
Films about the Marquis de Sade
Films directed by Freddie Francis
Films scored by Elisabeth Lutyens
Films based on works by Robert Bloch
Films with screenplays by Robert Bloch
Phrenology
1960s English-language films
1960s British films
English-language horror films | The Skull (film) | [
"Biology"
] | 985 | [
"Phrenology",
"Biology theories",
"Obsolete biology theories"
] |
7,416,829 | https://en.wikipedia.org/wiki/C%20POSIX%20library | The C POSIX library is a specification of a C standard library for POSIX systems. It was developed at the same time as the ANSI C standard. Some effort was made to make POSIX compatible with standard C; POSIX includes additional functions beyond those introduced in standard C. On the other hand, the five headers that were added to the C standard library with C11 were not likewise included in subsequent revisions of POSIX.
C POSIX library header files
See also
POSIX
C standard library
C++ standard library
References
Official List of headers in the POSIX library on opengroup.org
Description of the posix library from the Flux OSKit
Further reading
POSIX | C POSIX library | [
"Technology"
] | 150 | [
"Computer standards",
"POSIX"
] |
7,416,843 | https://en.wikipedia.org/wiki/Peskin%E2%80%93Takeuchi%20parameter | In particle physics, the Peskin–Takeuchi parameters are a set of three measurable quantities, called S, T, and U, that parameterize potential new physics contributions to electroweak radiative corrections. They are named after physicists Michael Peskin and Tatsu Takeuchi, who proposed the parameterization in 1990; proposals from two other groups (see References below) came almost simultaneously.
The Peskin–Takeuchi parameters are defined so that they are all equal to zero at a reference point in the Standard Model, with a particular value chosen for the (then unmeasured) Higgs boson mass. The parameters are then extracted from a global fit to the high-precision electroweak data from particle collider experiments (mostly the Z pole data from the CERN LEP collider) and atomic parity violation.
The measured values of the Peskin–Takeuchi parameters agree with the Standard Model. They can then be used to constrain models of new physics beyond the Standard Model. The Peskin–Takeuchi parameters are only sensitive to new physics that contributes to the oblique corrections, i.e., the vacuum polarization corrections to four-fermion scattering processes.
Definitions
The Peskin–Takeuchi parameterization is based on the following assumptions about the nature of the new physics:
The electroweak gauge group is given by SU(2)L x U(1)Y, and thus there are no additional electroweak gauge bosons beyond the photon, Z boson, and W boson. In particular, this framework assumes there are no Z' or W' gauge bosons. If there are such particles, the S, T, U parameters do not in general provide a complete parameterization of the new physics effects.
New physics couplings to light fermions are suppressed, and hence only oblique corrections need to be considered. In particular, the framework assumes that the nonoblique corrections (i.e., vertex corrections and box corrections) can be neglected. If this is not the case, then the process by which the S, T, U parameters are extracted from the precision electroweak data is no longer valid, and they no longer provide a complete parameterization of the new physics effects.
The energy scale at which the new physics appears is large compared to the electroweak scale. This assumption is inherent in defining S, T, U independent of the momentum transfer in the process.
With these assumptions, the oblique corrections can be parameterized in terms of four vacuum polarization functions: the self-energies of the photon, Z boson, and W boson, and the mixing between the photon and the Z boson induced by loop diagrams.
Assumption number 3 above allows us to expand the vacuum polarization functions in powers of q2/M2, where M represents the heavy mass scale of the new interactions, and keep only the constant and linear terms in q2. We have,
where a prime denotes the derivative of the vacuum polarization function with respect to q2. The constant pieces of the photon self-energy and of the photon–Z mixing are zero because of the renormalization conditions. We thus have six parameters to deal with. Three of these may be absorbed into the renormalization of the three input parameters of the electroweak theory, which are usually chosen to be the fine-structure constant α, as determined from quantum electrodynamic measurements (there is a significant running of α between the scale of the mass of the electron and the electroweak scale, and this needs to be corrected for), the Fermi coupling constant GF, as determined from muon decay, which measures the weak current coupling strength at close to zero momentum transfer, and the Z boson mass MZ, leaving three which are measurable. This is because we are not able to determine which contribution comes from the Standard Model proper and which comes from physics beyond the Standard Model (BSM) when measuring these three parameters. To us, the low-energy processes could equally well have come from a pure Standard Model with redefined values of e, GF and MZ. The remaining three are the Peskin–Takeuchi parameters S, T and U, and are defined as:
where sw and cw are the sine and cosine of the weak mixing angle, respectively. The definitions are carefully chosen so that
Any BSM correction which is indistinguishable from a redefinition of e, GF and MZ (or equivalently, g1, g2 and ν) in the Standard Model proper at the tree level does not contribute to S, T or U.
Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term only contributes to T and not to S or U. This term violates custodial symmetry.
Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term only contributes to S and not to T or U. (The contribution of can be absorbed into g1 and the contribution of can be absorbed into g2).
Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term contributes to U.
Uses
The S parameter measures the difference between the number of left-handed fermions and the number of right-handed fermions that carry weak isospin. It tightly constrains the allowable number of new fourth-generation chiral fermions. This is a problem for theories like the simplest version of technicolor that contain a large number of extra fermion doublets.
The T parameter measures isospin violation, since it is sensitive to the difference between the loop corrections to the Z boson vacuum polarization function and the W boson vacuum polarization function. An example of isospin violation is the large mass splitting between the top quark and the bottom quark, which are isospin partners to each other and in the limit of isospin symmetry would have equal mass.
The S and T parameters are both affected by varying the mass of the Higgs boson (recall that the zero point of S and T is defined relative to a reference value of the Standard Model Higgs mass). Before the Higgs-like boson was discovered at the LHC, experiments at the CERN LEP collider set a lower bound of 114 GeV on its mass. If we assume that the Standard Model is correct, a best fit value of the Higgs mass could be extracted from the S, T fit. The best fit was near the LEP lower bound, and the 95% confidence level upper bound was around 200 GeV. Thus the measured mass of 125-126 GeV fits comfortably in this prediction, suggesting the Standard Model may be a good description up to energies past the TeV ( = 1,000 GeV) scale.
The U parameter tends not to be very useful in practice, because the contributions to U from most new physics models are very small. This is because U actually parameterizes the coefficient of a dimension-eight operator, while S and T can be represented as dimension-six operators.
See also
Parameterized post-Newtonian formalism - a similar parametrization in the gravitational context
References
The following papers constitute the original proposals for the S, T, U parameters:
The first detailed global fits were presented in:
For a review, see:
Electroweak theory
Physics beyond the Standard Model | Peskin–Takeuchi parameter | [
"Physics"
] | 1,525 | [
"Physical phenomena",
"Unsolved problems in physics",
"Electroweak theory",
"Fundamental interactions",
"Particle physics",
"Physics beyond the Standard Model"
] |
7,417,294 | https://en.wikipedia.org/wiki/Dodman | A dodman (plural "dodmen") or a hoddyman dod is a local English vernacular word for a land snail. The word is used in some of the counties of England. This word is found in the Norfolk dialect, according to the Oxford English Dictionary. Fairfax, in his Bulk and Selvedge (1674), speaks of "a snayl or dodman".
Hodimadod is a similar word for snail that is more commonly used in the Buckinghamshire dialect.
Alternatively (and apparently now more commonly used in the Norfolk dialect) are the closely related words Dodderman or Doddiman. In everyday folklore, these words are popularly said to be derived from the surname of a travelling cloth seller called Dudman, who supposedly had a bent back and carried a large roll of cloth on his back. The words to dodder, doddery, doddering, meaning to progress in an unsteady manner, are popularly said to have the same derivation.
A traditional Norfolk rhyme goes as follows:
The 'inventor' of ley lines, Alfred Watkins, thought that in the words "dodman" and the builder's "hod" there was a survival of an ancient British term for a surveyor. Watkins felt that the name came about because the snail's two horns resembled a surveyor's two surveying rods. Watkins also supported this idea with an etymology from 'doddering along' and 'dodge' (akin, in his mind, to the series of actions a surveyor would carry out in moving his rod back and forth until it accurately lined up with another one as a backsight or foresight) and the Welsh verb 'dodi' meaning to lay or place. He thus decided that The Long Man of Wilmington was an image of an ancient surveyor.
References
Mollusc common names
Pseudoarchaeology
Surveying | Dodman | [
"Engineering"
] | 373 | [
"Surveying",
"Civil engineering"
] |
7,417,940 | https://en.wikipedia.org/wiki/Acoustic%20transmission%20line | An acoustic transmission line is the use of a long duct, which acts as an acoustic waveguide and is used to produce or transmit sound in an undistorted manner. Technically it is the acoustic analog of the electrical transmission line, typically conceived as a rigid-walled duct or tube, that is long and thin relative to the wavelength of sound present in it.
Examples of transmission line (TL) related technologies include the (mostly obsolete) speaking tube, which transmitted sound to a different location with minimal loss and distortion, wind instruments such as the pipe organ, woodwind and brass which can be modeled in part as transmission lines (although their design also involves generating sound, controlling its timbre, and coupling it efficiently to the open air), and transmission line based loudspeakers which use the same principle to produce accurate extended low bass frequencies and avoid distortion. The comparison between an acoustic duct and an electrical transmission line is useful in "lumped-element" modeling of acoustical systems, in which acoustic elements like volumes, tubes, pistons, and screens can be modeled as single elements in a circuit. With the substitution of pressure for voltage, and volume particle velocity for current, the equations are essentially the same. Electrical transmission lines can be used to describe acoustic tubes and ducts, provided the frequency of the waves in the tube is below the critical frequency, such that they are purely planar.
Design principles
Phase inversion is achieved by selecting a length of line that is equal to the quarter wavelength of the target lowest frequency. The line has a hard boundary at one end (the speaker) and the open-ended line vent at the other. The bass driver and vent are in phase in the pass band until the frequency approaches the quarter wavelength, where the phase relationship reaches 90 degrees; by that point, however, the vent is producing most of the output. Because the line is operating over several octaves with the drive unit, cone excursion is reduced, providing higher SPLs and lower distortion levels compared with reflex and infinite baffle designs.
The calculation of the length of the line required for a certain bass extension appears to be straightforward, based on a simple formula:
L = v / (4f)
where f is the sound frequency in hertz (Hz), v is the speed of sound in air at 20°C in meters/second, and L is the length of the transmission line in meters.
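A small numerical sketch of this quarter-wave relationship; the function name, the nominal 343 m/s speed of sound, and the optional velocity-reduction factor (reflecting the damping effect discussed below) are assumptions for illustration:

```python
def transmission_line_length(target_hz, sound_speed=343.0, velocity_reduction=0.0):
    """Quarter-wavelength transmission-line length (metres) for a target frequency.

    velocity_reduction models the slowing of sound by absorbent stuffing,
    e.g. 0.35 for the ~35% reduction typical of a medium-damped line."""
    effective_speed = sound_speed * (1.0 - velocity_reduction)
    return effective_speed / (4.0 * target_hz)

print(transmission_line_length(20.0))                           # ~4.29 m, undamped
print(transmission_line_length(20.0, velocity_reduction=0.35))  # ~2.79 m, damped
```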
The complex loading of the bass drive unit demands specific Thiele-Small driver parameters to realise the full benefits of a TL design. However, most drive units in the marketplace are developed for the more common reflex and infinite baffle designs and are usually not suitable for TL loading. High efficiency bass drivers with extended low frequency ability, are usually designed to be extremely light and flexible, having very compliant suspensions. Whilst performing well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device. Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms it provides more resistance to the driver's attempt to move it), so to control the movement of air requires an extremely rigid cone, to avoid deformation and consequent distortion.
The introduction of the absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. Bradbury published his extensive tests to determine this effect in a paper in the Journal of the Audio Engineering Society (JAES) in 1976 and his results agreed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium damped lines. Bradbury's tests were carried out using fibrous materials, typically longhaired wool and glass fibre. These kinds of materials, however, produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High-specification acoustic foams, developed by loudspeaker manufacturers such as PMC, with similar characteristics to longhaired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. Quantity and position of the foam is critical to engineer a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies.
Discovery and development
The concept was termed "acoustical labyrinth" by Stromberg-Carlson Co. when used in their console radios beginning in 1936 (see the Concert Grand 837G entry at Radiomuseum). Benjamin Olney, who worked for Stromberg-Carlson, invented the Acoustical Labyrinth and described it in an October 1936 article in the Journal of the Acoustical Society of America entitled "A Method of Eliminating Cavity Resonance, Extending Low Frequency Response and Increasing Acoustic Damping in Cabinet Type Loudspeakers". Stromberg-Carlson was manufacturing an Acoustic Labyrinth speaker enclosure intended for a 12" or 15" coaxial driver as early as 1952, as evidenced by an Audio Engineering article of July 1952 (page 28) and numerous advertisements in Hi-Fidelity Magazine in 1952 and thereafter. The Transmission line type of loudspeaker enclosure was proposed in October 1965 by Dr A.R. Bailey and A.H. Radford in Wireless World magazine (pp. 483–486). The article postulated that energy from the rear of a driver unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channeled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. A pipe of sufficient length could be tapered and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established.
Uses
Loudspeaker design
Acoustic transmission lines gained attention through their use within loudspeakers in the 1960s and 1970s. In 1965, A. R. Bailey's article in Wireless World, “A Non-resonant Loudspeaker Enclosure Design”, detailed a working transmission line, which was commercialized by John Wright and partners under the brand names IMF and later TDL, and sold in the United States by audiophile Irving M. "Bud" Fried.
A transmission line is used in loudspeaker design, to reduce time, phase and resonance related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, and in some cases the near-infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with frequency ranges of 20 Hz upwards, down to 7 Hz upwards, without needing a separate subwoofer. Irving M. Fried, an advocate of TL design, stated that:
"I believe that speakers should preserve the integrity of the signal waveform and the Audio Perfectionist Journal has presented a great deal of information about the importance of time domain performance in loudspeakers. I’m not the only one who appreciates time- and phase-accurate speakers but I have been virtually the only advocate to speak out in print in recent years. There’s a reason for that."
In practice, the duct is folded inside a conventional shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded and the line is often tapered in cross section to avoid parallel internal surfaces that encourage standing waves. Depending upon the drive unit and quantity – and various physical properties – of absorbent material, the amount of taper will be adjusted during the design process to tune the duct to remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct or line, are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit but remains theoretical, as it would have to be infinitely long. The physical constraints of the real world, demand that the length of the line must often be less than 4 meters before the cabinet becomes too large for any practical applications, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to freely radiate from the vent in the cabinet. The line therefore effectively works as a low-pass filter, another crossover point in fact, achieved acoustically by the line and its absorbent filling. Below this “crossover point” the low bass is loaded by the column of air formed by the length of the line. The length is specified to reverse the phase of the rear output of the drive unit as it exits the vent. This energy combines with the output of the bass unit, extending its response and effectively creating a second driver.
Sound ducts as transmission lines
A duct for sound propagation also behaves like a transmission line (e.g. air conditioning duct, car muffler, ...). Its length may be similar to the wavelength of the sound passing through it, but the dimensions of its cross-section are normally smaller than one quarter the wavelength.
Sound is introduced at one end of the tube by forcing the pressure across the whole cross-section to vary with time. An almost planar wavefront travels down the line at the speed of sound. When the wave reaches the end of the transmission line, behaviour depends on what is present at the end of the line. There are three possible scenarios:
The frequency of the pulse generated at the transducer results in a pressure peak at the terminus exit (odd ordered harmonic open pipe resonance) resulting in effectively low acoustic impedance of the duct and high level of energy transfer.
The frequency of the pulse generated at the transducer results in a pressure null at the terminus exit (even ordered harmonic open pipe anti -resonance) resulting in effectively high acoustic impedance of the duct and low level of energy transfer.
The frequency of the pulse generated at the transducer results in neither a peak or null in which energy transfer is nominal or in keeping with typical energy dissipation with distance from the source.
See also
Frequency response
Loudspeaker acoustics
Loudspeaker measurement
Speaking tube
Transmission line loudspeaker
References
External links
Quarterwave loudspeakers – Martin J King, developer of TL modeling software
– TL theory & design
Transmission Line Speakers Pages – TL projects, history & more
Brines Acoustics Articles (Archived 2009-10-24) – Application, tips, essays
Quarter Wave Tube - DiracDelta.co.uk – description of operation, equation and online calculation
Loudspeaker technology
Audio engineering | Acoustic transmission line | [
"Engineering"
] | 2,351 | [
"Electrical engineering",
"Audio engineering"
] |
7,418,540 | https://en.wikipedia.org/wiki/Constraint%20%28mathematics%29 | In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.
Example
The following is a simple optimization problem:
subject to
and
where x denotes the vector (x1, x2).
In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions.
Without the constraints, the solution would be (0,0), where the objective function has its lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is the point with the smallest value of the objective function that satisfies the two constraints.
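A minimal sketch of solving a small problem of this form numerically with SciPy; the quadratic objective and the specific constraint bounds are hypothetical stand-ins, since the exact functions of the example are not reproduced here:

```python
from scipy.optimize import minimize

# Hypothetical objective f(x) = x1^2 + x2^2, with one inequality and one
# equality constraint, mirroring the structure described in the example.
objective = lambda x: x[0]**2 + x[1]**2
constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},  # satisfied when x1 - 1 >= 0
    {"type": "eq",   "fun": lambda x: x[1] - 1.0},  # satisfied when x2 == 1
]

result = minimize(objective, x0=[2.0, 2.0], method="SLSQP", constraints=constraints)
print(result.x)  # approximately the feasible minimizer (1, 1)
```

Dropping the two constraint dictionaries recovers the unconstrained minimum at (0, 0), illustrating how the feasible set changes the solution.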
Terminology
If an inequality constraint holds with equality at the optimal point, the constraint is said to be binding, as the point cannot be varied in the direction of the constraint even though doing so would improve the value of the objective function.
If an inequality constraint holds as a strict inequality at the optimal point (that is, does not hold with equality), the constraint is said to be non-binding, as the point could be varied in the direction of the constraint, although it would not be optimal to do so. Under certain conditions, as for example in convex optimization, if a constraint is non-binding, the optimization problem would have the same solution even in the absence of that constraint.
If a constraint is not satisfied at a given point, the point is said to be infeasible.
Hard and soft constraints
If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as hard constraints. However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as soft constraints. Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.
Global constraints
Global constraints are constraints representing a specific relation on a number of variables, taken altogether. Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on n variables x1, ..., xn, and is satisfied if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalities xi ≠ xj for all i < j. Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton.
Global constraints are used to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and also to improve the constraint resolution: indeed, by considering the variables altogether, infeasible situations can be seen earlier in the solving process. Many of the global constraints are referenced into an online catalog.
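To make the decomposition of the alldifferent constraint concrete, the sketch below checks an assignment both in the global form and as the equivalent conjunction of pairwise inequalities. The function names and the brute-force evaluation are illustrative assumptions only; real constraint solvers propagate such constraints far more efficiently than this.

```python
# Sketch: the alldifferent global constraint versus its decomposition into
# pairwise "not equal" atomic constraints.
from itertools import combinations


def alldifferent(values):
    """Global form: satisfied when all values are pairwise different."""
    return len(set(values)) == len(values)


def pairwise_decomposition(values):
    """Equivalent conjunction of atomic constraints x_i != x_j for i < j."""
    return all(a != b for a, b in combinations(values, 2))


assignment = (3, 1, 4, 1)
# Both forms agree; here both evaluate to False because the value 1 repeats.
assert alldifferent(assignment) == pairwise_decomposition(assignment)
```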
See also
Constraint algebra
Karush–Kuhn–Tucker conditions
Lagrange multipliers
Level set
Linear programming
Nonlinear programming
Restriction
Satisfiability modulo theories
References
Further reading
External links
Nonlinear programming FAQ
Mathematical Programming Glossary
Mathematical optimization
Constraint programming | Constraint (mathematics) | [
"Mathematics"
] | 750 | [
"Mathematical optimization",
"Mathematical analysis"
] |
7,418,805 | https://en.wikipedia.org/wiki/Rolipram | Rolipram is a selective phosphodiesterase-4 inhibitor discovered and developed by Schering AG as a potential antidepressant drug in the early 1990s. It served as a prototype molecule for several companies' drug discovery and development efforts. Rolipram was discontinued after clinical trials showed that its therapeutic window was too narrow; it could not be dosed at high enough levels to be effective without causing significant gastrointestinal side effects.
Rolipram has several activities that make it a continuing focus for research. The etiology of many neurodegenerative diseases involves misfolded and clumped proteins which accumulate in the brain. Cells have a mechanism to dispose of such proteins called the proteasome. However, in Alzheimer's disease and some other conditions the activity of these proteasomes is impaired leading to a buildup of toxic aggregates. Research in mice suggests that rolipram has the ability to ramp up the activity of proteasomes and reduce the burden of these aggregates. Preliminary evidence suggests that this can improve spatial memory in mice engineered to have aggregate build-up. Rolipram continues to be used in research as a well-characterized PDE4 inhibitor. It has been used in studies to understand whether PDE4 inhibition could be useful in autoimmune diseases, Alzheimer's disease, cognitive enhancement, spinal cord injury, and respiratory diseases like asthma and COPD.
See also
Roflumilast
References
Abandoned drugs
PDE4 inhibitors
Phenol ethers
Pyrrolidones
Cyclopentyl compounds | Rolipram | [
"Chemistry"
] | 322 | [
"Drug safety",
"Abandoned drugs"
] |
7,418,842 | https://en.wikipedia.org/wiki/EHNA | EHNA (erythro-9-(2-hydroxy-3-nonyl)adenine) is a potent adenosine deaminase inhibitor, which also acts as a phosphodiesterase inhibitor that selectively inhibits phosphodiesterase type 2 (PDE2).
References
PDE2 inhibitors
Purines
Secondary alcohols | EHNA | [
"Chemistry"
] | 79 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
7,419,506 | https://en.wikipedia.org/wiki/Delta%20Cygni | Delta Cygni (δ Cygni, abbreviated Delta Cyg, δ Cyg) is a binary star of combined third magnitude in the constellation of Cygnus. It is also part of the Northern Cross asterism whose brightest star is Deneb. Based upon parallax measurements obtained during the Hipparcos mission, Delta Cygni is located roughly 165 light-years distant from the Sun.
Delta Cygni's two components are designated Delta Cygni A (officially named Fawaris) and B. More widely separated is a faint third component, a 12th magnitude star that is moving along with the others. Together they form a triple star system.
Nomenclature
δ Cygni (Latinised to Delta Cygni) is the binary's Bayer designation. The designations of the two components as Delta Cygni A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Traditionally, Delta Cygni had no proper name. It belonged to the Arabic asterism al-Fawāris, meaning "the Riders" in indigenous Arabic, together with Zeta, Epsilon, and Gamma Cygni, the transverse of the Northern Cross. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Fawaris for the component Delta Cygni A on 1 June 2018 and it is now so included in the List of IAU-approved Star Names.
In Chinese, an asterism whose name means Celestial Ford consists of Delta Cygni, Gamma Cygni, 30 Cygni, Alpha Cygni (Deneb) and Nu, Tau, Upsilon, Zeta and Epsilon Cygni. Consequently, the Chinese name for Delta Cygni itself identifies it as a member of the Celestial Ford.
Properties
The primary, Delta Cygni A, is a blue-white giant star of spectral class B9, with a temperature of 10,400 K. It is nearing the end of its main-sequence life stage with a luminosity 155 times that of the Sun, a radius of 4.81 solar radii, and a mass approximately 2.93 solar masses. Like many hot stars, it spins rapidly, at least 135 kilometers per second at the equator, about 60 times that of the Sun.
The close companion Delta Cygni B is a yellow-white F-type main-sequence star of the sixth magnitude (6.33) with a luminosity about 6 times that of the Sun, and a mass about 1.5 times the Sun's. The two stars orbit each other at an average distance of 157 AU and a period of 780 years.
The much more distant third companion is an orange (class K) twelfth magnitude star, and only two thirds as massive.
The two main stars together appear with a spectral type of A0 IV. As seen from Earth, the entire triple star system of Delta Cygni shines at a combined apparent magnitude of 2.87. Both δ Cygni A and B have been suspected to vary in brightness. δ Cygni A was reported in 1951 as varying between magnitudes 2.85 and 2.89, and δ Cygni B was reported in 1837 to vary between magnitudes 6.3 and 8.5. The variability of the stars has not been confirmed.
Pole Star
Delta Cygni is a visible star located within 3° of the precessional path traced across the celestial sphere by the Earth's North pole. For at least four centuries around 11,250 AD it will probably be considered a pole star, a title currently held by Polaris which is just 0.5° off of the precessional path.
References
B-type giants
F-type main-sequence stars
K-type main-sequence stars
Triple star systems
Cygnus (constellation)
Cygni, Delta
Durchmusterung objects
Cygni, 18
186882
097165
7528 | Delta Cygni | [
"Astronomy"
] | 834 | [
"Cygnus (constellation)",
"Constellations"
] |
7,419,786 | https://en.wikipedia.org/wiki/Narcissistic%20injury | In psychology, narcissistic injury, also known as narcissistic wound or wounded ego, is emotional trauma that overwhelms an individual's defense mechanisms and devastates their pride and self-worth. In some cases, the shame or disgrace is so significant that the individual can never again truly feel good about who they are. This is sometimes referred to as a "narcissistic scar".
Freud maintained that "losses in love" and "losses associated with failure" often leave behind injury to an individual's self-regard.
Signals of narcissistic injury
A narcissistic injury will oftentimes not be noticeable by the subject at first sight. Narcissistic injuries, or narcissistic wounds, are likely a result of criticism, loss, or even a sense of abandonment. Those diagnosed with narcissistic personality disorder will come off as excessively defensive and attacking when facing any sort of criticism. While the average person would likely react by expressing vulnerability, a person dealing with a narcissistic wound will do the opposite, causing them to come off as narcissistic, despite feeling hurt inside. The reaction of a narcissistic injury is a cover-up for the real feelings of one who faces these problems.
To others, a narcissistic injury may seem as if the person is gaslighting or turning the issue back onto the other person. A person may come off as manipulative and aggressive because they refuse to accept anything they are told that they do not want to hear. It is important for those dealing with narcissistic wounds to make it clear to those whom they attack with their words that this is indeed a disorder, even when it takes the form of an insult towards another person.
Children who are taught that failure leads to less love and affection are more likely to become obsessed with perfection and are more likely to develop narcissistic personality disorder. The importance of self-love and unconditional love when raising children can help show them that their feelings are valid, no matter the situation, and regardless of how well or poorly they perform.
Sigmund Freud's concept of what in his last book he called "early injuries to the self (injuries to narcissism)" was subsequently extended by a wide variety of psychoanalysts. Karl Abraham saw the key to adult depression in the childhood experience of a blow to narcissism through the loss of narcissistic supply. Otto Fenichel confirmed the importance of narcissistic injury in depressives and expanded such analyses to include borderline personalities.
Edmund Bergler emphasized the importance of infantile omnipotence in narcissism, and the rage that follows any blow to that sense of narcissistic omnipotence; Annie Reich stressed how a feeling of shame fueled rage when a blow to narcissism exposed the gap between one's ego ideal and reality; while Jacques Lacan linked Freud's narcissistic wound to his own concept of the narcissistic mirror stage.
Finally, object relations theory highlights rage against early environmental failures that left patients feeling bad about themselves when childhood omnipotence was too abruptly challenged.
Becoming defensive. When a narcissist's feelings are hurt, they are likely to react with hostility and tend to hold grudges. This is due to a poor understanding of the emotional responses of others, and they lack empathy when hurting others' feelings. They do not like confrontation. It is their high ego that needs to be fulfilled, but deep down the cause lies in insecurities within themselves. When a narcissist's wants are challenged, they can act out through anger. This can stem from experiences of abuse, leading them to project their internalized trauma onto others.
Narcissists lack self-confidence, which projects onto their relationships. Their jealousy is rooted in neurotic insecurity. Examples of possessiveness include jealousy when a person's attention is drawn away by another, and worry that someone will take one's partner away. This strong sense of possessiveness stems from a high degree of jealousy and may lead them to be abusive towards their partners and friends as well.
Withdrawal can trigger an emotional reaction when a narcissist experiences a major setback. This collapse conflicts with the external validation they think they are entitled to and in turn causes them emotional pain that they express as rage. A setback causes them to feel intensely frustrated.
Extreme mood swings. Outbursts of rage or silence are commonly seen in people with narcissistic personality disorder. Some experiences that can trigger them are threats to their self-esteem or not being given the attention they think they deserve. Mood swings may also be triggered when a narcissist's perception is confronted with contrary beliefs, to which they may respond with anger.
Feelings of power imbalance. Narcissists tend to suffer from strong feelings of inferiority and so have a hard time convincing themselves that they have achieved enough. A narcissist only demands what they want without concern for the other. In a relationship, the partner of the narcissist may experience gaslighting, ghosting, and manipulation.
Perfectionism
Narcissists are often pseudo-perfectionists and create situations in which they are the center of attention. The narcissist's attempts at being seen as perfect are necessary for their grandiose self-image. If a perceived state of perfection is not reached, it can lead to guilt, shame, anger or anxiety because the subject believes that they will lose the admiration and love of other people if they are imperfect.
Because some children are raised to believe that love is conditional, obsession with being perfect becomes routine for them. As a result, when failing in any aspect of life, the child will feel as if they are no longer accepted, causing a narcissistic injury.
Examples of reasons why children would show narcissistic injury due to perfectionism include failing exams, losing in competitions, being denied acceptance, disagreement in conversation with others, and constructive criticism.
Behind such perfectionism, self psychology would see earlier traumatic injuries to the grandiose self.
Research findings indicate that grandiose narcissists uphold a version of perfection, expect unreasonable things from others, and work toward unrealistic ambitions. The findings also imply that vulnerable narcissists intentionally foster an idea of infallibility while concealing flaws in order to appease others' perceived demands.
Treatment
Adam Phillips has argued that, contrary to what common sense might expect, therapeutic cure involves the patient being encouraged to re-experience "a terrible narcissistic wound" – the child's experience of exclusion by the parental alliance – in order to come to terms with, and learn again, the diminishing loss of omnipotence entailed by the basic "facts of life".
Criticism
Wide dissemination of psychiatrist Heinz Kohut's concepts may at times have led to their trivialization. Neville Symington points out that "You will often hear people say, 'Oh, I'm very narcissistic,' or, 'It was a wound to my narcissism.' Such comments are not a true recognition of the condition; they are throw-away lines. To really recognize narcissism in oneself is profoundly distressing and often associated with denial."
See also
Defense mechanism
Humiliation
Narcissistic mortification
Narcissistic withdrawal
References
Further reading
Cooper J & Maxwell N. Narcissistic Wounds: Clinical Perspectives (1995)
Levin JD. Slings and Arrows: Narcissistic Injury and Its Treatment (1995)
Narcissism
Psychoanalytic terminology | Narcissistic injury | [
"Biology"
] | 1,592 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
7,419,799 | https://en.wikipedia.org/wiki/Geospatial%20metadata | Geospatial metadata (also geographic metadata) is a type of metadata applicable to geographic data and information. Such objects may be stored in a geographic information system (GIS) or may simply be documents, data-sets, images or other objects, services, or related items that exist in some other native environment but whose features may be appropriate to describe in a (geographic) metadata catalog (may also be known as a data directory or data inventory).
Definition
ISO 19115:2013 "Geographic Information – Metadata" from ISO/TC 211, the industry standard for geospatial metadata, describes its scope as follows:
ISO 19115:2013 also provides for non-digital mediums:
The U.S. Federal Geographic Data Committee (FGDC) describes geospatial metadata as follows:
History
The growing appreciation of the value of geospatial metadata through the 1980s and 1990s led to the development of a number of initiatives to collect metadata according to a variety of formats either within agencies, communities of practice, or countries/groups of countries. For example, NASA's "DIF" metadata format was developed during an Earth Science and Applications Data Systems Workshop in 1987, and formally approved for adoption in 1988. Similarly, the U.S. FGDC developed its geospatial metadata standard over the period 1992–1994. The Spatial Information Council of Australia and New Zealand (ANZLIC), a combined body representing spatial data interests in Australia and New Zealand, released version 1 of its "metadata guidelines" in 1996. ISO/TC 211 undertook the task of harmonizing the range of formal and de facto standards over the approximate period 1999–2002, resulting in the release of ISO 19115 "Geographic Information – Metadata" in 2003 and a subsequent revision in 2013. Since then, individual countries, communities of practice, agencies, and others have started re-casting their previously used metadata standards as "profiles" or recommended subsets of ISO 19115, occasionally with the inclusion of additional metadata elements as formal extensions to the ISO standard. The growth in popularity of Internet technologies and data formats, such as Extensible Markup Language (XML), during the 1990s led to the development of mechanisms for exchanging geographic metadata on the web. In 2004, the Open Geospatial Consortium released the current version (3.1) of Geography Markup Language (GML), an XML grammar for expressing geospatial features and corresponding metadata. With the growth of the Semantic Web in the 2000s, the geospatial community has begun to develop ontologies for representing semantic geospatial metadata. Some examples include the Hydrology and Administrative ontologies developed by the Ordnance Survey in the United Kingdom.
ISO 19115: Geographic information – Metadata
ISO 19115 is a standard of the International Organization for Standardization (ISO). The standard is part of the ISO geographic information suite of standards (19100 series). ISO 19115 and its parts define how to describe geographical information and associated services, including contents, spatial-temporal purchases, data quality, access and rights to use.
The objective of this International Standard is to provide a clear procedure for the description of digital geographic data-sets so that users will be able to determine whether the data in a holding will be of use to them and how to access the data. By establishing a common set of metadata terminology, definitions and extension procedures, this standard promotes the proper use and effective retrieval of geographic data.
ISO 19115 was revised in 2013 to accommodate growing use of the internet for metadata management, as well as add many new categories of metadata elements (referred to as codelists) and the ability to limit the extent of metadata use temporally or by user.
ISO 19139: Geographic information – Metadata – XML schema implementation
ISO 19139:2012 provides the XML implementation schema for ISO 19115 specifying the metadata record format and may be used to describe, validate, and exchange geospatial metadata prepared in XML.
The standard is part of the ISO geographic information suite of standards (19100 series), and provides a spatial metadata XML (spatial metadata eXtensible Mark-up Language (smXML)) encoding, an XML schema implementation derived from ISO 19115, Geographic information – Metadata. The metadata includes information about the identification, constraint, extent, quality, spatial and temporal reference, distribution, lineage, and maintenance of the digital geographic data-set.
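As a rough, non-normative sketch of what an ISO 19139-style record looks like in practice, the snippet below assembles a tiny XML fragment with Python's standard library. The element names shown (gmd:MD_Metadata, gmd:fileIdentifier, and so on) follow the commonly used gmd/gco namespaces, but the fragment is heavily abridged, the placement of the abstract element is simplified, and the result is not a schema-valid ISO 19139 document.

```python
# Minimal, abridged sketch of an ISO 19139-style metadata fragment built
# with the standard library. Real records carry many more mandatory
# elements and must validate against the ISO 19139 XML schemas.
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"  # commonly used ISO 19139 namespace
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

record = ET.Element(f"{{{GMD}}}MD_Metadata")
identifier = ET.SubElement(record, f"{{{GMD}}}fileIdentifier")
ET.SubElement(identifier, f"{{{GCO}}}CharacterString").text = "example-dataset-0001"

abstract_holder = ET.SubElement(record, f"{{{GMD}}}abstract")  # placement simplified
ET.SubElement(abstract_holder, f"{{{GCO}}}CharacterString").text = (
    "Hypothetical dataset description used only to illustrate the encoding."
)

print(ET.tostring(record, encoding="unicode"))
```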
Metadata directories
Also known as metadata catalogues or data directories.
GIS Inventory – National GIS Inventory System which is maintained by the US-based National States Geographic Information Council (NSGIC) as a tool for the entire US GIS Community. Its primary purpose is to track data availability and the status of geographic information system (GIS) implementation in state and local governments to aid the planning and building of statewide spatial data infrastructures (SSDI). The Random Access Metadata for Online Nationwide Assessment (RAMONA) database is a critical component of the GIS Inventory. RAMONA moves its FGDC-compliant metadata (CSDGM Standard) for each data layer to a web folder and a Catalog Service for the Web (CSW) that can be harvested by Federal programs and others. This provides far greater opportunities for discovery of user information. The GIS Inventory website was originally created in 2006 by NSGIC under award NA04NOS4730011 from the Coastal Services Center, National Oceanic and Atmospheric Administration, U.S. Department of Commerce. The Department of Homeland Security has been the principal funding source since 2008 and they supported the development of the Version 5 during 2011/2012 under Order Number HSHQDC-11-P-00177. The Federal Emergency Management Agency and National Oceanic and Atmospheric Administration have provided additional resources to maintain and improve the GIS Inventory. Some US Federal programs require submission of CSDGM-Compliant Metadata for data created under grants and contracts that they issue. The GIS Inventory provides a very simple interface to create the required Metadata.
GCMD - Global Change Master Directory's goal is to enable users to locate and obtain access to Earth science data sets and services relevant to global change and Earth science research. The GCMD database holds more than 20,000 descriptions of Earth science data sets and services covering all aspects of Earth and environmental sciences.
ECHO - The EOS Clearing House (ECHO) is a spatial and temporal metadata registry, service registry, and order broker. It allows users to more efficiently search and access data and services through the Reverb Client or Application Programmer Interfaces (APIs). ECHO stores metadata from a variety of science disciplines and domains, totalling over 3400 Earth science data sets and over 118 million granule records.
GoGeo - GoGeo is a service run by EDINA (University of Edinburgh) and is supported by Jisc. GoGeo allows users to conduct geographically targeted searches to discover geospatial datasets. GoGeo searches many data portals from the HE and FE community and beyond. GoGeo also allows users to create standards compliant metadata through its Geodoc metadata editor.
Geospatial metadata tools
There are many proprietary GIS or geospatial products that support metadata viewing and editing on GIS resources. For example, ESRI's ArcGIS Desktop, SOCET GXP, Autodesk's AutoCAD Map 3D 2008, Arcitecta's Mediaflux and Intergraph's GeoMedia support geospatial metadata extensively.
GIS Inventory is a free web-based tool that provides a very simple interface to create geospatial metadata. Participants create a profile and document their data layers through a survey-style interface. The GIS Inventory produces metadata that is compliant with the Federal Content Standard for Digital Geospatial Metadata (CSDGM). The GIS Inventory is also capable of ingesting already completed metadata through document upload and web server connectivity. Through the GIS Inventory web services, metadata are automatically shared with US Federal agencies.
GeoNetwork opensource is a comprehensive Free and Open Source Software solution to manage and publish geospatial metadata and services based on international metadata and catalog standards. The software is part of the Open Source Geospatial Foundation's software stack.
GeoCat Bridge allows users to edit, validate and directly publish metadata from ArcGIS Desktop to GeoNetwork (and generic CSW catalogs) and publishes data as map services on GeoServer. Several metadata profiles are supported.
pycsw is an OGC CSW server implementation written in Python. pycsw fully implements the OpenGIS Catalogue Service Implementation Specification (Catalogue Service for the Web). The project is certified OGC Compliant, and is an OGC Reference Implementation.
CATMDEdit
terraCatalog
ArcCatalog
ArcGIS Server Portal
GeoNetwork opensource
IME
M3CAT MetaD
MetaGenie
Parcs Canada Metadata Editor
Mapit/CADit
NOKIS Editor
References
ANZLIC Metadata Profile Version 1.2 (viewed July 2011)
External links
FGDC metadata page
Global Change Master Directory(GCMD)
Geospatial Exploitation of Motion Imagery is a geospatially aware and integrated Intelligent Video Surveillance (IVS) software system targeted at real-time and forensic video analytic and mining applications that require low-resolution detection, tracking, and classification of moving objects (people and vehicles) in outdoor, wide-area scenes.
ISO 19115:2003 Geographic information – Metadata
Geographic information – Metadata – XML schema implementation
EarthDataModels design for Metadata is a logical data model and physical implementation of a Spatial Metadata Database, based on ISO19115 and is INSPIRE compliant.
Data management
Metadata
Geographic data and information | Geospatial metadata | [
"Technology"
] | 2,010 | [
"Data management",
"Geographic data and information",
"Metadata",
"Data"
] |
7,420,465 | https://en.wikipedia.org/wiki/Sidera%20Lodoicea | Sidera Lodoicea is the name given by the astronomer Giovanni Domenico Cassini to the four moons of Saturn discovered by him in the years 1671, 1672, and 1684 and published in his Découverte de deux nouvelles planètes autour de Saturne in 1673 and in the Journal des sçavans in 1686. These satellites are today known by the following names, given in 1847:
Iapetus or Saturn VIII, discovered October 25, 1671
Rhea or Saturn V, discovered December 23, 1672
Tethys or Saturn III, discovered March 21, 1684
Dione or Saturn IV, discovered March 21, 1684
The name Sidera Lodoicea means "Louisian Stars", from Latin sidus "star" and Lodoiceus, a nonce adjective coined from Lodoicus, one of several Latin forms of the French name Louis (reflecting an older form, Lodhuwig). Cassini intended the name to honor King Louis XIV of France, who reigned from 1643 to 1715, and who was Cassini's benefactor as patron of the Paris Observatory, of which Cassini was the director.
The name was modelled on Sidera Medicea, "Medicean stars", the Latin name used by Galileo to name the four Galilean satellites of Jupiter, in honor of the Florentine house of Medici.
A contemporary (1686) notice records Cassini's choice of name and explains his rationale for it.
Notes
References
History of astronomy
Moons of Saturn
Discoveries by Giovanni Domenico Cassini
Louis XIV | Sidera Lodoicea | [
"Astronomy"
] | 325 | [
"History of astronomy"
] |
7,420,632 | https://en.wikipedia.org/wiki/Minimum%20information%20required%20in%20the%20annotation%20of%20models | MIRIAM (Minimum Information Required In The Annotation of Models) is a community-level effort to standardize the annotation and curation processes of quantitative models of biological systems. It consists of a set of guidelines suitable for use with any structured format, allowing different groups to collaborate and share resulting models. Adherence to these guidelines also facilitates the sharing of software and service infrastructures built upon modeling activities.
The idea of "a set of good practices" including "some obligatory metadata" was first proposed by Nicolas Le Novère in October 2004 as part of a discussion to develop a common database of models in systems biology (which led to the creation of BioModels Database). These initial ideas were further refined at a meeting in Heidelberg, during ICSB 2004, with representatives from many other interested groups.
MIRIAM is a registered project of the MIBBI (minimum information for biological and biomedical investigations).
MIRIAM Guidelines
The MIRIAM Guidelines are composed of three parts, reference correspondence, attribution annotation, and external resource annotation, each of which deals with a different aspect of information that should be included within a model.
Reference correspondence
'Reference correspondence' deals with the basic reference information needed to make use of the model, detailing on a gross level the format of the model file, and its instantiability for simulation purposes.
The model file must be encoded in a public, standardized, machine-readable format (SBML, CellML, GENESIS, ...).
The model file must be valid with respect to its encoding schema.
The model must be associated with a reference description or publication detailing its origin, even if it is a composite.
The encoded model structure must reflect the (biological) process(es) detailed in the reference description.
The model must be instantiable; necessary quantitative parameters, such as initial conditions, should be provided if they are needed for a simulation.
When instantiated, the model must be capable of reproducing representative results as given in the reference description, within an epsilon (algorithms, round-up errors).
Attribution annotation
'Attribution annotation' deals with the attribution information that must be embedded within the model file.
The model must have a name.
The model must include a citation of the reference description identifying the authors of the model.
The model must include the name and contact details of the model creators.
The date and time of model creation and last modification should be specified. A model history is useful but not required.
The model should be linked to a precise statement about its terms of use and distribution, regardless of whether it is 'free to use' or not.
External resource annotation
'External resource annotation' defines the manner in which annotations should be constructed. Those annotations contain references to entities in databases, classifications, ontologies, etc. One of the purposes of annotation is to allow unambiguous identification of the various model components.
The annotation must unambiguously relate a piece of knowledge to a model constituent.
The referenced information should be described using a triplet {data collection, collection-specific identifier, optional qualifier}:
The annotation should be expressed as a Uniform Resource Identifier (URI).
The collection-specific identifier should be analysed within the framework of the data collection.
Qualifiers (optional) should be used to refine the link between the model components and the referenced information, for example "has_a", "is_version_of" and "is_homolog_to".
More information about the existing qualifiers is available from BioModels.net.
So far, annotation is mainly a manual work, so to ensure their longevity the usage of perennial URIs is necessary. It was recognised that the generation of valid and unique URIs for annotation required the creation of a catalogue of shared namespaces for use by the community. This function is provided by the MIRIAM Registry. The Registry also provides a variety of supporting auxiliary features to enable automated procedures based upon these URIs. The ability to generate resolvable identifiers is provided through the use of the resolving layer, Identifiers.org.
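A minimal sketch of how the {data collection, identifier, qualifier} triplet can be turned into a resolvable URI of the kind served by Identifiers.org is shown below. The collection prefix, accession and qualifier used in the example are illustrative assumptions, not an authoritative listing of MIRIAM collections.

```python
# Sketch: building Identifiers.org-style URIs from MIRIAM annotation
# triplets. The prefix, accession and qualifier below are illustrative
# examples only.

def annotation_uri(collection, identifier):
    """Compact 'prefix:accession' form resolved by Identifiers.org."""
    return f"https://identifiers.org/{collection}:{identifier}"


# Hypothetical annotation of a model component with an external resource,
# refined by a qualifier such as "is_version_of" (see the qualifiers above).
triplet = ("uniprot", "P0DP23", "is_version_of")
collection, accession, qualifier = triplet
print(qualifier, annotation_uri(collection, accession))
```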
See also
Minimum information standards
MIRIAM Registry
Identifiers.org
Metadata standards
Computational systems biology
BioModels Database
SBML
CellML
References
Systems biology
Minimum Information Standards
Bioinformatics | Minimum information required in the annotation of models | [
"Engineering",
"Biology"
] | 900 | [
"Bioinformatics",
"Biological engineering",
"Systems biology"
] |
354,575 | https://en.wikipedia.org/wiki/Melvin%20Calvin | Melvin Ellis Calvin (April 8, 1911 – January 8, 1997) was an American biochemist known for discovering the Calvin cycle along with Andrew Benson and James Bassham, for which he was awarded the 1961 Nobel Prize in Chemistry. He spent most of his five-decade career at the University of California, Berkeley.
Early life and education
Melvin Calvin was born in St. Paul, Minnesota, the son of Elias Calvin and Rose Herwitz, Jewish immigrants from the Russian Empire (now known as Lithuania and Georgia).
When he was young, Calvin's family moved to Detroit, Michigan, where his parents ran a grocery store to earn their living. The young Calvin was often found satisfying his curiosity by examining the products that stocked its shelves.
After graduating from Central High School in 1928, he went on to study at the Michigan College of Mining and Technology (now known as Michigan Technological University), where he received the school's first Bachelor of Science in chemistry. He earned his Ph.D. at the University of Minnesota in 1935; under the mentorship of George Glocker, he wrote his thesis on the electron affinity of the halogens. He was then invited to join the lab of Michael Polanyi as a postdoctoral student at the University of Manchester, where he spent two years studying the structure and behavior of organic molecules. In 1942, he married Marie Genevieve Jemtegaard, and they had two daughters, Elin and Karole, and a son, Noel.
Career
On a visit to the University of Manchester, Joel Hildebrand, the director of the UC Radiation Laboratory, invited Calvin to join the faculty at the University of California, Berkeley; this made Calvin the first hire into the chemistry department in more than 25 years who was not a Berkeley graduate. Hildebrand encouraged Calvin to push forward with radioactive carbon research because "now was the time". Calvin's original research at UC Berkeley built on the 1940 discovery of long-lived radioactive carbon-14 by Martin Kamen and Sam Ruben.
In 1947, he was promoted to Professor of Chemistry and became director of the Bio-Organic Chemistry group in the Lawrence Radiation Laboratory. The team he formed included Andrew Benson, James A. Bassham, and several others. Benson was tasked with setting up the photosynthesis laboratory, whose purpose was to discover the path of carbon fixation in photosynthesis; the greatest impact of the research was in revealing how light energy is converted into chemical energy. Using the carbon-14 isotope as a tracer, Calvin, Benson and Bassham mapped the complete route that carbon travels through a plant during photosynthesis, from its absorption as atmospheric carbon dioxide to its conversion into carbohydrates and other organic compounds. This pathway, part of the overall process of photosynthesis, was named the Calvin–Benson–Bassham cycle for the work of Melvin Calvin, Andrew Benson, and James Bassham. Many people contributed to the discovery, but ultimately Calvin led the charge (see below).
In 1963, Calvin was given the additional title of Professor of Molecular Biology. He was founder and Director of the Laboratory of Chemical Biodynamics, known as the “Roundhouse”, and simultaneously Associate Director of Berkeley Radiation Laboratory, where he conducted much of his research until his retirement in 1980. In his final years of active research, he studied the use of oil-producing plants as renewable sources of energy. He also spent many years testing the chemical evolution of life and wrote a book on the subject that was published in 1969.
The foundation of the Melvin Calvin laboratory
The circular laboratory known as the “Roundhouse” was designed to facilitate collaboration between students and visiting scientists in Calvin’s lab. It was created as Calvin had an insatiable curiosity that drove him to become well versed in many fields and recognize the benefits of cross disciplinary collaboration. Open scientific discussion was a large part of his students' everyday lives and he wanted to create a community space where all kinds of minds and knowledge were brought together. In order to help facilitate this in the Roundhouse, he brought in post doctoral students and guest scientists from all around the world.
Calvin established a community within the Roundhouse where students and staff members felt they could truly realize their potential. His management skills became renowned, and many creative scientific environments are modeled after them today. He was known as "Mr. Photosynthesis", although his organizational and management skills influenced the scientific community well beyond that field.
Discovery of the Calvin cycle
The discovery of the Calvin cycle would start by building on the research done by Sam Ruben and Martin Kamen after their work on the carbon-14 isotope came to an end after Ruben’s accidental death in the laboratory and Kamen found himself in trouble over security breaches with the FBI and Department of State. Despite this Ernest Lawrence, the Radiation Laboratory director, was proud of the work they had done and wanted to see the research furthered so he along with Wendell Latimer, the Dean of Chemistry and Chemical Engineering, recruited Calvin in 1945.
The lab's original focus was on the applications of Carbon-14 in medicine and synthesis of radio-labeled amino acids and biological metabolites for medical research. Calvin began to establish the lab by recruiting strong chemists in labs across the country. He then recruited Andrew Benson, who had worked with Ruben and Kamen previously on photosynthesis and C-14, to head that aspect of the lab.
The prevailing theory regarding the production of sugars and other reduced carbon compounds was that it was a "light" reaction, and this theory had yet to be disproven. Benson began his investigation by continuing his earlier work on isolating the product of dark CO2 fixation, and then crystallized the radioactive succinic acid. He also exposed algae to light without CO2 and then immediately transferred them to a dark flask containing CO2, observing that radioactive sucrose was still formed at the same rate as when photosynthesis was carried out entirely in light. Together these results gave definitive evidence of a non-photochemical reduction of CO2.
One issue remained: they now needed to determine the first product of CO2 fixation. To do this, they used paper chromatographic techniques pioneered by W. A. Stepka, which allowed them to determine that the first product of CO2 fixation was the three-carbon compound phosphoglyceric acid (PGA), a long-known product of glucose fermentation per the reaction outlined years earlier by Ruben and Kamen.
After this discovery, a competing lab at the University of Chicago was unable to confirm it and mounted a strong attack on Calvin's published results. This led to a symposium sponsored by the American Association for the Advancement of Science to determine which lab was correct. Though met with resistance at the conference, Calvin and Benson were able to convince the audience of their position and the attack was dismissed.
After this first identification, the remaining members of the glycolytic sequence, save for two, were identified based on their chemical behavior. The two unknown components were sugars. Benson, after noticing their separation on the paper chromatograms and examining their reactivities, realized they were ketoses. Thanks to the collaboration of James A. Bassham, the compounds could be subjected to periodate degradation. The identification of 14% activity in the carbonyl carbon of one of the sugars turned Bassham's attention to seven-carbon sugars. Despite several more tests, however, Bassham was unable to determine the identities of these two sugars.
Further experimentation showed that restricting the uptake of CO2 increased the level of ribulose bisphosphate, an indication that it was the acceptor molecule for CO2. Though the mechanism was not immediately obvious, Calvin later determined one, called the novel carboxylation mechanism, which led to the completion of the series in 1958.
Public service
Throughout his life, Calvin acted as a public servant in many different capacities. He served as president of the American Chemical Society, the American Society of Plant Physiology, and the Pacific Division of the American Association for the Advancement of Science. Along with all of this he also served as chairman of the Committee on Science and Public Policy for the National Academy of Sciences.
One major contribution he had as a public servant was his work with NASA. In collaboration with NASA, he assisted in the creation of plans to protect the Moon against biological contamination from the Earth and the Earth from contamination from the Moon during the Apollo missions. As well as, helping strategies on how to best bring back lunar samples and how to search for biological life on other planets.
Along with these capacities, he also worked as a public servant for the U.S. government. He served as a member of the President's Science Advisory Committee from 1963 to 1966 and on the Energy Research Advisory Board, the top advisory body of the Department of Energy.
Finally, he served on many international committees and organizations, including the Joint Commission on Applied Radioactivity of the International Union of Pure and Applied Chemistry, the U.S. Committee of the International Union of Biochemistry, and the Commission on Molecular Biophysics of the International Organization for Pure and Applied Biophysics.
Controversy
In his 2011 television history of Botany for the BBC, Timothy Walker, Director of the University of Oxford Botanic Garden, criticised Calvin's treatment of Andrew Benson, claiming that Calvin had got the credit for Benson's work after firing him, and had failed to mention Benson's role when writing his autobiography decades later. Benson himself has also mentioned being fired by Calvin, and has complained about not being mentioned in his autobiography.
Honours and legacy
1954 - Elected to the United States National Academy of Sciences
1955 - Awarded the Centenary Prize
1958 - Elected a foreign member of the Royal Netherlands Academy of Arts and Sciences
1958 - Elected to the American Academy of Arts and Sciences
1959 - Elected a Member of the German Academy of Sciences Leopoldina
1960 - Elected to the American Philosophical Society
1961 - Melvin Calvin received the Nobel Prize in Chemistry “for his research on the carbon dioxide assimilation in plants”
1964 - Awarded the Davy Medal of the Royal Society
1971 - Honorary Doctor of Laws (LL.D.) degree from Whittier College
1978 - Priestley medal of the American Chemical Society
Calvin was featured on the 2011 volume of the American Scientists collection of US postage stamps, along with Asa Gray, Maria Goeppert-Mayer, and Severo Ochoa. This was the third volume in the series, the first two having been released in 2005 and 2008.
Calvin was awarded 13 other honorary degrees.
Publications
Bassham, J. A., Benson, A. A., and Calvin, M. "The Path of Carbon in Photosynthesis VIII. The Role of Malic Acid.", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Radiation Laboratory–Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (January 25, 1950).
Badin, E. J., and Calvin, M. "The Path of Carbon in Photosynthesis IX. Photosynthesis, Photoreduction, and the Hydrogen-Oxygen-Carbon Dioxide Dark Reaction.", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Radiation Laboratory–Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (February 1, 1950).
Calvin, M., Bassham, J. A., Benson, A. A., Kawaguchi, S., Lynch, V. H., Stepka, W., and Tolbert, N. E."The Path of Carbon in Photosynthesis XIV.", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Radiation Laboratory–Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (June 30, 1951).
Calvin, M. "Photosynthesis: The Path of Carbon in Photosynthesis and the Primary Quantum Conversion Act of Photosynthesis.", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Radiation Laboratory-Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (November 22, 1952).
Bassham, J. A., and Calvin, M. "The Path of Carbon in Photosynthesis", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Lawrence Radiation Laboratory-Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (October 1960).
Calvin, M. "The Path of Carbon in Photosynthesis (Nobel Prize Lecture).", Ernest Orlando Lawrence Berkeley National Laboratory, University of California Radiation Laboratory-Berkeley, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (December 11, 1961).
See also
List of Jewish Nobel laureates
References
External links
including the Nobel Lecture, December 11, 1961 The Path of Carbon in Photosynthesis
Nobel speech and biography
Tribute by Glenn Seaborg and Andrew Benson
Biographical memoir by Glenn Seaborg and Andrew Benson
U.S. Patent 4427511 Melvin Calvin – Photo-induced electron transfer method
Encyclopædia Britannica article
USPS News Release: Celebrating American Scientists Press release for the new Forever Stamp designs featuring Melvin Calvin.
National Academy of Sciences Biographical Memoir
1911 births
1997 deaths
Nobel laureates in Chemistry
American Nobel laureates
20th-century American chemists
Jewish Nobel laureates
Scientists from Saint Paul, Minnesota
Researchers of photosynthesis
National Medal of Science laureates
Michigan Technological University alumni
University of Minnesota College of Science and Engineering alumni
Alumni of the University of Manchester
UC Berkeley College of Chemistry faculty
Foreign members of the Royal Society
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the German National Academy of Sciences Leopoldina
Central High School (Detroit) alumni
Members of the American Philosophical Society | Melvin Calvin | [
"Chemistry"
] | 2,864 | [
"Biochemists",
"Photochemists",
"Photosynthesis",
"Researchers of photosynthesis"
] |
354,593 | https://en.wikipedia.org/wiki/Affine%20representation | In mathematics, an affine representation of a topological Lie group G on an affine space A is a continuous (smooth) group homomorphism from G to the automorphism group of A, the affine group Aff(A). Similarly, an affine representation of a Lie algebra g on A is a Lie algebra homomorphism from g to the Lie algebra aff(A) of the affine group of A.
An example is the action of the Euclidean group E(n) on the Euclidean space En.
Since the affine group in dimension n is a matrix group in dimension n + 1, an affine representation may be thought of as a particular kind of linear representation. We may ask whether a given affine representation has a fixed point in the given affine space A. If it does, we may take that as origin and regard A as a vector space; in that case, we actually have a linear representation in dimension n. This reduction depends on a group cohomology question, in general.
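To make the remark about dimension n + 1 concrete: an affine transformation x ↦ Mx + v of an n-dimensional space (with M invertible and v a translation vector) can be represented by an (n + 1) × (n + 1) matrix acting on vectors padded with a final coordinate 1. This is a standard construction, sketched here in LaTeX notation rather than quoted from any particular source:

```latex
\begin{pmatrix} M & v \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ 1 \end{pmatrix}
=
\begin{pmatrix} M x + v \\ 1 \end{pmatrix},
\qquad
M \in \mathrm{GL}(n,\mathbb{R}),\ v \in \mathbb{R}^{n}.
```

An affine representation of G thus corresponds to a linear representation in one higher dimension that preserves the affine hyperplane on which the last coordinate equals 1.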
See also
Group action
Projective representation
References
Homological algebra
Representation theory of Lie algebras
Representation theory of Lie groups | Affine representation | [
"Mathematics"
] | 230 | [
"Algebra stubs",
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Algebra",
"Homological algebra"
] |
354,618 | https://en.wikipedia.org/wiki/Suspension%20%28chemistry%29 | In chemistry, a suspension is a heterogeneous mixture of a fluid that contains solid particles sufficiently large for sedimentation. The particles may be visible to the naked eye, usually must be larger than one micrometer, and will eventually settle, although the mixture is only classified as a suspension when and while the particles have not settled out.
Properties
A suspension is a heterogeneous mixture in which the solid particles do not dissolve, but get suspended throughout the bulk of the solvent, left floating around freely in the medium. The internal phase (solid) is dispersed throughout the external phase (fluid) through mechanical agitation, with the use of certain excipients or suspending agents.
An example of a suspension would be sand in water. The suspended particles are visible under a microscope and will settle over time if left undisturbed. This distinguishes a suspension from a colloid, in which the colloid particles are smaller and do not settle. Colloids and suspensions are different from solution, in which the dissolved substance (solute) does not exist as a solid, and solvent and solute are homogeneously mixed.
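To give a sense of why suspended particles settle while colloidal particles effectively do not, the sketch below estimates terminal settling velocities from Stokes' law, v = 2 r² (ρp − ρf) g / (9 μ). The particle sizes and densities are illustrative assumptions, and Stokes' law itself only holds for small, dilute, spherical particles at low Reynolds number.

```python
# Sketch: Stokes'-law terminal settling velocity in water.
# v = 2 * r^2 * (rho_particle - rho_fluid) * g / (9 * mu)
G = 9.81                  # m/s^2
WATER_DENSITY = 1000.0    # kg/m^3
WATER_VISCOSITY = 1.0e-3  # Pa*s at roughly 20 C


def stokes_velocity(radius_m, particle_density):
    """Terminal velocity of a small sphere settling in water (Stokes regime)."""
    return 2.0 * radius_m**2 * (particle_density - WATER_DENSITY) * G / (9.0 * WATER_VISCOSITY)


# A ~100 micrometre sand grain settles quickly (roughly 9 mm/s)...
print(stokes_velocity(50e-6, 2650.0))
# ...whereas a 100 nm colloid-sized particle settles negligibly slowly (~1e-8 m/s).
print(stokes_velocity(50e-9, 2650.0))
```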
A suspension of liquid droplets or fine solid particles in a gas is called an aerosol. In the atmosphere, the suspended particles are called particulates and consist of fine dust and soot particles, sea salt, biogenic and volcanogenic sulfates, nitrates, and cloud droplets.
Suspensions are classified on the basis of the dispersed phase and the dispersion medium, where the former is essentially solid while the latter may either be a solid, a liquid, or a gas.
In modern chemical process industries, high-shear mixing technology has been used to create many novel suspensions.
Suspensions are unstable from a thermodynamic point of view but can be kinetically stable over a longer period of time, which in turn can determine a suspension's shelf life. This time span needs to be measured in order to provide accurate information to the consumer and ensure the best product quality.
"Dispersion stability refers to the ability of a dispersion to resist change in its properties over time."
Techniques for monitoring physical stability
Multiple light scattering coupled with vertical scanning is the most widely used technique to monitor the dispersion state of a product, hence identifying and quantifying destabilization phenomena. It works on concentrated dispersions without dilution. When light is sent through the sample, it is back scattered by the particles. The backscattering intensity is directly proportional to the size and volume fraction of the dispersed phase. Therefore, local changes in concentration (sedimentation) and global changes in size (flocculation, aggregation) are detected and monitored. Of primary importance in the analysis of stability in particle suspensions is the value of the zeta potential exhibited by suspended solids. This parameter indicates the magnitude of interparticle electrostatic repulsion and is commonly analyzed to determine how the use of adsorbates and pH modification affect particle repulsion and suspension stabilization or destabilization.
Accelerating methods for shelf life prediction
The kinetic process of destabilisation can be rather long (up to several months or even years for some products), and the formulator is often required to use further accelerating methods in order to reach a reasonable development time for new product design. Thermal methods are the most commonly used; they consist in increasing the temperature to accelerate destabilisation (while staying below the critical temperatures of phase change and degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants, or more generally the interaction forces inside the system. Storing a dispersion at high temperature enables the simulation of real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but can also accelerate destabilisation processes by up to 200 times. Mechanical methods, including vibration, centrifugation and agitation, are also sometimes used; they subject the product to different forces that push the particles together and promote film drainage. However, some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Moreover, segregation of different populations of particles has been highlighted when using centrifugation and vibration.
Examples
Common examples of suspensions include:
Mud or muddy water: where soil, clay, or silt particles are suspended in water.
Flour suspended in water.
Kimchi suspended in vinegar.
Chalk suspended in water.
Sand suspended in water.
See also
References
Colloidal chemistry
Drug delivery devices
Dosage forms
Heterogeneous chemical mixtures | Suspension (chemistry) | [
"Chemistry"
] | 920 | [
"Pharmacology",
"Colloidal chemistry",
"Drug delivery devices",
"Colloids",
"Surface science",
"Chemical mixtures",
"Heterogeneous chemical mixtures"
] |
354,628 | https://en.wikipedia.org/wiki/History%20of%20mental%20disorders | Historically, mental disorders have had three major explanations, namely, the supernatural, biological and psychological models. For much of recorded history, deviant behavior has been considered supernatural and a reflection of the battle between good and evil. When confronted with unexplainable, irrational behavior and by suffering and upheaval, people have perceived evil. In fact, in the Persian Empire from 550 to 330 B.C.E., all physical and mental disorders were considered the work of the devil. Physical causes of mental disorders have been sought in history. Hippocrates was important in this tradition as he identified syphilis as a disease and was, therefore, an early proponent of the idea that psychological disorders are biologically caused. This was a precursor to modern psycho-social treatment approaches to the causation of psychopathology, with the focus on psychological, social and cultural factors. Well known philosophers like Plato, Aristotle, etc., wrote about the importance of fantasies, dreams, and thus anticipated, to some extent, the fields of psychoanalytic thought and cognitive science that were later developed. They were also some of the first to advocate for humane and responsible care for individuals with psychological disturbances.
Ancient period
There is archaeological evidence for the use of trepanation in around 6500 BC, though it is unknown if this was done as a response to mental illnesses, or to treat physiological conditions such as cranial hemorrhaging.
Mesopotamia
Mental illnesses were well known in ancient Mesopotamia, where diseases and mental disorders were believed to be caused by specific deities. Because hands symbolized control over a person, mental illnesses were known as "hands" of certain deities. One psychological illness was known as Qāt Ištar, meaning "Hand of Ishtar". Others were known as "Hand of Shamash", "Hand of the Ghost", and "Hand of the God". Descriptions of these illnesses, however, are so vague that it is usually impossible to determine which illnesses they correspond to in modern terminology. Mesopotamian doctors kept detailed record of their patients' hallucinations and assigned spiritual meanings to them. A patient who hallucinated that he was seeing a dog was predicted to die; whereas, if he saw a gazelle, he would recover. The royal family of Elam was notorious for its members frequently being insane. Erectile dysfunction was recognized as being rooted in psychological problems.
Egypt
Limited notes in an ancient Egyptian document known as the Ebers papyrus appear to describe the affected states of concentration, attention, and emotional distress in the heart or mind. Some of these were interpreted later, and renamed as hysteria and melancholy. Somatic treatments included applying bodily fluids while reciting magical spells. Hallucinogens may have been used as a part of the healing rituals. Religious temples may have been used as therapeutic retreats, possibly for the induction of receptive states to facilitate sleep and the interpretation of dreams.
India
Ancient Hindu scriptures, the Ramayana and the Mahabharata, contain fictional descriptions of depression and anxiety. Mental disorders were generally thought to reflect abstract metaphysical entities, supernatural agents, sorcery and witchcraft. The Charaka Samhita, which is a part of the Hindu Ayurveda ("knowledge of life"), saw ill health as resulting from an imbalance among the three body fluids or forces called Tri-Dosha, which were also held to shape the personality types found among people. Suggested causes included inappropriate diet; disrespect towards the gods, teachers or others; mental shock due to excessive fear or joy; and faulty bodily activity. Treatments included the use of herbs and ointments, charms and prayers, and moral or emotional persuasion. In the Hindu epic Ramayana, King Dasharatha died from despondency, which Shiv Gautam states illustrates major depressive disorder.
China
The earliest known record of mental illness in ancient China dates back to 1100 B.C. Mental disorders were treated mainly under Traditional Chinese medicine using herbs, acupuncture or "emotional therapy". The Inner Canon of the Yellow Emperor described symptoms, mechanisms and therapies for mental illness, emphasizing connections between bodily organs and emotions. The ancient Chinese believed that demonic possession played a role in mental illness during this time period. They felt that areas of emotional outbursts, such as funeral homes, could open up the Wei Chi and allow entities to possess an individual. Trauma was also considered to be something that caused high levels of emotion. Thus, trauma was seen as a possible catalyst for mental illness because it could leave the Wei Chi open to possession. This explains why the ancient Chinese believed that a mental illness was, in reality, a demonic possession. According to Chinese thought, five stages or elements comprised the conditions of imbalance between yin and yang. Mental illness, from this Chinese perspective, was thus considered an imbalance of yin and yang, because optimum health arises from balance with nature.
China was one of the earliest developed civilizations in which medicine and attention to mental disorders were introduced (Soong, 2006). As in the West, Chinese views of mental disorders regressed to a belief in supernatural forces as causal agents. From the later part of the second century through the early part of the ninth century, ghosts and devils were implicated in "ghost-evil" insanity, which presumably resulted from possession by evil spirits. The "Dark Ages" in China, however, were neither as severe (in terms of the treatment of mental patients) nor as long-lasting as in the West. A return to biological, somatic (bodily) views and an emphasis on psychosocial factors occurred in the centuries that followed. In recent history, China has been experiencing a broadening of ideas in mental health services and has been incorporating many ideas from Western psychiatry (Zhang & Lu, 2006).
Greece and Rome
In ancient Greece and Rome, madness was associated stereotypically with aimless wandering and violence. However, Socrates considered positive aspects including prophesying (a 'manic art'); mystical initiations and rituals; poetic inspiration; and the madness of lovers. Now often seen as the very epitome of rational thought and as the founder of philosophy, Socrates freely admitted to experiencing what are now called "command hallucinations" (then called his 'daemon'). Pythagoras also heard voices. Hippocrates (born c. 470 BC) classified mental disorders, including paranoia, epilepsy, mania and melancholia. Hippocrates also mentioned the practice of bloodletting in the fifth century BC.
Through long contact with Greek culture, and their eventual conquest of Greece, the Romans absorbed many Greek (and other) ideas on medicine. The humoral theory fell out of favor in some quarters. The Greek physician Asclepiades (died 40 BC), who practiced in Rome, discarded it and advocated humane treatments: he had insane persons freed from confinement and treated them with natural therapy, such as diet and massages. Aretaeus (died c. 90 AD) argued that it is hard to pinpoint where a mental illness comes from. However, Galen (born 129 AD), practicing in Greece and Rome, revived humoral theory. Galen, however, adopted a single-symptom approach rather than broad diagnostic categories, for example studying separate states of sadness, excitement, confusion and memory loss.
Poets and playwrights such as Homer, Sophocles and Euripides described madmen driven insane by the gods, imbalanced humors or circumstances. As well as the triad (of which mania was often used as an overarching term for insanity), there was a variable and overlapping range of terms for such things as delusion, eccentricity, frenzy, and lunacy. The Roman encyclopedist Celsus argued that insanity is really present when a continuous dementia begins due to the mind being at the mercy of imaginings. He suggested that people must heal their own souls through philosophy and personal strength. He described common practices of dietetics, bloodletting, drugs, talking therapy, incubation in temples, exorcism, incantations and amulets, as well as restraints and "tortures" to restore rationality, including starvation, being terrified suddenly, agitation of the spirit, and stoning and beating. Most, however, did not receive medical treatment but stayed with family or wandered the streets, vulnerable to assault and derision. Accounts of delusions from the time included people who thought themselves to be famous actors or speakers, animals, inanimate objects, or one of the gods. Some were arrested for political reasons, such as Jesus ben Ananias, who was eventually released as a madman after showing no concern for his own fate during torture.
Israel and the Hebrew diaspora
Passages of the Hebrew Bible/Old Testament have been interpreted as describing mood disorders in figures such as Job, King Saul and in the Psalms of David. In the Book of Daniel, King Nebuchadnezzar is described as temporarily losing his sanity. Mental disorder was not a problem like any other, caused by one of the gods, but rather caused by problems in the relationship between the individual and God. They believed that abnormal behavior was the result of possessions that represented the wrath and punishment from God. This punishment was seen as a withdrawal of God's protection and the abandonment of the individual to evil forces.
Since the beginning of the twentieth century, the mental health of Jesus has also been discussed.
Middle Ages
Middle East
Persian and Arabic scholars were heavily involved in translating, analyzing and synthesizing Greek texts and concepts. As the Muslim world expanded, Greek concepts were integrated with religious thought and, over time, new ideas and concepts were developed. Arab texts from this period contain discussions of melancholia, mania, hallucinations, delusions, and other mental disorders. Mental disorder was generally connected to loss of reason, and writings covered links between the brain and disorders as well as the spiritual or mystical meaning of disorders. Scholars of the period also wrote about fear and anxiety, anger and aggression, sadness and depression, and obsessions.
Authors who wrote on mental disorders and/or proposed treatments during this period include Al-Balkhi, Al-Razi, Al-Farabi, Ibn-Sina, Al-Majusi, Abu al-Qasim al-Zahrawi, Averroes, and Najab ud-din Unhammad.
Some thought mental disorder could be caused by possession by a djinn (devil), which could be either good or demon-like. There were sometimes beatings to exorcise the djinn, or alternatively over-zealous attempts at cures. Islamic views often merged with local traditions. In Morocco, the traditional Berber people were animists and the concept of sorcery was integral to the understanding of mental disorder; it was mixed with the Islamic concept of djinn and often treated by religious scholars combining the roles of holy man, sage, seer and sorcerer.
The first bimaristan was founded in Baghdad in the 9th century, and several others of increasing complexity were created throughout the Arab world in the following centuries. Some of them contained wards dedicated to the care of mentally ill patients, most of whom had debilitating illnesses or exhibited violence. In the centuries to come, the Muslim world would eventually serve as a critical way station of knowledge for Renaissance Europe, through the Latin translations of many scientific Islamic texts. Ibn-Sina's (Avicenna's) Canon of Medicine became the standard of medical science in Europe for centuries, together with works of Hippocrates and Galen.
Europe
Conceptions of madness in the Middle Ages in Europe were a mixture of the divine, diabolical, magical and transcendental. Theories of the four humors (black bile, yellow bile, phlegm, and blood) were applied, sometimes separately (a matter of "physic") and sometimes combined with theories of evil spirits (a matter of "faith"). Arnaldus de Villanova (1235–1313) combined "evil spirit" and Galen-oriented "four humours" theories and promoted trephining as a cure to let demons and excess humours escape. Other bodily remedies in general use included purges, bloodletting and whipping. Madness was often seen as a moral issue, either a punishment for sin or a test of faith and character. Christian theology endorsed various therapies, including fasting and prayer for those estranged from God and exorcism of those possessed by the devil. Thus, although mental disorder was often thought to be due to sin, other more mundane causes were also explored, including intemperate diet and alcohol, overwork, and grief. The Franciscan friar Bartholomeus Anglicus (died 1272) described a condition which resembles depression in his encyclopedia, De Proprietatibus Rerum, and he suggested that music would help. A semi-official tract called the Praerogativa regis distinguished between the "natural born idiot" and the "lunatic". The latter term was applied to those with periods of mental disorder, deriving either from Roman mythology describing people "moonstruck" by the goddess Luna or from theories of an influence of the moon.
Episodes of mass dancing mania are reported from the Middle Ages, "which gave to the individuals affected all the appearance of insanity". This was one kind of mass delusion or mass hysteria/panic that has occurred around the world through the millennia.
The care of lunatics was primarily the responsibility of the family. In England, if the family were unable or unwilling, an assessment was made by crown representatives in consultation with a local jury and all interested parties, including the subject himself or herself. The process was confined to those with real estate or personal estate, but it encompassed poor as well as rich and took into account psychological and social issues. Those considered lunatics at the time probably had more support from their communities and families than those diagnosed with mental disorders today, since the focus now is primarily on providing professional medical support. As in other eras, visions were generally interpreted as meaningful spiritual and visionary insights; some may have been causally related to mental disorders, but since hallucinations were culturally supported they may not have had the same connections as today.
Modern period
Europe and the Americas
16th to 18th centuries
Some mentally ill people may have been victims of the witch-hunts that spread in waves in early modern Europe. However, those judged insane were increasingly admitted to local workhouses, poorhouses and jails (particularly the "pauper insane") or sometimes to the new private madhouses. The private madhouses likely grew out of lodging arrangements for single individuals who, in workhouses, were considered disruptive or ungovernable; at first there were only a few, each catering for a handful of people, but they gradually expanded in number (e.g. 16 in London in 1774, and 40 by 1819). Restraints and forcible confinement were used for those thought dangerously disturbed or potentially violent to themselves, others or property. By the mid-19th century there would be 100 to 500 inmates in each. The development of this network of madhouses has been linked to new capitalist social relations and a service economy, which meant families were no longer able or willing to look after disturbed relatives.
Madness was commonly depicted in literary works, such as the plays of Shakespeare.
By the end of the 17th century and into the Enlightenment, madness was increasingly seen as an organic physical phenomenon, no longer involving the soul or moral responsibility. The mentally ill were typically viewed as insensitive wild animals. Harsh treatment and restraint in chains was seen as therapeutic, helping suppress the animal passions. There was sometimes a focus on the management of the environment of madhouses, from diet to exercise regimes to number of visitors. Severe somatic treatments were used, similar to those in medieval times. Madhouse owners sometimes boasted of their ability with the whip. Treatment in the few public asylums was also barbaric, often secondary to prisons. The most notorious was Bedlam where at one time spectators could pay a penny to watch the inmates as a form of entertainment.
Concepts based in humoral theory gradually gave way to metaphors and terminology from mechanics and other developing physical sciences. Complex new schemes were developed for the classification of mental disorders, influenced by emerging systems for the biological classification of organisms and medical classification of diseases.
The terms "crazy" (from Middle English meaning cracked) and "insane" (from Latin insanus meaning unhealthy) came to mean mental disorder in this period. The term "lunacy", long used to refer to periodic disturbance or epilepsy, came to be synonymous with insanity. "Madness", long in use in root form since at least the early centuries AD and originally meaning crippled, hurt or foolish, came to mean loss of reason or self-restraint. "Psychosis", from the Greek for "principle of life/animation", had varied usage referring to a condition of the mind or soul. "Nervous", from an Indo-European root meaning to wind or twist, originally meant muscle or vigor; it was adopted by physiologists to refer to the body's electrochemical signaling process (hence the nervous system) and was then used to refer to nervous disorders and neurosis. "Obsession", from a Latin root meaning to sit on or sit against, originally meant to besiege or be possessed by an evil spirit, and came to mean a fixed idea that could decompose the mind.
With the rise of madhouses and the professionalization and specialization of medicine, there was a considerable incentive for medical doctors to become involved. In the 18th century, they began to stake a claim to a monopoly over madhouses and treatments. Madhouses could be a lucrative business, and many made a fortune from them. There were some bourgeois ex-patient reformers who opposed the often brutal regimes, blaming both the madhouse owners and the medics, who in turn resisted the reforms.
Towards the end of the 18th century, a moral treatment movement developed, that implemented more humane, psychosocial, and personalized approaches. Notable figures included the medic Vincenzo Chiarugi in Italy under Enlightenment leadership; the ex-patient superintendent Pussin and the psychologically inclined medic Philippe Pinel in revolutionary France; the Quakers in England, led by businessman William Tuke; and later, in the United States, campaigner Dorothea Dix.
19th century
The 19th century, in the context of industrialization and population growth, saw a massive expansion of the number and size of insane asylums in every Western country, a process called "the great confinement" or the "asylum era". Laws were introduced to compel authorities to deal with those judged insane by family members and hospital superintendents. Although originally based on the concepts and structures of moral treatment, the asylums became large impersonal institutions overburdened with large numbers of people with a complex mix of mental and social-economic problems. The success of moral treatment had cast doubt on the approach of medics, and many had initially opposed it, but by the mid-19th century many became its advocates while arguing that the mad also often had physical or organic problems, so that both approaches were necessary. This argument has been described as an important step in the profession's eventual success in securing a monopoly on the treatment of lunacy. However, it is well documented that very little therapeutic activity occurred in the new asylum system and that medics were little more than administrators who seldom attended to patients, and then mainly for other physical problems. The "oldest forensic secure hospital in Europe" was opened in 1850 after Sir Thomas Freemantle introduced the bill that was to establish a Central Criminal Lunatic Asylum in Ireland on 19 May 1845.
Clear descriptions of some syndromes, such as the condition that would later be termed schizophrenia, have been identified as relatively rare prior to the 19th century, although interpretations of the evidence and its implications are inconsistent.
Numerous different classification schemes and diagnostic terms were developed by different authorities, taking an increasingly anatomical-clinical descriptive approach. The term "psychiatry" was coined as the medical specialty became more academically established. Asylum superintendents, later to be psychiatrists, were generally called "alienists" because they were thought to deal with people alienated from society; they adopted largely isolated and managerial roles in the asylums while milder "neurotic" conditions were dealt with by neurologists and general physicians, although there was overlap for conditions such as neurasthenia.
In the United States it was proposed that black slaves who tried to escape had a mental disorder termed drapetomania. It was then argued in scientific journals that mental disorders were rare under conditions of slavery but became more common following emancipation, and later that mental illness in African Americans was due to evolutionary factors or various negative characteristics, and that they were not suitable for therapeutic intervention.
By the 1870s in North America, officials who ran Lunatic Asylums renamed them Insane Asylums. By the late century, the term "asylum" had lost its original meaning as a place of refuge, retreat or safety, and was associated with abuses that had been widely publicized in the media, including by ex-patient organization the Alleged Lunatics' Friend Society and ex-patients like Elizabeth Packard.
The relative proportion of the public officially diagnosed with mental disorders was increasing, however. This has been linked to various factors, including possibly humanitarian concern; incentives for professional status/money; a lowered tolerance of communities for unusual behavior due to the existence of asylums to place them in (this affected the poor the most); and the strain placed on families by industrialization.
20th century
The turn of the 20th century saw the development of psychoanalysis, which came to the fore later. Kraepelin's classification gained popularity, including the separation of mood disorders from what would later be termed schizophrenia.
Asylum superintendents sought to improve the image and medical status of their profession. Asylum "inmates" were increasingly referred to as "patients" and asylums renamed as hospitals. Referring to people as having a "mental illness" dates from this period in the early 20th century.
In the United States, a "mental hygiene" movement, originally defined in the 19th century, gained momentum and aimed to "prevent the disease of insanity" through public health methods and clinics. The term mental health became more popular, however. Clinical psychology and social work developed as professions alongside psychiatry. Theories of eugenics led to compulsory sterilization movements in many countries around the world for several decades, often encompassing patients in public mental institutions. World War I saw a massive increase in the conditions that came to be termed "shell shock".
In Nazi Germany, the institutionalized mentally ill were among the earliest targets of sterilization campaigns and covert "euthanasia" programs. It has been estimated that over 200,000 individuals with mental disorders of all kinds were put to death, although their mass murder has received relatively little historical attention. Despite not being formally ordered to take part, psychiatrists and psychiatric institutions were at the center of justifying, planning and carrying out the atrocities at every stage, and "constituted the connection" to the later annihilation of Jews and other "undesirables" such as homosexuals in The Holocaust.
In other areas of the world, funding was often cut for asylums, especially during periods of economic decline, and during wartime in particular many patients starved to death. Soldiers received increased psychiatric attention, and World War II saw the development in the US of a new psychiatric manual for categorizing mental disorders, which along with existing systems for collecting census and hospital statistics led to the first Diagnostic and Statistical Manual of Mental Disorders (DSM). The International Classification of Diseases (ICD) followed suit with a section on mental disorders.
Previously restricted to the treatment of severely disturbed people in asylums, psychiatrists cultivated clients with a broader range of problems, and between 1917 and 1970 the number practicing outside institutions swelled from 8 percent to 66 percent. The term stress, having emerged from endocrinology work in the 1930s, was popularized with an increasingly broad biopsychosocial meaning, and was increasingly linked to mental disorders. "Outpatient commitment" laws were gradually expanded or introduced in some countries.
Lobotomy, insulin shock therapy, electroconvulsive therapy, and the "neuroleptic" chlorpromazine came into use mid-century.
An antipsychiatry movement came to the fore in the 1960s. Deinstitutionalization gradually occurred in the West, with isolated psychiatric hospitals being closed down in favor of community mental health services. However, inadequate services and continued social exclusion often led to many being homeless or in prison. A consumer/survivor movement gained momentum.
Other kinds of psychiatric medication gradually came into use, such as "psychic energizers" and lithium. Benzodiazepines gained widespread use in the 1970s for anxiety and depression, until dependency problems curtailed their popularity. Advances in neuroscience and genetics led to new research agendas. Cognitive behavioral therapy was developed. Through the 1990s, new SSRI antidepressants became some of the most widely prescribed drugs in the world.
The DSM and then ICD adopted new criteria-based classification, representing a return to a Kraepelin-like descriptive system. The number of "official" diagnoses saw a large expansion, although homosexuality was gradually downgraded and dropped in the face of human rights protests. Different regions sometimes developed alternatives such as the Chinese Classification of Mental Disorders or Latin American Guide for Psychiatric Diagnosis.
Lobotomy was introduced in the early 20th century and remained in use until the mid-1950s.
In 1927 insulin coma therapy was introduced and used until 1960. Physicians deliberately put the patient into a low blood sugar coma because they thought that large fluctuations in insulin levels could alter the function of the brain. Risks included prolonged coma. Electroconvulsive therapy (ECT) was later adopted as a substitute for this treatment.
21st century
DSM-IV and previous versions of the Diagnostic and Statistical Manual of Mental Disorders exhibited extremely high comorbidity, diagnostic heterogeneity within categories, and unclear boundaries between them, features that have been interpreted as intrinsic anomalies of the criterial, neopositivistic approach and as leading the system into a state of scientific crisis. Accordingly, a radical rethinking of the concept of mental disorder, and the need for a radical scientific revolution in psychiatric taxonomy, has been proposed.
In 2013, the American Psychiatric Association published the DSM–5 after more than 10 years of research.
See also
Notes and references
Further reading
Mental disorders
Medical sociology | History of mental disorders | [
"Biology"
] | 5,398 | [
"Mental disorders",
"Behavior",
"Human behavior"
] |
354,800 | https://en.wikipedia.org/wiki/Equality%20of%20outcome | Equality of outcome, equality of condition, or equality of results is a political concept which is central to some political ideologies and is used in some political discourse, often in contrast to the term equality of opportunity. It describes a state in which all people have approximately the same material wealth and income, or in which the general economic conditions of everyone's lives are alike.
Achieving equal results generally entails reducing or eliminating material inequalities between individuals or households in society and usually involves a transfer of income or wealth from wealthier to poorer individuals, or adopting other measures to promote equality of condition.
One account in The Journal of Political Philosophy suggested that the term meant "equalising where people end up rather than where or how they begin", but described this sense of the term as "simplistic" since it failed to identify what was supposed to be made equal.
In politics
Political philosophy
According to professor of politics Ed Rooksby, the concept of equality of outcome is an important one in disputes between different political positions, since equality has overall been seen as positive and an important concept that is "deeply embedded in the fabric of modern politics". Conflict between so-called haves and have-nots has happened throughout human civilization and was a focus of philosophers such as Aristotle in his treatise Politics. In political philosophy, there are differing views on whether equal outcomes are beneficial or not. One view is that there is a moral basis for equality of outcome, but that the means of achieving such an outcome can be malevolent.
Writing in the journal Foreign Affairs, analyst George Packer argued that "inequality undermines democracy" in the United States partially because it "hardens society into a class system, imprisoning people in the circumstances of their birth". Packer elaborated that inequality "corrodes trust among fellow citizens" and compared it to an "odorless gas which pervades every corner" of the nation.
In his 1987 book The Passion for Equality, analyst Kenneth Cauthen suggested that there were moral underpinnings for having equal outcomes because there is a common good—which people both contribute to and receive benefits from—and therefore should be enjoyed in common. Cauthen argued that this was a fundamental basis for both equality of opportunity as well as equality of outcome.
One view is that mechanisms to achieve equal outcomes (taking a society with unequal socioeconomic levels and forcing it towards equal outcomes) are fraught with moral as well as practical problems, since they often involve political coercion to compel the transfer.
According to one report in Britain, outcomes matter because unequal outcomes in terms of personal wealth had a strong impact on average life expectancy, such that wealthier people tended to live seven years longer than poorer people and that egalitarian nations tended to have fewer problems with societal issues such as mental illness, violence, teenage pregnancy and other social problems. Authors of the book The Spirit Level contended that "more equal societies almost always do better" on other measures and as a result striving for equal outcomes can have overall beneficial effects for everybody.
In his A Theory of Justice (1971), philosopher John Rawls developed a "second principle of justice" that economic and social inequalities can only be justified if they benefit the most disadvantaged members of society. Rawls further claims that all economically and socially privileged positions must be open to all people equally. Rawls argues that the inequality between a doctor's salary and a grocery clerk's is only acceptable if this is the only way to encourage the training of sufficient numbers of doctors, preventing an unacceptable decline in the availability of medical care (which would therefore disadvantage everyone).
Writing in The New York Times, economist Paul Krugman agreed with Rawls' position in which both equality of opportunity and equality of outcome were linked and suggested that "we should try to create the society each of us would want if we didn't know in advance who we'd be". Krugman favored a society in which hard-working and talented people can get rewarded for their efforts, but in which there was a "social safety net" created by taxes to help the less fortunate. Many have suggested that a society promoting equality of opportunity will resultantly see a higher degree of equality in the outcome and that equalizing a person's socioeconomic starting conditions will result in a meritocratic distribution of economic influence. Such is the basis for left-leaning market-based ideologies such as distributism, ordoliberalism, the Social market economy, and some forms of social democracy.
In The Guardian, commentator Julian Glover writes that equality challenges both left-leaning and right-leaning positions and suggests that the task of left-leaning advocates is to "understand the impossibility and undesirability of equality" while the task for right-leaning advocates was to "realise that a divided and hierarchical society cannot—in the best sense of that word—be fair".
Conservatives and classical liberals criticize attempts to try to fight poverty by redistributive methods as ineffective, arguing that more serious cultural and behavioral problems lock poor people into poverty. Sometimes right-leaning positions have been criticized by left-leaning positions for oversimplifying what is meant by the term equality of outcome and for construing outcomes strictly to mean precisely equal amounts for everybody. In The Guardian, commentator Ed Rooksby criticized the right's tendency to oversimplify and suggested that serious left-leaning advocates would not construe equality to mean "absolute equality of everything". Rooksby wrote that Marx favored the position described in the phrase "from each according to his ability, to each according to his need" and argued that this did not imply strict equality of things, but that it meant that people required "different things in different proportions in order to flourish".
American libertarians and advocates of economic liberalism such as Friedrich Hayek and Milton Friedman tend to see equality of outcome negatively and argue that any effort to cause equal outcomes would necessarily and unfortunately involve coercion by government. Friedman wrote that striving for equality of outcome leaves most people "without equality and without opportunity".
One left-leaning position is that it is simplistic to define equality strictly in terms of outcomes, since exactly what is being equalized remains an open question and differences in preferences, tastes and needs are considerable. Author Mark Penn wrote that "the fundamental principle of centrism in the 1990s was that people would neither be left to fend for themselves nor guaranteed equality of outcome—they would be given the tools they needed to achieve the American dream if they worked hard". On the topic of fairness, Glover writes that fairness "compels no action", comparing it to an "atmospheric ideal, an invisible gas, a miasma" and, using an expression by Winston Churchill, a "happy thought".
Bernard Shaw was one of the few socialist theorists to advocate complete economic equality of outcome right at the beginning of World War One. The vast majority of socialists view an ideal economy as one where remuneration is at least somewhat proportional to the degree of effort and personal sacrifice expended by individuals in the productive process. This latter concept was expressed by Karl Marx's famous maxim: "To each according to his contribution".
Substantive equality
The substantive equality embraced by the Court of Justice of the European Union focuses on equality of outcomes for group characteristics and group outcomes.
Conflation with Marxism, socialism and communism
The German economist and philosopher Karl Marx and his collaborator Frederick Engels are sometimes mistakenly characterized as egalitarians, and the economic systems of socialism and communism are sometimes misconstrued as being based on equality of outcome. In reality, both Marx and Engels regarded the concept of equality as a political concept and value, suited to promoting bourgeois interests, focusing their analysis on more concrete issues such as the laws of motion of capitalism and exploitation based on economic and materialist logic. Marx renounced theorizing on moral concepts and refrained from advocating principles of justice. Marx's views on equality were informed by his analysis of the development of the productive forces in society.
Socialism is based on a principle of distribution whereby individuals receive compensation proportional to the amount of energy and labor they contribute to production ("To each according to his contribution"), which by definition precludes equal outcomes in income distribution. In Marxist theory, communism is based on a principle whereby access to goods and services is based on free and open access (often referred to as distribution based on one's needs); Marx stressed free access to the articles of consumption. Hence the "equality" in a communist society is not about total equality or equality of outcome, but about equal and free access to the articles of consumption. Marx argued that free access to consumption would enable individuals to overcome alienation. In Critique of the Gotha Programme, Marx also took into account how some were more capable than others (such as in height, marital status, skills, etc.), furthering his point against absolute equality.
As opposed to Marxists, George Bernard Shaw, a Fabian socialist, would have socialists place more emphasis on equal distribution than on production. He developed his ideas on economic equality (and its implications for social, democratic, legal, military, and gender concerns) in lectures and articles in the ten years following the writing of his 1905 play on poverty and power, Major Barbara, at the same time as his Fabian colleague Beatrice Webb, the primary author (with her husband Sidney Webb) of the 1909 Minority Report on the Poor Law, was proposing to abolish poverty in industrial societies by introducing what we now call the welfare state. In the 1907 preface to Major Barbara, Shaw was probably the first to argue for what he called "Universal Pensions for Life", now known as universal incomes. Following major lectures on equality in 1910 and 1913, he gave his fullest exposition of economic equality in a series of six highly publicized Fabian public lectures at the end of 1914, "On Redistribution of Income", a phrase, as he put it at the time, that he wanted to get into circulation. Although largely unacknowledged, most of the terms of the equality debate since (for example, in John Rawls and many recent writers on inequality) are as outlined in some detail in Shaw's 1914 series of lectures, where he argued for a gradual incremental process towards equal incomes, mostly by levelling up from the bottom through union activity and labor laws, minimum and basic incomes, as well as by using such mechanisms as income and wealth (inheritance) taxes to prevent incomes rising at the top. In the end, the goal would have been achieved not at absolute equality, but when any remaining income differences would not yield any significant social difference. Like the later Fabian R. H. Tawney, who further developed the equality debate, Shaw considered equality of opportunity obsolete without economic equality. Shaw later expanded his pre-World War One work on equality into his 1928 political treatise, The Intelligent Woman's Guide to Socialism and Capitalism.
Related concepts
Equality of outcome is often compared to related concepts of equality, particularly with equality of opportunity. Generally, most senses of the concept of equality are controversial and are seen differently by people having different political perspectives, but of all of the terms relating to equality, equality of outcome is the most controversial and contentious.
Equality of opportunity generally describes fair competition for important jobs and positions such that contenders have equal chances to win such positions, and applicants are not judged or hampered by unfair or arbitrary discrimination. It entails the "elimination of arbitrary discrimination in the process of selection". The term is usually applied in workplace situations, but has been applied in other areas as well such as housing, lending, and voting rights. The essence is that job seekers have "an equal chance to compete within the framework of goals and the structure of rules established", according to one view. It is generally seen as a procedural value of fair treatment by the rules.
Equality of autonomy is a relatively new concept, a sort of hybrid notion that has been developed by philosopher Amartya Sen and can be thought of as the idea that "the ability and means to choose our life course should be spread as equally as possible across society". It is an equal shot at empowerment, or a chance for each person to develop up to his or her potential, rather than equal goods or equal chances. In a teaching guide, equality of autonomy was explained as "equality in the degree of empowerment people have to make decisions affecting their lives, how much choice and control they have given their circumstances". Sen's approach requires "active intervention of institutions like the state into people's lives" but with an aim towards "fostering of people's self-creation rather than their living conditions". Sen argued that "the ability to convert incomes into opportunities is affected by a multiplicity of individual and social differences that mean some people will need more than others to achieve the same range of capabilities".
Equality of process is related to the general notion of fair treatment and can be thought of as "dealing with inequalities in treatment through discrimination by other individuals and groups, or by institutions and systems, including not being treated with dignity and respect", according to one definition.
Equality of perception is an uncommonly used term meaning that "person should be perceived as being of equal worth".
Outcome versus opportunity
Equality of outcome and equality of opportunity have been contrasted to a great extent. When evaluated in a simple context, the more preferred term in contemporary political discourse is equality of opportunity (or, meaning the same thing, the common variant "equal opportunities"), which the public as well as individual commentators see as the nicer or more "well-mannered" of the two terms. A mainstream political view is that the comparison of the two terms is valid, but that they are somewhat mutually exclusive in the sense that striving for either type of equality would require sacrificing the other to an extent and that achieving equality of opportunity necessarily brings about "certain inequalities of outcome". For example, striving for equal outcomes might require discriminating between groups to achieve these outcomes; or striving for equal opportunities in some types of treatment might lead to unequal results. Equality seeking policies may also have a redistributive focus.
However, the two concepts are not always cleanly contrasted since the notion of equality is complex. Some analysts see the two concepts not as polar opposites but as highly related such that they can not be understood without considering the other term.
In a lamp assembly factory, for example, equality of outcome might mean that workers are all paid equally regardless of how many lamps of acceptable quality they make, which also implies that the workers cannot be fired for producing too few lamps of acceptable quality. This can be contrasted with a payment system such as piece work, which requires that every worker is paid a fixed amount of money per lamp of acceptable quality that the worker makes.
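The contrast can be made concrete with a small, purely illustrative calculation; the following sketch is not part of the article itself, and the worker names, the flat wage of 100, and the piece rate of 2 per acceptable lamp are hypothetical.

```python
# Illustrative sketch of the two payment schemes described above.
# The flat wage of 100 and the piece rate of 2 per acceptable lamp are hypothetical.

def flat_pay(output_per_worker: dict[str, int], wage: int = 100) -> dict[str, int]:
    """Equal outcome: every worker receives the same wage regardless of output."""
    return {worker: wage for worker in output_per_worker}

def piece_rate_pay(output_per_worker: dict[str, int], rate: int = 2) -> dict[str, int]:
    """Piece work: pay is proportional to the number of acceptable lamps made."""
    return {worker: lamps * rate for worker, lamps in output_per_worker.items()}

acceptable_lamps = {"worker_a": 30, "worker_b": 50, "worker_c": 70}

print(flat_pay(acceptable_lamps))        # {'worker_a': 100, 'worker_b': 100, 'worker_c': 100}
print(piece_rate_pay(acceptable_lamps))  # {'worker_a': 60, 'worker_b': 100, 'worker_c': 140}
```

Under the flat scheme all workers end up with identical pay whatever they produce, while under piece work pay tracks output exactly, which is the distinction the paragraph above draws.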
In contemporary political discourse, the concept of equality of outcome has sometimes been criticized as the "politics of envy" and is often seen as more "controversial" than equality of opportunity. One commentator wrote that "equality of opportunity is then set up as the mild-mannered alternative to the craziness of outcome equality". One theorist suggested that an over-emphasis on either type of equality can "come into conflict with individual freedom and merit".
Critics of equality of opportunity note that while it is relatively easier to deal with unfairness toward people of different races or genders, it is much harder to deal with social class since "one can never entirely extract people from their ancestry and upbringing". As a result, critics contend that efforts to bring fairness by equal opportunity are stymied by the difficulty of people having differing starting points at the beginning of the socio-economic competition. A person born into an upper-middle-class family will have greater advantages by the mere fact of birth than a person born into poverty.
One newspaper account criticized the discussion by politicians on the subject of equality as "weasely" and thought that the term was politically correct and vague. Furthermore, when comparing equality of opportunity with equality of outcome, the sense was that the latter type was "worse" for society. Equality of outcome may be incorporated into a philosophy that ultimately seeks equality of opportunity. Moving towards a higher equality of outcome (albeit not perfectly equal) can lead to an environment more adept at providing equality of opportunity by eliminating conditions that restrict the possibility for members of society to fulfill their potential. For example, a child born in a poor, dangerous neighborhood with poor schools and little access to healthcare may be significantly disadvantaged in his attempts to maximize use of talents, no matter how fine his work ethic. Thus even proponents of meritocracy may promote some level of equality of outcome to create a society capable of truly providing equality of opportunity.
While outcomes can usually be measured with a great degree of precision, it is much more difficult to measure the intangible nature of opportunities. That is one reason why many proponents of equal opportunity use measures of equality of outcome to judge success. Analyst Anne Phillips argued that the proper way to assess the effectiveness of the hard-to-measure concept of equality of opportunity is by the extent of the equality of outcome. Nevertheless, she described a single criterion of equality of outcome as problematic—the measure of "preference satisfaction" was "ideologically loaded" while other measures such as income or wealth were inadequate and she advocated an approach which combined data about resources, occupations and roles.
To the extent that inequalities can be passed from one generation to another through tangible gifts and wealth inheritance, some claim that equality of opportunity for children cannot be achieved without equality of outcome for parents. Moreover, access to social institutions is affected by equality of outcome and it is further claimed that rigging equality of outcome can be a way to prevent co-option of non-economic institutions important to social control and policy formation, such as the legal system, media or the electoral process, by powerful individuals or coalitions of wealthy people.
Purportedly, greater equality of outcome is likely to reduce relative poverty, leading to a more cohesive society. However, if taken to an extreme it may lead to greater absolute poverty, if it negatively affects a country's GDP by damaging workers' sense of work ethic by destroying incentives to work harder. Critics of equality of outcome believe that it is more important to raise the standard of living of the poorest in absolute terms. Some critics additionally disagree with the concept of equality of outcome on philosophical grounds. Still others note that poor people of low social status often have a drive, hunger and ambition which ultimately lets them achieve better economic and social outcomes than their initially more advantaged rivals.
A related argument that is often encountered in education, especially in the debates on the grammar school in the United Kingdom and in the debates on gifted education in various countries, says that people by nature have differing levels of ability and initiative which result in some achieving better outcomes than others and it is, therefore, impossible to ensure equality of outcome without imposing inequality of opportunity.
See also
Affirmative action
Anarcho-communism
Classless society
Distributive justice
Egalitarianism
Equality before the law
Equity of condition
Income inequality metrics
Inequity aversion
Relative deprivation
Substantive equality
Substantive rights
References
External links
Equality, from the Stanford Encyclopedia of Philosophy (2007)
Social inequality
Egalitarianism
Affirmative action
Identity politics
Anti-racism
Discrimination
Disability rights
Equality rights | Equality of outcome | [
"Biology"
] | 3,967 | [
"Behavior",
"Aggression",
"Discrimination"
] |
354,977 | https://en.wikipedia.org/wiki/Urban%20planning%20in%20communist%20countries | Urban planning in the Soviet Bloc countries during the Cold War era was dictated by ideological, political, social as well as economic motives. Unlike the urban development in the Western countries, Soviet-style planning often called for the complete redesigning of cities.
This thinking was reflected in the urban design of all communist countries. Most socialist systems exercised a form of centrally controlled development and simplified methods of construction already outlined in the Soviet guidelines at the end of the Stalinist period. The communist planning resulted in the virtually identical city blocks being erected across many nations, even if there were differences in the specifics between each country.
Soviet-style cities are often traced to Modernist ideas in architecture such as those of Le Corbusier and his plans for Paris. The housing developments generally feature tower blocks in park-like settings, standardized and mass-produced using structural insulated panels within a short period of time.
Beginnings of urban planning in communist countries
Many eastern European countries had suffered physical damage during World War II and their economies were in a very poor state. There was a need to reconstruct cities which had been severely damaged due to the war. For example, Warsaw, Poland, had been practically razed to the ground under the planned destruction of Warsaw by German forces after the 1944 Warsaw Uprising. The centre of Dresden, Germany, had been totally destroyed by the 1945 Allied bombardment. Stalingrad had been largely destroyed and only a small number of structures were left standing.
The financial resources of eastern European countries, after nationalization of industry and land, were under total government control. All development and investment had to be financed by the state. In line with their commitment to communism, the first priority was building industry.
Therefore, for the first ten to fifteen years, most resources were directed towards the development of industry and the reconstruction of destroyed cities. In most cases, this reconstruction was executed without any urban planning for several reasons. Firstly, reconstruction had to start immediately as there was not enough time to develop a detailed plan. Secondly, the man-power and expertise for developing urban plans in great numbers were not available.
Oftentimes, destroyed cities were not rebuilt as they were before. Rather, entirely new cities were constructed along the principles of Soviet Socialism. However, the historically significant structures in some large cities were rebuilt. Experts worked to make the restoration resemble the original as much as possible. For example, the old city centre in Warsaw, the Zwinger in Dresden, and many historic buildings in Budapest were restored to their pre-war beauty.
A notable exception is the building of the National Theatre of Bucharest, Romania, which was damaged by bombing in August 1944. Though part of the building was still standing, after taking complete power in 1947, the communist authorities decided to tear down the remains of the building.
In the late 1940s, the USSR developed a new type of high-rise. The first such buildings were built in Moscow: Moscow State University, Kotelnicheskaya Embankment Building, Kudrinskaya Square Building, Hilton Moscow Leningradskaya Hotel, Hotel Ukraina, Ministry of Foreign Affairs, Ministry of Heavy Industry. These were duplicated in some other countries, the main examples being the Palace of Culture and Science in Warsaw and the House of the Free Press in Bucharest. The Stalin Allee (subsequently named Karl-Marx-Allee) in East Berlin was also flanked by buildings having the same Stalinist style, though their concept was different from the Moscow high-rises. These buildings are mainly examples of a new architectural style, but did not involve urban planning to a significant extent, and there is no visible conceptual link between these buildings and their neighborhood.
Construction of these buildings required the demolition of the structures which were located on their sites. The most notorious was the demolition of the Cathedral of Christ the Saviour, erected in Moscow as a memorial of Napoleon's defeat. The site was required for the Palace of the Soviets, which was never built. The demolition of historic buildings, especially churches, to make way for the new communist structures was a general trait of communist urbanism. A more recent example was the Demolition of historical parts of Bucharest by Nicolae Ceauşescu who aimed to rebuild the capital in a socialist realist style.
In other cases, the Soviets preserved historic structures and attempted to erase their non-Soviet significance; instead, they focused on aesthetics and perceived beauty. For example, the Vilnius Cathedral was repurposed as an art museum after the Soviet Union retook Lithuania in 1944. Additionally, the names of streets in Vilnius were changed to more closely reflect Soviet values. Over time, the city began to expand, and in the 1978 Master Plan for Vilnius, new districts were proposed, most of which were residential. New private housing was prohibited from the city center and the old town.
Industrialization brought more people from rural areas to the cities. As few new housing units were built immediately after the war, an already severe housing shortage became worse. Eventually, chronic housing shortages and overcrowding required an extensive program of new construction. As a result, most communist countries adopted the solution used in the USSR, which included strict limits on the living space to which each person was entitled. Generally, each person was entitled to about 9-10 square meters (100 square feet). Often, more than one person had to share the same room. Two or more generations of the same family would often share an apartment originally built for only one nuclear family. There was no space allocated to separate living and dining areas. After the mid-1950s, new housing policies aimed at the mass construction of larger individual apartments.
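As a rough illustration only (not part of the original text; the household sizes shown are hypothetical), the quoted norm of about 9-10 square meters per person translates into household entitlements as in the following sketch:

```python
# Illustrative only: applies the per-person living-space norm quoted above
# (roughly 9-10 square meters per person) to hypothetical household sizes.

PER_PERSON_MIN_M2 = 9   # lower bound of the quoted entitlement
PER_PERSON_MAX_M2 = 10  # upper bound of the quoted entitlement

def entitlement_range(household_size: int) -> tuple[int, int]:
    """Return the (minimum, maximum) living space a household could expect."""
    return household_size * PER_PERSON_MIN_M2, household_size * PER_PERSON_MAX_M2

for size in (1, 2, 4, 6):  # hypothetical household sizes
    low, high = entitlement_range(size)
    print(f"{size} person(s): {low}-{high} m2")
# 1 person(s): 9-10 m2
# 2 person(s): 18-20 m2
# 4 person(s): 36-40 m2
# 6 person(s): 54-60 m2
```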
First attempts of socialist city planning in Eastern Europe
In the process of socialist industrialization, industrial facilities were built not only near existing cities but also in areas where only small rural communities had existed. In such cases, new urban communities emerged in the vicinity of the industrial plants to accommodate the workers. This is the case of Nowa Huta (1949) in Poland, Dunaújváros (1950) in Hungary, and Oneşti (1952) in Romania.
After World War II, dam construction accelerated due to an abundance of new technology. The relocation of people caused by storage reservoirs on large rivers created the need for new communities. Many river-based traditional villages were demolished and their inhabitants relocated. For instance, in Romania, the construction of the Izvorul Muntelui dam on the Bistriţa river required the relocation of several villages with a population of several thousand people.
These trends of the early post-war years were just a sign of what was to follow in the next decades when the constraints of the reconstruction had been overcome and development was undertaken on a much greater scale. However, the first projects highlighted the need for urban planning in the new localities. This also included the design of the entire infrastructure system such as roads, water supply and power supply and also social impact studies, as in many cases the life-style of the population was severely affected. For example, often farmers whose land had been claimed for development would not get replacement farmland or compensation.
Urban development in the 1960s and 1970s
In the big cities few new housing units were constructed and the existing units were overcrowded. Around 1960, the USSR changed its policy and began an extensive program of construction of new apartment buildings, with the introduction of Khrushchevka and the subsequent introduction of Brezhnevka. This trend was immediately followed by all communist countries in Eastern Europe. The development of new neighborhoods in order to extend the housing capacity of cities required an extensive urban planning effort. In most cities, new development took place on the outskirts of the existing cities, incorporating suburbs or undeveloped land into the city. Also, in cities in which slums existed, the slums were redeveloped with modern housing units.
While the actual design and construction of the apartment buildings is not part of the urban planning exercise, the height and type of the buildings, the density of the buildings and other general characteristics were fixed by the planning exercise. Besides, the entire development of the infrastructure had to be planned. This included the transportation system and the roads, water supply, sewerage, power supply, shopping centers, schools and other infrastructure. Flood control was also a concern for cities located in flood prone areas. The planning also covered the industrial zones where new industries were to be located.
In some places, urban planning issues also arose from other infrastructure, mainly the development of waterways. The construction of reservoirs on big rivers in the proximity of cities created new waterfronts which had to be developed. This happened mainly in the Soviet Union, but also in other countries. Some urban planning was also required in the downtown districts where new official buildings were constructed. An example is the development of the area around the congress hall attached to the former royal palace in the center of Bucharest.
Planning of rural localities
The standardization of living conditions (i.e. hot and cold running water, electricity, access to medicine and education, etc.) between workers in the cities and those in the rural farming lands was an important piece of foundational Marxism–Leninism in the Soviet Union. But by the early 1970s it became clear that the gradual evolution towards equal standards of living between urban and rural workers, as prescribed by Marxism–Leninism, was lagging. Making the disparity more glaring, the quality of life in the villages of the European west had come to greatly surpass that in the communist east (where the majority of villages had only electricity). Consequently, the USSR found it necessary to enact policies to improve the lives of villagers and bring its own villages closer to those in the west.
In the Soviet Union, this policy came about through the systematic construction of urban types of residences, mainly multi-story modern apartment blocks, built on the idea that these buildings could provide a degree of comfort that the older peasant houses could not. As part of this plan, smaller villages (typically those with populations under 1,000) were deemed irrational or inefficient, and a variety of remedies could befall them. At the mildest, a village could be slated for a reduction of services, given timely notice of demolition, or its workers could be asked to leave voluntarily.
Romania
In time, large-scale demolitions and enormous reconstruction projects of villages, towns, and cities, in whole or in part, began to take shape. One of the largest and most ambitious of these developments began in 1974 with the goal of turning Romania into a "multilaterally developed socialist society". Urban planning in Romania began early on, as displaced rural Romanians started flocking to the cities. With a "blank canvas" of land, the communist regime hoped to create hundreds of urban industrial centers via investment in schools, medical clinics, housing, and industry.
Although the systematization plan extended, in theory, to the entire country, initial work centered in Moldavia. It also affected such locales as Ceauşescu's own native village of Scorniceşti in Olt County: there, the Ceauşescu family home was the only older building left standing. The initial phase of systematization largely petered out by 1980, at which point only about 10 percent of new housing was being built in historically rural areas.
Given the lack of budget, in many regions systematization did not constitute an effective plan, good or bad, for development. Instead, it constituted a barrier against organic regional growth. New buildings had to be at least two stories high, so peasants could not build small houses. Yards were restricted to 250 square meters and private agricultural plots were banned from within the villages. Despite the obvious negative impact of such a scheme on subsistence agriculture, after 1981 villages were mandated to be agriculturally self-sufficient.
In the mid-1980s the concept of systematization found new life, applied primarily to the area of the nation's capital, Bucharest. Nearby villages were demolished, often in service of large-scale projects such as a canal from Bucharest to the Danube – projects which were later abandoned by Romania's post-communist government. Most dramatically, eight square kilometers in the historic center of Bucharest were leveled. The demolition campaign erased many monuments including 3 monasteries, 20 churches, 3 synagogues, 3 hospitals, 2 theaters and a noted Art Deco sports stadium. This also involved evicting 40,000 people with only a single day's notice and relocating them to new homes, in order to make way for the grandiose Centrul Civic and the immense Palace of the People, a building second in size only to the Pentagon.
The systematization program, especially the destruction of historic churches and monasteries, was protested by several nations, especially Hungary and West Germany, each concerned for their national minorities in Transylvania. Despite these protests, Ceauşescu remained in the relatively good graces of the United States and other Western powers almost to the last, largely because his relatively independent political line rendered him a useful counter to the Soviet Union in Cold War politics.
North Korea
Pyongyang, the capital of North Korea, has a downtown consisting of hundreds of high-rise apartment buildings. North Korean citizens are provided housing by the government, and the quality of that housing depends on social status and household size. The city also has several extraordinarily expansive public spaces, usually built around colossal monuments depicting Juche ideology or commemorating Kim Jong-il and Kim Il Sung.
Car ownership rates in Pyongyang are low, so public transportation is vital to the city. A two-line subway system serves the city, with a network of elaborate stations, many with high ceilings and murals on their walls. An expansive tram network also covers the city. There are no suburbs in Pyongyang, as the government's city planning policies favour high-rise residential development in central areas over lower-density suburban expansion.
People's Republic of China
The development of urban planning in the People's Republic of China (PRC) demonstrates a distinctive approach with Chinese characteristics. It began after the communist takeover in the early 1950s. Through new national urban policies, communist planners first introduced urban planning by applying centralised economic planning and industrialisation, especially in heavy industry.
Phase 1 (1949–1960)
In September 1952, two significant policies were promulgated at an urban development conference: "construction of key cities in co-ordination with the national economic development programme" and "establishment of urban planning structure to strengthen city development". These policies influenced China's urban planning significantly and were at the same time clearly defined by the main direction of the state – centralised economic and industrial development. During the First Five-year Plan (1953–58), the nation determined to develop 156 national key projects and eight key industrial base cities. In this period, vast physical development projects such as industrial bases, community facilities and housing for workers were carried out to achieve national needs and goals. All of these projects were undertaken with the aid of experts from the Soviet Union, particularly in terms of urban economic development and physical urban design. Urban planning at that time was mainly based on Soviet planning principles and the model of post-war Soviet planning practice. Soviet-style communist planning concentrated on "formalistic street patterns and grand design for public buildings and monuments, huge public squares, and the predominance of master plans". The role of communist planners during this period was to focus on the location of factories and industrial plants, the arrangement of service facilities, the layout of industrial towns, the functional division of urban land-use zones and the development of residential districts. Historic preservation was not a priority during this period of development. For example, Mao Zedong allowed Beijing's city walls to be demolished despite their historical significance in order to make room for other uses. The bricks from the walls were used in new development projects ranging from homes to a subway system. By the end of 1959, there were 180 cities, 1,400 towns and more than 2,000 suburban residential settlements for which project plans had been prepared under communist planning.
Phase 2 (1961–1976)
From 1960 to 1976, due to the changing political climate, the development of urban planning in communist China suffered severe setbacks: planning institutions were shut down, planners were reassigned to support development in rural areas and planning documents were destroyed or discarded. During the Great Leap Forward in the early 1960s, utopian socialist planning, which particularly overemphasized large-scale urban development, was seen as superior to Western-style planning. However, due to severe limitations of fiscal and labor resources, first priority in urban planning was given to utopian socialist principles, with people's livelihood relegated to second place. With little attention given to residential amenities and facilities, significant social and physical imbalances resulted from urban development. For instance, in the historic hutong neighborhoods in Beijing, courtyards were routinely replaced with new residential structures in order to accommodate more residents; by the end of this phase, about 30% of these courtyards had residential structures placed on them. Additionally, some anti-urban movements, a typical example being the People's Commune Movement, took place in communist China during this period. The purpose of setting up a commune, seen as a sub-community within cities, was to spread industrial values from urban to rural areas so that the urban-rural gap would eventually be eliminated.
Phase 3 (1977–1984)
In December 1978, a new era of economic and political reform began and accelerated. The major concern of urban planning in communist China shifted to the recognition of the function of cities. Consequently, a nationwide effort to restore urban master plans was started. By the end of 1984, 241 cities and 1,071 counties throughout the nation had completed their master plans. Although these master plans might not technically fulfill the needs of urban development, they at least acted as guidelines for planned and organised urban construction. In addition, some concepts of mega-metropolitan areas were established during this period.
Phase 4 (1985–present)
Contemporary urban planning in China must cope with rapid, unprecedented urbanization and industrialization; China's urbanization rate reached almost 50% by 2010, a stark contrast with previous decades. Under the current Chinese Urban and Rural Planning Act, two tiers – the master plan and the detailed plan – make up the Chinese urban planning system. Reviewing the history of urban planning in China, the contemporary planning norm neither simply follows Soviet-style planning nor rejects advanced Western viewpoints on urban development. Urban renewal and redevelopment are common themes in contemporary Chinese planning, and large swathes of major cities are sometimes torn down at once to allow for new uses. In some cases, residents simply refuse to move out and developers have to adjust their plans accordingly; these residents have been dubbed "nail houses" or "dingzi hu", and there have been many famous cases of such holdouts in Chinese media.
Socialist Federal Republic of Yugoslavia
Post-WWII SFR Yugoslavia followed the earlier urbanist experiments of the Soviet Union and frequently undertook large urban planning projects.
The best-known example is the Novi Zagreb (English: "New Zagreb") urban development scheme in the city of Zagreb – the capital of the Socialist Republic of Croatia.
The district is mostly residential, consisting of blocks of flats and tower blocks that were built during the Socialist era (1945–1990). Although it is not as prestigious as downtown Zagreb, it has been praised for its good road network, public transportation connections and abundance of parks.
The project was started by the mayor of Zagreb, Većeslav Holjevac, as there was a large expanse of empty and undeveloped land south of the Sava river. The land had been seized from the Kaptol church administration following the victory of the communist partisans in World War II. The mayor, seeing the opportunity to build a completely new and modern city under the socialist administration, promptly organized a team of urbanist designers and city planners.
The first complete solution for habitation with public and commercial contents was made for the neighborhood Trnsko by urbanists Zdenko Kolacio, Mirko Maretić and Josip Uhlik with horticulturist Mira Wenzler-Halambek in 1959–1960. It was followed by plans for neighborhood Zapruđe in 1962–1963, also made by Josip Uhlik.
The project was lauded as a great success, the district becoming known for its large amounts of foliage and recreational areas, including parks, museums and sports fields. Considerable care also went into building a modernized and efficient system of transportation and mass transit, with tram and bus lines completed by 1979. Built in a typical Eastern Bloc architectural style, it was designed to house a large number of residents, as construction of the area was driven in part by the need for a workforce to fuel the Zagreb industrialization projects recently set in motion. It also has examples of brutalist architecture, rare for the late period in which the area was constructed.
See also
Eastern Bloc economies
Sotsgorod: Cities for Utopia
Soviet urban planning ideologies of the 1920s
Urban planning
Large panel system building
Eastern bloc housing:
Panelák (Czechoslovakia)
Panelház (Hungary)
Plattenbau (East Germany)
Ugsarmal bair (Mongolian People's Republic)
Systematization (Romania)
Khrushchevka (Soviet Union)
References
Anania, Lidia; Luminea, Cecilia; Melinte, Livia; Prosan, Ana-Nina; Stoica, Lucia; and Ionescu-Ghinea, Neculai, Bisericile osândite de Ceauşescu. București 1977–1989 (1995). Editura Anastasia, Bucharest. In Romanian. Title means "Churches doomed by Ceauşescu". This is very much focused on churches, but along the way provides many details about systematization, especially the demolition to make way for Centrul Civic.
Bucica, Cristina. Legitimating Power in Capital Cities: Bucharest – Continuity Through Radical Change? (PDF), 2000.
Chen, Xiaoyan: Monitoring and Evaluation in China's Urban Planning System: A Case Study of Xuzhou, (PDF), 2009.
Ilchenko M. Utopian spaces: Symbolic transformation of the "Socialist Cities" under post-Soviet conditions //Re-Imagining the city: Municipality and Urbanity Today from a Sociological Perspective", Ed. by M. Smagacz-Poziemska, K. Frysztacki, A. Bukowski. Jagiellonian University Press, 2017. P. 32-52
Ilchenko M. “Socialist cities” under post-Soviet conditions: symbolic changes and new ways of representation // EUROPA REGIONAL, 25. 2017 (2018), 2, pp. 30–44
Kirkby, R J. R. Urbanization in China: Town and Country in a Developing Economy, 1949-2000 A.d. New York: Columbia University Press, 1985. Print.
Tang, Wing-Shing; Chinese Urban Planning at Fifty: An Assessment of the Planning Theory Literature Journal of Planning Literature 2000 14: 347-66
Xie, Yichun and Costa F.J.: in: Cities, Volume 10, Issue 2, May 1993, Pages 103-114
Communist states
Eastern Bloc | Urban planning in communist countries | ["Engineering"] | 4,742 | ["Urban planning", "Architecture"] |
354,978 | https://en.wikipedia.org/wiki/Corbel | In architecture, a corbel is a structural piece of stone, wood or metal jutting from a wall to carry a superincumbent weight, a type of bracket. A corbel is a solid piece of material in the wall, whereas a console is a piece applied to the structure. A piece of timber projecting in the same way was called a "tassel" or a "bragger" in England.
The technique of corbelling, where rows of corbels deeply keyed inside a wall support a projecting wall or parapet, has been used since Neolithic (New Stone Age) times. It is common in medieval architecture and in the Scottish baronial style as well as in the vocabulary of classical architecture, such as the modillions of a Corinthian cornice. The corbel arch and corbel vault use the technique systematically to make openings in walls and to form ceilings. These are found in the early architecture of most cultures, from Eurasia to Pre-Columbian architecture.
A console is more specifically an S-shaped scroll bracket in the classical tradition, with the upper or inner part larger than the lower (as in the first illustration) or outer. Keystones are also often in the form of consoles. Whereas "corbel" is rarely used outside architecture, "console" is widely used for furniture, as in console table, and other decorative arts where the motif appears.
The word corbel comes from Old French and derives from the Latin corbellus, a diminutive of corvus ("raven"), which refers to the beak-like appearance. Similarly, the French refer to a bracket-corbel, usually a load-bearing internal feature, as a corbeau ("crow").
Decorated corbels
Norman (Romanesque) corbels often have a plain appearance, although they may be elaborately carved with stylised heads of humans, animals or imaginary "beasts", and sometimes with other motifs (The Church of St Mary and St David in Kilpeck, Herefordshire is a notable example, with 85 of its original 91 richly carved corbels still surviving).
Similarly, in the Early English period corbels were sometimes elaborately carved, as at Lincoln Cathedral, and sometimes more simply so.
Corbels sometimes end with a point apparently growing into the wall, or forming a knot, and often are supported by angels and other figures. In the later periods the carved foliage and other ornaments used on corbels resemble those used in the capitals of columns.
Throughout England, in half-timber work, wooden corbels ("tassels" or "braggers") abound, carrying window-sills or oriel windows in wood, which also are often carved.
Classical architecture
The corbels carrying balconies in Italy and France were sometimes of great size and richly carved, and some of the finest examples of the Italian Cinquecento (16th century) style are found in them. Taking a cue from 16th-century practice, the Paris-trained designers of 19th-century Beaux-Arts architecture were encouraged to show imagination in varying corbels.
Corbel tables
A corbel table is a projecting moulded string course supported by a range of corbels. Sometimes these corbels carry a small arcade under the string course, the arches of which are pointed and trefoiled. As a rule, the corbel table carries the gutter, but in Lombard work the arcaded corbel table was used as a decoration to subdivide the storeys and break up the wall surface. In Italy, the corbels sometimes carry a moulding, with a plain piece of projecting wall above it forming a parapet.
The corbels carrying the arches of the corbel tables in Italy and France were often elaborately moulded, sometimes in two or three courses projecting over one another; those carrying the machicolations of English and French castles had four courses.
In modern chimney construction, a corbel table is constructed on the inside of a flue in the form of a concrete ring beam supported by a range of corbels. The corbels can be either in-situ or pre-cast concrete. The corbel tables described here are built at approximately ten-metre intervals to ensure stability of the barrel of refractory bricks constructed thereon.
Corbelling
Corbelling, where rows of corbels gradually build a wall out from the vertical, has long been used as a simple kind of vaulting, for example in many Neolithic chambered cairns, where walls are gradually corbelled in until the opening can be spanned by a slab.
Corbelled vaults are very common in early architecture around the world. Different types may be called the beehive house (ancient Britain and elsewhere), the Irish clochán, the pre-Roman nuraghe of Sardinia, and the tholos tombs (or "beehive tombs") of Late Bronze Age Greece and other parts of the Mediterranean.
In medieval architecture, the technique was used to support upper storeys or a parapet projecting forward from the wall plane, often to form machicolations (openings between corbels could be used to drop things onto attackers). This later became a decorative feature, without the openings. Corbelling supporting upper stories and particularly supporting projecting corner turrets subsequently became a characteristic of the Scottish baronial style.
Medieval timber-framed buildings often employ jettying, where upper stories are cantilevered out on projecting wooden beams in a similar manner to corbelling.
Gallery
Short visual history of corbels
See also
Atlas (architecture)
Dentil
Eaves
Fireplace mantel
Modillion
Muqarnas
Notes
References
Citations
Sources
The CRSBI (Corpus of Romanesque Sculpture in Britain and Ireland) website has many examples of carved Norman corbels
External links
Beyond-the-pale—A discursive and richly-illustrated website showing corbels on hundreds of churches in the British Isles, France and Spain, depicting the sins of the flesh and their punishment
An Illustrated Masonry Glossary
Architectural elements
Fortification (architectural elements)
Garden features
Ornaments (architecture)
Scottish baronial architecture
Traditional East Asian architecture | Corbel | ["Technology", "Engineering"] | 1,267 | ["Building engineering", "Architectural elements", "Components", "Architecture"] |
354,988 | https://en.wikipedia.org/wiki/Parapet | A parapet is a barrier that is an upward extension of a wall at the edge of a roof, terrace, balcony, walkway or other structure. The word comes ultimately from the Italian parapetto (parare 'to cover/defend' and petto 'chest/breast'). Where extending above a roof, a parapet may simply be the portion of an exterior wall that continues above the edge line of the roof surface, or may be a continuation of a vertical feature beneath the roof such as a fire wall or party wall. Parapets were originally used to defend buildings from military attack, but today they are primarily used as guard rails, to conceal rooftop equipment, reduce wind loads on the roof, and to prevent the spread of fires.
Parapet types
Parapets may be plain, embattled, perforated or panelled, which are not mutually exclusive terms.
Plain parapets are upward extensions of the wall, sometimes with a coping at the top and corbel below.
Embattled parapets may be panelled, but are pierced, if not purely as a stylistic device, then for the discharge of defensive projectiles.
Perforated parapets are pierced in various designs such as circles, trefoils, or quatrefoils.
Panelled parapets are ornamented by a series of panels, either oblong or square, and more or less enriched, but not perforated. These are common in the Decorated and Perpendicular periods.
Historic parapet walls
The Mirror Wall at Sigiriya, Sri Lanka, built between 477 and 495 AD, is one of the few surviving protective parapet walls from antiquity. Built onto the side of Sigiriya Rock, it ran for a considerable distance and provided protection from inclement weather. Only a portion of this wall exists today, but brick debris and grooves on the rock face along the western side of the rock clearly show where the rest of it once stood.
Parapet roofs
Parapets surrounding roofs are common in London. This dates from the Building Act 1707 which banned projecting wooden eaves in the cities of Westminster and London as a fire risk. Instead an 18-inch brick parapet was required, with the roof set behind. This was continued in many Georgian houses, as it gave the appearance of a flat roof which accorded with the desire for classical proportions.
In Shilpa Shastras, the ancient Indian science of sculpture, a parapet is known as hāra. It is optionally added while constructing a temple. The hāra can be decorated with various miniature pavilions, according to the Kāmikāgama. In the Bible the Hebrews are obligated to build a parapet on the roof of their houses to prevent people falling (Deuteronomy 22:8).
Firewall parapets
Many firewalls are required to have a parapet, a portion of the wall extending above the roof. The parapet is required to be as fire resistant as the lower wall, and extend a distance prescribed by building code.
Bridge parapets
Parapets on bridges and other highway structures (such as retaining walls) prevent users from falling off where there is a drop. They may also be meant to restrict views, to prevent rubbish passing below, and to act as noise barriers.
Bridge parapets may be made from any material, but structural steel, aluminium, timber and reinforced concrete are common. They may be of solid or framed construction.
In European standards, parapets are defined as a sub-category of "vehicle restraint systems" or "pedestrian restraint systems".
Parapets in fortification
A parapet fortification (known as a breastwork when temporary) is a wall of stone, wood or earth on the outer edge of a defensive wall or trench, which shelters the defenders. In medieval castles, they were often crenellated. In later artillery forts, parapets tend to be higher and thicker. They could be provided with embrasures for the fort's guns to fire through, and a banquette or fire-step so that defending infantry could shoot over the top. The top of the parapet often slopes towards the enemy to enable the defenders to shoot downwards; this incline is called the superior talus.
See also
Attic style
Baluster
Merlon
Redoubt
References
Bibliography
Senani Ponnamperuma. The Story of Sigiriya. Panique Pty Ltd, 2013, pp. 124–127, 179.
External links
Victorian Forts glossary
Parapet
What is a Parapet?
Castle architecture
Architectural elements
Bridge components
Protective barriers | Parapet | ["Technology", "Engineering"] | 902 | ["Building engineering", "Architectural elements", "Bridge components", "Components", "Architecture"] |
355,011 | https://en.wikipedia.org/wiki/Icon%20%28computing%29 | In computing, an icon is a pictogram or ideogram displayed on a computer screen in order to help the user navigate a computer system. The icon itself is a quickly comprehensible symbol of a software tool, function, or a data file, accessible on the system and is more like a traffic sign than a detailed illustration of the actual entity it represents. It can serve as an electronic hyperlink or file shortcut to access the program or data. The user can activate an icon using a mouse, pointer, finger, or voice commands. Their placement on the screen, also in relation to other icons, may provide further information to the user about their usage. In activating an icon, the user can move directly into and out of the identified function without knowing anything further about the location or requirements of the file or code.
Icons as parts of the graphical user interface of a computer system, in conjunction with windows, menus and a pointing device (mouse), belong to the much larger topic of the history of the graphical user interface that has largely supplanted the text-based interface for casual use.
Overview
The computing definition of "icon" can include three distinct semiotic elements:
Icon, which resembles its referent (such as a road sign for falling rocks).
This category includes stylized drawings of objects from the office environment or from other professional areas such as printers, scissors, file cabinets and folders.
Index, which is associated with its referent (smoke is a sign of fire).
This category includes stylized drawings used to refer to actions "printer" and "print", "scissors" and "cut" or "magnifying glass" and "search".
Symbol, which is related to its referent only by convention (letters, musical notation, mathematical operators etc.).
This category includes standardized symbols found across many electronic devices, such as the power on/off symbol and the USB icon.
The majority of icons are encoded and decoded using metonymy, synecdoche, and metaphor.
Metaphorical representation characterizes all the major desktop-based computer systems: the desktop metaphor uses iconic representations of objects from the 1980s office environment to transpose attributes from a familiar context and object to an unfamiliar one. This is known as skeuomorphism; an example is the use of the floppy disk to represent saving data, even though floppy disks have been obsolete for roughly a quarter century, it is still recognized as "the save icon".
Metonymy is in itself a subset of metaphors that use one entity to point to another related to it such as using a fluorescent bulb instead of a filament one to represent power saving settings.
Synecdoche is considered as a special case of metonymy, in the usual sense of the part standing for the whole such as a single component for the entire system, speaker driver for the entire audio system settings.
Additionally, a group of icons can be categorised as brand icons, used to identify commercial software programs and related to the brand identity of a company or its software. These commercial icons serve as functional links on the system to the program or data files created by a specific software provider. Although icons are usually depicted in graphical user interfaces, they are sometimes rendered in a TUI using special characters such as MouseText or PETSCII.
The design of all computer icons is constrained by the limitations of the device display. They are limited in size, with a standard size of about a thumbnail for both desktop computer systems and mobile devices. They are also frequently scalable: because they are displayed in different positions in the software, a single icon file such as the Apple Icon Image format can include multiple versions of the same icon, optimized to work at different sizes, in colour or grayscale, and on dark and bright backgrounds.
The colors used, for both the image and the icon background, should stand out on different system backgrounds and among each other. The detailing of the icon image needs to be simple, remaining recognizable in varying graphical resolutions and screen sizes. Computer icons are by definition language-independent but often not culturally independent; they do not rely on letters or words to convey their meaning. These visual parameters place rigid limits on the design of icons, frequently requiring the skills of a graphic artist in their development.
Because of their condensed size and versatility, computer icons have become a mainstay of user interaction with electronic media. Icons also provide rapid entry into the system functionality. On most systems, users can create and delete, replicate, select, click or double-click standard computer icons and drag them to new positions on the screen to create a customized user environment.
Types
Standardized electrical device symbols
A series of recurring computer icons are taken from the broader field of standardized symbols used across a wide range of electrical equipment. Examples of these are the power symbol and the USB icon, which are found on a wide variety of electronic devices. The standardization of electronic icons is an important safety feature on all types of electronics, enabling a user to more easily navigate an unfamiliar system. As a subset of electronic devices, computer systems and mobile devices use many of the same icons; they are incorporated into the design of both the computer hardware and the software. On the hardware, these icons identify the functionality of specific buttons and plugs. In the software, they provide a link to the customizable settings.
System warning icons also belong to the broader area of ISO standard warning signs. These warning icons, first designed to regulate automobile traffic in the early 1900s, have become standardized and widely understood by users without the necessity of further verbal explanations. In designing software operating systems, different companies have incorporated and defined these standard symbols as part of their graphical user interface. For example, the Microsoft MSDN defines the standard icon use of error, warning, information and question mark icons as part of their software development guidelines.
Different organizations are actively involved in standardizing these icons, as well as providing guidelines for their creation and use. The International Electrotechnical Commission (IEC) has defined "Graphical symbols for use on equipment", published as IEC 417, a document which displays IEC standardized icons. Another organization invested in the promotion of effective icon usage is the ICT (information and communications technologies), which has published guidelines for the creation and use of icons. Many of these icons are available on the Internet, either to purchase or as freeware to incorporate into new software.
Metaphorical icons
An icon is a signifier pointing to a signified. Easily comprehensible icons make use of familiar visual metaphors directly connected to the signified: the actions the icon initiates or the content that would be revealed. Metaphor, metonymy and synecdoche are used to encode the meaning in an icon system.
The signified can have multiple natures: virtual objects such as files and applications, actions within a system or an application (e.g. snap a picture, delete, rewind, connect/disconnect etc...), action in the physical world (e.g. print, eject DVD, change volume or brightness etc...) as well as physical objects (e.g. monitor, compact disk, mouse, printer etc...).
The Desktop metaphor
A subgroup of the more visually rich icons is based on objects lifted from a 1970s physical office space and desktop environment. It includes the basic icons used for a file, file folder, trashcan, inbox, together with the spatial real estate of the screen, i.e. the electronic desktop. This model originally enabled users, familiar with common office practices and functions, to intuitively navigate the computer desktop and system. (Desktop Metaphor, pg 2). The icons stand for objects or functions accessible on the system and enable the user to do tasks common to an office space. These desktop computer icons developed over several decades: data files in the 1950s, the hierarchical storage system (i.e. the file folder and filing cabinet) in the 1960s, and finally the desktop metaphor itself (including the trashcan) in the 1970s.
Dr. David Canfield Smith associated the term "icon" with computing in his landmark 1975 PhD thesis "Pygmalion: A Creative Programming Environment". In his work, Dr. Smith envisioned a scenario in which "visual entities", called icons, could execute lines of programming code, and save the operation for later re-execution. Dr. Smith later served as one of the principal designers of the Xerox Star, which became the first commercially available personal computing system based on the desktop metaphor when it was released in 1981. "The icons on [the desktop] are visible concrete embodiments of the corresponding physical objects." The desktop and icons displayed in this first desktop model are easily recognizable by users several decades later, and display the main components of the desktop metaphor GUI.
This model of the desktop metaphor has been adopted by most personal computing systems in the last decades of the 20th century; it remains popular as a "simple intuitive navigation by single user on single system." It is only at the beginning of the 21st century that personal computing is evolving a new metaphor based on Internet connectivity and teams of users, cloud computing. In this new model, data and tools are no longer stored on the single system, instead they are stored someplace else, "in the cloud". The cloud metaphor is replacing the desktop model; it remains to be seen how many of the common desktop icons (file, file folder, trashcan, inbox, filing cabinet) find a place in this new metaphor.
Brand icons for commercial software
A further type of computer icon is more related to the brand identity of the software programs available on the computer system. These brand icons are bundled with their product and installed on a system with the software. They function in the same way as the hyperlink icons described above, representing functionality accessible on the system and providing links to either a software program or data file. Over and beyond this, they act as a company identifier and advertiser for the software or company.
Because these company and program logos represent the company and product itself, much attention is given to their design, done frequently by commercial artists. To regulate the use of these brand icons, they are trademark registered and are considered part of the company's intellectual property.
In closed systems such as iOS and Android, the use of icons is to a degree regulated or guided to create a sense of consistency in the UI.
Overlay icons
On some GUI systems (e.g. Windows), on an icon which represents an object (e.g. a file) a certain additional subsystem can add a smaller secondary icon, laid over the primary icon and usually positioned in one of its corners, to indicate the status of the object which is represented with the primary icon. For instance, the subsystem for locking files can add a "padlock" overlay icon on an icon which represents a file in order to indicate that the file is locked.
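For illustration, the compositing that produces an overlay icon can be sketched in a few lines of image code. The example below is a minimal sketch only, assuming the Pillow imaging library is available; the file names ("file_icon.png", "padlock_overlay.png") are hypothetical and not part of any particular operating system's API.

```python
# Minimal sketch of overlay-icon compositing with Pillow (assumed installed).
# File names are hypothetical placeholders.
from PIL import Image

base = Image.open("file_icon.png").convert("RGBA")          # primary icon
badge = Image.open("padlock_overlay.png").convert("RGBA")   # secondary (status) icon

# Scale the badge to roughly a quarter of the base icon's width.
size = max(1, base.width // 4)
badge = badge.resize((size, size))

# Paste the badge into the lower-left corner, using its own alpha channel
# as the mask so transparent pixels do not overwrite the base icon.
base.paste(badge, (0, base.height - size), badge)
base.save("file_icon_locked.png")
```

A real shell subsystem performs the equivalent composition at draw time rather than writing a new file, but the principle of layering a small status badge over an unchanged primary icon is the same.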
Placement and spacing
To display the growing number of icons that reflect the increasing complexity of a device, different systems have devised different solutions for screen space management. The computer monitor continues to display primary icons on the main page or desktop, allowing quick and easy access to the most commonly used functions. This screen space also invites almost immediate user customization, as the user adds favourite icons and groups related icons together. Secondary icons of system programs are also displayed on the task bar or the system dock. These secondary icons do not provide a link like the primary icons; instead, they are used to show the availability of a tool or file on the system.
Spatial management techniques play a bigger role in mobile devices with their much smaller screen real estate. In response, mobile devices have introduced, among other visual devices, scrolling screen displays and selectable tabs displaying groups of related icons. Even with these evolving display systems, the icons themselves remain relatively constant in both appearance and function.
Above all, the icon itself must remain clearly identifiable on the display screen regardless of its position and size. Programs might display their icon not only as a desktop hyperlink, but also in the program title bar, on the Start menu, in the Microsoft tray or the Apple dock. In each of these locations, the primary purpose is to identify and advertise the program and functionality available. This need for recognition in turn sets specific design restrictions on effective computer icons.
Design
In order to maintain consistency in the look of a device, OS manufacturers offer detailed guidelines for the development and use of icons on their systems. This is true for both standard system icons and third party application icons to be included in the system. The system icons currently in use have typically gone through widespread international acceptance and understandability testing. Icon design factors have also been the topic for extensive usability studies. The design itself involves a high level of skill in combining an attractive graphic design with the required usability features.
Shape
The icon needs to be clear and easily recognizable, able to display on monitors of widely varying size and resolutions. Its shape should be simple with clean lines, without too much detailing in the design. Together with the other design details, the shape also needs to make it unique on the display and clearly distinguishable from other icons.
Color
The icon needs to be colorful enough to easily pick out on the display screen, and contrast well with any background. With the increasing ability to customize the desktop, it is important for the icon itself to display in a standard color which cannot be modified, retaining its characteristic appearance for immediate recognition by the user. Through color it should also provide some visual indicator as to the icon state; activated, available or currently not accessible ("greyed out").
Size and scalability
The standard icon is generally the size of an adult thumb, enabling both easy visual recognition and use in a touchscreen device. For individual devices the display size correlates directly to the size of the screen real estate and the resolution of the display. Because they are used in multiple locations on the screen, the design must remain recognizable at the smallest size, for use in a directory tree or title bar, while retaining an attractive shape in the larger sizes. In addition to scaling, it may be necessary to remove visual details or simplify the subject between discrete sizes. Larger icons serve also as part of the accessibility features for the visually impaired on many computer systems. The width and height of the icon are the same (1:1 aspect ratio) in almost all areas of traditional use.
Motion
Icons can also be augmented with iconographic motion - geometric manipulations applied to a graphical element over time, for example, a scale, rotation, or other deformation. One example is when application icons "wobble" in iOS to convey to the user they are able to be repositioned by being dragged. This is different from an icon with animated graphics, such as a Throbber. In contrast to static icons and icons with animated graphics, kinetic behaviors do not alter the visual content of an element (whereas fades, blurs, tints, and addition of new graphics, such as badges, exclusively alter an icon's pixels). Stated differently, pixels in an icon can be moved, rotated, stretched, and so on - but not altered or added to. Research has shown iconographic motion can act as a powerful and reliable visual cue, a critical property for icons to embody.
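A small script can make the distinction concrete: the frames of a "wobble" are produced purely by rotating the same pixels, without redrawing or recoloring the icon. The following is a minimal sketch, assuming the Pillow imaging library; "app_icon.png" and the chosen angles are hypothetical, and a real interface would render such motion on screen rather than exporting a GIF.

```python
# Minimal sketch of iconographic motion: a wobble built from geometric
# manipulation (rotation) of an otherwise unchanged icon image.
# Assumes Pillow; "app_icon.png" is a hypothetical source file.
from PIL import Image

icon = Image.open("app_icon.png").convert("RGBA")

# Rotate the same image back and forth; the pixel content is never altered,
# only transformed geometrically from frame to frame.
angles = [0, 3, 6, 3, 0, -3, -6, -3]
frames = [icon.rotate(angle) for angle in angles]

frames[0].save(
    "wobble.gif",
    save_all=True,
    append_images=frames[1:],
    duration=60,   # milliseconds per frame
    loop=0,        # loop indefinitely
)
```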
Localization
In its primary function as a symbolic image, the icon design should ideally be divorced from any single language. For products which are targeting the international marketplace, the primary design consideration is that the icon is non-verbal; localizing text in icons is costly and time-consuming.
Cultural context
Beyond text, there are other design elements which can be dependent upon the cultural context for interpretation. These include color, numbers, symbols, body parts and hand gestures. Each of these elements needs to be evaluated for their meaning and relevance across all markets targeted by the product.
Related visual tools
Other graphical devices used in the computer user interface fulfill GUI functions on the system similar to the computer icons described above. However each of these related graphical devices differs in one way or another from the standard computer icon.
Windows
The graphical windows on the computer screen share some of the visual and functional characteristics of the computer icon. Windows can be minimized to an icon format to serve as a hyperlink to the window itself. Multiple windows can be open and even overlapping on the screen. However where the icon provides a single button to initiate some function, the principal function of the window is a workspace, which can be minimized to an icon hyperlink when not in use.
Control widgets
Over time, certain GUI widgets have gradually appeared which are useful in many contexts. These are graphical controls which are used across computer systems and can be intuitively manipulated by the user even in a new context because the user recognises them from having seen them in a more familiar context. Examples of these control widgets are scroll bars, sliders, listboxes and buttons used in many programs. Using these widgets, a user is able to define and manipulate the data and the display for the software program they are working with. The first set of computer widgets was originally developed for the Xerox Alto. Now they are commonly bundled in widget toolkits and distributed as part of a development package. These control widgets are standardized pictograms used in the graphical interface, they offer an expanded set of user functionalities beyond the hyperlink function of computer icons.
Emoticons
Another GUI icon is exemplified by the smiley face, a pictogram embedded in a text message. The smiley, and by extension other emoticons, are used in computer text to convey information in a non-verbal binary shorthand, frequently involving the emotional context of the message. These icons were first developed for computers in the 1980s as a response to the limited storage and transmission bandwidth used in electronic messaging. Since then they have become both abundant and more sophisticated in their keyboard representations of varying emotions. They have developed from keyboard character combinations into real icons. They are widely used in all forms of electronic communications, always with the goal of adding context to the verbal content of the message. In adding an emotional overlay to the text, they have also enabled electronic messages to substitute for and frequently supplant voice-to-voice messaging.
These emoticons are very different from the icon hyperlinks described above. They do not serve as links, and are not part of any system function or computer software. Instead they are part of the communication language of users across systems. For these computer icons, customization and modifications are not only possible but in fact expected of the user.
Hyperlinks
A text hyperlink performs much the same function as the functional computer icon: it provides a direct link to some function or data available on the system. Although they can be customized, these text hyperlinks generally share a standardized recognizable format, blue text with underlining. Hyperlinks differ from functional computer icons in that they are normally embedded in text, whereas icons are displayed as stand-alone on the screen real estate. They are also displayed in text, either as the link itself or a friendly name, whereas icons are defined as being primarily non-textual.
Icon creation
Because of the design requirements, icon creation can be a time-consuming and costly process. A plethora of icon creation tools can be found on the Internet, ranging from professional-level tools through utilities bundled with software development programs to stand-alone freeware. Given this wide availability of icon tools and icon sets, a problem can arise with custom icons that are mismatched in style to the other icons on the system.
Tools
Icons evolved from the early 8-bit pixel art used before 2000 to a more photorealistic appearance featuring effects such as softening, sharpening, edge enhancement, a glossy or glass-like look, and drop shadows rendered with an alpha channel.
Icon editors used on these early platforms usually contain a rudimentary raster image editor capable of modifying images of an icon pixel by pixel, by using simple drawing tools, or by applying simple image filters. Professional icon designers seldom modify icons inside an icon editor and use a more advanced drawing or 3D modeling application instead.
The main function performed by an icon editor is generation of icons from images. An icon editor resamples a source image to the resolution and color depth required for an icon. Other functions performed by icon editors are icon extraction from executable files (exe, dll), creation of icon libraries, or saving individual images of an icon.
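The resampling step described above can be illustrated with a short script. The sketch below assumes the Pillow imaging library and a hypothetical master image "source.png"; it writes a Windows .ICO container holding several pre-resampled sizes of the same image. A real icon editor performs essentially the same conversion, with finer control over color depth and per-size retouching.

```python
# Minimal sketch: generate a multi-resolution Windows .ICO file from a
# larger source image. Assumes Pillow is installed; "source.png" is a
# hypothetical 256x256 (or larger) master image.
from PIL import Image

master = Image.open("source.png").convert("RGBA")

# Pillow resamples the master image to each requested size and stores
# all of the resulting images in a single .ico container.
master.save(
    "app.ico",
    format="ICO",
    sizes=[(16, 16), (32, 32), (48, 48), (256, 256)],
)
```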
All icon editors can make icons for system files (folders, text files, etc.), and for web pages.
These have a file extension of .ICO for Windows and web pages, or .ICNS for the Macintosh. If the editor can also make a cursor, the image can be saved with a file extension of .CUR or .ANI on Windows. Using a new icon is simply a matter of moving the image into the correct folder and using the system tools to select it. In Windows XP, for example, an icon could be associated with a file type through My Computer: opening Tools in the Explorer window, choosing Folder Options, then File Types, selecting a file type, clicking Advanced and selecting the icon.
Developers also use icon editors to make icons for specific program files. Assignment of an icon to a newly created program is usually done within the integrated development environment used to develop that program. However, when creating an application with the Windows API, one can simply add a line to the program's resource script before compilation. Many icon editors can copy a unique icon from a program file for editing; only a few can assign an icon to a program file, a much more difficult task.
Simple icon editors and image-to-icon converters are also available online as web applications.
List of tools
This is a list of notable computer icon software.
Axialis IconWorkshop – Supports both Windows and Mac icons. (Commercial, Windows)
IcoFX – Icon editor supporting Windows Vista and Macintosh icons with PNG compression (Commercial, Windows)
IconBuilder – Plug-in for Photoshop; focused on Mac. (Commercial, Windows/Mac)
Microangelo Toolset – a set of tools (Studio, Explorer, Librarian, Animator, On Display) for editing Windows icons and cursors. (Commercial, Windows)
Microsoft Visual Studio - can author ICO/CUR files but cannot edit 32-bit icon frames with 8-bit transparency. (Commercial, Windows)
The following is a list of raster graphic applications capable of creating and editing icons:
GIMP – Image Editor Supports reading and writing Windows ICO/CUR/ANI files and PNG files that can be converted to Mac .icns files. (Open Source, Free Software, Multi-Platform)
ImageMagick and GraphicsMagick – Command Line image conversion & generation that can be used to create Windows ICO files and PNG files that can be converted to Mac .ICNS files. (Open Source, Free Software, Multi-Platform)
IrfanView – Support converting graphic file formats into Windows ICO files. (Proprietary, free for non-commercial use, Windows)
ResEdit – Supports creating classic Mac OS icon resources. (Proprietary, Discontinued, Classic Mac OS)
See also
Apple Icon Image format
Distinguishable interfaces
Favicon
Font Awesome
ICO (file format)
Icon design
Iconfinder
Resource (Windows)
Semasiography
The Noun Project
Unicode symbols
WIMP (computing)
XPM
References
Further reading
Wolf, Alecia. 2000. "Emotional Expression Online: Gender Differences in Emoticon
Katz, James E., editor (2008). Handbook of Mobile Communication Studies. MIT Press, Cambridge, Massachusetts.
Levine, Philip and Scollon, Ron, editors (2004). Discourse & Technology: Multimodal Discourse Analysis. Georgetown University Press, Washington, D.C.
Abdullah, Rayan and Huebner, Roger (2006). Pictograms, Icons and Signs: A Guide to Information Graphics. Thames & Hudson, London.
Handa, Carolyn (2004). Visual Rhetoric in a Digital World: A Critical Sourcebook. Bedford / St. Martins, Boston.
Zenon W. Pylyshyn and Liam J. Bannon (1989). Perspectives on the Computer Revolution. Ablex, New York.
External links
Graphical user interface elements
Pictograms | Icon (computing) | ["Mathematics", "Technology"] | 5,025 | ["Symbols", "Pictograms", "Graphical user interface elements", "Components"] |
355,024 | https://en.wikipedia.org/wiki/Capital%20%28architecture%29 | In architecture, the capital () or chapiter forms the topmost member of a column (or a pilaster). It mediates between the column and the load thrusting down upon it, broadening the area of the column's supporting surface. The capital, projecting on each side as it rises to support the abacus, joins the usually square abacus and the usually circular shaft of the column. The capital may be convex, as in the Doric order; concave, as in the inverted bell of the Corinthian order; or scrolling out, as in the Ionic order. These form the three principal types on which all capitals in the classical tradition are based.
The Composite order was formalized in the 16th century following Roman Imperial examples such as the Arch of Titus in Rome. It adds Ionic volutes to Corinthian acanthus leaves.
From the highly visible position it occupies in all colonnaded monumental buildings, the capital is often selected for ornamentation; and is often the clearest indicator of the architectural order. The treatment of its detail may be an indication of the building's date.
Capitals occur in many styles of architecture, before and after the classical architecture in which they are so prominent.
Pre-classical Antiquity
Egyptian
The two earliest Egyptian capitals of importance are those based on the lotus and papyrus plants respectively, and these, with the palm tree capital, were the chief types employed by the Egyptians, until under the Ptolemies in the 3rd to 1st centuries BC, various other river plants were also employed, and the conventional lotus capital went through various modifications.
Many motifs of Egyptian ornamentation are symbolic, such as the scarab, or sacred beetle, the solar disk, and the vulture. Other common motifs include palm leaves, the papyrus plant, and the buds and flowers of the lotus.
Some of the most popular types of capitals were the Hathor, lotus, papyrus and Egyptian composite. Most of the types are based on vegetal motifs. Capitals of some columns were painted in bright colors.
Assyrian
Some kind of volute capital is shown in the Assyrian bas-reliefs, but no Assyrian capital has ever been found; the enriched bases exhibited in the British Museum were initially misinterpreted as capitals.
Persian
In the Achaemenid Persian capital, the brackets are carved with two heavily decorated back-to-back animals projecting right and left to support the architrave; on their backs they carry other brackets at right angles to support the cross timbers. The bull is the most common, but there are also lions and griffins. The capital extends below for further than in most other styles, with decoration drawn from the many cultures that the Persian Empire conquered including Egypt, Babylon, and Lydia. There are double volutes at the top and, inverted, bottom of a long plain fluted section which is square, although the shaft of the column is round, and also fluted.
Aegean
The earliest Aegean capital is that shown in the frescoes at Knossos in Crete (1600 BC); it was of the convex type, probably moulded in stucco. Capitals of the second, concave type, include the richly carved examples of the columns flanking the Tomb of Agamemnon in Mycenae (c. 1100 BC): they are carved with a chevron device, and with a concave apophyge on which the buds of some flowers are sculpted.
Proto-Aeolic
Volute capitals, also known as proto-Aeolic capitals, are encountered in Iron-Age Southern Levant and ancient Cyprus, many of them in royal architectural contexts in the kingdoms of Israel and Judah starting from the 9th century BCE, as well as in Moab, Ammon, and at Cypriot sites such as the city-state of Tamassos in the Archaic period.
Classical Antiquity
The orders, structural systems for organising component parts, played a crucial role in the Greeks' search for perfection of ratio and proportion. The Greeks and Romans distinguished three classical orders of architecture, the Doric, Ionic, and Corinthian orders; each had different types of capitals atop the columns of their hypostyle and trabeate monumental buildings. Throughout the Mediterranean Basin, the Near East, and the wider Hellenistic world including the Greco-Bactrian Kingdom and the Indo-Greek Kingdom, numerous variations on these and other designs of capitals co-existed with the regular classical orders. The only architectural treatise of classical antiquity to survive is by the 1st-century BC Roman architect Vitruvius, who discussed the different proportions of each of these orders and made recommendations for how the column capitals of each order were to be constructed and in what proportions. In the Roman world and within the Roman Empire, the Tuscan order was employed, originally from Italy and with a capital similar to Greek Doric capitals, while the Roman imperial period saw the emergence of the Composite order, with a hybrid capital developed from Ionic and Corinthian elements. The Tuscan and Corinthian columns were counted among the classical canon of orders by the architects of Renaissance architecture and Neoclassical architecture.
Greek
Doric
The Doric capital is the simplest of the five Classical orders: it consists of the abacus above an ovolo molding, with an astragal collar set below. It was developed in the lands occupied by the Dorians, one of the two principal divisions of the Greek race. It became the preferred style of the Greek mainland and the western colonies (southern Italy and Sicily). In the Temple of Apollo, Syracuse (c. 700 BC), the echinus moulding has become a more definite form: this in the Parthenon reaches its culmination, where the convexity is at the top and bottom with a delicate uniting curve. The sloping side of the echinus becomes flatter in the later examples, and in the Colosseum at Rome forms a quarter round (see Doric order). In versions where the frieze and other elements are simpler the same form of capital is described as being in the Tuscan order. Doric reached its peak in the mid-5th century BC, and was one of the orders accepted by the Romans. Its characteristics are masculinity, strength and solidity.
The Doric capital consists of a cushion-like convex moulding known as an echinus, and a square slab termed an abacus.
Ionic
In the Ionic capital, spirally coiled volutes are inserted between the abacus and the ovolo. This order appears to have been developed contemporaneously with the Doric, though it did not come into common usage and take its final shape until the mid-5th century BC. The style prevailed in Ionian lands, centred on the coast of Asia Minor and Aegean islands. The order's form was far less set than the Doric, with local variations persisting for many decades. In the Ionic capitals of the archaic Temple of Artemis at Ephesus (560 BC), the width of the abacus is twice that of its depth; consequently, the earliest Ionic capital known was virtually a bracket capital. A century later, in the temple on the Ilissus, the abacus has become square (see the more complete discussion at Ionic order). According to the Roman architect Vitruvius, the Ionic order's main characteristics were beauty, femininity, and slenderness, derived from its basis on the proportion of a woman.
The volutes of an Ionic capital rest on an echinus, almost invariably carved with egg-and-dart. Above the scrolls was an abacus, more shallow than that in Doric examples, and again ornamented with egg-and-dart.
Corinthian
It has been suggested that the foliage of the Greek Corinthian capital was based on the Acanthus spinosus, that of the Roman on the Acanthus mollis. Not all architectural foliage is as realistic as Isaac Ware's (illustration, right) however. The leaves are generally carved in two "ranks" or bands, like one leafy cup set within another. The Corinthian capitals from the Tholos of Epidaurus (400 BC) illustrate the transition between the earlier Greek capital, as at Bassae, and the Roman version that Renaissance and modern architects inherited and refined (See the more complete discussion at Corinthian order).
In Roman architectural practice, capitals are briefly treated in their proper context among the detailing proper to each of the "Orders", in the only complete architectural textbook to have survived from classical times, the De architectura, by Marcus Vitruvius Pollio, better known as Vitruvius, dedicated to the emperor Augustus. The various orders are discussed in Vitruvius' books iii and iv. Vitruvius describes Roman practice in a practical fashion. He gives some tales about the invention of each of the orders, but he does not give a hard and fast set of canonical rules for the execution of capitals.
Two further, specifically Roman orders of architecture have their characteristic capitals, the sturdy and primitive Tuscan capitals, typically used in military buildings, similar to Greek Doric, but with fewer small moldings in its profile, and the invented Composite capitals not even mentioned by Vitruvius, which combined Ionic volutes and Corinthian acanthus capitals, in an order that was otherwise quite similar in proportions to the Corinthian, itself an order that Romans employed much more often than Greeks.
The increasing adoption of Composite capitals signalled a trend towards freer, more inventive (and often more coarsely carved) capitals in Late Antiquity.
Anta
The anta capital is not a capital which is set on top of column, but rather on top of an anta, a structural post integrated into the frontal end of a wall, such as the front of the side wall of a temple.
The top of an anta is often highly decorated, usually with bands of floral motifs. The designs often respond to an order of columns, but usually with a different set of design principles. In order not to protrude excessively from the wall surface, these structures tend to have a rather flat surface, forming brick-shaped capitals, called "anta capitals". Anta capitals are known from the time of the Doric order.
An anta capital can sometimes be qualified as a "sofa" capital or a "sofa anta capital" when the sides of the capital broaden upward, in a shape reminiscent of a couch or sofa.
Anta capitals are sometimes hard to distinguish from pilaster capitals, which are rather decorative, and do not have the same structural role as anta capitals.
Roman
Tuscan
The origins of the Tuscan order lie with the Etruscans and are found on their tombs. Although the Romans perceived it as especially Italianate, the Tuscan capital found on Roman monuments is in fact closer to the Greek Doric order than to Etruscan examples, its capital being nearly identical with the Doric.
Composite
The Romans invented the Composite order by uniting the Corinthian order with the Ionic capital, possibly as early as Augustus's reign. In many versions the Composite order volutes are larger, however, and there is generally some ornament placed centrally between the volutes. Despite this origin, very many Composite capitals in fact treat the two volutes as different elements, each springing from one side of their leafy base. In this, and in having a separate ornament between them, they resemble the Archaic Greek Aeolic order, though this seems not to have been the route of their development in early Imperial Rome. Equally, where the Greek Ionic volute is usually shown from the side as a single unit of unchanged width between the front and back of the column, the Composite volutes are normally treated as four different thinner units, one at each corner of the capital, projecting at some 45° to the façade.
Indian
The Lion Capital of Ashoka
The Lion Capital of Ashoka is an iconic capital which consists of four Asiatic lions standing back to back, on an elaborate base that includes other animals. A graphic representation of it was adopted as the official Emblem of India in 1950. This powerfully carved lion capital from Sarnath stood atop a pillar bearing the edicts of the emperor Ashoka. Like most of Ashoka's capitals, it is brilliantly polished. Located at the site of Buddha's first sermon and the formation of the Buddhist order, it carried imperial and Buddhist symbols, reflecting the universal authority of both the emperor's and the Buddha's words. Minus the inverted bell-shaped lotus flower, the capital has been adopted as the National Emblem of India. The circular base on which the four lions stand carries four more animals (a horse, a bull, an elephant and a lion) separated by wheels, the Ashoka Chakra; this wheel has also been placed at the centre of the National Flag of India.
Indo-Ionic capitals
The Pataliputra capital is a monumental rectangular capital with volutes designs, that was discovered in the palace ruins of the ancient Mauryan Empire capital city of Pataliputra (modern Patna, northeastern India). It is dated to the 3rd century BC. The top is made of a band of rosettes, eleven in total for the fronts and four for the sides. Below that is a band of bead and reel pattern, then under it a band of waves, generally right-to-left, except for the back where they are left-to-right. Further below is a band of egg-and-dart pattern, with eleven "tongues" or "eggs" on the front, and only seven on the back. Below appears the main motif, a flame palmette, growing among pebbles.
The Sarnath capital is a pillar capital, sometimes also described as a "stone bracket", discovered in the archaeological excavations at the ancient Buddhist site of Sarnath. The pillar displays Ionic volutes and palmettes. It has been variously dated from the 3rd century BCE during the Mauryan Empire period, to the 1st century BCE, during the Sunga Empire period.
Indo-Corinthian capitals
Some capitals with strong Greek and Persian influence have been found in northeastern India in the Maurya Empire palace of Pataliputra, dating to the 4th–3rd century BC. Examples such as the Pataliputra capital belong to the Ionic order rather than the later Corinthian order. They are witness to relations between India and the West from that early time.
Indo-Corinthian capitals correspond to the much more abundant Corinthian-style capitals crowning columns or pilasters, which can be found in the northwestern Indian subcontinent, particularly in Gandhara, and usually combine Hellenistic and Indian elements. These capitals are typically dated to the first century BC, and constitute important elements of Greco-Buddhist art.
The Classical design was often adapted, usually taking a more elongated form, and sometimes being combined with scrolls, generally within the context of Buddhist stupas and temples. Indo-Corinthian capitals also incorporated figures of the Buddha or Bodhisattvas, usually as central figures surrounded by, and often under the shade of, the luxurious foliage of Corinthian designs.
Late Antiquity
Byzantine
Byzantine capitals vary widely, mostly developing from the classical Corinthian, but tending to have an even surface level, with the ornamentation undercut with drills. The block of stone was left rough as it came from the quarry, and the sculptor evolved new designs to his own fancy, so that one rarely meets with many repetitions of the same design. One of the most remarkable designs features leaves carved as if blown by the wind; the finest example being at the 8th-century Hagia Sophia (Thessaloniki). Those in the Cathedral of Saint Mark, Venice (1071) specially attracted John Ruskin's fancy. Others appear in Sant'Apollinare in Classe, Ravenna (549).
The capital in San Vitale, Ravenna (547) shows above it the dosseret required to carry the arch, the springing of which was much wider than the abacus of the capital. On eastern capitals the eagle, the lion and the lamb are occasionally carved, but treated conventionally.
There are two types of capitals used at Hagia Sophia: Composite and Ionic. The composite capital that emerged during the Late Roman Empire, mainly in Rome, combines the Corinthian with the Ionic. Composite capitals line the principal space of the nave. Ionic capitals are used behind them in the side spaces, in a mirror position relative to the Corinthian or composite orders (as was their fate well into the 19th century, when buildings were designed for the first time with a monumental Ionic order). At Hagia Sophia, though, these are not the standard imperial statements. The capitals are filled with foliage in all sorts of variations. In some, the small, lush leaves appear to be caught up in the spinning of the scrolls – clearly, a different, nonclassical sensibility has taken over the design.
The capitals at Basilica of San Vitale in Ravenna (Italy) show wavy and delicate floral patterns similar to decorations found on belt buckles and dagger blades. Their inverted pyramidal form has the look of a basket.
Capitals in early Islamic architecture are derived from Graeco-Roman and Byzantine forms, reflecting the training of most of the masons producing them.
Middle Ages
In both periods small columns are often used close together in groups, often around a pier that is in effect a single larger column, or running along a wall surface. The structural importance of the individual column is thereby greatly reduced. In both periods, though there are common types, the sense of a strict order with rules was not maintained, and when the budget allowed, carvers were able to indulge their inventiveness. Capitals were sometimes used to hold depictions of figures and narrative scenes, especially in the Romanesque.
In Romanesque architecture and Gothic architecture capitals throughout western Europe present as much variety as in the East, and for the same reason, that the sculptor evolved his design in accordance with the block he was carving, but in the west variety goes further, because of the clustering of columns and piers.
The earliest type of capital in Lombardy and Germany is known as the cushion-cap, in which the lower portion of the cube block has been cut away to meet the circular shaft. These types were generally painted at first with geometrical designs, afterwards carved.
The finest carving comes from France, especially from the area around Paris. The most varied were carved in 1130–1170.
In Britain and France the figures introduced into the capitals are sometimes full of character; such capitals are referred to as historiated (or figured) capitals. These capitals, however, are not equal to those of the Early English Gothic, in which foliage is treated as if copied from metalwork, and is of infinite variety, being found in small village churches as well as in cathedrals.
Armenian
Armenian capitals are often versions of Byzantine forms. In the 4th–7th centuries the capitals of Armenian facades and masonry structures are tall rectangular blocks that transition into a slab by means of a bell-shaped element. In the structures of the early period (Ereruyk, Tekor, Tsopk, etc.) they were sculpted with plant and animal images and palm motifs. In the 10th century and in the following centuries, capitals are mainly formed by a combination of a cylinder and a slab. The capitals of Armenian palaces, churches, and courtyards (Dvin, Aruch, Zvartnots, Ishkhan, Banak, Haghpat, Sanahin, and the structures of Ani) are diverse and unique.
Renaissance and post-Renaissance
In the Renaissance period the feature became of the greatest importance and its variety almost as great as in the Romanesque and Gothic styles. The flat pilaster, which was employed extensively in this period, called for a planar rendition of the capital, executed in high relief. This affected the designs of capitals. A traditional 15th-century variant of the Composite capital turns the volutes inwards above stiffened leaf carving. In new Renaissance combinations in capital designs most of the ornament can be traced to Classical Roman sources.
The 'Renaissance' was as much a reinterpretation as a revival of Classical norms. For example, the volutes of ancient Greek and Roman Ionic capitals had lain in the same plane as the architrave above them. This had created an awkward transition at the corner – where, for example, the designer of the temple of Athena Nike on the Acropolis in Athens had brought the outside volute of the end capitals forward at a 45-degree angle. This problem was more satisfactorily solved by the 16th-century architect Sebastiano Serlio, who angled outwards all volutes of his Ionic capitals. Since then use of antique Ionic capitals, instead of Serlio's version, has lent an archaic air to the entire context, as in Greek Revival.
There are numerous newly invented orders, sometimes called nonce orders, where a different ornamentation of the capital is typically a key feature. Within the bounds of decorum, a certain amount of inventive play has always been acceptable within the classical tradition. These became increasingly common after the Renaissance. When Benjamin Latrobe redesigned the Senate Vestibule in the United States Capitol in 1807, he introduced six columns that he "Americanized" with ears of corn (maize) substituting for the European acanthus leaves. As Latrobe reported to Thomas Jefferson in August 1809,
These capitals during the summer session obtained me more applause from members of Congress than all the works of magnitude or difficulty that surround them. They christened them the 'corncob capitals'.
Another example is the Delhi Order invented by the British architect Edwin Lutyens for New Delhi's central palace, Viceroy's House, now the Presidential residence Rashtrapati Bhavan, using elements of Indian architecture. Here the capital had a band of vertical ridges, with bells hanging at each corner as a replacement for volutes. The Delhi Order reappears in some later Lutyens buildings including Campion Hall, Oxford.
See also
Impost (architecture)
Pulvino
References
Lewis, Philippa & Gillian Darley (1986) Dictionary of Ornament, NY: Pantheon
External links
Types of capitals used in Medieval Art and Architecture
Columns and entablature
Architectural elements | Capital (architecture) | [
"Technology",
"Engineering"
] | 4,650 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
355,054 | https://en.wikipedia.org/wiki/CAcert.org | CAcert.org is a community-driven certificate authority that issues free X.509 public key certificates. CAcert.org relies heavily on automation and therefore issues only Domain-validated certificates (and not Extended validation or Organization Validation certificates).
These certificates can be used to digitally sign and encrypt email; encrypt code and documents; and to authenticate and authorize user connections to websites via TLS/SSL.
CAcert Inc. Association
On 24 July 2003, Duane Groth incorporated CAcert Inc. as a non-profit association registered in New South Wales, Australia; in September 2024 the association moved to Geneva, Switzerland. CAcert Inc. runs CAcert.org, a community-driven certificate authority.
In 2004, the Dutch Internet pioneer Teus Hagen became involved. He served as a board member and, in 2008, as president.
Certificate Trust status
CAcert.org's root certificates are not included in the most widely deployed certificate stores and have to be added manually by users. As of 2021, most browsers, email clients, and operating systems do not automatically trust certificates issued by CAcert. Thus, users receive an "untrusted certificate" warning upon trying to view a website presenting an X.509 certificate issued by CAcert, or when viewing emails authenticated with CAcert certificates in Microsoft Outlook, Mozilla Thunderbird, etc. CAcert uses its own certificate on its website.
Web browsers
Discussion for inclusion of CAcert root certificate in Mozilla Application Suite and Mozilla Firefox started in 2004. Mozilla had no CA certificate policy at the time. Eventually, Mozilla developed a policy which required CAcert to improve their management system and conduct audits. In April 2007, CAcert formally withdrew its application for inclusion in the Mozilla root program. At the same time, the CA/Browser Forum was established to facilitate communication among browser vendors and Certificate Authorities. Mozilla's advice was incorporated into "baseline requirements" used by most major browser vendors. Progress towards meeting these requirements can hardly be expected in the near future.
Operating systems
FreeBSD included CAcert's root certificate but removed it in 2008, following Mozilla's policy. In 2014, CAcert was removed from Ubuntu, Debian, and OpenBSD root stores. In 2018, CAcert was removed from Arch Linux.
As of Feb 2022, the following operating systems or distributions include the CAcert root certificate by default:
Arch Linux
FreeWRT
Gentoo (app-misc/ca-certificates only when USE flag cacert is set, defaults OFF from version 20161102.3.27.2-r2 )
GRML
Knoppix
Mandriva Linux
MirOS BSD
Openfire
Privatix
Replicant (Android)
As of 2021, the following operating systems or distributions have an optional package with the CAcert root certificate:
Debian
openSUSE
Web of trust
To create higher-trust certificates, users can participate in a web of trust system whereby users physically meet and verify each other's identities. CAcert maintains the number of assurance points for each account. Assurance points can be gained through various means, primarily by having one's identity physically verified by users classified as "Assurers".
Having more assurance points allows users more privileges such as writing a name in the certificate and longer expiration times on certificates. A user with at least 100 assurance points is a Prospective Assurer, and may—after passing an Assurer Challenge—verify other users; more assurance points allow the Assurer to assign more assurance points to others.
CAcert sponsors key signing parties, especially at big events such as CeBIT and FOSDEM.
As of 2021, CAcert's web of trust has over 380,000 verified users.
Root certificate descriptions
Since October 2005, CAcert offers Class 1 and Class 3 root certificates. Class 3 is a high-security subset of Class 1.
See also
Let's Encrypt
CAcert wiki
Further reading
References
Cryptography organizations
Certificate authorities
Transport_Layer_Security
Information privacy
Safety_engineering | CAcert.org | [
"Engineering"
] | 873 | [
"Safety engineering",
"Systems engineering",
"Information privacy",
"Cybersecurity engineering"
] |
355,100 | https://en.wikipedia.org/wiki/Quiver%20%28mathematics%29 | In mathematics, especially representation theory, a quiver is another name for a multidigraph; that is, a directed graph where loops and multiple arrows between two vertices are allowed. Quivers are commonly used in representation theory: a representation of a quiver assigns a vector space to each vertex of the quiver and a linear map to each arrow .
In category theory, a quiver can be understood to be the underlying structure of a category, but without composition or a designation of identity morphisms. That is, there is a forgetful functor from (the category of categories) to (the category of multidigraphs). Its left adjoint is a free functor which, from a quiver, makes the corresponding free category.
Definition
A quiver Γ = (V, E, s, t) consists of:
The set V of vertices of Γ
The set E of edges of Γ
Two functions: s: E → V giving the start or source of each edge, and another function, t: E → V giving the target of each edge.
This definition is identical to that of a multidigraph.
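To make the definition concrete, the following is a minimal, illustrative sketch in Python of a quiver stored as a multidigraph: a vertex set, an edge set, and the two maps sending each edge to its source and target. The class and method names are ad hoc, not taken from any particular library.

```python
from dataclasses import dataclass, field

@dataclass
class Quiver:
    """A quiver (multidigraph): vertices, edges, and source/target maps."""
    vertices: set = field(default_factory=set)
    edges: set = field(default_factory=set)
    source: dict = field(default_factory=dict)  # edge name -> vertex
    target: dict = field(default_factory=dict)  # edge name -> vertex

    def add_edge(self, name, s, t):
        """Add an edge called `name` from vertex s to vertex t.

        Loops (s == t) and multiple edges between the same pair of
        vertices are allowed, exactly as in a multidigraph."""
        self.vertices.update({s, t})
        self.edges.add(name)
        self.source[name] = s
        self.target[name] = t

# The Kronecker quiver: two vertices and two parallel arrows.
q = Quiver()
q.add_edge("a", 1, 2)
q.add_edge("b", 1, 2)
print(q.source["a"], q.target["a"])  # 1 2
```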
A morphism of quivers is a mapping from vertices to vertices which takes directed edges to directed edges. Formally, if Γ = (V, E, s, t) and Γ′ = (V′, E′, s′, t′) are two quivers, then a morphism of quivers consists of two functions m_v: V → V′ and m_e: E → E′ such that the corresponding squares commute. That is, m_v ∘ s = s′ ∘ m_e and m_v ∘ t = t′ ∘ m_e.
Category-theoretic definition
The above definition is based in set theory; the category-theoretic definition generalizes this into a functor from the free quiver to the category of sets.
The free quiver (also called the walking quiver, Kronecker quiver, 2-Kronecker quiver or Kronecker category) is a category Q with two objects and four morphisms: the objects are V and E, and the four morphisms are s: E → V, t: E → V, and the identity morphisms id_V and id_E. That is, the free quiver is the category consisting of two parallel arrows between two objects, together with the identities.
A quiver is then a functor Γ: Q → Set. (That is to say, Γ specifies two sets Γ(V) and Γ(E), and two functions Γ(s), Γ(t): Γ(E) → Γ(V); this is the full extent of what it means to be a functor from Q to Set.)
More generally, a quiver in a category C is a functor Γ: Q → C. The category Quiv(C) of quivers in C is the functor category where:
objects are functors
morphisms are natural transformations between functors.
Note that is the category of presheaves on the opposite category .
Path algebra
If Γ is a quiver, then a path in Γ is a sequence of arrows
a_n a_{n−1} ⋯ a_2 a_1
such that the head of a_i is the tail of a_{i+1} for i = 1, …, n−1, using the convention of concatenating paths from right to left. Note that a path in graph theory has a stricter definition, and that this concept instead coincides with what in graph theory is called a walk.
If K is a field then the quiver algebra or path algebra KΓ is defined as a vector space having all the paths (of length ≥ 0) in the quiver as basis (including, for each vertex i of the quiver Γ, a trivial path e_i of length 0; these paths are not assumed to be equal for different i), and multiplication given by concatenation of paths. If two paths cannot be concatenated because the end vertex of the first is not equal to the starting vertex of the second, their product is defined to be zero. This defines an associative algebra over K. This algebra has a unit element if and only if the quiver has only finitely many vertices. In this case, the modules over KΓ are naturally identified with the representations of Γ. If the quiver has infinitely many vertices, then KΓ has an approximate identity given by the elements e_F := Σ_{v ∈ F} e_v, where F ranges over finite subsets of the vertex set of Γ.
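The multiplication of basis elements of the path algebra (concatenate when the endpoints match, otherwise return zero) can be sketched directly. The snippet below is an illustrative sketch with ad hoc names, not a library API; it represents a basis path by its start vertex, end vertex, and tuple of arrows.

```python
# A basis path is recorded as (start_vertex, end_vertex, tuple_of_arrows);
# the trivial path at a vertex v is (v, v, ()).
def compose(p, q):
    """Product p*q of two basis paths of the path algebra.

    With the right-to-left convention used above, p*q means 'first q,
    then p', so the product is nonzero only when q ends where p starts;
    otherwise it is zero in the algebra (returned here as None)."""
    p_start, p_end, p_arrows = p
    q_start, q_end, q_arrows = q
    if q_end != p_start:
        return None                      # endpoints mismatch: zero
    return (q_start, p_end, q_arrows + p_arrows)

# On the quiver 1 --a--> 2 --b--> 3:
a = (1, 2, ("a",))
b = (2, 3, ("b",))
print(compose(b, a))   # (1, 3, ('a', 'b'))  -- the length-2 path "b after a"
print(compose(a, b))   # None                -- the product a*b is zero
```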
If the quiver has finitely many vertices and arrows, and the end vertex and starting vertex of any path are always distinct (i.e. Γ has no oriented cycles), then KΓ is a finite-dimensional hereditary algebra over K. Conversely, if K is algebraically closed, then any finite-dimensional, hereditary, associative algebra over K is Morita equivalent to the path algebra of its Ext quiver (i.e., they have equivalent module categories).
Representations of quivers
A representation V of a quiver Γ is an association of an R-module V(x) to each vertex x of Γ, and a morphism V(a): V(x) → V(y) between the corresponding modules for each arrow a: x → y.
A representation V of a quiver Γ is said to be trivial if V(x) = 0 for all vertices x in Γ.
A morphism f: V → V′ between representations of the quiver Γ is a collection of linear maps f(x): V(x) → V′(x) such that V′(a) ∘ f(x) = f(y) ∘ V(a) for every arrow a in Γ from x to y, i.e. the squares that f forms with the arrows of V and V′ all commute. A morphism f is an isomorphism if f(x) is invertible for all vertices x in the quiver. With these definitions the representations of a quiver form a category.
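As a small illustration (a sketch over the reals, not tied to any library), a representation of the Kronecker quiver can be stored as one matrix per arrow, and a candidate morphism as one matrix per vertex, checked against the commutativity condition f(y) ∘ V(a) = W(a) ∘ f(x) for every arrow a: x → y.

```python
import numpy as np

# Two representations of the Kronecker quiver 1 ⇉ 2 (arrows "a", "b"):
# each arrow gets a matrix mapping the space at vertex 1 (dim 2)
# to the space at vertex 2 (dim 1).
V = {"a": np.array([[1.0, 0.0]]), "b": np.array([[0.0, 1.0]])}
W = {"a": np.array([[2.0, 0.0]]), "b": np.array([[0.0, 2.0]])}

# A candidate morphism f: V -> W is a linear map at each vertex.
f = {1: np.eye(2), 2: 2.0 * np.eye(1)}

def is_morphism(V, W, f, arrows):
    """Check f(2) @ V(a) == W(a) @ f(1) for every arrow a: 1 -> 2."""
    return all(np.allclose(f[2] @ V[a], W[a] @ f[1]) for a in arrows)

print(is_morphism(V, W, f, ["a", "b"]))  # True
```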
If V and W are representations of a quiver Γ, then the direct sum of these representations, V ⊕ W, is defined by (V ⊕ W)(x) = V(x) ⊕ W(x) for all vertices x in Γ, with (V ⊕ W)(a) the direct sum of the linear mappings V(a) and W(a).
A representation is said to be decomposable if it is isomorphic to the direct sum of non-zero representations.
A categorical definition of a quiver representation can also be given. The quiver itself can be considered a category, where the vertices are objects and paths are morphisms. Then a representation of is just a covariant functor from this category to the category of finite dimensional vector spaces. Morphisms of representations of are precisely natural transformations between the corresponding functors.
For a finite quiver Γ (a quiver with finitely many vertices and edges), let KΓ be its path algebra. Let e_i denote the trivial path at vertex i. Then we can associate to the vertex i the projective KΓ-module KΓe_i consisting of linear combinations of paths which have starting vertex i. This corresponds to the representation of Γ obtained by putting a copy of K at each vertex which lies on a path starting at i and 0 on each other vertex. To each edge joining two copies of K we associate the identity map.
This theory was related to cluster algebras by Derksen, Weyman, and Zelevinsky.
Quiver with relations
To enforce commutativity of some squares inside a quiver a generalization is the notion of quivers with relations (also named bound quivers).
A relation on a quiver is a linear combination of paths from .
A quiver with relation is a pair with a quiver and an
ideal of the path algebra. The quotient is the path algebra of .
Quiver Variety
Given the dimensions of the vector spaces assigned to every vertex, one can form a variety which characterizes all representations of that quiver with those specified dimensions, and consider stability conditions. These give quiver varieties, as constructed by .
Gabriel's theorem
A quiver is of finite type if it has only finitely many isomorphism classes of indecomposable representations. Gabriel classified all quivers of finite type, and also their indecomposable representations. More precisely, Gabriel's theorem states that:
A (connected) quiver is of finite type if and only if its underlying graph (when the directions of the arrows are ignored) is one of the ADE Dynkin diagrams: A_n, D_n, E_6, E_7, E_8.
The indecomposable representations are in a one-to-one correspondence with the positive roots of the root system of the Dynkin diagram.
found a generalization of Gabriel's theorem in which all Dynkin diagrams of finite dimensional semisimple Lie algebras occur. This was generalized to all quivers and their corresponding Kac–Moody algebras by Victor Kac.
See also
ADE classification
Adhesive category
Assembly theory
Graph algebra
Group ring
Incidence algebra
Quiver diagram
Semi-invariant of a quiver
Toric variety
Derived noncommutative algebraic geometry - Quivers help encode the data of derived noncommutative schemes
References
Books
Lecture Notes
Quiver representations in toric geometry
Research
Projective toric varieties as fine moduli spaces of quiver representations
Sources
Victor Kac, "Root systems, representations of quivers and invariant theory". Invariant theory (Montecatini, 1982), pp. 74–108, Lecture Notes in Math. 996, Springer-Verlag, Berlin 1983.
Bernšteĭn, I. N.; Gelʹfand, I. M.; Ponomarev, V. A., "Coxeter functors, and Gabriel's theorem" (Russian), Uspekhi Mat. Nauk 28 (1973), no. 2(170), 19–33. Translation on Bernstein's website.
Category theory
Representation theory
Directed graphs | Quiver (mathematics) | [
"Mathematics"
] | 1,666 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Representation theory"
] |
355,128 | https://en.wikipedia.org/wiki/Hobbing | Hobbing is a machining process for gear cutting, cutting splines, and cutting sprockets using a hobbing machine, a specialized milling machine. The teeth or splines of the gear are progressively cut into the material (such as a flat, cylindrical piece of metal or thermoset plastic) by a series of cuts made by a cutting tool called a hob.
Hobbing is relatively fast and inexpensive compared to most other gear-forming processes and is used for a broad range of parts and quantities. Hobbing is especially common for machining spur and helical gears.
A type of skiving that is analogous to the hobbing of external gears can be applied to the cutting of internal gears, which are skived with a rotary cutter (rather than shaped or broached).
Process
Hobbing can create gears that are straight, helical, straight bevel, faced, crowned, wormed, cylkro and chamfered. A hobbing machine uses two skew spindles. One is mounted with a blank workpiece and the other holds the cutter (or “hob”). The angle between the hob's spindle (axis) and the workpiece's spindle varies depending on the type of part being manufactured. For example, if a spur gear is being produced, the spindle is held at the lead angle of the hob, whereas if a helical gear is being produced, the spindle is held at the lead angle of the hob plus the helix angle of the helical gear. The speeds of the two spindles are held at a constant proportion determined by the number of teeth being cut into the blank; for example, for a single-threaded hob with a gear ratio of 40:1 the hob rotates 40 times to each turn of the blank, producing 40 teeth in the blank. If the hob has multiple threads, each hob revolution generates that many teeth, so the hob-to-blank speed ratio is divided by the number of threads on the hob. The hob is then fed up into the workpiece until the correct tooth depth is obtained. To finish the operation, the hob is fed through the workpiece parallel to the blank's axis of rotation.
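A quick back-of-the-envelope calculation of the spindle ratio, assuming the relation described above (this is only an illustrative sketch, not shop practice):

```python
def hob_to_blank_ratio(gear_teeth, hob_threads=1):
    """Hob revolutions per one revolution of the blank.

    Each hob revolution generates as many teeth as the hob has
    threads (starts), so cutting `gear_teeth` teeth requires
    gear_teeth / hob_threads hob revolutions per blank revolution."""
    return gear_teeth / hob_threads

print(hob_to_blank_ratio(40))     # 40.0 -> 40:1 for a single-thread hob
print(hob_to_blank_ratio(40, 2))  # 20.0 -> 20:1 for a double-thread hob
```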
Often during mass production, multiple blanks are stacked using a suitable fixture and cut in one operation.
For very large gears, the blank may be preliminarily gashed to a rough shape to make hobbing more efficient.
Equipment
Hobbing machines, also known as hobbers, come in many sizes to produce different sizes of gears. Tiny instrument gears are produced on small table-top machines, while large-diameter marine gears are produced on large industrial machines. A hobbing machine typically consists of a chuck and tailstock to hold the workpiece, a spindle to mount the hob, and a drive motor.
For a tooth profile which is theoretically involute, the fundamental rack is straight-sided, with sides inclined at the pressure angle of the tooth form, with flat top and bottom. The necessary addendum correction to allow the use of small-numbered pinions can either be obtained by suitable modification of this rack to a cycloidal form at the tips, or by hobbing at a diameter other than the theoretical pitch. Since the gear ratio between hob and blank is fixed, the resulting gear will have the correct pitch on the pitch circle but the tooth thickness will not be equal to the space width.
Hobbing machines are characterized by the largest module or pitch diameter they can generate. For example, a machine with a 10 in capacity can generate gears with up to a 10 in pitch diameter and usually a maximum of a 10 in face width. Most hobbing machines are vertical hobbers, meaning the blank is mounted vertically. Horizontal hobbing machines are usually used for cutting longer workpieces; i.e. cutting splines on the end of a shaft.
The hob
The hob is a cutting tool used to cut the teeth into the workpiece. It is cylindrical in shape with helical cutting teeth. These teeth have grooves that run the length of the hob, which aid in cutting and chip removal. There are also special hobs designed for special gears such as the spline and sprocket gears.
The cross-sectional shape of the hob teeth are almost the same shape as teeth of a rack gear that would be used with the finished product. There are slight changes to the shape for generating purposes, such as extending the hob's tooth length to create a clearance in the gear's roots. Each hob tooth is relieved on its back side to reduce friction.
Most hobs are single-thread hobs, but double-, and triple-thread hobs are used for high production volume shops. Multiple-thread hobs are more efficient but less accurate than single-thread hobs.
Depending on type of gear teeth to be cut, there are custom made hobs and general purpose hobs. Custom made hobs are different from other hobs as they are suited to make gears with modified tooth profiles. Modified tooth profiles are usually used to add strength and reduce size and gear noise.
Common types of hobs include:
Roller chain sprocket hobs
Worm wheel hobs
Spline hobs
Chamfer hobs
Spur and helical gear hobs
Straight side spline hobs
Involute spline hobs
Serration hobs
Semitopping gear hobs
Uses
Hobbing is used to make the following types of finished gears:
Cycloid gears (see below)
Helical gears
Involute gears
Ratchets
Splines
Sprockets
Spur gears
Worm gears
Hobbing is used to produce most throated worm wheels, but certain tooth profiles cannot be hobbed. If any portion of the hob profile is perpendicular to the axis, the hob will not have the cutting clearance generated by the usual backing off process and will not cut well.
Cycloidal forms
For cycloidal gears (as used in BS978-2 Specification for fine pitch gears) and cycloidal-type gears, each module, ratio, and number of teeth in the pinion requires a different hobbing cutter, so the hobbing is ineffective for small-volume production.
To circumvent this problem, a special war-time emergency circular arc gear standard was produced giving a series of close-to-cycloidal forms which could be cut with a single hob for each module for eight teeth and upwards to economize on cutter manufacturing resources. A variant on this is still included in BS978-2a (Gears for instruments and clockwork mechanisms. Cycloidal type gears. Double circular arc type gears).
Tolerances of concentricity of the hob limit the lower modules which can be cut practically by hobbing to about 0.5 module.
History
Christian Schiele of Lancaster England patented the hobbing machine in 1856. It was a simple design, but the rudimentary components are all present in the customary patent drawings. The hob cutting tool and the gear train to provide the appropriate spindle speed ratio are clearly visible. Knowledge of hobbing within the watchmaking trade likely precedes his patent. The next major step forward was in 1897, when Herman Pfauter invented a machine that could cut both traditional “spur” gears and helical gears, driving production further forward.
See also
List of gear nomenclature
References
Bibliography
Further reading
. At p. 303, "The hobbing process conceived in 1856 by Christian Schiele became a practical one for production work as soon as involute-shaped gear teeth superseded the cycloidal type in the 1880s, since the involute hob, like the involute rack, has straight sides (for the worm is a form of continuous rack) so that to make a hob from a worm all one has to do is to gash some teeth in the worm so that it will cut the blank as it is rotated."
; pre-1890 patent not found at eSpaceNet (see British Library remarks); see Google Books reprint which is missing sheets 1 and 2.
. At p. 105, "But it had been recognized that the worm was a form of continuous rack and all that was necessary to cut gears with it was to provide cutting edges on it — to make a hob (Fig. 45). Teeth had been cut by this method probably for the first time by Ramsden in 1768."
Dudley, Darle W. (1969), "The Evolution of the Gear Art", Published by, American Gear Manufacturers Association, Washington D.C., Library of Congress Catalog Card Number: 72-78509
Radzevich, Stephen P. (2017), "Gear cutting tools: science and engineering", CRC Press, Second Edition, . Chapter 1 provides a very comprehensive and contemporary history of Gear Cutting Tools in Chapter 1.
External links
. Has schematics of hobbing machines in figures 8–10.
Machine tools
Gears | Hobbing | [
"Engineering"
] | 1,850 | [
"Machine tools",
"Industrial machinery"
] |
355,140 | https://en.wikipedia.org/wiki/Monad%20%28category%20theory%29 | In category theory, a branch of mathematics, a monad is a triple consisting of a functor T from a category to itself and two natural transformations that satisfy the conditions like associativity. For example, if are functors adjoint to each other, then together with determined by the adjoint relation is a monad.
In concise terms, a monad is a monoid in the category of endofunctors of some fixed category (an endofunctor is a functor mapping a category to itself). According to John Baez, a monad can be considered at least in two ways:
A monad as a generalized monoid; this is clear since a monad is a monoid in a certain category,
A monad as a tool for studying algebraic gadgets; for example, a group can be described by a certain monad.
Monads are used in the theory of pairs of adjoint functors, and they generalize closure operators on partially ordered sets to arbitrary categories. Monads are also useful in the theory of datatypes, the denotational semantics of imperative programming languages, and in functional programming languages, allowing languages without mutable state to do things such as simulate for-loops; see Monad (functional programming).
A monad is also called, especially in old literature, a triple, triad, standard construction and fundamental construction.
Introduction and definition
A monad is a certain type of endofunctor. For example, if F and G are a pair of adjoint functors, with F left adjoint to G, then the composition G ∘ F is a monad. If F and G are inverse to each other, the corresponding monad is the identity functor. In general, adjunctions are not equivalences—they relate categories of different natures. The monad theory matters as part of the effort to capture what it is that adjunctions 'preserve'. The other half of the theory, of what can be learned likewise from consideration of F ∘ G, is discussed under the dual theory of comonads.
Formal definition
Throughout this article, C denotes a category. A monad on C consists of an endofunctor T: C → C together with two natural transformations: η: 1_C → T (where 1_C denotes the identity functor on C) and μ: T² → T (where T² is the functor T ∘ T from C to C). These are required to fulfill the following conditions (sometimes called coherence conditions):
μ ∘ Tμ = μ ∘ μT (as natural transformations T³ → T); here Tμ and μT are formed by "horizontal composition".
μ ∘ Tη = μ ∘ ηT = 1_T (as natural transformations T → T; here 1_T denotes the identity transformation from T to T).
These conditions can also be rewritten as commutative diagrams; see the article on natural transformations for the explanation of the notations Tμ and μT.
The first axiom is akin to the associativity in monoids if we think of μ as the monoid's binary operation, and the second axiom is akin to the existence of an identity element (which we think of as given by η). Indeed, a monad on C can alternatively be defined as a monoid in the category whose objects are the endofunctors of C and whose morphisms are the natural transformations between them, with the monoidal structure induced by the composition of endofunctors.
The power set monad
The power set monad is a monad on the category Set: for a set A let T(A) be the power set of A, and for a function f: A → B let T(f) be the function between the power sets induced by taking direct images under f. For every set A, we have a map η_A: A → T(A), which assigns to every a ∈ A the singleton {a}. The function μ_A: T(T(A)) → T(A) takes a set of sets to its union. These data describe a monad.
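The unit and multiplication of the power set monad translate directly into a few lines of Python (an illustrative sketch, using frozensets so that sets of sets are allowed; the function names are ad hoc):

```python
def eta(a):
    """Unit: send an element to the singleton containing it."""
    return frozenset({a})

def T(f, s):
    """Action of the functor on a function f: take direct images."""
    return frozenset(f(x) for x in s)

def mu(s_of_s):
    """Multiplication: send a set of sets to its union."""
    return frozenset(x for s in s_of_s for x in s)

# One of the monad laws, mu . T(eta) = id, checked on an example:
A = frozenset({1, 2, 3})
print(mu(T(eta, A)) == A)  # True
```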
Remarks
The axioms of a monad are formally similar to the monoid axioms. In fact, monads are special cases of monoids, namely they are precisely the monoids among endofunctors , which is equipped with the multiplication given by composition of endofunctors.
Composition of monads is not, in general, a monad. For example, the double power set functor does not admit any monad structure.
Comonads
The categorical dual definition is a formal definition of a comonad (or cotriple); this can be said quickly in the terms that a comonad for a category is a monad for the opposite category . It is therefore a functor from to itself, with a set of axioms for counit and comultiplication that come from reversing the arrows everywhere in the definition just given.
Monads are to monoids as comonads are to comonoids. Every set is a comonoid in a unique way, so comonoids are less familiar in abstract algebra than monoids; however, comonoids in the category of vector spaces with its usual tensor product are important and widely studied under the name of coalgebras.
Terminological history
The notion of monad was invented by Roger Godement in 1958 under the name "standard construction". Monads have also been called "dual standard construction", "triple", "monoid" and "triad". The term "monad" was in use by 1967 at the latest, when it was employed by Jean Bénabou.
Examples
Identity
The identity functor on a category is a monad. Its multiplication and unit are the identity function on the objects of .
Monads arising from adjunctions
Any adjunction F ⊣ G, with F: C → D left adjoint to G: D → C,
gives rise to a monad on C. This very widespread construction works as follows: the endofunctor is the composite T = G ∘ F.
This endofunctor is quickly seen to be a monad, where the unit map stems from the unit map η: 1_C → G ∘ F of the adjunction, and the multiplication map is constructed using the counit map ε: F ∘ G → 1_D of the adjunction: μ = GεF: T² = G ∘ F ∘ G ∘ F → G ∘ F = T.
In fact, any monad can be found as an explicit adjunction of functors using the Eilenberg–Moore category (the category of T-algebras).
Double dualization
The double dualization monad, for a fixed field k arises from the adjunction
where both functors are given by sending a vector space V to its dual vector space . The associated monad sends a vector space V to its double dual . This monad is discussed, in much greater generality, by .
Closure operators on partially ordered sets
For categories arising from partially ordered sets (with a single morphism from to if and only if ), then the formalism becomes much simpler: adjoint pairs are Galois connections and monads are closure operators.
Free-forgetful adjunctions
For example, let U be the forgetful functor from the category Grp of groups to the category Set of sets, and let F be the free group functor from the category of sets to the category of groups. Then F is left adjoint to U. In this case, the associated monad T = U ∘ F takes a set X and returns the underlying set of the free group Free(X).
The unit map η_X: X → T(X) of this monad includes any set X into T(X) in the natural way, as strings of length 1. Further, the multiplication μ_X: T(T(X)) → T(X) of this monad is the map made out of a natural concatenation or 'flattening' of 'strings of strings'. This amounts to two natural transformations.
The preceding example about free groups can be generalized to any type of algebra in the sense of a variety of algebras in universal algebra. Thus, every such type of algebra gives rise to a monad on the category of sets. Importantly, the algebra type can be recovered from the monad (as the category of Eilenberg–Moore algebras), so monads can also be seen as generalizing varieties of universal algebras.
Another monad arising from an adjunction is when is the endofunctor on the category of vector spaces which maps a vector space to its tensor algebra , and which maps linear maps to their tensor product. We then have a natural transformation corresponding to the embedding of into its tensor algebra, and a natural transformation corresponding to the map from to obtained by simply expanding all tensor products.
Codensity monads
Under mild conditions, functors not admitting a left adjoint also give rise to a monad, the so-called codensity monad. For example, the inclusion
does not admit a left adjoint. Its codensity monad is the monad on sets sending any set X to the set of ultrafilters on X. This and similar examples are discussed in .
Monads used in denotational semantics
The following monads over the category of sets are used in denotational semantics of imperative programming languages, and analogous constructions are used in functional programming.
The maybe monad
The endofunctor of the maybe or partiality monad adds a disjoint point: T(X) = X ⊔ {⊥}.
The unit is given by the inclusion of a set X into X ⊔ {⊥}.
The multiplication maps elements of X to themselves, and the two disjoint points in (X ⊔ {⊥}) ⊔ {⊥} to the single one in X ⊔ {⊥}.
In both functional programming and denotational semantics, the maybe monad models partial computations, that is, computations that may fail.
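In Python the maybe monad can be sketched with `None` playing the role of the added point (an illustrative sketch with ad hoc helper names):

```python
NOTHING = None   # the disjoint point added by the maybe monad

def unit(x):
    """eta: wrap a value as a 'present' result."""
    return ("just", x)

def fmap(f, m):
    """Functor action: apply f under 'just', leave NOTHING alone."""
    return NOTHING if m is NOTHING else ("just", f(m[1]))

def join(mm):
    """mu: collapse T(T(X)) to T(X); an outer or an inner NOTHING
    both become the single NOTHING of T(X)."""
    return NOTHING if mm is NOTHING else mm[1]

print(join(unit(NOTHING)))        # None  -- a failed inner computation propagates
print(join(unit(unit(3))))        # ('just', 3)
print(join(fmap(unit, unit(3))))  # ('just', 3)  -- the other unit law
```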
The state monad
Given a set S, the endofunctor of the state monad maps each set X to the set of functions S → X × S. The component of the unit at X maps each element x ∈ X to the function s ↦ (x, s).
The multiplication maps the function f: S → T(X) × S to the function s ↦ g(s′), where (g, s′) = f(s).
In functional programming and denotational semantics, the state monad models stateful computations.
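A sketch of the same construction in Python, with T(X) represented by functions from the state set S to pairs (value, new state); the names are illustrative only:

```python
def unit(x):
    """eta_X: the computation that returns x and leaves the state unchanged."""
    return lambda s: (x, s)

def join(ff):
    """mu_X: run the outer computation, then run the inner one it returns."""
    def run(s):
        inner, s1 = ff(s)   # ff : S -> (T(X) x S)
        return inner(s1)    # inner : S -> (X x S)
    return run

# Example: a stateful computation over an integer state.
tick = lambda s: (s, s + 1)          # return the current state, then increment it
print(join(unit(tick))(0))           # (0, 1) -- same as running tick itself
```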
The environment monad
Given a set E, the endofunctor of the reader or environment monad maps each set X to the set of functions E → X. Thus, the endofunctor of this monad is exactly the hom functor Hom(E, −). The component of the unit at X maps each element x ∈ X to the constant function e ↦ x.
In functional programming and denotational semantics, the environment monad models computations with access to some read-only data.
The list and set monads
The list or nondeterminism monad maps a set X to the set of finite sequences (i.e., lists) with elements from X. The unit maps an element x in X to the singleton list [x]. The multiplication concatenates a list of lists into a single list.
In functional programming, the list monad is used to model nondeterministic computations. The covariant powerset monad is also known as the set monad, and is also used to model nondeterministic computation.
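The list monad is equally short to write down (again an illustrative sketch):

```python
def unit(x):
    """Wrap a value in a one-element list."""
    return [x]

def fmap(f, xs):
    """Functor action on a function f."""
    return [f(x) for x in xs]

def join(xss):
    """Multiplication: flatten a list of lists."""
    return [x for xs in xss for x in xs]

# Nondeterministic 'choose a sign, then choose an offset':
signs = [+1, -1]
print(join(fmap(lambda s: [s * 10, s * 20], signs)))  # [10, 20, -10, -20]
```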
Algebras for a monad
Given a monad (T, η, μ) on a category C, it is natural to consider T-algebras, i.e., objects of C acted upon by T in a way which is compatible with the unit and multiplication of the monad. More formally, a T-algebra (x, h) is an object x of C together with an arrow h: T(x) → x of C called the structure map of the algebra such that the diagrams expressing h ∘ T(h) = h ∘ μ_x and h ∘ η_x = 1_x
commute.
A morphism f: (x, h) → (x′, h′) of T-algebras is an arrow f: x → x′ of C such that the diagram h′ ∘ T(f) = f ∘ h commutes. T-algebras form a category called the Eilenberg–Moore category and denoted by C^T.
Examples
Algebras over the free group monad
For example, for the free group monad discussed above, a T-algebra is a set X together with a map from the free group generated by X towards X subject to associativity and unitality conditions. Such a structure is equivalent to saying that X is a group itself.
Algebras over the distribution monad
Another example is the distribution monad D on the category of sets. It is defined by sending a set X to the set of functions f: X → [0, 1] with finite support and such that their sum is equal to 1. In set-builder notation, this is the set D(X) = { f: X → [0, 1] : f has finite support and Σ_{x ∈ X} f(x) = 1 }. By inspection of the definitions, it can be shown that algebras over the distribution monad are equivalent to convex sets, i.e., sets equipped with convex-combination operations for each r ∈ [0, 1], subject to axioms resembling the behavior of convex linear combinations in Euclidean space.
Algebras over the symmetric monad
Another useful example of a monad is the symmetric algebra functor Sym(−) on the category of R-modules for a commutative ring R, sending an R-module M to the direct sum of symmetric tensor powers Sym(M) = ⊕_{k ≥ 0} Sym^k(M), where Sym^0(M) = R. For example, Sym(R^n) ≅ R[x_1, …, x_n], where the R-algebra on the right is considered as a module. Then, algebras over this monad are commutative R-algebras. There are also algebras over the monads for the alternating tensors Alt(−) and total tensor functors T(−), giving anti-symmetric R-algebras and free R-algebras, so Alt(R^n) and T(R^n), where the first ring is the free anti-symmetric algebra over R in n generators and the second ring is the free algebra over R in n generators.
Commutative algebras in E-infinity ring spectra
There is an analogous construction for commutative S-algebras which gives commutative A-algebras for a commutative S-algebra A. If M_A is the category of A-modules, then the functor P: M_A → M_A is the monad given by P(M) = ⋁_{j ≥ 0} M^{∧j}/Σ_j, where M^{∧j} = M ∧_A ⋯ ∧_A M, smashed over A j times. Then there is an associated category of commutative A-algebras, namely the category of algebras over this monad.
Monads and adjunctions
As was mentioned above, any adjunction gives rise to a monad. Conversely, every monad arises from some adjunction, namely the free–forgetful adjunction
whose left adjoint sends an object X to the free T-algebra T(X). However, there are usually several distinct adjunctions giving rise to a monad: let be the category whose objects are the adjunctions such that and whose arrows are the morphisms of adjunctions that are the identity on . Then the above free–forgetful adjunction involving the Eilenberg–Moore category is a terminal object in . An initial object is the Kleisli category, which is by definition the full subcategory of consisting only of free T-algebras, i.e., T-algebras of the form for some object x of C.
Monadic adjunctions
Given any adjunction with associated monad T, the functor G can be factored as
i.e., G(Y) can be naturally endowed with a T-algebra structure for any Y in D. The adjunction is called a monadic adjunction if the first functor yields an equivalence of categories between D and the Eilenberg–Moore category . By extension, a functor is said to be monadic if it has a left adjoint forming a monadic adjunction. For example, the free–forgetful adjunction between groups and sets is monadic, since algebras over the associated monad are groups, as was mentioned above. In general, knowing that an adjunction is monadic allows one to reconstruct objects in D out of objects in C and the T-action.
Beck's monadicity theorem
Beck's monadicity theorem gives a necessary and sufficient condition for an adjunction to be monadic. A simplified version of this theorem states that G is monadic if it is conservative (or G reflects isomorphisms, i.e., a morphism in D is an isomorphism if and only if its image under G is an isomorphism in C) and C has and G preserves coequalizers.
For example, the forgetful functor from the category of compact Hausdorff spaces to sets is monadic. However the forgetful functor from all topological spaces to sets is not conservative since there are continuous bijective maps (between non-compact or non-Hausdorff spaces) that fail to be homeomorphisms. Thus, this forgetful functor is not monadic.
The dual version of Beck's theorem, characterizing comonadic adjunctions, is relevant in different fields such as topos theory and topics in algebraic geometry related to descent. A first example of a comonadic adjunction is the adjunction
for a ring homomorphism between commutative rings. This adjunction is comonadic, by Beck's theorem, if and only if B is faithfully flat as an A-module. It thus allows to descend B-modules, equipped with a descent datum (i.e., an action of the comonad given by the adjunction) to A-modules. The resulting theory of faithfully flat descent is widely applied in algebraic geometry.
Uses
Monads are used in functional programming to express types of sequential computation (sometimes with side-effects). See monads in functional programming, and the more mathematically oriented Wikibook module b:Haskell/Category theory.
Monads are used in the denotational semantics of impure functional and imperative programming languages.
In categorical logic, an analogy has been drawn between the monad-comonad theory, and modal logic via closure operators, interior algebras, and their relation to models of S4 and intuitionistic logics.
Generalization
It is possible to define monads in a 2-category . Monads described above are monads for .
See also
Distributive law between monads
Lawvere theory
Monad (functional programming)
Polyad
Strong monad
Giry monad
Monoidal monad
References
Further reading
https://mathoverflow.net/questions/55182/what-is-known-about-the-category-of-monads-on-set
Ross Street, The formal theory of monads
External links
Monads, a YouTube video of five short lectures (with one appendix).
John Baez's This Week's Finds in Mathematical Physics (Week 89) covers monads in 2-categories.
Monads and comonads, video tutorial.
https://medium.com/@felix.kuehl/a-monad-is-just-a-monoid-in-the-category-of-endofunctors-lets-actually-unravel-this-f5d4b7dbe5d6
Adjoint functors
Category theory | Monad (category theory) | [
"Mathematics"
] | 3,640 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
355,155 | https://en.wikipedia.org/wiki/Polar%20orbit | A polar orbit is one in which a satellite passes above or nearly above both poles of the body being orbited (usually a planet such as the Earth, but possibly another body such as the Moon or Sun) on each revolution. It has an inclination of about 60–90 degrees to the body's equator.
Launching satellites into polar orbit requires a larger launch vehicle to launch a given payload to a given altitude than for a near-equatorial orbit at the same altitude, because it cannot take advantage of the Earth's rotational velocity. Depending on the location of the launch site and the inclination of the polar orbit, the launch vehicle may lose up to 460 m/s of Delta-v, approximately 5% of the Delta-v required to attain Low Earth orbit.
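As a rough illustration of the quoted figure (an illustrative sketch only; the equatorial surface rotation speed of about 465 m/s is the sole input, and real losses depend on trajectory details):

```python
import math

EQUATORIAL_SPEED = 465.0  # m/s, Earth's surface rotation speed at the equator

def rotation_assist(latitude_deg):
    """Eastward speed contributed by Earth's rotation at a launch site."""
    return EQUATORIAL_SPEED * math.cos(math.radians(latitude_deg))

# An eastward, near-equatorial launch pockets almost the full amount; a launch
# due north or south into a polar orbit gains none of it.
print(round(rotation_assist(0.0)))    # ~465 m/s available to an eastward equatorial launch
print(round(rotation_assist(28.5)))   # ~409 m/s at Cape Canaveral's latitude
```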
Usage
Polar orbits are used for Earth-mapping, reconnaissance satellites, as well as for some weather satellites.
The Iridium satellite constellation uses a polar orbit to provide telecommunications services.
Near-polar orbiting satellites commonly choose a Sun-synchronous orbit, where each successive orbital pass occurs at the same local time of day. For some applications, such as remote sensing, it is important that changes over time are not aliased by changes in local time. Keeping the same local time on a given pass requires that the orbital period be kept as short as possible, which requires a low orbit. However, very low orbits rapidly decay due to drag from the atmosphere. Commonly used altitudes are between 700 and 800 km, producing an orbital period of about 100 minutes. The half-orbit on the Sun side then takes only 50 minutes, during which the local time of day does not vary greatly.
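The roughly 100-minute figure follows from Kepler's third law; a quick check (a sketch using standard constants):

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371_000.0       # m, mean radius

def period_minutes(altitude_km):
    """Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1_000.0
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

print(round(period_minutes(700)))  # ~99 minutes
print(round(period_minutes(800)))  # ~101 minutes
```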
To retain a Sun-synchronous orbit as the Earth revolves around the Sun during the year, the orbit must precess about the Earth at the same rate (which is not possible if the satellite passes directly over the pole).
Because of Earth's equatorial bulge, an orbit inclined at a slight angle is subject to a torque, which causes precession. An angle of about 8° from the pole produces the desired precession in a 100-minute orbit.
See also
List of orbits
Molniya orbit
Tundra orbit
Vandenberg Air Force Base, a major United States launch location for polar orbits
References
External links
Orbital Mechanics (Rocket and Space Technology)
Astrodynamics
Earth orbits
Articles containing video clips | Polar orbit | [
"Engineering"
] | 480 | [
"Astrodynamics",
"Aerospace engineering"
] |
355,204 | https://en.wikipedia.org/wiki/Somnium%20Scipionis | The Dream of Scipio (Latin: Somnium Scipionis), written by Cicero, is the sixth book of De re publica, and describes a (postulated fictional or real) dream vision of the Roman general Scipio Aemilianus, set two years before he oversaw the destruction of Carthage in 146 BC.
Textual history
The Somnium Scipionis is a portion of the sixth and final book from Cicero's De re publica, but because parts of Cicero's whole work are missing, Somnium Scipionis represents nearly all that remains of the sixth book. The main reason that the Somnium Scipionis survived was because in the fifth-century, the Latin writer Macrobius wrote a Neoplatonic commentary on the work, in which he excerpted large portions from Cicero. Additionally, many copies of Macrobius's work were amended with a copy of the Somnium Scipionis at their end. However, during the Middle Ages, the Somnium Scipionis became so popular that its transmission was polluted by multiple copies, and today it has been impossible to establish a stemma for it.
Contents
Upon his arrival in Africa, a guest at the court of Massinissa, Scipio Aemilianus is visited by his dead grandfather-by-adoption, Scipio Africanus, hero of the Second Punic War. He finds himself looking down upon Carthage "from a high place full of stars, shining and splendid". His future is foretold by his grandfather, and great stress is placed upon the loyal duty of the Roman soldier, who will as a reward after death "inhabit... that circle that shines forth among the stars which you have learned from the Greeks to call the Milky Way". Nevertheless, Scipio Aemilianus sees that Rome is an insignificant part of the earth, which is itself dwarfed by the stars.
Then, Scipio Aemilianus sees that the universe is made up of nine celestial spheres. The earth is the innermost, whereas the highest is heaven, which "contains all the rest, and is itself the supreme God" (unus est caelestis [...] qui reliquos omnes complectitur, summus ipse deus). In between these two extremes lie the seven spheres of the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn (which proceed from lowest to highest). As he stares in wonder at the universe, Scipio Aemilianus begins to hear a "so great and so sweet" (tantus et tam dulcis) sound, which Scipio Africanus identifies as the musica universalis: the "music of the spheres". He explains to his grandson that because the planets are set apart at fixed intervals, a sound is produced as they move. The moon, being the lowest sphere and the one closest to Earth, emits the lowest sound of all, whereas the heaven emits the highest. The Earth, on the other hand, does not move, remaining motionless at the center of the universe.
Then the climatic belts of the earth are observed, from the snow fields to the deserts, and there is discussion of the nature of the Divine, the soul and virtue, from the Stoic point of view.
Relation to other works
The tale is modelled on the "Myth of Er" in Plato's Republic. Although the story of Er records a near-death experience, while the journey of Scipio's "disembodied soul" takes place in a dream, both give examples of belief in astral projection.
Reception and influence
The literary and philosophical influence of the Somnium was great. Macrobius' Commentary upon Scipio's Dream was known to the sixth-century philosopher Boethius, and was later valued throughout the Middle Ages as a primer of cosmology. The work assumed the astrological cosmos formulated by Claudius Ptolemy. Chretien de Troyes referred to Macrobius' work in his first Arthurian romance, Erec and Enide, and it was a model for Dante's account of heaven and hell. Chaucer referred to the work in "The Nun's Priest's Tale" and especially in the Parlement of Foules.
Some critics consider Raphael's painting Vision of a Knight to be a depiction of Scipio's Dream.
The composer Mozart, at the age of fifteen, wrote a short opera entitled Il sogno di Scipione (K. 126), with a libretto by Metastasio, based upon Scipio Aemilianus's 'soul-journey' through the cosmos.
Iain Pears wrote a historical novel called The Dream of Scipio which, though attributed to fictional classical writer Manlius, refers to Cicero's work in various direct and indirect ways.
Bernard Field, in the preface to his History of Science Fiction, cited Scipio's vision of the Earth as seen from a great height as a forerunner of modern science fiction writers' descriptions of the experience of flying in orbit, particularly noting the similarity between Scipio's realization that Rome is but a small part of the Earth and similar feelings expressed by characters in Arthur C. Clarke's works.
This story is the basis for Chris McCully's poem "Scipio's Dream" from his collection Not Only I, published in 1996.
Gallery
Images from a 12th-century manuscript of Macrobius' Commentarii in Somnium Scipionis (Parchment, 50 ff.; 23.9 × 14 cm; Southern France). Date: ca. 1150. Source: Copenhagen, Det Kongelige Bibliotek, ms. NKS 218 4°.
References
Bibliography
External links
Somnium Scipionis (in Latin)
The Dream of Scipio (in English)
Philosophical works by Cicero
Visionary literature
Religious cosmologies
Early scientific cosmologies
Ancient astronomy
Roman underworld
Fiction about dreams
Legendary dreams | Somnium Scipionis | [
"Astronomy"
] | 1,221 | [
"Ancient astronomy",
"History of astronomy"
] |
355,245 | https://en.wikipedia.org/wiki/MozillaZine | MozillaZine is a discontinued unofficial Mozilla website that provided information about Mozilla products, including the Firefox browser, the Thunderbird email client, and related software (SeaMonkey, Camino, Calendar and Mobile). The site hosted an active community support forum and a community-driven knowledge base of information about Mozilla products, but as of 2019 it was no longer being maintained. The site is still online in read-only mode.
History
The site was founded by Chris Nelson on September 1, 1998, just a few months after mozilla.org, which was created on February 23, 1998, and quickly grew in popularity. Improvements were added to the site and it soon moved to the mozillazine.org domain. Originally, the site's main audience was Mozilla developers, both Netscape employees and outsiders, but it soon attracted interested observers and end users.
On November 14, 1998, MozillaZine merged with MozBin, which brought its webmaster, Jason Kersey, on board.
In the beginning of 2001, Chris Nelson phased out his involvement with the site.
In May 2002, Alex Bishop became the site's third member of staff. Alex Bishop became less involved with the site in 2007.
After 2007, MozillaZine was primarily administrated by Jason Kersey.
In 2009, MozillaZine removed the news section of the site due to lack of interest and because the open-source project was well covered by other sources, including the general and computer press. The home page was updated to remove the News and Blogs links. MozillaZine refocused on community software support and advocacy.
On September 20, 2019, site admin Jason Kersey announced his departure from the project and that MozillaZine would go into read-only mode. The site continued to operate, but without any administrator with root permissions on the server that hosts the forum.
See also
mozdev.org
List of Internet forums
References
External links
MozillaZine
French
Japanese
Mozilla
Free software websites
Internet properties established in 1998
Internet forums | MozillaZine | [
"Technology"
] | 434 | [
"Computing websites",
"Free software websites"
] |
355,288 | https://en.wikipedia.org/wiki/Abacus%20%28architecture%29 | In architecture, an abacus (from the Ancient Greek abax; plural: abacuses or abaci) is a flat slab forming the uppermost member or division of the capital of a column, above the bell. Its chief function is to provide a large supporting surface, tending to be wider than the capital, as an abutment to receive the weight of the arch or the architrave above. The diminutive of abacus, abaculus, is used to describe small mosaic tiles, also called abaciscus or tessera, used to create ornamental floors with detailed patterns of chequers or squares in a tessellated pavement.
Definition
In classical architecture, the shape of the abacus and its edge profile varies in the different classical orders. In the Greek Doric order, the abacus is a plain square slab without mouldings, supported on an echinus. In the Roman and Renaissance Doric orders, it is crowned by a moulding (known as "crown moulding"). In the Tuscan and Roman Doric capital, it may rest on a boltel.
In the archaic Greek Ionic order, owing to the greater width of the capital, the abacus is rectangular in plan, and consists of a carved ovolo moulding. In later examples, the slab is thinner and the abacus remains square, except where there are angled volutes, where the slab is slightly curved. In the Roman and Renaissance Ionic capital, the abacus is square with a fillet on the top of an ogee moulding with curved edges over angled volutes.
In an angular capital of the Greek Corinthian order, the abacus is moulded, its sides are concave, and its angles canted (except in one or two exceptional Greek capitals, where it is brought to a sharp angle); the volutes of adjacent faces meet and project diagonally under each corner of the abacus. The same shape is adopted in the Roman and Renaissance Corinthian and Composite capitals, in some cases with the carved ovolo moulding, fillet, and cavetto.
In Romanesque architecture, the abacus survives as a heavier slab, generally moulded and decorated. It is often square with the lower edge splayed off and moulded or carved, and the same was retained in France during the medieval period; but in England, in Early English work, a circular deeply moulded abacus was introduced, which in the 14th and 15th centuries was transformed into an octagonal one.
In Gothic architecture, the moulded forms of the abacus vary in shape: it may be square, circular, octagonal, or even a flat disk or drum. The form of the Gothic abacus is often affected by the shape of a vault that springs from the column, in which case it is called an impost block.
Indian architecture (śilpaśāstra)
In śilpaśāstra, the ancient Indian science of sculpture, the abacus is commonly termed phalaka (or phalakā). It consists of a flat plate and forms part of the standard pillar (stambha). The phalaka should be constructed below the potikā ("bracket"). It is commonly found together with the dish-like maṇḍi as a single unit. The term is found in encyclopedic books such as the Mānasāra, the Kāmikāgama and the Suprabhedāgama.
Examples in England
Early Saxon abaci are frequently simply chamfered, but sometimes grooved as in the crypt at Repton (fig. 1) and in the arcade of the refectory at Westminster Abbey. The abacus in Norman work is square where the columns are small; but on larger piers it is sometimes octagonal, as at Waltham Abbey. The square of the abacus is often sculptured with ornaments, as at the White Tower and at Alton, Hampshire (fig. 2). In Early English work, the abacus is generally circular, and in larger work, a group of circles (fig. 4), with some examples of octagonal and square shapes. The mouldings are generally half-rounds, which overhang deep hollows in the capital. In France, the abacus in early work is generally square, as at Chateau de Blois (fig. 3).
Examples in France
The first abacus pictured below (fig. 5) is decorated with simple mouldings and ornaments, common during the 12th century, in Île-de-France, Normandy, Champagne, and Burgundy regions, and from the choir of Vézelay Abbey (fig. 6). Figure 7 shows a circular abacus used at windows in the side chapels of Notre Dame de Paris. Towards the end of the 13th century, this element decreases in importance—they became short with a narrow profile during the 14th century, and disappeared almost entirely during the 15th century (fig. 8).
Sources
Wikisource has original text related to the Encyclopædia Britannica Eleventh Edition article: Abacus.
See also
Pulvino
Footnotes
References
External links
Abacus, Smith's Dictionary of Greek and Roman Antiquities
Columns and entablature
Architectural elements | Abacus (architecture) | [
"Technology",
"Engineering"
] | 1,084 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
355,377 | https://en.wikipedia.org/wiki/Photonic%20crystal | A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction and that the atomic lattices (crystal structure) of semiconductors affect their conductivity of electrons. Photonic crystals occur in nature in the form of structural coloration and animal reflectors, and, as artificially produced, promise to be useful in a range of applications.
Photonic crystals can be fabricated for one, two, or three dimensions. One-dimensional photonic crystals can be made of thin film layers deposited on each other. Two-dimensional ones can be made by photolithography, or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing, or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres.
Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication, among other applications. Three-dimensional crystals may one day be used in optical computers, and could lead to more efficient photovoltaic cells.
Although the energy of light (and all electromagnetic radiation) is quantized in units called photons, the analysis of photonic crystals requires only classical physics. "Photonic" in the name is a reference to photonics, a modern designation for the study of light (optics) and optical engineering. Indeed, the first research into what we now call photonic crystals may have been as early as 1887 when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can effect a photonic band-gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension—now called photonic crystals.
Introduction
Photonic crystals are composed of periodic dielectric, metallo-dielectric—or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons, determining allowed and forbidden electronic energy bands. Photonic crystals contain regularly repeating regions of high and low refractive index. Light waves may propagate through this structure or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes, and the ranges of wavelengths which propagate are called bands. Disallowed bands of wavelengths are called photonic band gaps. This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission, high-reflecting omni-directional mirrors, and low-loss-waveguiding. The bandgap of photonic crystals can be understood as the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low- refractive index regions, akin to the bandgaps of electrons in solids.
There are two strategies for opening up a complete photonic band gap. The first is to increase the refractive index contrast, so that the band gap in each direction becomes wider; the second is to make the Brillouin zone more nearly spherical. However, the former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem. For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered cubic lattice, with the most nearly spherical Brillouin zone, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures, which are not subject to these crystallographic limits. A complete photonic bandgap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing.
The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the wavelength inside a material is obtained by dividing the vacuum wavelength by the average index of refraction. The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition.
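As a rough worked example of this scale requirement, the sketch below estimates the minimum lattice period for a few visible wavelengths. The average refractive index used is an assumed, representative value for a high-index dielectric, not a property of any specific material.

```python
# Rough estimate of the lattice period a photonic crystal needs in order to
# act on light of a given vacuum wavelength: about half the wavelength in the
# medium, i.e. lambda / (2 * n). The index below is an assumed value.

def minimum_period_nm(vacuum_wavelength_nm: float, avg_refractive_index: float) -> float:
    """Half the wavelength inside the medium: lambda / (2 * n)."""
    return vacuum_wavelength_nm / (2.0 * avg_refractive_index)

for wavelength in (400.0, 550.0, 700.0):           # violet, green, red (nm)
    period = minimum_period_nm(wavelength, avg_refractive_index=2.5)
    print(f"{wavelength:.0f} nm light -> period of roughly {period:.0f} nm or more")
```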
History
Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later—after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987. The early history is well-documented in the form of a story when it was identified as one of the landmark developments in physics by the American Physical Society.
Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror) were studied extensively. Lord Rayleigh started their study in 1887, by showing that such systems have a one-dimensional photonic band-gap, a spectral range of large reflectivity, known as a stop-band. Today, such structures are used in a diverse range of applications—from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL). The pass-bands and stop-bands in photonic crystals were first reduced to practice by Melvin M. Weiner who called those crystals "discrete phase-ordered media." Weiner achieved those results by extending Darwin's dynamical theory for x-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov, who was the first to investigate the effect of a photonic band-gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used. The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979, who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals. Yablonovitch's main goal was to engineer photonic density of states to control the spontaneous emission of materials embedded in the photonic crystal. John's idea was to use photonic crystals to affect localisation and control of light.
After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of the electromagnetic fields known as scale invariance. In essence, electromagnetic fields, as the solutions to Maxwell's equations, have no natural length scale—so solutions for centimetre scale structure at microwave frequencies are the same as for nanometre scale structures at optical frequencies.)
By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band-gap in the microwave regime. The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure – today it is known as Yablonovite.
In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths. This opened the way to fabricate photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry.
Pavel Cheben demonstrated a new type of photonic crystal waveguide – the subwavelength grating (SWG) waveguide. The SWG waveguide operates in the subwavelength region, away from the bandgap. It allows the waveguide properties to be controlled directly by the nanoscale engineering of the resulting metamaterial while mitigating wave interference effects. This provided "a missing degree of freedom in photonics" and addressed an important limitation of silicon photonics, namely its restricted set of available materials, which had been insufficient to achieve complex optical on-chip functions.
Today, such techniques use photonic crystal slabs, which are two dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab, and allows photonic crystal effects, such as engineering photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve optical processing of communications—both on-chip and between chips.
Autocloning fabrication technique, proposed for infrared and visible range photonic crystals by Sato et al. in 2002, uses electron-beam lithography and dry etching: lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide/silica and tantalum pentoxide/silica devices were produced, exploiting their dispersion characteristics and suitability to sputter deposition.
Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres.
Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals. This is because of more difficult fabrication. Three-dimensional photonic crystal fabrication had no inheritable semiconductor industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated, for example in the construction of "woodpile" structures constructed on a planar layer-by-layer basis. Another strand of research has tried to construct three-dimensional photonic structures from self-assembly—essentially letting a mixture of dielectric nano-spheres settle from solution into three-dimensionally periodic structures that have photonic band-gaps. Vasily Astratov's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete bandgap. The first demonstration of an "inverse opal" structure with a complete photonic bandgap came in 2000, from researchers at the University of Toronto, and Institute of Materials Science of Madrid (ICMM-CSIC), Spain. The ever-expanding field of natural photonics, bioinspiration and biomimetics—the study of natural structures to better understand and use them in design—is also helping researchers in photonic crystals. For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle. Analogously, in 2012 a diamond crystal structure was found in a weevil and a gyroid-type architecture in a butterfly. More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the bird's shimmery blue coloration. Some publications suggest the feasibility of the complete photonic band gap in the visible range in photonic crystals with optically saturated media that can be implemented by using laser light as an external optical pump.
Construction strategies
The fabrication method depends on the number of dimensions that the photonic bandgap must exist in.
One-dimensional photonic crystals
To produce a one-dimensional photonic crystal, thin film layers of different dielectric constant may be periodically deposited on a surface, which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N^2, where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N^4, which, in conjunction with non-linear optics, has potential applications such as the development of an all-optical switch.
A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum. If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes, that is, for both s and p polarizations of light incident at an angle.
Recently, researchers fabricated a graphene-based Bragg grating (one-dimensional photonic crystal) and demonstrated that it supports excitation of surface electromagnetic waves in the periodic structure by using 633 nm He-Ne laser as the light source. Besides, a novel type of one-dimensional graphene-dielectric photonic crystal has also been proposed. This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications. 1D photonic crystals doped with bio-active metals (i.e. silver) have been also proposed as sensing devices for bacterial contaminants. Similar planar 1D photonic crystals made of polymers have been used to detect volatile organic compounds vapors in atmosphere.
In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color. For example, studies have shown several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures.
Two-dimensional photonic crystals
In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the bandgap is designed to block. Triangular and square lattices of holes have been successfully employed.
The holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass arranged in a hexagonal lattice and then heating and stretching them; the triangle-like air gaps between the glass rods become the holes that confine the modes.
Three-dimensional photonic crystals
There are several structure types that have been constructed:
Spheres in a diamond lattice
Yablonovite
The woodpile structure – "rods" are repeatedly etched with beam lithography, filled in, and covered with a layer of new material. As the process repeats, the channels etched in each layer are perpendicular to the layer below, and parallel to and out of phase with the channels two layers below. The process repeats until the structure is of the desired height. The fill-in material is then dissolved using an agent that dissolves the fill-in material but not the deposition material. It is generally hard to introduce defects into this structure.
Inverse opals or inverse colloidal crystals: Spheres (such as polystyrene or silicon dioxide) can be allowed to deposit into a cubic close-packed lattice suspended in a solvent. Then a hardener is introduced that makes a transparent solid out of the volume occupied by the solvent. The spheres are then dissolved with an acid such as hydrochloric acid. The colloids can be either spherical or nonspherical. One such demonstrated structure, used as a beam splitter, contains in excess of 750,000 polymer nanorods. Light focused on this beam splitter penetrates or is reflected, depending on polarization.
Photonic crystal cavities
Beyond the band gap itself, photonic crystals can show another effect if the symmetry is partially removed by creating a nanoscale cavity. Such a defect allows light to be guided or trapped, serving the same function as a nanophotonic resonator, and it is characterized by the strong dielectric modulation in the photonic crystal. For a waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long confinement of light induced by the dielectric mismatch. For a light trap, the light is strongly confined in the cavity, resulting in further interactions with the material. If a pulse of light is put inside the cavity, it is delayed by nanoseconds or picoseconds, in proportion to the quality factor of the cavity. If an emitter is placed inside the cavity, its emission can be enhanced significantly, and resonant coupling can even lead to Rabi oscillation. This is related to cavity quantum electrodynamics, where the interactions are defined by the weak or strong coupling of the emitter and the cavity. The first studies of cavities in one-dimensional photonic slabs were usually in grating or distributed feedback structures. Two-dimensional photonic crystal cavities are useful for making efficient photonic devices for telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volume. For three-dimensional photonic crystal cavities, several fabrication methods have been developed, including a lithographic layer-by-layer approach, surface ion beam lithography, and micromanipulation techniques. All of these photonic crystal cavities that tightly confine light offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated. There is not yet full control over the cavity creation, the cavity location, and the emitter position relative to the maximum field of the cavity, and studies to solve those problems are ongoing. A movable cavity formed by a nanowire in a photonic crystal is one solution for tailoring this light-matter interaction.
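The delay mentioned above scales with the cavity quality factor through the standard relation for the photon lifetime, tau = Q / omega. The sketch below evaluates this relation; the quality factors and the wavelength are illustrative assumptions, not values for any specific device.

```python
import math

def photon_lifetime_s(quality_factor: float, vacuum_wavelength_m: float) -> float:
    """Photon lifetime tau = Q / omega, with omega = 2*pi*c / lambda."""
    c = 2.998e8                                   # speed of light, m/s
    omega = 2.0 * math.pi * c / vacuum_wavelength_m
    return quality_factor / omega

# Illustrative telecom-band cavity at 1550 nm (assumed numbers)
for q in (1e4, 1e6):
    tau = photon_lifetime_s(q, 1.55e-6)
    print(f"Q = {q:.0e}: photon lifetime ~ {tau * 1e12:.1f} ps")
```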
Fabrication challenges
Higher-dimensional photonic crystal fabrication faces two major challenges:
Making them with enough precision to prevent scattering losses blurring the crystal properties
Designing processes that can robustly mass-produce the crystals
One promising fabrication method for two-dimensionally periodic photonic crystals is a photonic-crystal fiber, such as a holey fiber. Using fiber draw techniques developed for communications fiber it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. These structures consist of a slab of material—such as silicon—that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip.
For three dimensional photonic crystals, various techniques have been used—including photolithography and etching techniques similar to those used for integrated circuits. Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods, some alternate approaches involve growing photonic crystals from colloidal crystals as self-assembled structures.
Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into perfect films of fcc lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic bandgaps and producing striking structural color effects.
Computing photonic band structure
The photonic band gap (PBG) is essentially the gap between the air-line and the dielectric-line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the bandgap by computational modeling using any of the following methods:
Plane wave expansion method
Inverse dispersion method
Finite element method
Finite difference time domain method
Order-n spectral method
KKR method
Bloch wave – MoM method
Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa. The various lines in the band structure correspond to the different cases of n, the band index. For an introduction to photonic band structure, see the books by K. Sakoda and by Joannopoulos.
The plane wave expansion method can be used to calculate the band structure using an eigen formulation of Maxwell's equations, solving for the eigenfrequencies for each propagation direction of the wave vector. It directly solves for the dispersion diagram. Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. As an example, the band structure of a 1D distributed Bragg reflector (DBR) with an air core interleaved with a dielectric material of relative permittivity 12.25, and a lattice-period-to-air-core-thickness ratio (d/a) of 0.8, can be solved using 101 plane waves over the first irreducible Brillouin zone. The inverse dispersion method also exploits plane wave expansion, but formulates Maxwell's equations as an eigenproblem for the wave vector k, while the frequency is treated as a parameter. Thus, it solves for the dispersion relation k(ω) instead of ω(k), which is what the plane wave method computes. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the bandgap, which allows one to distinguish photonic crystals from metamaterials. In addition, the method readily allows the frequency dispersion of the permittivity to be taken into account.
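A minimal sketch of a plane-wave-expansion band calculation for a one-dimensional crystal at normal incidence is given below. It assumes the magnetic-field formulation, in which the Fourier coefficients of 1/ε(x) enter a Hermitian eigenvalue problem whose eigenvalues are (ω/c)². The layer permittivities, filling fraction, number of plane waves, and k-point sampling are illustrative assumptions, not the parameters of any particular published calculation.

```python
import numpy as np

def pwe_bands_1d(eps1, eps2, d1_over_a, k_over_2pi, n_pw=101, n_bands=4):
    """Plane-wave expansion for a 1D photonic crystal at normal incidence.

    One period (length a = 1) contains a layer of permittivity eps1 and width
    d1, embedded in a background of permittivity eps2. The H-field master
    equation gives the eigenproblem
        sum_G' (k+G)(k+G') eta_{G-G'} h_{G'} = (omega/c)^2 h_G,
    where eta_G are Fourier coefficients of 1/eps(x).
    Returns normalized frequencies omega*a/(2*pi*c) for each Bloch wave number.
    """
    a = 1.0
    d1 = d1_over_a * a
    m = np.arange(-(n_pw // 2), n_pw // 2 + 1)
    G = 2.0 * np.pi * m / a                        # reciprocal lattice vectors

    eta1, eta2 = 1.0 / eps1, 1.0 / eps2
    def eta_coeff(g):
        # Fourier coefficient of 1/eps(x), with the eps1 layer centred at x = 0
        val = (eta1 - eta2) * (d1 / a) * np.sinc(g * d1 / (2.0 * np.pi))
        return val + (eta2 if np.isclose(g, 0.0) else 0.0)

    eta_mat = np.array([[eta_coeff(Gm - Gn) for Gn in G] for Gm in G])

    bands = []
    for kk in k_over_2pi:
        kG = kk * 2.0 * np.pi / a + G              # (k + G) for every plane wave
        M = np.outer(kG, kG) * eta_mat             # real symmetric operator matrix
        w2 = np.clip(np.linalg.eigvalsh(M), 0.0, None)   # eigenvalues are (omega/c)^2
        bands.append(np.sqrt(w2)[:n_bands] * a / (2.0 * np.pi))
    return np.array(bands)

# Example: high-index layers (eps = 12.25) in air with a 20% filling fraction
ks = np.linspace(0.0, 0.5, 11)                     # k*a/(2*pi) over half the zone
print(pwe_bands_1d(12.25, 1.0, 0.2, ks)[-1])       # lowest bands at the zone edge
```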
To speed calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit cell models, the RBME method can reduce time for computing the band structure by up to two orders of magnitude.
Applications
Photonic crystals are attractive optical materials for controlling and manipulating light flow. One dimensional photonic crystals are already in widespread use, in the form of thin-film optics, with applications from low and high reflection coatings on lenses and mirrors to colour changing paints and inks. Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two dimensional ones are beginning to find commercial applications.
The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization but may offer additional features such as optical nonlinearity required for the operation of optical transistors used in optical computers, when some technological aspects such as manufacturability and principal difficulties such as disorder are under control.
SWG photonic crystal waveguides have facilitated new integrated photonic devices for controlling transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas and optical phased arrays. SWG nanophotonic couplers permit highly-efficient and polarization-independent coupling between photonic chips and external devices. They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing. These coupling interfaces are particularly important because every photonic chip needs to be optically connected with the external world and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro- and long-haul telecommunication systems, and automotive navigation.
In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells and optical sensors, including chemical sensors and biosensors.
See also
References
External links
Business report on Photonic Crystals in Metamaterials – see also Scope and Analyst
Photonic crystals tutorials by Prof S. Johnson at MIT
Photonic crystals an introduction
Invisibility cloak created in 3-D; Photonic crystals(BBC)
Condensed matter physics
Metamaterials
Photonics | Photonic crystal | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,071 | [
"Metamaterials",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
355,401 | https://en.wikipedia.org/wiki/Solomon%20Feferman | Solomon Feferman (December 13, 1928 – July 26, 2016) was an American philosopher and mathematician who worked in mathematical logic. In addition to his prolific technical work in proof theory, computability theory, and set theory, he was known for his contributions to the history of logic (for instance, via biographical writings on figures such as Kurt Gödel, Alfred Tarski, and Jean van Heijenoort) and as a vocal proponent of the philosophy of mathematics known as predicativism, notably from an anti-platonist stance.
Life
Solomon Feferman was born in The Bronx in New York City to working-class parents who had immigrated to the United States after World War I and had met and married in New York. Neither parent had any advanced education. The family moved to Los Angeles, where Feferman graduated from high school at age 16.
He received his B.S. from the California Institute of Technology in 1948, and in 1957 his Ph.D. in mathematics from the University of California, Berkeley, under Alfred Tarski, after having been drafted and having served in the U.S. Army from 1953 to 1955. In 1956 he was appointed to the Departments of Mathematics and Philosophy at Stanford University, where he later became the Patrick Suppes Professor of Humanities and Sciences. While the majority of his career was spent at Stanford, he also spent time as a post-doctoral fellow at the Institute for Advanced Study in Princeton, a visiting professor at MIT, and a visiting fellow at the University of Oxford (Wolfson College and All Souls College).
Feferman died on 26 July 2016 at his home in Stanford, following an illness that lasted three months and a stroke. At his death, he had been a member of the Mathematical Association of America for 37 years.
Contributions
Feferman was editor-in-chief of the five-volume Collected Works of Kurt Gödel, published by Oxford University Press between 2001 and 2013.
In 2004, together with his wife Anita Burdman Feferman, he published a biography of Alfred Tarski: Alfred Tarski: Life and Logic.
Influenced by the writings of Hermann Weyl, he worked on predicative mathematics. In particular, he introduced the Feferman–Schütte ordinal as a measure of the strength of certain predicative systems.
Recognition
Feferman was awarded Guggenheim Fellowships in 1972 and 1986 and the Rolf Schock Prize in logic and philosophy in 2003. He was invited to give the Gödel Lecture in 1997 and the Tarski Lectures in 2006. In 2012, he became a fellow of the American Mathematical Society.
Publications
Papers
Feferman, Solomon; Vaught, Robert L. (1959), "The first order properties of products of algebraic systems", Fund. Math. 47, 57–103.
Feferman, Solomon (1975), "A language and axioms for explicit mathematics", Algebra and logic (Fourteenth Summer Res. Inst., Austral. Math. Soc., Monash Univ., Clayton, 1974), pp. 87–139, Lecture Notes in Math., vol. 450, Berlin, Springer.
Feferman, Solomon (1979), "Constructive theories of functions and classes", Logic Colloquium '78 (Mons, 1978), pp. 159–224, Stud. Logic Foundations Math., 97, Amsterdam, New York, North-Holland.
Buchholz, Wilfried; Feferman, Solomon; Pohlers, Wolfram; Sieg, Wilfried (1981), "Iterated inductive definitions and subsystems of analysis: recent proof-theoretical studies", Lecture Notes in Mathematics, 897, Berlin, New York, Springer-Verlag.
Feferman, Solomon; Hellman, Geoffrey (1995), "Predicative foundations of arithmetic", J. Philos. Logic 24 (1), 1–17.
Avigad, Jeremy; Feferman, Solomon (1998), "Gödel's functional (Dialectica) interpretation", Handbook of proof theory, 337–405, Stud. Logic Found. Math., 137, Amsterdam, North-Holland.
Books
Feferman, Solomon (1964) The Number Systems, Foundations of Algebra and Analysis Addison Wesley. Library of Congress Catalog No.63-12470
Feferman, Solomon (1998). In the Light of Logic. Oxford University Press, Logic and Computation in Philosophy series.
See also
Criticism of non-standard analysis
References
External links
Solomon Feferman official website (via Internet Archive) at Stanford University
1928 births
21st-century American mathematicians
American logicians
Jewish American scientists
Jewish philosophers
Mathematical logicians
American historians of mathematics
University of California, Berkeley alumni
Rolf Schock Prize laureates
Stanford University Department of Philosophy faculty
Stanford University Department of Mathematics faculty
American philosophers of mathematics
Fellows of the American Mathematical Society
2016 deaths
21st-century American Jews
20th-century American mathematicians | Solomon Feferman | [
"Mathematics"
] | 1,013 | [
"Proof theorists",
"Mathematical logic",
"Mathematical logicians",
"Proof theory"
] |
355,547 | https://en.wikipedia.org/wiki/London%20dispersion%20force | London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds or loosely as van der Waals forces) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. They are part of the van der Waals forces. The LDF is named after the German physicist Fritz London. They are the weakest intermolecular force.
Introduction
The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum) attract each other. The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant, typically symbolized A. For atoms that are located closer together than the wavelength of light, the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant.
While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation as 1/r^6, in condensed matter (liquids and solids) the effect is cumulative over the volume of materials, or within and between organic molecules, such that London dispersion forces can be quite strong in bulk solids and liquids and decay much more slowly with distance. For example, the total force per unit area between two bulk solids decreases as 1/x^3, where x is the separation between them. The effects of London dispersion forces are most obvious in systems that are very non-polar (e.g., that lack ionic bonds), such as hydrocarbons and highly symmetric molecules like bromine (Br2, a liquid at room temperature) or iodine (I2, a solid at room temperature). In hydrocarbons and waxes, the dispersion forces are sufficient to cause condensation from the gas phase into the liquid or solid phase. Sublimation heats of e.g. hydrocarbon crystals reflect the dispersion interaction. Liquefaction of oxygen and nitrogen gases into liquid phases is also dominated by attractive London dispersion forces.
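As a rough illustration of the bulk scaling just described, the sketch below evaluates the standard non-retarded expression for the attractive van der Waals pressure between two flat solid surfaces across vacuum, P = A/(6πx³). The Hamaker constant value is an assumed, order-of-magnitude figure rather than a measured value for any particular material pair, and retardation at larger separations is neglected.

```python
import math

def vdw_pressure_pa(hamaker_constant_j: float, separation_m: float) -> float:
    """Magnitude of the non-retarded van der Waals pressure between two flat
    half-spaces across vacuum: P = A / (6 * pi * x^3)."""
    return hamaker_constant_j / (6.0 * math.pi * separation_m ** 3)

A = 1e-19                                   # assumed Hamaker constant, joules
for x_nm in (1.0, 10.0, 100.0):
    p = vdw_pressure_pa(A, x_nm * 1e-9)
    print(f"separation {x_nm:5.0f} nm -> attractive pressure ~ {p:.2e} Pa")
```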
When atoms/molecules are separated by a third medium (rather than vacuum), the situation becomes more complex. In aqueous solutions, the effects of dispersion forces between atoms or molecules are frequently less pronounced due to competition with polarizable solvent molecules. That is, the instantaneous fluctuations in one atom or molecule are felt both by the solvent (water) and by other molecules.
Larger and heavier atoms and molecules exhibit stronger dispersion forces than smaller and lighter ones. This is due to the increased polarizability of molecules with larger, more dispersed electron clouds. The polarizability is a measure of how easily electrons can be redistributed; a large polarizability implies that the electrons are more easily redistributed. This trend is exemplified by the halogens (from smallest to largest: F2, Cl2, Br2, I2). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest) or with other more polarizable heteroatoms. Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons.
Quantum mechanical theory
The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum-mechanical theory based on second-order perturbation theory. The perturbation is the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the excited electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied.
London wrote a Taylor series expansion of the perturbation V in 1/R, where R is the distance between the nuclear centers of mass of the moieties.
This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation, named after Albrecht Unsöld, must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes, α', and ionization energies, I (an older term is ionization potentials).
In this manner, the following approximation is obtained for the dispersion interaction between two atoms A and B:

$$E_{AB}^{\mathrm{disp}} \approx -\frac{3}{2}\,\frac{I_A I_B}{I_A + I_B}\,\frac{\alpha'_A \alpha'_B}{R^6}$$

Here α'_A and α'_B are the polarizability volumes of the respective atoms. The quantities I_A and I_B are the first ionization energies of the atoms, and R is the intermolecular distance.
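As a numerical illustration of this approximation, the sketch below evaluates the London expression for a pair of identical atoms using argon-like input values (polarizability volume about 1.64 Å³, first ionization energy about 15.8 eV, separation about 3.8 Å). These inputs are rounded, assumed figures, and the result should be read only as an order-of-magnitude estimate.

```python
def london_energy_ev(alpha1_a3, alpha2_a3, i1_ev, i2_ev, r_angstrom):
    """London dispersion energy (in eV) between two atoms:
    E = -(3/2) * I1*I2/(I1+I2) * alpha1*alpha2 / R^6,
    with polarizability volumes in cubic angstroms and R in angstroms."""
    return -1.5 * (i1_ev * i2_ev / (i1_ev + i2_ev)) * alpha1_a3 * alpha2_a3 / r_angstrom ** 6

# Argon-like pair: rounded, assumed input values
e = london_energy_ev(1.64, 1.64, 15.8, 15.8, 3.8)
print(f"London estimate: {e * 1000:.1f} meV  (~{e * 96.485:.2f} kJ/mol)")
```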
Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces.
The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect". In physics, the term "dispersion" describes the variation of a quantity with frequency, which is the fluctuation of the electrons in the case of the London dispersion.
Relative magnitude
Dispersion forces are usually dominant among the three van der Waals contributions (orientation, induction, dispersion) to the interaction between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. Estimates of the contribution of dispersion to the total intermolecular interaction energy have been tabulated for a range of molecule pairs.
See also
Dispersion (chemistry)
van der Waals force
van der Waals molecule
Non-covalent interactions
References
Intermolecular forces
Chemical bonding
sv:Dispersionkraft | London dispersion force | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,442 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
355,559 | https://en.wikipedia.org/wiki/Mobbing | Mobbing, as a sociological term, refers either to bullying in any context, or specifically to that within the workplace, especially when perpetrated by a group rather than an individual.
Psychological and health effects
Victims of workplace mobbing frequently suffer from: adjustment disorders, somatic symptoms, psychological trauma (e.g., trauma tremors or sudden onset selective mutism), post-traumatic stress disorder (PTSD), or major depression.
In mobbing targets with PTSD, Leymann notes that the "mental effects were fully comparable with PTSD from war or prison camp experiences." Some patients may develop alcoholism or other substance abuse disorders. Family relationships routinely suffer, and victims sometimes display acts of aggression towards strangers in the street. Workplace targets and witnesses may even develop brief psychotic episodes, generally with paranoid symptoms. Leymann estimated that 15% of suicides in Sweden could be directly attributed to workplace mobbing.
Development of the concept
Konrad Lorenz, in his book entitled On Aggression (1966), first described mobbing among birds and other animals, attributing it to instincts rooted in the Darwinian struggle to thrive (see animal mobbing behavior). In his view, most humans are subject to similar innate impulses but capable of bringing them under rational control. Lorenz's explanation for his choice of the English word "mobbing" was omitted in the English translation by Marjorie Kerr Wilson. According to Kenneth Westhues, Lorenz chose the word "mobbing" because he remembered that the old German term hassen auf, meaning "to hate after" or "to put a hate on", was applied to the collective attack by birds; this term emphasised "the depth of antipathy with which the attack is made", whereas the English word "mobbing" emphasised the collective aspect of the attack.
In the 1970s, the Swedish physician Peter-Paul Heinemann applied Lorenz's conceptualization to the collective aggression of children against a targeted child. In the 1980s, professor and practising psychologist Heinz Leymann applied the term to ganging up in the workplace. In 2011, anthropologist Janice Harper suggested that some anti-bullying approaches effectively constitute a form of mobbing by using the label "bully" to dehumanize, encouraging people to shun and avoid people labeled bullies, and in some cases sabotage their work or refuse to work with them, while almost always calling for their exclusion and termination from employment.
Cause
Janice Harper followed her Huffington Post essay with a series of essays in both The Huffington Post and in her column "Beyond Bullying: Peacebuilding at Work, School and Home" in Psychology Today that argued that mobbing is a form of group aggression innate to primates, and that those who engage in mobbing are not necessarily "evil" or "psychopathic", but responding in a predictable and patterned manner when someone in a position of leadership or influence communicates to the group that someone must go. For that reason, she indicated that anyone can and will engage in mobbing, and that once mobbing gets underway, just as in the animal kingdom it will almost always continue and intensify as long as the target remains with the group. She subsequently published a book on the topic in which she explored animal behavior, organizational cultures and historical forms of group aggression, suggesting that mobbing is a form of group aggression on a continuum of structural violence with genocide as the most extreme form of mob aggression.
Online
Social networking sites and blogs have enabled anonymous groups to coordinate and attack other people. The victims of these groups can be targeted by various attacks and threats, sometimes causing the victims to use pseudonyms or go offline to avoid them.
In the workplace
British anti-bullying researchers Andrea Adams and Tim Field have used the expression "workplace bullying" instead of what Leymann called "mobbing" in a workplace context. They identify mobbing as a particular type of bullying that is not as apparent as most, defining it as "an emotional assault. It begins when an individual becomes the target of disrespectful and harmful behavior. Through innuendo, rumors, and public discrediting, a hostile environment is created in which one individual gathers others to willingly, or unwillingly, participate in continuous malevolent actions to force a person out of the workplace."
Adams and Field believe that mobbing is typically found in work environments that have poorly organised production or working methods and incapable or inattentive management and that mobbing victims are usually "exceptional individuals who demonstrated intelligence, competence, creativity, integrity, accomplishment and dedication".
In contrast, Janice Harper suggests that workplace mobbing is typically found in organizations where there is limited opportunity for employees to exit, whether through tenure systems or contracts that make it difficult to terminate an employee (such as universities or unionized organizations), and/or where finding comparable work in the same community makes it difficult for the employee to voluntarily leave (such as academic positions, religious institutions, or the military). In these employments, efforts to eliminate the worker will intensify to push the worker out against his or her will through shunning, sabotage, false accusations and a series of investigations and poor reviews. Other forms of employment where workers are mobbed are those that require the use of uniforms or other markers of group inclusion (law enforcement, fire fighting, military), and organizations where a single gender has predominated but another gender is beginning to enter (STEM fields, fire fighting, military, nursing, teaching, and construction). Finally, she suggests that organizations where there are limited opportunities for advancement can be prone to mobbing because those who do advance are more likely to view challenges to their leadership as threats to their precarious positions. Harper further challenges the idea that workers are targeted for their exceptional competence. In some cases, she suggests, exceptional workers are mobbed because they are viewed as threatening to someone, but some workers who are mobbed are not necessarily good workers. Rather, Harper contends, some mobbing targets are outcasts or unproductive workers who cannot easily be terminated, and are thus treated inhumanely to push them out. While Harper emphasizes the cruelty and damaging consequences of mobbing, her organizational analysis focuses on the structural, rather than moral, nature of the organization. Moreover, she views the behavior itself, which she terms workplace aggression, as grounded in group psychology rather than individual psychosis: even when the mobbing is initiated due to a leader's personal psychosis, the dynamics of group aggression will transform the leader's bullying into group mobbing, and these are two vastly distinct psychological and social phenomena.
Shallcross, Ramsay and Barker consider workplace "mobbing" to be a generally unfamiliar term in some English speaking countries. Some researchers claim that mobbing is simply another name for bullying. Workplace mobbing can be considered as a "virus" or a "cancer" that spreads throughout the workplace via gossip, rumour and unfounded accusations. It is a deliberate attempt to force a person out of their workplace by humiliation, general harassment, emotional abuse and/or terror. Mobbing can be described as being "ganged up on." Mobbing is executed by a leader (who can be a manager, a co-worker, or a subordinate). The leader then rallies others into a systematic and frequent "mob-like" behaviour toward the victim.
Mobbing as "downward bullying" by superiors is also known as "bossing", and "upward bullying" by colleagues as "staffing", in some European countries, for instance, in German-speaking regions.
At school
Following on from the work of Heinemann, Elliot identifies mobbing as a common phenomenon in the form of group bullying at school. It involves "ganging up" on someone using tactics of rumor, innuendo, discrediting, isolating, intimidating, and above all, making it look as if the targeted person is responsible (victim blaming). It is to be distinguished from normal conflicts (between pupils of similar standing and power), which are an integral part of everyday school life.
In academia
Kenneth Westhues' study of mobbing in academia found that vulnerability was increased by personal differences such as being a foreigner or of a different sex; by working in fields such as music or literature which have recently come under the sway of less objective and more post-modern scholarship; financial pressure; or having an aggressive superior. Other factors included envy, heresy and campus politics.
Checklists
Sociologists and authors have created checklists and other tools to identify mobbing behaviour. Common approaches to assessing mobbing behavior are to quantify the frequency of mobbing behavior based on a given definition of the behavior, or to quantify what respondents believe encompasses mobbing behavior. These are referred to as "self-labeling" and "behavior experience" methods respectively.
Limitations of some mobbing examination tools are:
Participant exhaustion due to examination length
Limited sample exposure resulting in limited result generalizability
Confounding with constructs that result in the same affect as mobbing but are not purposely harmful
Common Tools used to measure mobbing behavior are:
Leyman Inventory of Psychological Terror (LIPT)
Negative Acts Questionnaire – Revised (NAQ-R)
Luxembourg Workplace Mobbing Scale (LWMS)
Counteracting
From an organizational perspective, it has been suggested that mobbing behavior can be curtailed by explicitly acknowledging which behaviors constitute mobbing and acknowledging that such behaviors result in harm and/or negative consequences. Precise definitions of such behaviors are critical, because ambiguity about which behaviors are acceptable and which are not can lead to unintentional mobbing behavior. Attenuation of mobbing behavior can further be enhanced by developing policies that explicitly address specific behaviors that are culturally accepted to result in harm or negative affect. This provides a framework from which mobbing victims can respond to mobbing. Lack of such a framework may result in a situation where each instance of mobbing is treated on an individual basis with no recourse of prevention. It may also indicate that such behaviors are warranted and within the realm of acceptable behavior within an organization. Direct responses to grievances related to mobbing that are handled outside of a courtroom, and training programs outlining anti-bullying countermeasures, also demonstrate a reduction in mobbing behavior.
Persecutory delusions
See also
References
Further reading
Davenport NZ, Schwartz RD & Elliott GP, Mobbing: Emotional Abuse in the American Workplace, 3rd ed., Civil Society Publishing, Ames, IA, 2005.
Shallcross L., Ramsay S. & Barker M., "Workplace Mobbing: Expulsion, Exclusion, and Transformation" (2008), blind peer reviewed, Australia and New Zealand Academy of Management Conference (ANZAM).
Westhues K, Eliminating Professors: A Guide to the Dismissal Process, Lewiston, New York: Edwin Mellen Press.
Westhues K, The Envy of Excellence: Administrative Mobbing of High-Achieving Professors, Lewiston, New York: Edwin Mellen Press.
Westhues K, "At the Mercy of the Mob", OHS Canada, Canada's Occupational Health & Safety Magazine (18:8), pp. 30–36.
Institute for education of works councils Germany – Information about Mobbing, Mediation and conflict resolution (German)
Zapf D. & Einarsen S. (2005), "Mobbing at Work: Escalated Conflicts in Organizations", in Fox S. & Spector P.E. (eds.), Counterproductive Work Behavior: Investigations of Actors and Targets, Washington, DC: American Psychological Association, p. vii.
Abuse
Aggression
Harassment and bullying
Interpersonal conflict
Injustice
Persecution
Group processes
Occupational health psychology
Stalking
1960s neologisms
Majority–minority relations | Mobbing | [
"Biology"
] | 2,354 | [
"Behavior",
"Abuse",
"Harassment and bullying",
"Aggression",
"Stalking",
"Human behavior"
] |
355,594 | https://en.wikipedia.org/wiki/Seminal%20vesicles | The seminal vesicles (also called vesicular glands or seminal glands) are a pair of convoluted tubular accessory glands that lie behind the urinary bladder of male mammals. They secrete fluid that largely composes the semen.
The vesicles are 5–10 cm in size, 3–5 cm in diameter, and are located between the bladder and the rectum. They have multiple outpouchings containing secretory glands, and they join with the vasa deferentia at the ejaculatory ducts. They receive blood from the vesiculodeferential artery, and drain into the vesiculodeferential veins. The glands are lined with column-shaped and cuboidal cells. The vesicles are present in many groups of mammals, but not marsupials, monotremes or carnivores.
Inflammation of the seminal vesicles is called seminal vesiculitis and most often is due to bacterial infection as a result of a sexually transmitted infection or following a surgical procedure. Seminal vesiculitis can cause pain in the lower abdomen, scrotum, penis or peritoneum, painful ejaculation, and blood in the semen. It is usually treated with antibiotics, although may require surgical drainage in complicated cases. Other conditions may affect the vesicles, including congenital abnormalities such as failure or incomplete formation, and, uncommonly, tumours.
The seminal vesicles have been described as early as the second century AD by Galen, although the vesicles only received their name much later, as they were initially described using the term from which the word prostate is derived.
Structure
The human seminal vesicles are a pair of glands in males that are positioned below the urinary bladder and at the end of the vasa deferentia, where they enter the prostate. Each vesicle is a coiled and folded tube, with occasional outpouchings termed diverticula in its wall. The lower part of the tube ends as a straight tube called the excretory duct, which joins with the vas deferens of that side of the body to form an ejaculatory duct. The ejaculatory ducts pass through the prostate gland before opening separately into the verumontanum of the prostatic urethra. The vesicles are between 5–10 cm in size, 3–5 cm in diameter, and have a volume of around 13 mL.
The vesicles receive blood supply from the vesiculodeferential artery, and also from the inferior vesical artery. The vesiculodeferential artery arises from the umbilical arteries, which branch directly from the internal iliac arteries. Blood is drained into the vesiculodeferential veins and the inferior vesical plexus, which drain into the internal iliac veins. Lymphatic drainage occurs along the venous routes, draining into the internal iliac nodes.
The vesicles lie behind the bladder at the end of the vasa deferentia. They lie in the space between the bladder and the rectum; the bladder and prostate lie in front, the tip of the ureter as it enters the bladder above, and Denonvilliers' fascia and the rectum behind.
Development
In the developing embryo, at the hind end lies a cloaca. This, over the fourth to the seventh week, divides into a urogenital sinus and the beginnings of the anal canal, with a wall forming between these two inpouchings called the urorectal septum. Two ducts form next to each other that connect to the urogenital sinus; the mesonephric duct and the paramesonephric duct, which go on to form the reproductive tracts of the male and female respectively.
In the male, under the influence of testosterone, the mesonephric ducts proliferate, forming the epididymis, ductus deferens and, via a small outpouching near the developing prostate, the seminal vesicles. Sertoli cells secrete anti-Müllerian hormone, which causes the paramesonephric ducts to regress.
The development and maintenance of the seminal vesicles, as well as their secretion and size/weight, are highly dependent on androgens. The seminal vesicles contain 5α-reductase, which metabolizes testosterone into its much more potent metabolite, dihydrotestosterone (DHT). The seminal vesicles have also been found to contain luteinizing hormone receptors, and hence may also be regulated by the ligand of this receptor, luteinizing hormone.
Microanatomy
The inner lining of the seminal vesicles (the epithelium) is made of a lining of interspersed column-shaped and cube-shaped cells. There are varying descriptions of the lining as being pseudostratified and consisting of column-shaped cells only. When viewed under a microscope, the cells are seen to have large bubbles in their interior. This is because their interior, called cytoplasm, contains lipid droplets involved in secretion during ejaculation. The tissue of the seminal vesicles is full of glands, spaced irregularly. As well as glands, the seminal vesicles contain smooth muscle and connective tissue. This fibrous and muscular tissue surrounds the glands, helping to expel their contents. The outer surface of the glands is covered in peritoneum.
Function
The seminal vesicles secrete a significant proportion of the fluid that ultimately becomes semen. Fluid is secreted from the ejaculatory ducts of the vesicles into the vas deferens, where it becomes part of semen. This then passes through the urethra, where it is ejaculated during a male sexual response.
About 70-85% of the seminal fluid in humans originates from the seminal vesicles. The fluid consists of nutrients including fructose and citric acid, prostaglandins, and fibrinogen. Fructose is not produced anywhere else in the body except in the seminal vesicles. It provides a forensic test in rape cases.
Nutrients help support sperm until fertilisation occurs; prostaglandins may also assist by softening mucus of the cervix, and by causing reverse contractions of parts of the female reproductive tract such as the fallopian tubes, to ensure that sperm are less likely to be expelled.
Clinical significance
Disease
Diseases of the seminal vesicles, as opposed to those of the prostate gland, are extremely rare and are infrequently reported in the medical literature.
Congenital anomalies associated with the seminal vesicles include failure to develop, either completely (agenesis) or partially (hypoplasia), and cysts. Failure of the vesicles to form is often associated with absent vas deferens, or an abnormal connection between the vas deferens and the ureter. The seminal vesicles may also be affected by cysts, amyloidosis, and stones. Stones or cysts that become infected, or obstruct the vas deferens or seminal vesicles, may require surgical intervention.
Seminal vesiculitis (also known as spermatocystitis) is an inflammation of the seminal vesicles, most often caused by bacterial infection. Symptoms can include vague back or lower abdominal pain; pain of the penis, scrotum or peritoneum; painful ejaculation; blood in the semen on ejaculation; irritative and obstructive voiding symptoms; and impotence. Infection may be due to sexually transmitted infections or arise as a complication of a procedure such as prostate biopsy. It is usually treated with antibiotics. If a person experiences ongoing discomfort, transurethral seminal vesiculoscopy may be considered. Intervention in the form of drainage through the skin or surgery may also be required if the infection becomes an abscess. The seminal vesicles may also be affected by tuberculosis, schistosomiasis and hydatid disease. These diseases are investigated, diagnosed and treated according to the underlying disease.
Benign tumours of the seminal vesicles are rare. When they do occur, they are usually papillary adenomata and cystadenomata. They do not cause elevation of tumour markers and are usually diagnosed based on examination of tissue that has been removed after surgery. Primary adenocarcinoma, although rare, constitutes the most common malignant tumour of the seminal vesicles; that said, malignant involvement of the vesicles is typically the result of local invasion from an extra-vesicular lesion. When adenocarcinoma occurs, it can cause blood in the urine, blood in the semen, painful urination, urinary retention, or even urinary obstruction. Adenocarcinomata are usually diagnosed after they are excised, based on tissue diagnosis. Some produce the tumour marker Ca-125, which can be used to monitor for recurrence afterwards. Even rarer neoplasms include sarcoma, squamous cell carcinoma, yolk sac tumour, neuroendocrine carcinoma, paraganglioma, epithelial stromal tumours and lymphoma.
Investigations
Symptoms due to diseases of the seminal vesicles may be vague and not specifically attributable to the vesicles themselves; additionally, some conditions such as tumours or cysts may not cause any symptoms at all. When disease is suspected, such as because of pain on ejaculation, blood in the urine, infertility, or urinary tract obstruction, further investigations may be conducted.
A digital rectal examination, which involves a finger inserted by a medical practitioner through the anus, may elicit greater than usual tenderness of the prostate gland, or may reveal a large seminal vesicle. Palpation depends on the length of the index finger, as the seminal vesicles are located above the prostate gland and retrovesically (behind the bladder).
A urine specimen may be collected, and is likely to demonstrate blood within the urine. Laboratory examination of seminal vesicle fluid requires a semen sample, e.g. for semen culture or semen analysis. Fructose levels provide a measure of seminal vesicle function and, if absent, bilateral agenesis or obstruction is suspected.
Imaging of the vesicles is provided by medical imaging; either by transrectal ultrasound, CT or MRI scans. An examination using cystoscopy, where a flexible tube is inserted in the urethra, may show disease of the vesicles because of changes in the normal appearance of the nearby bladder trigone, or prostatic urethra.
Other animals
The evolution of seminal vesicles may have been influenced by sexual selection. They occur in birds and reptiles and many groups of mammals, but are absent in marsupials, monotremes, and carnivorans. Their function is similar in all mammals in which they are present: to secrete a fluid, as part of semen, that is ejaculated during the sexual response.
History
The action of the seminal vesicles was described as early as the second century AD by Galen, as "glandular bodies" that secrete substances alongside semen during reproduction. By the time of Herophilus the presence of the glands and associated ducts had been described. By the early 17th century, the word used to describe the vesicles, parastatai, had come to refer unambiguously to the prostate gland rather than the vesicles. The first time the prostate was portrayed in an individual drawing was by Reiner De Graaf in 1678.
The first described use of laparoscopic surgery on the vesicles was described in 1993; this is now the preferred approach because of decreased pain, complications, and a shorter hospital stay.
Additional images
See also
Male accessory gland infection (MAGI)
Ejaculatory duct
Urethra
Prostate
List of distinct cell types in the adult human body
References
External links
- "Male Reproductive System: prostate, seminal vesicle"
- "The Male Pelvis: The Urinary Bladder"
- "The Male Pelvis: Structures Located Posterior to the Urinary Bladder"
Exocrine system
Human male reproductive system
Mammal male reproductive system
Men's health
Sex organs | Seminal vesicles | [
"Biology"
] | 2,616 | [
"Exocrine system",
"Organ systems"
] |
355,711 | https://en.wikipedia.org/wiki/Integration%20testing | Integration testing, also called integration and testing, abbreviated I&T, is a form of software testing in which multiple parts of a software system are tested as a group.
Integration testing describes tests that are run at the integration level, in contrast to testing at the unit or system level.
Often, integration testing is conducted to evaluate the compliance of a component with functional requirements.
In a structured development process, integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan, and delivers as output test results as a step leading to system testing.
Approach
Some different types of integration testing are big-bang, mixed (sandwich), risky-hardest, top-down, and bottom-up. Other integration patterns are: collaboration integration, backbone integration, layer integration, client-server integration, distributed services integration and high-frequency integration.
In big-bang testing, most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. This method can save time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
In bottom-up testing, the lowest level components are tested first, and are then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.
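A minimal sketch of the bottom-up approach using Python's unittest, assuming a hypothetical two-layer design: a low-level tax function is verified first, and the higher-level invoicing function is then tested against the real, already-tested dependency. The module and function names are illustrative, not from any particular codebase.

```python
import unittest

# Low-level module: in bottom-up integration it is tested first on its own.
def sales_tax(amount: float, rate: float = 0.2) -> float:
    return round(amount * rate, 2)

# Higher-level module that depends on the low-level one.
def invoice_total(net_amount: float) -> float:
    return round(net_amount + sales_tax(net_amount), 2)

class BottomUpIntegration(unittest.TestCase):
    def test_low_level_module_alone(self):
        # Step 1: verify the lowest-level component by itself.
        self.assertEqual(sales_tax(100.0), 20.0)

    def test_next_level_with_real_dependency(self):
        # Step 2: integrate the next level up with the already-tested
        # real component (no stub) and check the combined behaviour.
        self.assertEqual(invoice_total(100.0), 120.0)

if __name__ == "__main__":
    unittest.main()
```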
In top-down testing, the top integrated modules are tested first and the branch of the module is tested step by step until the end of the related module.
Sandwich testing combines top-down testing with bottom-up testing. One limitation of this sort of testing is that any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested.
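For the top-down direction (the upper half of a sandwich strategy), the high-level module can be integration-tested before its lower-level dependency is complete by standing in a stub for that dependency, as in the hedged sketch below; again, the names are hypothetical.

```python
import unittest
from unittest import mock

def fetch_rate(currency: str) -> float:
    """Low-level component; in top-down testing it may not be built yet,
    so the test below replaces it with a stub."""
    raise NotImplementedError

def convert(amount: float, currency: str) -> float:
    """High-level module under test."""
    return round(amount * fetch_rate(currency), 2)

class TopDownIntegration(unittest.TestCase):
    def test_high_level_with_stubbed_lower_level(self):
        # The lower-level call is stubbed so the top module can be
        # integration-tested before its dependencies are complete.
        with mock.patch(f"{__name__}.fetch_rate", return_value=1.25):
            self.assertEqual(convert(100.0, "USD"), 125.0)

if __name__ == "__main__":
    unittest.main()
```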
See also
Design predicates
Functional testing
Continuous integration
References
Software testing
Hardware testing | Integration testing | [
"Engineering"
] | 492 | [
"Software engineering",
"Software testing"
] |
355,791 | https://en.wikipedia.org/wiki/Tactical%20designator | Police units in the United States tend to use a tactical designator (or tactical callsign) consisting of a letter of the police radio alphabet followed by one or two numbers. For example, "Mary One" might identify the head of a city's homicide division. Police and fire department radio systems are assigned official callsigns, however. Examples are KQY672 and KYX556. The official headquarters callsigns are usually announced at least hourly, and more frequently by Morse code.
The United States Army uses tactical designators that change daily. They normally consist of letter-number-letter prefixes identifying a unit, followed by a number-number suffix identifying the role of the person using the callsign.
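A small sketch of how the two designator formats described above could be validated with regular expressions; the patterns follow the description given here (one letter plus one or two digits for the police style, a letter-number-letter prefix plus a two-digit suffix for the Army style), and everything else is an illustrative assumption.

```python
import re

# Police-style: one letter of the radio alphabet followed by one or two digits,
# e.g. "M1" for "Mary One".
POLICE_PATTERN = re.compile(r"^[A-Z][0-9]{1,2}$")

# Army-style (changes daily): letter-number-letter prefix identifying the unit,
# then a two-digit suffix identifying the role of the user.
ARMY_PATTERN = re.compile(r"^(?P<unit>[A-Z][0-9][A-Z])(?P<role>[0-9]{2})$")

def classify(designator: str) -> str:
    d = designator.upper().replace(" ", "")
    if POLICE_PATTERN.match(d):
        return "police-style tactical designator"
    m = ARMY_PATTERN.match(d)
    if m:
        return f"army-style designator (unit {m['unit']}, role {m['role']})"
    return "unrecognised"

print(classify("M1"))     # police-style tactical designator
print(classify("A3C06"))  # army-style designator (unit A3C, role 06)
```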
See also
Brevity code
Call sign
Glossary of military abbreviations
List of aviation, avionics, aerospace and aeronautical abbreviations
List of aviation mnemonics
List of government and military acronyms
ITU prefix
NATO phonetic alphabet
Pan-pan
Procedure word
Pseudonym
References
Military communications
Law enforcement in the United States | Tactical designator | [
"Engineering"
] | 211 | [
"Military communications",
"Telecommunications engineering"
] |
355,810 | https://en.wikipedia.org/wiki/Aviator%20call%20sign | An aviator call sign or aviator callsign is a call sign given to a military pilot, flight officer, and even some enlisted aviators. The call sign is a specialized form of nickname that is used as a substitute for the aviator's given name. It is used on flight suit and flight jacket name tags, painted/displayed beneath the officer's or enlisted aircrewman's name on aircraft fuselages or canopy rails, and in radio conversations. They are most commonly used in tactical jet aircraft communities (i.e., fighter, bomber, attack) than in other aircraft communities (i.e., airlift, mobility, maritime patrol), but their use is not totally exclusive to the former. Many NASA Astronauts with military aviator backgrounds are referred to during spaceflights by their call signs rather than their first names.
The origins of aviator call signs are varied. Most call signs play on or reference variants of the aviator's first name or surname. Other inspirations for call signs may include personality traits, a middle name, references to historical figures, or past exploits during the pilot's career. Aviator call signs nearly always must come from a member or members of the aviator's squadron, training class, or other cohort.
It is considered bad form to try to give oneself a call sign and it is also common for aviators to be given a fairly derogatory call sign, and the more they complain about it, the more likely it is to stick.
Some aviators use the same call sign throughout their careers; in other cases an aviator might have a series of call signs. For example, U.S. Navy Lieutenant Kara Hultgreen was originally given the call sign "Hulk" because of her ability to bench-press 200 pounds. Later, after a television appearance in which she wore noticeable makeup, she received the call sign "Revlon", and a 1998 biography was entitled Call Sign Revlon.
In fiction
Film
The 1986 film Top Gun, set at the United States Navy Fighter Weapons School, featured several aviators with call signs, including Pete Mitchell (Tom Cruise): "Maverick"; Tom Kazansky (Val Kilmer): "Iceman"; and Nick Bradshaw (Anthony Edwards): "Goose". In addition, a number of military or former military personnel acted as crew to the film: Rick Moe (F-14 air crew): "Curly"; Ray Seckinger (Top Gun instructor and MiG pilot): "Secks"; Thomas Sobieck (Top Gun instructor and MiG pilot): "Sobs"; Robert Willard (Navy aerial coordinator, Top Gun instructor and MiG pilot): "Rat"; C.J. Heatley (aerial camera operator): "Heater"; and Ricky Hammonds (Top Gun instructor and MiG pilot): "Organ".
In the 1991 film Flight of the Intruder, new A-6 Intruder pilot LTJG Jack Barlow is given the call sign "Razor" because he didn't look old enough to shave. It is later changed to "Straight Razor" at the end of the film because he'd become "a real weapon" in the eyes of his commanding officer. The book's principal character Jake Grafton has the call sign "Cool Hand".
The 2019 film Captain Marvel, set in 1995, features Carol Danvers, a former U.S. Air Force test pilot and member of an elite Kree military unit, whose call sign is "Avenger". Nick Fury later uses this name to rename the initiative he had earlier drafted to locate heroes like Danvers, calling it The Avengers after her Air Force call sign.
Television
Dwight Schultz's Captain H.M. "Howling Mad" Murdock, from the 1983 television series The A-Team (as well as his counterpart in the 2010 film adaptation, as portrayed by Sharlto Copley) is a gifted, albeit insane, can-fly-anything pilot. Aptly named, Murdock displays symptoms of mental instability, as demonstrated by his weekly obsessions (ranging from inanimate objects to role playing). Whether or not he is in fact insane is often debated, due to demonstrations in his fine tuned skills of acting and mimicry.
In the 1993 animated television series SWAT Kats: The Radical Squadron, the main characters Chance Furlong and Jake Clawson have the call signs "T-Bone," and "Razor," respectively. Although their call signs are technically their SwatKAT aliases, they frequently refer to each other by their call sign even when not flying.
In the 1995 TV series JAG, the lead character, Harmon Rabb, is given the name "Pappy" because he is the oldest pilot in his squadron. This is later changed to "Hammer", which was his father's Vietnam War call sign, as a mark of respect.
In the 2004 television series Battlestar Galactica, a number of the Galactica's crew had call signs. William Adama (portrayed by Edward James Olmos) had "Husker". Lee Adama (Jamie Bamber) had "Apollo". Kara Thrace (Katee Sackhoff) had "Starbuck" and Karl Agathon (Tahmoh Penikett) had "Helo". Sharon Valerii (Grace Park) had two personas and a call sign for each: "Boomer" for Sharon Valerii and "Athena" for Sharon Agathon. In the original 1978 series on which the 2004 series was based, many of these were the characters' actual names, rather than call signs.
Payload Specialist Howard Wolowitz from the 2007 television series The Big Bang Theory has the unwanted astronaut call sign "Froot Loops", given to him by astronaut Mike Massimino.
The episode "Newbie Dash" of the 2010 animated TV series My Little Pony: Friendship Is Magic revolves around Rainbow Dash trying to shake off an embarrassing nickname ("Crash") given to her upon joining the aerobatic team The Wonderbolts. She ultimately learns that all of her teammates have equally embarrassing nicknames, and embraces it as her callsign for the remainder of the series.
Print
The Hal Jordan version of the DC Comics character Green Lantern, introduced in 1959, was a US Air Force pilot and test pilot with the call sign "Highball".
The Marvel Comics character Corsair, space-faring father to X-Men characters Scott Summers and Alex Summers, got his alias from his call sign from his time as a US Air Force pilot.
In Tom Clancy's 1993 novel Without Remorse, fictional Vice Admiral Winslow Holland Maxwell, during World War II, received the call sign "Winnie," which he hated; after a mission in which he shot down three Japanese planes (all confirmed by gunsight cameras), he found a new coffee mug in the wardroom, engraved with the call sign "Dutch." When he later became an admiral, he displayed the mug—no longer used for coffee or pencils—in a place of honor on his desk.
A trilogy of novels published 2001-2004 by Ward "Mooch" Carroll, Punk's War, Punk's Wing, and Punk's Fight'', featured Rick Reichert, an F-14 pilot with the call sign "Punk" named by his skipper (Commanding Officer) because he was caught listening to punk rock music while he was in the paraloft “walking” (suiting up) for a flight.
In real life
Astronaut Duane Carey used the call sign "Spider" as an A-10 pilot; when he transferred to F-16s, his call sign was changed to "Digger", because another pilot with that call sign had recently left the group, and the group wanted to continue its use.
US Navy fighter pilot Dale Snodgrass used the callsign of "Snort" and flew the F-14 Tomcat. He is known for a photo of him in his F-14 doing a knife edge pass off the side of the USS America. After his retirement from the Navy he flew many types of warbirds at airshows across the world, up until his death in mid-2021.
See also
Airline codes
Brevity code
List of ICAO aircraft type designators
NATO phonetic alphabet
Spacecraft call signs
References
Call signs
Military communications
Nicknames | Aviator call sign | [
"Engineering"
] | 1,716 | [
"Military communications",
"Telecommunications engineering"
] |
355,814 | https://en.wikipedia.org/wiki/Outline%20of%20discrete%20mathematics | Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics, therefore, excludes topics in "continuous mathematics" such as calculus and analysis.
Included below are many of the standard terms used routinely in university-level courses and in research papers. This is not, however, intended as a complete list of mathematical terms; just a selection of typical terms of art that may be encountered.
Discrete mathematical disciplines
For further reading in discrete mathematics, beyond a basic level, see these pages. Many of these disciplines are closely related to computer science.
Concepts in discrete mathematics
Sets
Functions
Arithmetic
Elementary algebra
Mathematical relations
Equivalence and identity
Mathematical phraseology
Combinatorics
Probability
Propositional logic
Mathematicians associated with discrete mathematics
Leonhard Euler - Swiss mathematician (1707-1783)
Claude Shannon - American mathematician (1916-2001)
Donald Knuth - American mathematician and computer scientist (b. 1938)
See also
References
External links
Archives
Jonathan Arbib & John Dwyer, Discrete Mathematics for Cryptography, 1st Edition.
John Dwyer & Suzy Jagger, Discrete Mathematics for Business & Computing, 1st Edition, 2010.
Discrete mathematics
"Mathematics"
] | 277 | [
"Discrete mathematics",
"nan"
] |
355,849 | https://en.wikipedia.org/wiki/Equal%20opportunity | Equal opportunity is a state of fairness in which individuals are treated similarly, unhampered by artificial barriers, prejudices, or preferences, except when particular distinctions can be explicitly justified. For example, the intent of equal employment opportunity is that the important jobs in an organization should go to the people who are most qualified – persons most likely to perform ably in a given task – and not go to persons for reasons deemed arbitrary or irrelevant, such as circumstances of birth, upbringing, having well-connected relatives or friends, religion, sex, ethnicity, race, caste, or involuntary personal attributes such as disability, age.
According to proponents of the concept, chances for advancement should be open to everybody without regard for wealth, status, or membership in a privileged group. The idea is to remove arbitrariness from the selection process and base it on some "pre-agreed basis of fairness, with the assessment process being related to the type of position" and emphasizing procedural and legal means. Individuals should succeed or fail based on their efforts and not extraneous circumstances such as having well-connected parents. It is opposed to nepotism and plays a role in whether a social structure is seen as legitimate.
The concept is applicable in areas of public life in which benefits are earned and received such as employment and education, although it can apply to many other areas as well. Equal opportunity is central to the concept of meritocracy.
There are two major types of equality: formal equality, the individual merit-based comparison of opportunity, and substantive equality, which moves away from individual merit-based comparison towards group equality of outcomes.
Differing political viewpoints
People with differing political viewpoints often view the concept differently. The meaning of equal opportunity is debated in fields such as political philosophy, sociology and psychology. It is being applied to increasingly wider areas beyond employment, including lending, housing, college admissions, voting rights, and elsewhere. In the classical sense, equality of opportunity is closely aligned with the concept of equality before the law and ideas of meritocracy.
Generally, the terms equality of opportunity and equal opportunity are interchangeable, with occasional slight variations; the former has more of a sense of being an abstract political concept while "equal opportunity" is sometimes used as an adjective, usually in the context of employment regulations, to identify an employer, a hiring approach, or the law. Equal opportunity provisions have been written into regulations and have been debated in courtrooms. It is sometimes conceived as a legal right against discrimination. It is an ideal which has become increasingly widespread in Western nations during the last several centuries and is intertwined with social mobility, most often with upward mobility and with rags to riches stories:
Theory
Outline of the concept
According to the Stanford Encyclopedia of Philosophy, the concept assumes that society is stratified with a diverse range of roles, some of which are more desirable than others. The benefit of equality of opportunity is to bring fairness to the selection process for coveted roles in corporations, associations, nonprofits, universities and elsewhere. According to one view, there is no "formal linking" between equality of opportunity and political structure, in the sense that there can be equality of opportunity in democracies, autocracies and in communist nations, although it is primarily associated with a competitive market economy and embedded within the legal frameworks of democratic societies. People with different political perspectives see equality of opportunity differently: liberals disagree about which conditions are needed to ensure it and many "old-style" conservatives see inequality and hierarchy in general as beneficial out of a respect for tradition. It can apply to a specific hiring decision, or to all hiring decisions by a specific company, or rules governing hiring decisions for an entire nation. The scope of equal opportunity has expanded to cover more than issues regarding the rights of minority groups, but covers practices regarding "recruitment, hiring, training, layoffs, discharge, recall, promotions, responsibility, wages, sick leave, vacation, overtime, insurance, retirement, pensions, and various other benefits".
The concept has been applied to numerous aspects of public life, including accessibility of polling stations, care provided to HIV patients, whether men and women have equal opportunities to travel on a spaceship, bilingual education, skin color of models in Brazil, television time for political candidates, army promotions, admittance to universities and ethnicity in the United States. The term is interrelated with and often contrasted with other conceptions of equality such as equality of outcome and equality of autonomy. Equal opportunity emphasizes the personal ambition and talent and abilities of the individual, rather than his or her qualities based on membership in a group, such as a social class or race or extended family. Further, it is seen as unfair if external factors that are viewed as being beyond the control of a person significantly influence what happens to him or her. Equal opportunity then emphasizes a fair process whereas in contrast equality of outcome emphasizes an equal outcome. In sociological analysis, equal opportunity is seen as a factor correlating positively with social mobility, in the sense that it can benefit society overall by maximizing well-being.
Different types
There are different concepts lumped under equality of opportunity.
Formal equality of opportunity describes equal opportunities based only on merit; these opportunities should not depend on a person's identity, such as gender or race. Formal equality does not guarantee equal outcomes for groups or equal representation of groups, but requires that deliberate discrimination be only meritocratic. For instance, job interviews should only discriminate against applicants based on job competence. Meritocratic universities should not accept a less-capable applicant instead of a more-capable applicant who cannot pay tuition. Formal equality can be called racial color blindness and gender blindness.
Substantive equality describes equal outcomes for groups or equal representation of identities such as gender or race. Substantive equality does not guarantee that opportunities are allocated only on merit. For instance, substantive equality includes that jobs are distributed according to the race and gender proportions of the whole population.
Equality before the law describes a situation in which the law does not discriminate explicitly based on a protected identity such as gender or race. Equality before the law does not imply formal equality of opportunity or substantive equality. If firing any pregnant employee were legal, it would be consistent with equality before the law but would violate both formal equality of opportunity and substantive equality.
Formal equality of opportunity is often more difficult to measure. A political party that formally allows anyone to join, but meets in a non-wheelchair-accessible building far from public transit, substantively discriminates against both young and old members as they are less likely to be able-bodied car-owners. However, if the party raises membership dues in order to afford a better building, it discourages poor members instead. A workplace in which it is difficult for persons with special needs and disabilities to perform can be considered a type of substantive inequality, although job restructuring activities can be done to make it easier for disabled persons to succeed. Grade-cutoff university admission is formally fair, but if in practice it overwhelmingly picks women and graduates of expensive user-fee schools, it is substantively unfair to men and the poor. The unfairness has already taken place and the university can choose to try to counterbalance it, but it likely cannot single-handedly make pre-university opportunities equal. Social mobility and the Great Gatsby curve are often used as indicators of substantive equality of opportunity.
Both equality concepts say that it is unfair and inefficient if extraneous factors rule people's lives. Both accept as fair inequality based on relevant, meritocratic factors. They differ in the scope of the methods used to promote them. The difference between the two equality concepts is also referred to as Dilemma of Difference.
Formal equality of opportunity
Formal equality of opportunity is sometimes referred to as the nondiscrimination principle or described as the absence of direct discrimination, or described in the narrow sense as equality of access. It is characterized by:
Open call. Positions bringing superior advantages should be open to all applicants and job openings should be publicized in advance giving applicants a "reasonable opportunity" to apply. Further, all applications should be accepted.
Fair judging. Applications should be judged on their merits, with procedures designed to identify those best-qualified. The evaluation of the applicant should be in accord with the duties of the position and for the job opening of choir director, for example, the evaluation may judge applicants based on musical knowledge rather than some arbitrary criterion such as hair color. Blind auditions and blind interviews have been shown to improve equal opportunity.
An applicant is chosen. The applicant judged as "most qualified" is offered the position while others are not. There is agreement that the result of the process is again unequal, in the sense that one person has the position while another does not, but this outcome is deemed fair on procedural grounds.
The formal approach is limited to the public sphere as opposed to private areas such as the family, marriage, or religion. What is considered "fair" and "unfair" is spelled out in advance. An expression of this version appeared in The New York Times: "There should be an equal opportunity for all. Each and every person should have as great or as small an opportunity as the next one. There should not be the unfair, unequal, superior opportunity of one individual over another."
This sense was also expressed by economists Milton and Rose Friedman in their 1980 book Free to Choose. The Friedmans explained that equality of opportunity was "not to be interpreted literally" since some children are born blind while others are born sighted, but that "its real meaning is ... a career open to the talents". This means that there should be "no arbitrary obstacles" blocking a person from realizing their ambitions: "Not birth, nationality, color, religion, sex, nor any other irrelevant characteristic should determine the opportunities that are open to a person – only his abilities".
It is a relatively straightforward task for legislators to ban blatant efforts to favor one group over another and encourage equality of opportunity as a result. Japan banned gender-specific job descriptions in advertising as well as sexual discrimination in employment as well as other practices deemed unfair, although a subsequent report suggested that the law was having minimal effect in securing Japanese women high positions in management. In the United States, the Equal Employment Opportunity Commission sued a private test preparation firm, Kaplan, for unfairly using credit histories to discriminate against African Americans in terms of hiring decisions. According to one analysis, it is possible to imagine a democracy which meets the formal criteria (1 through 3), but which still favors wealthy candidates who are selected in free and fair elections.
Meritocracy
There is some overlap among these different conceptions with the term meritocracy which describes an administrative system which rewards such factors as individual intelligence, credentials, education, morality, knowledge or other criteria believed to confer merit. Equality of opportunity is often seen as a major aspect of a meritocracy. One view was that equality of opportunity was more focused on what happens before the race begins while meritocracy is more focused on fairness at the competition stage. The term meritocracy can also be used in a negative sense to refer to a system in which an elite hold themselves in power by controlling access to merit (via access to education, experience, or bias in assessment or judgment).
Substantive equality
Substantive equality of opportunity, sometimes called fair equality of opportunity, is a somewhat broader and more expansive concept than the more limiting formal equality of opportunity and it deals with what is sometimes described as indirect discrimination. It goes farther and is more controversial than the formal variant; and has been described as "unstable", particularly if the society in question is unequal to begin with in terms of great disparity of wealth.
The substantive equality embraced by the Court of Justice of the European Union focuses on equality of outcomes for groups defined by shared characteristics.
Substantive equality has been identified as more of a left-leaning political position, but this is not a hard-and-fast rule. The substantive model is advocated by people who see limitations in formal equality. In the substantive approach, the starting point before the race begins is unfair since people have had differing experiences before even approaching the competition. The substantive approach examines the applicants themselves before applying for a position and judges whether they have equal abilities or talents; and if not, then it suggests that authorities (usually the government) take steps to make applicants more equal before they get to the point where they compete for a position and fixing the before-the-starting-point issues has sometimes been described as working towards "fair access to qualifications". The success of this approach is evaluated by equality of outcome for disadvantaged and marginalized people and groups.
According to John Hills, children of wealthy and well-connected parents usually have a decisive advantage over other types of children and he notes that "advantage and disadvantage reinforce themselves over the life cycle, and often on to the next generation" so that successful parents pass along their wealth and education to succeeding generations, making it difficult for others to climb up a social ladder. However, so-called positive action efforts to bring an underprivileged person up to speed before a competition begins are limited to the period of time before the evaluation begins. At that point, the "final selection for posts must be made according to the principle the best person for the job", that is, a less qualified applicant should not be chosen over a more qualified applicant. Regardless of the nuances, the overall idea is still to give children from less fortunate backgrounds more of a chance, or to achieve at the beginning what some theorists call equality of condition. Writer Ha-Joon Chang expressed this view:
In a sense, substantive equality of opportunity moves the "starting point" further back in time. Sometimes it entails the use of affirmative action policies to help all contenders become equal before they get to the starting point, perhaps with greater training, or sometimes redistributing resources via restitution or taxation to make the contenders more equal. It holds that all who have a "genuine opportunity to become qualified" be given a chance to do so and it is sometimes based on a recognition that unfairness exists, hindering social mobility, combined with a sense that the unfairness should not exist or should be lessened in some manner. One example postulated was that a warrior society could provide special nutritional supplements to poor children, offer scholarships to military academies and dispatch "warrior skills coaches" to every village as a way to make opportunity substantively more fair. The idea is to give every ambitious and talented youth a chance to compete for prize positions regardless of their circumstances of birth.
The substantive approach tends to have a broader definition of extraneous circumstances which should be kept out of a hiring decision. One editorial writer suggested that among the many types of extraneous circumstances which should be kept out of hiring decisions was personal beauty, sometimes termed "lookism":
The substantive position was advocated by Bhikhu Parekh in 2000 in Rethinking Multiculturalism, in which he wrote that "all citizens should enjoy equal opportunities to acquire the capacities and skills needed to function in society and to pursue their self-chosen goals equally effectively" and that "equalising measures are justified on grounds of justice as well as social integration and harmony".
Affirmative action programs usually fall under the substantive category. The idea is to help disadvantaged groups get back to a normal starting position after a long period of discrimination. The programs involve government action, sometimes with resources being transferred from an advantaged group to a disadvantaged one and these programs have been justified on the grounds that imposing quotas counterbalances the past discrimination as well as being a "compelling state interest" in diversity in society. For example, there was a case in São Paulo in Brazil of a quota imposed on the São Paulo Fashion Week to require that "at least 10 percent of the models to be black or indigenous" as a coercive measure to counteract a "longstanding bias towards white models". It does not have to be accomplished via government action: for example, in the 1980s in the United States, President Ronald Reagan dismantled parts of affirmative action, but one report in the Chicago Tribune suggested that companies remained committed to the principle of equal opportunity regardless of government requirements. In another instance, upper-middle-class students taking the Scholastic Aptitude Test in the United States performed better since they had had more "economic and educational resources to prepare for these test than others". The test itself was seen as fair in a formal sense, but the overall result is seen as unfair in a substantive sense. In India, the Indian Institutes of Technology found that to achieve substantive equality of opportunity the school had to reserve 22.5 percent of seats for applicants from "historically disadvantaged schedule castes and tribes". Elite universities in France began a special "entrance program" to help applicants from "impoverished suburbs".
Luck egalitarianism
Luck egalitarianism views unequal outcomes as unjust when they result from the bad luck of unchosen circumstances, but as just when they result from circumstances the individual has chosen; weighing matters such as personal responsibility is therefore important. A somewhat different view was expressed by John Roemer, who used the term nondiscrimination principle to mean that "all individuals who possess the attributes relevant for the performance of the duties of the position in question be included in the pool of eligible candidates, and that an individual's possible occupancy of the position be judged only with respect to those relevant attributes". Matt Cavanagh argued that race and sex should not matter when getting a job, but that the sense of equality of opportunity should not extend much further than preventing straightforward discrimination.
Equality of fair opportunity
Philosopher John Rawls offered this variant of substantive equality of opportunity and explained that it happens when individuals with the same "native talent and the same ambition" have the same prospects of success in competitions. Gordon Marshall offers a similar view with the words "positions are to be open to all under conditions in which persons of similar abilities have equal access to office". An example was given that if two persons X and Y have identical talent, but X is from a poor family while Y is from a rich one, then equality of fair opportunity is in effect when both X and Y have the same chance of winning the job. It suggests the ideal society is "classless" without a social hierarchy being passed from generation to generation, although parents can still pass along advantages to their children by genetics and socialization skills. One view suggests that this approach might advocate "invasive interference in family life". Marshall posed this question:
Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to "create the society each of us would want if we didn't know in advance who we'd be". Krugman elaborated: "If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness".
Level playing field
Some theorists have posed a level playing field conception of equality of opportunity, similar in many respects to the substantive principle (although it has been used in different contexts to describe formal equality of opportunity) and it is a core idea regarding the subject of distributive justice espoused by John Roemer and Ronald Dworkin and others. Like the substantive notion, the level playing field conception goes farther than the usual formal approach. The idea is that initial "unchosen inequalities" – prior circumstances over which an individual had no control, but which impact his or her success in a given competition for a particular post – these unchosen inequalities should be eliminated as much as possible, according to this conception. According to Roemer, society should "do what it can to level the playing field so that all those with relevant potential will eventually be admissible to pools of candidates competing for positions". Afterwards, when an individual competes for a specific post, he or she might make specific choices which cause future inequalities – and these inequalities are deemed acceptable because of the previous presumption of fairness. This system helps undergird the legitimacy of a society's divvying up of roles as a result in the sense that it makes certain achieved inequalities "morally acceptable", according to persons who advocate this approach. This conception has been contrasted to the substantive version among some thinkers and it usually has ramifications for how society treats young persons in such areas as education and socialization and health care, but this conception has been criticized as well. John Rawls postulated the difference principle which argued that "inequalities are justified only if needed to improve the lot of the worst off, for example by giving the talented an incentive to create wealth".
Moral senses
There is general agreement that equality of opportunity is good for society, although there are diverse views about how it is good, since this is a value judgement. It is generally viewed as a positive political ideal in the abstract sense. In nations where equality of opportunity is absent, it can negatively impact economic growth, according to some views; one report in Al Jazeera suggested that Egypt, Tunisia and other Middle Eastern nations were stagnating economically in part because of a dearth of equal opportunity.
Practical considerations
Difficulties with implementation
There is general agreement that programs to bring about certain types of equality of opportunity can be difficult and that efforts to cause one result often have unintended consequences or cause other problems.
A government policy that requires equal treatment can pose problems for lawmakers. A requirement for the government to provide equal health care services for all citizens can be prohibitively expensive. If the government seeks equality of opportunity for citizens to get health care by rationing services using a maximization model to try to save money, new difficulties might emerge. For example, trying to ration health care by maximizing the "quality-adjusted years of life" might steer monies away from disabled persons even though they may be more deserving, according to one analysis. In another instance, BBC News questioned whether it was wise to ask female army recruits to undergo the same strenuous tests as their male counterparts since many women were being injured as a result.
Age discrimination can present vexing challenges for policymakers trying to implement equal opportunity. According to several studies, attempts to be equally fair to both a young and an old person are problematic because the older person has presumably fewer years left to live and it may make more sense for a society to invest greater resources in a younger person's health. Treating both persons equally while following the letter of the equality of opportunity seems unfair from a different perspective.
Efforts to achieve equal opportunity along one dimension can exacerbate unfairness in other dimensions. For example, public bathrooms: If for the sake of fairness the physical area of men's and women's bathrooms is equal, the overall result may be unfair since men can use urinals, which require less physical space. In other words, a more fair arrangement may be to allot more physical space for women's restrooms. The sociologist Harvey Molotch explained: "By creating men's and women's rooms of the same size, society guarantees that individual women will be worse off than individual men."
Another difficulty is that it is hard for a society to bring substantive equality of opportunity to every type of position or industry. If a nation focuses efforts on some industries or positions, then people with other talents may be left out. For example, in an example in the Stanford Encyclopedia of Philosophy, a warrior society might provide equal opportunity for all kinds of people to achieve military success through fair competition, but people with non-military skills such as farming may be left out.
Lawmakers have run into problems trying to implement equality of opportunity. In 2010 in Britain, a legal requirement "forcing public bodies to try to reduce inequalities caused by class disadvantage" was scrapped after much debate and replaced by a hope that organizations would try to focus more on "fairness" than "equality", as fairness is generally seen as a vaguer concept than equality but one easier for politicians to manage if they are seeking to avoid fractious debate. In New York City, mayor Ed Koch tried to find ways to maintain the "principle of equal treatment" while arguing against more substantive and abrupt transfer payments called minority set-asides.
Cultural diversity of lifestyles, value systems, traditions, and beliefs can explain differences in outcomes between subgroups.
Measures
Many economists measure the degree of equal opportunity with measures of economic mobility. For instance, Joseph Stiglitz asserts that with five economic divisions and full equality of opportunity, "20 percent of those in the bottom fifth would see their children in the bottom fifth. Denmark almost achieves that – 25 percent are stuck there. Britain, supposedly notorious for its class divisions, does only a little worse (30 percent). That means they have a 70 percent chance of moving up. The chances of moving up in America, though, are markedly smaller (only 58 percent of children born to the bottom group make it out), and when they do move up, they tend to move up only a little". Similar analyses can be performed for each economic division and overall. They all show how far from the ideal all industrialized nations are and how correlated measures of equal opportunity are with income inequality and wealth inequality. Equal opportunity has ramifications beyond income; the American Human Development Index, rooted in the capabilities approach pioneered by Amartya Sen, is used to measure opportunity across geographies in the U.S. using health, education, and standard of living outcomes.
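A worked sketch of the quintile-persistence calculation behind figures like these, using paired parent and child income quintiles; the ten observations below are synthetic placeholders, included only to show the arithmetic, not real mobility data.

```python
# Each pair is (parent_quintile, child_quintile), with 1 = bottom fifth.
# These ten observations are synthetic, purely to illustrate the arithmetic.
pairs = [(1, 1), (1, 2), (1, 1), (1, 4), (1, 3),
         (2, 1), (3, 3), (4, 5), (5, 5), (5, 4)]

bottom_parents = [child for parent, child in pairs if parent == 1]
stuck = sum(1 for child in bottom_parents if child == 1)

persistence = stuck / len(bottom_parents)   # share stuck in the bottom fifth
mobility = 1 - persistence                  # share that moves up

print(f"{persistence:.0%} stuck, {mobility:.0%} move up")
# Under perfect equality of opportunity the expected persistence is 20%,
# since a child's quintile would be independent of the parent's.
```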
Difficulties with measurement
The consensus view is that trying to measure equality of opportunity is difficult whether examining a single hiring decision or looking at groups over time.
Single instance. It is possible to reexamine the procedures governing a specific hiring decision, see if they were followed, and re-evaluate the selection by asking questions such as "Was it fair? Were fair procedures followed? Was the best applicant selected?". This is a judgment call and biases may enter into the minds of decision-makers. The determination of equality of opportunity in such an instance is based on mathematical probability: if equality of opportunity is in effect, then it is seen as fair if each of two applicants has a 50 percent chance of winning the job, that is, they both have equal chances to succeed (assuming of course that the person making the probability assessment is unaware of all variables – including valid ones such as talent or skill as well as arbitrary ones such as race or gender). However, it is hard to measure whether each applicant had a 50 percent chance based on the outcome.
Groups. When assessing equal opportunity for a type of job, company, industry or nation, statistical analysis is often done by looking at patterns and abnormalities, typically comparing subgroups with larger groups on a percentage basis. Averaging opportunities over subgroups makes it possible to determine whether there are statistically significant differences in outcomes between subgroups. For factors where blinded experiments are feasible, the equality or lack of equality of opportunity due to this factor can be determined up to statistical significance. While substantive equality of group outcomes can be measured by comparing statistically significant differences in subgroup outcomes, formal equality of opportunity does not require equal outcomes between groups. If equality of opportunity is violated, perhaps by discrimination which affects a subgroup or population over time, it is possible to make this determination using statistical analysis, but there are numerous difficulties involved. Nevertheless, entities such as city governments and universities have hired full-time professionals with knowledge of statistics to ensure compliance with equal opportunity regulations. For example, Colorado State University requires the director of its Office of Equal Opportunity to maintain extensive statistics on its employees by job category as well as minorities and gender. In Britain, Aberystwyth University collects information including the "representation of women, men, members of racial or ethnic minorities and people with disabilities amongst applicants for posts, candidates interviewed, new appointments, current staff, promotions and holders of discretionary awards" to comply with equal opportunity laws.
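One common way to carry out the kind of subgroup comparison described above is a two-proportion z-test on hiring rates, sketched below with invented counts. The normal approximation assumed here is only reasonable for fairly large samples, and a statistically significant difference does not by itself establish discrimination.

```python
from math import sqrt, erf

def two_proportion_z_test(hired_a, applied_a, hired_b, applied_b):
    """Compare hiring rates of two applicant groups (normal approximation)."""
    p_a, p_b = hired_a / applied_a, hired_b / applied_b
    pooled = (hired_a + hired_b) / (applied_a + applied_b)
    se = sqrt(pooled * (1 - pooled) * (1 / applied_a + 1 / applied_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Illustrative counts only: group A has 40 hires from 200 applicants,
# group B has 22 hires from 180 applicants.
z, p = two_proportion_z_test(40, 200, 22, 180)
print(f"z = {z:.2f}, p = {p:.3f}")
```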
It is difficult to prove unequal treatment: although statistical analysis can provide indications of problems, it is subject to conflicts over interpretation and methodological issues. For example, a study in 2007 by the University of Washington examined its treatment of women. Researchers collected statistics about female participation in numerous aspects of university life, including the percentage of women with full professorships (23 percent) and enrollment in programs such as nursing (90 percent) and engineering (18 percent). There is wide variation in how these statistics might be interpreted. For example, the 23 percent figure for women with full professorships could be compared to the proportion of women in the total population (presumably 50 percent), perhaps using census data, or it might be compared to the percentage of women with full professorships at competing universities. It might be used in an analysis of how many women applied for the position of full professor compared to how many women attained this position. Further, the 23 percent figure could be used as a benchmark or baseline figure as part of an ongoing longitudinal analysis, to be compared with future surveys to track progress over time. In addition, the strength of the conclusions is subject to statistical issues such as sample size and bias. For reasons such as these, there is considerable difficulty with most forms of statistical interpretation.
Statistical analysis of equal opportunity has been done using sophisticated examinations of computer databases. An analysis in 2011 by University of Chicago researcher Stefano Allesina examined 61,000 names of Italian professors by looking at the "frequency of last names", doing one million random drawings and he suggested that Italian academia was characterized by violations of equal opportunity practices as a result of these investigations. The last names of Italian professors tended to be similar more often than predicted by random chance. The study suggested that newspaper accounts showing that "nine relatives from three generations of a single-family (were) on the economics faculty" at the University of Bari were not aberrations, but indicated a pattern of nepotism throughout Italian academia.
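A minimal sketch of the sort of randomization test the Allesina analysis is described as using: compare the observed number of distinct surnames in a faculty against the distribution obtained from repeated random drawings out of a larger name pool. The surname lists and the test statistic below are hypothetical placeholders, not the data or exact procedure of the study.

```python
# Randomization ("random drawing") test for surname clustering. Fewer distinct
# surnames than expected under random drawing suggests clustering, a possible
# sign of nepotism. All names below are hypothetical placeholders.
import random

observed_names = ["Rossi", "Rossi", "Bianchi", "Rossi", "Ferrari", "Bianchi"]
national_pool = ["Rossi", "Bianchi", "Ferrari", "Russo", "Esposito",
                 "Colombo", "Ricci", "Marino", "Greco", "Bruno"] * 100

n = len(observed_names)
observed_distinct = len(set(observed_names))

trials = 100_000
count_as_extreme = 0
for _ in range(trials):
    sample = random.sample(national_pool, n)   # one random drawing of n surnames
    if len(set(sample)) <= observed_distinct:  # as few or fewer distinct names
        count_as_extreme += 1

p_value = count_as_extreme / trials
print(f"distinct surnames observed: {observed_distinct} of {n}")
print(f"Monte Carlo p-value: {p_value:.4f}")
```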
Substantive equality is typically measured by the criterion of equality of outcome for groups, although with difficulty. In one example, an analysis of relative equality of opportunity was based on outcomes, such as a case examining whether hiring decisions were fair regarding men versus women; the analysis was done using statistics based on average salaries for different groups. In another instance, a cross-sectional statistical analysis was conducted to see whether social class affected participation in the United States Armed Forces during the Vietnam War: a report in Time by the Massachusetts Institute of Technology suggested that soldiers came from a variety of social classes and that the principle of equal opportunity had worked, possibly because soldiers had been chosen by a lottery process for conscription. In college admissions, equality of outcome can be measured directly by comparing offers of admission given to different groups of applicants: for example, there have been reports in newspapers of discrimination against Asian Americans in college admissions in the United States, which suggest that Asian American applicants need higher grades and test scores to win admission to prestigious universities than other ethnic groups.
Marketplace considerations
Equality of opportunity has been described as a fundamental notion in business and commerce, and was described by economist Adam Smith as a basic economic precept. There has been research suggesting that "competitive markets will tend to drive out such discrimination", since employers or institutions which hire based on arbitrary criteria will be weaker as a result and not perform as well as firms that embrace equality of opportunity. Firms competing for overseas contracts have sometimes argued in the press for equal chances during the bidding process, such as when American oil corporations wanted equal shots at developing oil fields in Sumatra; and firms, seeing how fairness is beneficial while competing for contracts, can apply the lesson to other areas such as internal hiring and promotion decisions. A report in USA Today suggested that the goal of equal opportunity was "being achieved throughout most of the business and government labor markets because major employers pay based on potential and actual productivity".
Fair opportunity practices include measures taken by an organization to ensure efficiency, effectiveness, and fairness in the employment process. A basic definition of equality is the idea of equal treatment and respect. In job advertisements and descriptions, the fact that the employer is an equal opportunity employer is sometimes indicated by the abbreviations EOE or MFDV, which stands for Minority, Female, Disabled, Veteran. Analyst Ross Douthat in The New York Times suggested that equality of opportunity depends on a rising economy which brings new chances for upward mobility, and that greater equality of opportunity is more easily achieved during "times of plenty". Efforts to achieve equal opportunity can rise and recede, sometimes as a result of economic conditions or political choices. Empirical evidence from public health research also suggests that equality of opportunity is linked to better health outcomes in the United States and Europe.
History
According to professor David Christian of Macquarie University, an underlying Big History trend has been a shift from seeing people as resources to exploit towards seeing people as individuals to empower. According to Christian, in many ancient agrarian civilizations, roughly nine of every ten persons were peasants exploited by a ruling class. In the past thousand years, there has been a gradual movement in the direction of greater respect for equal opportunity as political structures based on generational hierarchies and feudalism broke down during the late Middle Ages and new structures emerged during the Renaissance. Monarchies were replaced by democracies: kings were replaced by parliaments and congresses. Slavery was generally abolished. The new entity of the nation state emerged with highly specialized parts, including corporations, laws, and new ideas about citizenship, while values about individual rights found expression in constitutions, laws, and statutes.
In the United States, one legal analyst suggested that the real beginning of the modern sense of equal opportunity was in the Fourteenth Amendment which provided "equal protection under the law". The amendment did not mention equal opportunity directly, but it helped undergird a series of later rulings which dealt with legal struggles, particularly by African Americans and later women, seeking greater political and economic power in the growing republic. In 1933, a congressional "Unemployment Relief Act" forbade discrimination "based on race, color, or creed". The Supreme Court's 1954 Brown v. Board of Education decision furthered government initiatives to end discrimination.
In 1961, President John F. Kennedy signed Executive Order 10925, which established a presidential committee on equal opportunity; it was soon followed by President Lyndon B. Johnson's Executive Order 11246. The Civil Rights Act of 1964 became the legal underpinning of equal opportunity in employment. Businesses and other organizations learned to comply with the rulings by specifying fair hiring and promotion practices and posting these policy notices on bulletin boards and in employee handbooks and manuals, as well as in training sessions and films. Courts dealt with issues about equal opportunities, such as the 1989 Wards Cove decision, in which the Supreme Court ruled that statistical evidence by itself was insufficient to prove racial discrimination. The Equal Employment Opportunity Commission was established, reviewing charges of discrimination which numbered in the tens of thousands annually during the 1990s. Some law practices specialized in employment law. The conflict between formal and substantive approaches manifested itself in backlashes, sometimes described as reverse discrimination, such as the Bakke case, in which a white male applicant to medical school sued after being denied admission because of a quota system preferring minority applicants. In 1990, the Americans with Disabilities Act prohibited discrimination against disabled persons, including cases of equal opportunity. In 2008, the Genetic Information Nondiscrimination Act prohibited employers from using genetic information when hiring, firing, or promoting employees.
Many countries have specific bodies tasked with looking at equality of opportunity issues. In the United States, for example, it is the Equal Employment Opportunity Commission; in Britain, there is the Equality of Opportunity Committee as well as the Equality and Human Rights Commission; in Canada, the Royal Commission on the Status of Women has "equal opportunity as its precept"; and in China, the Equal Opportunities Commission handles matters regarding ethnic prejudice. In addition, there have been political movements pushing for equal treatment, such as the Women's Equal Opportunity League which in the early decades of the twentieth century, pushed for fair treatment by employers in the United States. One of the group's members explained: Global initiatives such as the United Nations Sustainable Development Goal 5 and Goal 10 are also aimed at ensuring equal opportunities for women at all levels of decision making, and reducing inequalities of outcome.
Criticism
There is agreement that the concept of equal opportunity lacks a precise definition. While it generally describes "open and fair competition" with equal chances for achieving sought-after jobs or positions as well as an absence of discrimination, the concept is elusive with a "wide range of meanings". Formal equality is hard to measure, and implementation of substantive equality poses problems as well as disagreements about what to do.
There have been various criticisms directed at both the substantive and formal approaches. One account suggests that left-leaning thinkers who advocate equality of outcome fault even formal equality of opportunity because it "legitimates inequalities of wealth and income". John W. Gardner suggested several views: (1) that inequalities will always exist regardless of trying to erase them; (2) that bringing everyone "fairly to the starting line" fails to deal with the "destructive competitiveness that follows"; (3) that any equalities achieved will entail future inequalities. Substantive equality of opportunity has led to concerns that efforts to improve fairness "ultimately collapses into the different one of equality of outcome or condition".
Economist Larry Summers advocated focusing on equality of opportunity rather than equality of outcomes, and argued that the way to strengthen equal opportunity was to bolster public education. A contrasting report in The Economist criticized efforts to cast equality of opportunity and equality of outcome as opposite poles on a hypothetical ethical scale, such that equality of opportunity should be the "highest ideal" while equality of outcome was "evil". Rather, the report argued that any difference between the two types of equality was illusory and that both terms were highly interconnected. According to this argument, wealthier people have greater opportunities – wealth itself can be considered as "distilled opportunity" – and children of wealthier parents have access to better schools, health care, nutrition and so forth. Accordingly, people who endorse equality of opportunity may like the idea of it in principle, yet at the same time be unwilling to take the extreme steps or "titanic interventions" necessary to achieve real intergenerational equality. A slightly different view in The Guardian suggested that equality of opportunity was merely a "buzzword" used to sidestep the thornier political question of income inequality.
There is speculation that since equality of opportunity is only one of several sometimes-competing "justice norms", following equality of opportunity too strictly might cause problems in other areas. A hypothetical example was suggested: suppose wealthier people gave excessive amounts of campaign contributions, and suppose further that these contributions resulted in better regulations; laws limiting such contributions in the name of equal opportunity for all political participants might then have the unintended long-term consequence of making political decision-making lackluster and possibly hurting the groups they were trying to protect. Philosopher John Kekes makes a similar point in his book The Art of Politics, in which he suggests that there is a danger in elevating any one particular political good – including equality of opportunity – without balancing it against competing goods such as justice, property rights and others. Kekes advocated a balanced perspective, including a continuing dialog between cautionary elements and reform elements. A similar view was expressed by Ronald Dworkin in The Economist:
Economist Paul Krugman sees equality of opportunity as a "non-Utopian compromise" which works and is a "pretty decent arrangement" which varies from country to country. However, there are differing views such as by Matt Cavanagh, who criticised equality of opportunity in his 2002 book Against Equality of Opportunity. Cavanagh favored a limited approach of opposing specific kinds of discrimination as steps to help people get greater control over their lives.
Conservative thinker Dinesh D'Souza criticized equality of opportunity on the basis that "it is an ideal that cannot and should not be realized through the actions of the government" and added that "for the state to enforce equal opportunity would be to contravene the true meaning of the Declaration and to subvert the principle of a free society". D'Souza described how his parenting undermined equality of opportunity:
D'Souza argued that it was wrong for the government to try to bring his daughter down, or to force him to raise other people's children, but a counterargument is that there is a benefit to everybody, including D'Souza's daughter, to have a society with less anxiety about downward mobility, less class resentment, and less possible violence.
An argument similar to D'Souza's was raised in Anarchy, State, and Utopia by Robert Nozick, who wrote that the only way to achieve equality of opportunity was "directly worsening the situations of those more favored with opportunity, or by improving the situation of those less well-favored". Nozick gave an argument of two suitors competing to marry one "fair lady": X was plain while Y was better looking and more intelligent. If Y did not exist, then "fair lady" would have married X, but Y exists and so she marries Y. Nozick asks: "Does suitor X have a legitimate complaint against Y based on unfairness since Y did not earn his good looks or intelligence?". Nozick suggests that there are no grounds for complaint. Nozick argued against equality of opportunity because it violates the rights of property since the equal opportunity maxim interferes with an owner's right to do what he or she pleases with a property.
Property rights were a major component of the philosophy of John Locke and are sometimes referred to as "Lockean rights". The sense of the argument is along these lines: equal opportunity rules regarding, say, a hiring decision within a factory, made to bring about greater fairness, violate a factory owner's rights to run the factory as he or she sees best; it has been argued that a factory owner's right to property encompasses all decision-making within the factory as being part of those property rights. That some people's "natural assets" were unearned is irrelevant to the equation according to Nozick and he argued that people are nevertheless entitled to enjoy these assets and other things freely given by others.
Friedrich Hayek felt that luck was too much of a variable in economics, such that one can not devise a system with any kind of fairness when many market outcomes are unintended. By sheer chance or random circumstances, a person may become wealthy just by being in the right place and time and Hayek argued that it is impossible to devise a system to make opportunities equal without knowing how such interactions may play out. Hayek saw not only equality of opportunity, but all of social justice as a "mirage".
Some conceptions of equality of opportunity, particularly the substantive and level playing field variants, have been criticized on the basis that they assume people have similar genetic makeups. Other critics have suggested that social justice is more complex than mere equality of opportunity. Nozick made the point that what happens in society cannot always be reduced to competition for a coveted position, writing in 1974 that "life is not a race in which we all compete for a prize which someone has established", that there is "no unified race", and that there is no one person "judging swiftness".
See also
References
External links
United Kingdom
UK Government Women & Equality Unit
United States
U.S. Equal Employment Opportunity Commission (EEOC) (US) – the branch of the U.S. government that enforces equal opportunity laws in workplaces
Department of the Interior Office for Equal Opportunity (US)
Stanford Encyclopedia of Philosophy entry on Equality of Opportunity
Further reading
Dias Pereira, Rita (2022). "Genetic Advantage and Equality of Opportunity in Education: Two Definitions and an Empirical Illustration." Tinbergen Institute Discussion Paper, No. TI 2021-108/V, Tinbergen Institute, Amsterdam and Rotterdam.
Discrimination
Disability rights
Anti-racism
Equality rights
Egalitarianism
Liberalism
Social inequality
Right-wing politics
Identity politics | Equal opportunity | [
"Biology"
] | 9,011 | [
"Behavior",
"Aggression",
"Discrimination"
] |
355,852 | https://en.wikipedia.org/wiki/Dachau%20concentration%20camp | Dachau was one of the first concentration camps built by Nazi Germany and the longest-running one, opening on 22 March 1933. The camp was initially intended to intern Hitler's political opponents, who consisted of communists, social democrats, and other dissidents. It is located on the grounds of an abandoned munitions factory northeast of the medieval town of Dachau, northwest of Munich in the state of Bavaria, in southern Germany. After its opening by Heinrich Himmler, its purpose was enlarged to include forced labor, and eventually, the imprisonment of Jews, Romani, German and Austrian criminals, and, finally, foreign nationals from countries that Germany occupied or invaded. The Dachau camp system grew to include nearly 100 sub-camps, which were mostly work camps, and were located throughout southern Germany and Austria. The main camp was liberated by U.S. forces on 29 April 1945.
Prisoners lived in constant fear of brutal treatment and terror detention including standing cells, floggings, the so-called tree or pole hanging, and standing at attention for extremely long periods. There were 32,000 documented deaths at the camp, and thousands that are undocumented. Approximately 10,000 of the 30,000 prisoners were sick at the time of liberation.
In the postwar years, the Dachau facility served to hold SS soldiers awaiting trial. After 1948, it held ethnic Germans who had been expelled from eastern Europe and were awaiting resettlement, and also was used for a time as a United States military base during the occupation. It was finally closed in 1960.
There are several religious memorials within the Memorial Site, which is open to the public.
General overview
Dachau served as a prototype and model for the other German concentration camps that followed. Almost every community in Germany had members taken away to these camps. Newspapers continually reported "the removal of the enemies of the Reich to concentration camps." As early as 1935, a jingle went around: "Lieber Herr Gott, mach mich stumm, dass ich nicht nach Dachau komm'" ("Dear Lord God, make me dumb [silent], That I may not to Dachau come").
The camp's layout and building plans were developed by Commandant Theodor Eicke and were applied to all later camps. He had a separate, secure camp near the command center, which consisted of living quarters, administration and army camps. Eicke became the chief inspector for all concentration camps, responsible for organizing others according to his model.
The Dachau complex included the prisoners' camp, which occupied approximately 5 acres, and the much larger area of the SS training school, including barracks, factories, and other facilities, of around 20 acres.
The entrance gate used by prisoners carries the phrase "Arbeit macht frei" (, or "Work makes [one] free"; contextual English translation: "Work shall set you free"). This phrase was also used in several other concentration camps such as Theresienstadt, near Prague, and Auschwitz I.
Dachau was the concentration camp that was in operation the longest, from March 1933 to April 1945, nearly all twelve years of the Nazi regime. Dachau's close proximity to Munich, where Hitler came to power and where the Nazi Party had its official headquarters, made Dachau a convenient location. From 1933 to 1938, the prisoners were mainly German nationals detained for political reasons. After the Reichspogromnacht or Kristallnacht, 30,000 male Jewish citizens were deported to concentration camps. More than 10,000 of them were interned in Dachau alone. As the German military occupied other European states, citizens from across Europe were sent to concentration camps. Subsequently, the camp was used for prisoners of all sorts, from every nation occupied by the forces of the Third Reich.
In the postwar years, the camp continued in use. From 1945 through 1948, the camp was used by the Allies as a prison for SS officers awaiting trial. After 1948, when hundreds of thousands of ethnic Germans were expelled from eastern Europe, it held Germans from Czechoslovakia until they could be resettled. It also served as a military base for the United States, which maintained forces in the country. It was closed in 1960. At the insistence of survivors, various memorials have been constructed and installed here.
Demographic statistics vary but they are in the same general range. One source gives a general estimate of over 200,000 prisoners from more than 30 countries during Nazi rule, of whom two-thirds were political prisoners, including many Catholic priests, and nearly one-third were Jews. At least 25,613 prisoners are believed to have been murdered in the camp and almost another 10,000 in its subcamps, primarily from disease, malnutrition and suicide. In late 1944, a typhus epidemic occurred in the camp caused by poor sanitation and overcrowding, which caused more than 15,000 deaths. It was followed by an evacuation, in which large numbers of the prisoners died. Toward the end of the war, death marches to and from the camp caused the deaths of numerous unrecorded prisoners. After liberation, prisoners weakened beyond recovery by the starvation conditions continued to die. Two thousand cases of "the dread black typhus" had already been identified by 3 May, and the U.S. Seventh Army was "working day and night to alleviate the appalling conditions at the camp". Prisoners with typhus, a louse-borne disease with an incubation period from 12 to 18 days, were treated by the 116th Evacuation Hospital, while the 127th would be the general hospital for the other illnesses. There were 227 documented deaths among the 2,252 patients cared for by the 127th.
Over the 12 years of use as a concentration camp, the Dachau administration recorded the intake of 206,206 prisoners and deaths of 31,951. Crematoria were constructed to dispose of the deceased. Visitors may now walk through the buildings and view the ovens used to cremate bodies, which hid the evidence of many deaths. It is claimed that in 1942, more than 3,166 prisoners in weakened condition were transported to Hartheim Castle near Linz, and were executed by poison gas because they were deemed unfit. Between January and April 1945 11,560 detainees died at KZ Dachau according to a U.S. Army report of 1945, though the Dachau administration registered 12,596 deaths from typhus at the camp over the same period.
Dachau was the third concentration camp to be liberated by British or American Allied forces.
History
Establishment
After the takeover of Bavaria on 9 March 1933, Heinrich Himmler, then Chief of Police in Munich, began to speak with the administration of an unused gunpowder and munitions factory. He toured the site to see if it could be used for quartering protective-custody prisoners. The concentration camp at Dachau was opened 22 March 1933, with the arrival of about 200 prisoners from Stadelheim Prison in Munich and the Landsberg fortress (where Hitler had written Mein Kampf during his imprisonment). Himmler announced in the Münchner Neueste Nachrichten newspaper that the camp could hold up to 5,000 people, and described it as "the first concentration camp for political prisoners" to be used to restore calm to Germany. It became the first regular concentration camp established by the coalition government of the National Socialist German Worker's Party (Nazi Party) and the German National People's Party (dissolved on 6 July 1933).
Jehovah's Witnesses, homosexuals and emigrants were sent to Dachau after the 1935 passage of the Nuremberg Laws which institutionalized racial discrimination. In early 1937, the SS, using prisoner labor, initiated construction of a large complex capable of holding 6,000 prisoners. The construction was officially completed in mid-August 1938. More political opponents, and over 11,000 German and Austrian Jews were sent to the camp after the annexation of Austria and the Sudetenland in 1938. Sinti and Roma in the hundreds were sent to the camp in 1939, and over 13,000 prisoners were sent to the camp from Poland in 1940. Representatives of the International Committee of the Red Cross inspected the camp in 1935 and 1938 and documented the harsh conditions.
First deaths 1933: Investigation
Shortly after the SS was commissioned to supplement the Bavarian police overseeing the Dachau camp, the first reports of prisoner deaths at Dachau began to emerge. In April 1933, Josef Hartinger, an official from the Bavarian Justice Ministry and physician Moritz Flamm, part-time medical examiner, arrived at the camp to investigate the deaths in accordance with the Bavarian penal code. They noted many inconsistencies between the injuries on the corpses and the camp guards' accounts of the deaths. Over a number of months, Hartinger and Flamm uncovered clear evidence of murder and compiled a dossier of charges against Hilmar Wäckerle, the SS commandant of Dachau, Werner Nürnbergk, the camp doctor, and Josef Mutzbauer, the camp's chief administrator (Kanzleiobersekretär). In June 1933, Hartinger presented the case to his superior, Bavarian State Prosecutor Karl Wintersberger. Initially supportive of the investigation, Wintersberger became reluctant to submit the resulting indictment to the Justice Ministry, increasingly under the influence of the SS. Hartinger reduced the scope of the dossier to the four clearest cases and Wintersberger signed it, after first notifying Himmler as a courtesy. The killings at Dachau suddenly stopped (temporarily), Wäckerle was transferred to Stuttgart and replaced by Theodor Eicke. The indictment and related evidence reached the office of Hans Frank, the Bavarian Justice Minister, but was intercepted by Gauleiter Adolf Wagner and locked away in a desk only to be discovered by the US Army.
In 1934, both Hartinger and Wintersberger were transferred to provincial positions. Flamm was no longer employed as a medical examiner and survived two attempts on his life before his suspicious death in the same year. Flamm's thoroughly gathered and documented evidence within Hartinger's indictment ensured that it achieved convictions of senior Nazis at the Nuremberg trials in 1947. Wintersberger's complicit behaviour is documented in his own evidence to the Pohl Trial.
Forced labor
The prisoners of Dachau concentration camp originally were to serve as forced labor for a munitions factory and to expand the camp. It was used as a training center for the SS-Totenkopfverbände guards and was a model for other concentration camps. The camp was roughly rectangular in shape. The prisoners' entrance was secured by an iron gate with the motto "Arbeit macht frei" ("Work will make you free"). This reflected Nazi propaganda, which presented concentration camps as labor and re-education camps. This was their original purpose, but the focus soon shifted to using forced labor as a method of torture and murder. The original slogan was left on the gates.
As of 1938, the procedure for new arrivals occurred at the Schubraum, where prisoners were to hand over their clothing and possessions. One former Luxembourgish prisoner, Albert Theis, reflected about the room, "There we were stripped of all our clothes. Everything had to be handed over: money, rings, watches. One was now stark naked".
The camp included an administration building that contained offices for the Gestapo trial commissioner, SS authorities, the camp leader and his deputies. These administration offices consisted of large storage rooms for the personal belongings of prisoners, the bunker, roll-call square where guards would also inflict punishment on prisoners (especially those who tried to escape), the canteen where prisoners served SS men with cigarettes and food, the museum containing plaster images of prisoners who suffered from bodily defects, the camp office, the library, the barracks, and the infirmary, which was staffed by prisoners who had previously held occupations such as physicians or army surgeons.
Operation Barbarossa
Over 4,000 Soviet prisoners of war were murdered by the Dachau commandant's guard at the SS shooting range located at Hebertshausen, two kilometers from the main camp, in the years 1941/1943. These murders were a clear violation of the provisions laid down in the Geneva Convention for prisoners of war. The SS used the cynical term Sonderbehandlung ("special treatment") for these criminal executions. The first executions of the Soviet prisoners of war at the Hebertshausen shooting range took place on 25 November 1941.
After 1942, the number of prisoners being held at the camp continued to exceed 12,000. Dachau originally held communists, leading socialists and other "enemies of the state" in 1933 but, over time, the Nazis began to send German Jews to the camp. In the early years of imprisonment, Jews were offered permission to emigrate overseas if they "voluntarily" gave their property to enhance Hitler's public treasury. Once Austria was annexed and Czechoslovakia was dissolved, the citizens of both countries became the next prisoners at Dachau. In 1940, Dachau became filled with Polish prisoners, who continued to be the majority of the prisoner population until Dachau was officially liberated.
The prisoner enclosure at the camp was heavily guarded to ensure that no prisoners escaped. A 3-metre-wide (10 ft) no-man's land was the first marker of confinement for prisoners; an area which, upon entry, would elicit lethal gunfire from guard towers. Guards are known to have tossed inmates' caps into this area, resulting in the death of the prisoners when they attempted to retrieve the caps. Despondent prisoners committed suicide by entering the zone. A four-foot-deep and eight-foot-broad (1.2 × 2.4 m) creek, connected with the river Amper, lay on the west side between the "neutral zone" and the electrically charged barbed-wire fence which surrounded the entire prisoner enclosure.
In August 1944 a women's camp opened inside Dachau. In the last months of the war, the conditions at Dachau deteriorated. As Allied forces advanced toward Germany, the Germans began to move prisoners from concentration camps near the front to more centrally located camps. They hoped to prevent the liberation of large numbers of prisoners. Transports from the evacuated camps arrived continuously at Dachau. After days of travel with little or no food or water, the prisoners arrived weak and exhausted, often near death. Typhus epidemics became a serious problem as a result of overcrowding, poor sanitary conditions, insufficient provisions, and the weakened state of the prisoners.
Owing to repeated transports from the front, the camp was constantly overcrowded and the hygiene conditions were beneath human dignity. Starting from the end of 1944 up to the day of liberation, 15,000 people died, about half of all the prisoners held at KZ Dachau. Five hundred Soviet POWs were executed by firing squad. The first shipment of women came from Auschwitz-Birkenau.
Final days
As late as 19 April 1945, prisoners were sent to KZ Dachau; on that date a freight train from Buchenwald carrying nearly 4,500 prisoners was diverted to Nammering. SS troops and police confiscated food and water that local townspeople tried to give to the prisoners. Nearly three hundred dead bodies were ordered removed from the train and carried to a ravine some distance away. The 524 prisoners who had been forced to carry the dead to this site were then shot by the guards, and buried along with those who had died on the train. Nearly 800 bodies went into this mass grave. The train continued on to KZ Dachau.
During April 1945 as U.S. troops drove deeper into Bavaria, the commander of KZ Dachau suggested to Himmler that the camp be turned over to the Allies. Himmler, in signed correspondence, prohibited such a move, adding that "No prisoners shall be allowed to fall into the hands of the enemy alive."
On 24 April 1945, just days before U.S. troops arrived at the camp, the commandant and a strong guard forced between 6,000 and 7,000 surviving inmates on a death march from Dachau south to Eurasburg, then eastwards towards the Tegernsee; the marchers were liberated two days after Hitler's death by a Nisei U.S. Army artillery battalion. Any prisoners who could not keep up on the six-day march were shot. Many others died of exhaustion, hunger and exposure. Months later a mass grave containing 1,071 prisoners was found along the route.
Though at the time of liberation the death rate had peaked at 200 per day, after the liberation by U.S. forces the rate eventually fell to between 50 and 80 deaths per day from malnutrition and disease. In addition to the direct abuse of the SS and the harsh conditions, people died from typhus epidemics and starvation. The number of inmates had peaked in 1944 with transports from evacuated camps in the east (such as Auschwitz), and the resulting overcrowding led to an increase in the death rate.
Main camp
Purpose
Dachau was opened in March 1933. The press statement given at the opening stated:
On Wednesday the first concentration camp is to be opened in Dachau with an accommodation for 5000 people. 'All Communists and—where necessary—Reichsbanner and Social Democratic functionaries who endanger state security are to be concentrated here, as in the long run it is not possible to keep individual functionaries in the state prisons without overburdening these prisons, and on the other hand these people cannot be released because attempts have shown that they persist in their efforts to agitate and organize as soon as they are released.
Whatever the publicly stated purpose of the camp, the SS men who arrived there on 11 May 1933 were left in no illusion as to its real purpose by the speech given on that day by Johann-Erasmus Freiherr von Malsen-Ponickau:
Comrades of the SS! You all know what the Fuehrer has called us to do. We have not come here for human encounters with those pigs in there. We do not consider them human beings, as we are, but as second-class people. For years they have been able to continue their criminal existence. But now we are in power. If those pigs had come to power, they would have cut off all our heads. Therefore we have no room for sentimentalism. If anyone here cannot bear to see the blood of comrades, he does not belong and had better leave. The more of these pig dogs we strike down, the fewer we need to feed.
Between the years 1933 and 1945, more than 3.5 million Germans were imprisoned in such concentration camps or prison for political reasons. Approximately 77,000 Germans were killed for one or another form of resistance by Special Courts, courts-martial, and the civil justice system. Many of these Germans had served in government, the military, or in civil positions, which were considered to enable them to engage in subversion and conspiracy against the Nazis.
Organization
The camp was divided into two sections: the camp area and the crematorium. The camp area consisted of 32 barracks, including one for clergy imprisoned for opposing the Nazi regime and one reserved for medical experiments. The courtyard between the prison and the central kitchen was used for the summary execution of prisoners. The camp was surrounded by an electrified barbed-wire fence, a ditch, and a wall with seven guard towers.
In early 1937, the SS, using prisoner labor, initiated construction of a large complex of buildings on the grounds of the original camp. The construction was officially completed in mid-August 1938 and the camp remained essentially unchanged and in operation until 1945. A crematorium that was next to, but not directly accessible from within the camp, was erected in 1942. KZ Dachau was therefore the longest running concentration camp of the Third Reich. The Dachau complex included other SS facilities beside the concentration camp—a leader school of the economic and civil service, the medical school of the SS, etc. The camp at that time was called a "protective custody camp," and occupied less than half of the area of the entire complex.
Medical experimentation
Hundreds of prisoners suffered and died, or were executed, in medical experiments conducted at KZ Dachau, of which Sigmund Rascher was in charge. Hypothermia experiments involved exposure to vats of icy water or being strapped down naked outdoors in freezing temperatures. Attempts at reviving the subjects included scalding baths, and forcing naked women to have sexual intercourse with the unconscious victim. Nearly 100 prisoners died during these experiments. The original records of the experiments were destroyed "in an attempt to conceal the atrocities".
Extensive communication between the investigators and Heinrich Himmler, head of the SS, documents the experiments.
During 1942, "high altitude" experiments were conducted. Victims were subjected to rapid decompression to the low pressures found at extreme altitudes, and experienced spasmodic convulsions, agonal breathing, and eventual death.
Demographics
The camp was originally designed for holding German and Austrian political prisoners and Jews, but in 1935 it began to be used also for ordinary criminals. Inside the camp there was a sharp division between the two groups of prisoners; those who were there for political reasons and therefore wore a red tag, and the criminals, who wore a green tag. The political prisoners who were there because they disagreed with Nazi Party policies, or with Hitler, naturally did not consider themselves criminals. Dachau was used as the chief camp for Christian (mainly Catholic) clergy who were imprisoned for not conforming with the Nazi Party line.
During the war, other nationals were transferred to the camp, including French nationals; Poles in 1940; people from the Balkans, Czechs, and Yugoslavs in 1941; and Russians in 1942.
Prisoners were divided into categories. At first, they were classified by the nature of the crime for which they were accused, but eventually were classified by the specific authority-type under whose command a person was sent to camp. Political prisoners who had been arrested by the Gestapo wore a red badge, "professional" criminals sent by the Criminal Courts wore a green badge, Cri-Po prisoners arrested by the criminal police wore a brown badge, "work-shy and asocial" people sent by the welfare authorities or the Gestapo wore a black badge, Jehovah's Witnesses arrested by the Gestapo wore a violet badge, homosexuals sent by the criminal courts wore a pink badge, emigrants arrested by the Gestapo wore a blue badge, "race polluters" arrested by the criminal court or Gestapo wore badges with a black outline, second-termers arrested by the Gestapo wore a bar matching the color of their badge, "idiots" wore a white armband with the label Blöd (Stupid), Romani wore a black triangle, and Jews, whose incarceration in the Dachau concentration camp dramatically increased after Kristallnacht, wore a yellow badge, combined with another color.
The average number of Germans in the camp during the war was . Just before the liberation many German prisoners were evacuated, but some of these Germans died during the evacuation transport. Evacuated prisoners included such prominent political and religious figures as Martin Niemöller, Kurt von Schuschnigg, Édouard Daladier, Léon Blum, Franz Halder, and Hjalmar Schacht.
Clergy
In an effort to counter the strength and influence of spiritual resistance, Nazi security services monitored clergy very closely. Priests were frequently denounced, arrested and sent to concentration camps, often simply on the basis of being "suspected of activities hostile to the State" or that there was reason to "suppose that his dealings might harm society". Despite SS hostility to religious observance, the Vatican and German bishops successfully lobbied the regime to concentrate clergy at one camp and obtained permission to build a chapel, for the priests to live communally and for time to be allotted to them for their religious and intellectual activity. Priest barracks at Dachau were established in Blocks 26, 28 and 30, though only temporarily. Block 26 became the international block and Block 28 was reserved for Poles, the most numerous group.
Of a total of 2,720 clergy recorded as imprisoned at Dachau, the overwhelming majority, some 2,579 (or 94.88%), were Catholic. Among the other denominations, there were 109 Protestants, 22 Greek Orthodox, 8 Old Catholics and Mariavites, and 2 Muslims. In his Dachau: The Official History 1933–1945, Paul Berben noted that R. Schnabel's 1966 investigation, Die Frommen in der Hölle ("The Pious Ones in Hell"), found an alternative total of 2,771 and included the fate of all the clergy listed, with 692 noted as deceased and 336 sent out on "invalid trainloads" and therefore presumed dead. Over 400 German priests were sent to Dachau. Total numbers incarcerated are nonetheless difficult to ascertain, for some clergy were not recognised as such by the camp authorities, and some—particularly Poles—did not wish to be identified as such, fearing they would be mistreated.
The Nazis introduced a racial hierarchy—keeping Poles in harsh conditions, while favoring German priests. 697 Poles arrived in December 1941, and a further 500 of mainly elderly clergy arrived in October the following year. Inadequately clothed for the bitter cold, of this group, only 82 survived. A large number of Polish priests were chosen for Nazi medical experiments. In November 1942, 20 were given phlegmons. 120 were used by Dr Schilling for malaria experiments between July 1942 and May 1944. Several Poles met their deaths with the "invalid trains" sent out from the camp, others were liquidated in the camp and given bogus death certificates. Some died of cruel punishment for misdemeanors—beaten to death or run to exhaustion.
Staff
The camp staff consisted mostly of male SS, although 19 female guards served at Dachau as well, most of them until liberation. Sixteen have been identified including Fanny Baur, Leopoldine Bittermann, Ernestine Brenner, Anna Buck, Rosa Dolaschko, Maria Eder, Rosa Grassmann, Betty Hanneschaleger, Ruth Elfriede Hildner, Josefa Keller, Berta Kimplinger, Lieselotte Klaudat, Theresia Kopp, Rosalie Leimboeck, and Thea Miesl. Female guards were also assigned to the Augsburg Michelwerke, Burgau, Kaufering, Mühldorf, and Munich Agfa Camera Werke subcamps. In mid-April 1945, female subcamps at Kaufering, Augsburg, and Munich were closed, and the SS stationed the women at Dachau. Several Norwegians worked as guards at the Dachau camp.
In the major Dachau war crimes case (United States of America v. Martin Gottfried Weiss et al.), forty-two officials of Dachau were tried from November to December 1945. All were found guilty—thirty-six of the defendants were sentenced to death on 13 December 1945, of whom 23 were hanged on 28–29 May 1946, including the commandant, SS-Obersturmbannführer Martin Gottfried Weiss, SS-Obersturmführer Friedrich Wilhelm Ruppert and camp doctors Karl Schilling and Fritz Hintermayer. Camp commandant Weiss admitted in affidavit testimony that most of the deaths at Dachau during his administration were due to "typhus, TB, dysentery, pneumonia, pleurisy, and body weakness brought about by lack of food." His testimony also admitted to deaths by shootings, hangings and medical experiments. Ruppert ordered and supervised the deaths of innumerable prisoners at Dachau main and subcamps, according to the War Crimes Commission official trial transcript. He testified about hangings, shootings and lethal injections, but did not admit to direct responsibility for any individual deaths. An anonymous Dutch prisoner contended that British Special Operations Executive (SOE) agent Noor Inayat Khan was cruelly beaten by SS officer Wilhelm Ruppert before being shot from behind; the beating may have been the actual cause of her death.
Satellite camps and sub-camps
Satellite camps under the authority of Dachau were established in the summer and autumn of 1944 near armaments factories throughout southern Germany to increase war production. Dachau alone had more than 30 large subcamps, and hundreds of smaller ones, in which over 30,000 prisoners worked almost exclusively on armaments.
Overall, the Dachau concentration camp system included 123 sub-camps and Kommandos which were set up in 1943 when factories were built near the main camp to make use of forced labor of the Dachau prisoners. Out of the 123 sub-camps, eleven of them were called Kaufering, distinguished by a number at the end of each. All Kaufering sub-camps were set up to specifically build three underground factories (Allied bombing raids made it necessary for them to be underground) for a project called Ringeltaube (wood pigeon), which planned to be the location in which the German jet fighter plane, Messerschmitt Me 262, was to be built. In the last days of war, in April 1945, the Kaufering camps were evacuated and around 15,000 prisoners were sent up to the main Dachau camp. Typhus alone was estimated to have caused 15,000 deaths between December 1944 and April 1945. "Within the first month after the arrival of the American troops, 10,000 prisoners were treated for malnutrition and kindred diseases. In spite of this one hundred prisoners died each day during the first month from typhus, dysentery or general weakness".
As U.S. Army troops neared the Dachau sub-camp at Landsberg on 27 April 1945, the SS officer in charge ordered that 4,000 prisoners be murdered. Windows and doors of their huts were nailed shut. The buildings were then doused with gasoline and set afire. Prisoners who were naked or nearly so were burned to death, while some managed to crawl out of the buildings before dying. Earlier that day, as Wehrmacht troops withdrew from Landsberg am Lech, townspeople hung white sheets from their windows. Infuriated SS troops dragged German civilians from their homes and hanged them from trees.
Liberation
Main camp
As the Allies began to advance on Nazi Germany, the SS began to evacuate the first concentration camps in summer 1944. Thousands of prisoners were killed before the evacuation due to being ill or unable to walk. At the end of 1944, the overcrowding of camps began to take its toll on the prisoners. The unhygienic conditions and the supplies of food rations became disastrous. In November a typhus fever epidemic broke out that took thousands of lives.
In the second phase of the evacuation, in April 1945, Himmler gave direct evacuation routes for remaining camps. Prisoners who were from the northern part of Germany were to be directed to the Baltic and North Sea coasts to be drowned. The prisoners from the southern part were to be gathered in the Alps, which was the location in which the SS wanted to resist the Allies. On 28 April 1945, an armed revolt took place in the town of Dachau. Both former and escaped concentration camp prisoners and a renegade Volkssturm (civilian militia) company took part. At about 8:30 am the rebels occupied the Town Hall. The SS gruesomely suppressed the revolt within a few hours.
Being fully aware that Germany was about to be defeated in World War II, the SS invested its time in removing evidence of the crimes it committed in the concentration camps. They began destroying incriminating evidence in April 1945 and planned on murdering the prisoners using codenames "Wolke A-I" (Cloud A-1) and "Wolkenbrand" (Cloud fire). However, these plans were not carried out. In mid-April, plans to evacuate the camp started by sending prisoners toward Tyrol. On 26 April, over 10,000 prisoners were forced to leave the Dachau concentration camp on foot, in trains, or in trucks. The largest group of some 7,000 prisoners was driven southward on a foot-march lasting several days. More than 1,000 prisoners did not survive this march. The evacuation transports cost many thousands of prisoners their lives.
On 26 April 1945 prisoner Karl Riemer fled the Dachau concentration camp to get help from American troops, and on 28 April Victor Maurer, a representative of the International Red Cross, negotiated an agreement to surrender the camp to U.S. troops. That night a secretly formed International Prisoners Committee took over the control of the camp. Units of the 3rd Battalion, 157th Infantry Regiment, 45th Infantry Division, commanded by Lieutenant Colonel Felix L. Sparks, were ordered to secure the camp. On 29 April Sparks led part of his battalion as they entered the camp over a side wall. At about the same time, Brigadier General Henning Linden led soldiers of the 222nd Infantry Regiment, 42nd (Rainbow) Infantry Division, including his aide, Lieutenant William Cowling, to accept the formal surrender of the camp from German Lieutenant Heinrich Wicker at an entrance between the camp and the compound for the SS garrison. Linden was traveling with Marguerite Higgins and other reporters; as a result, Linden's detachment generated international headlines by accepting the surrender of the camp. More than 30,000 Jews and political prisoners were freed, and since 1945 adherents of the 42nd and 45th Division versions of events have argued over which unit was the first to liberate Dachau.
Satellite camps liberation
The first Dachau subcamp discovered by advancing Allied forces was Kaufering IV by the 12th Armored Division on 27 April 1945. Subcamps liberated by the 12th Armored Division included: Erpting, Schrobenhausen, Schwabing, Langerringen, Türkheim, Lauingen, Schwabach, Germering.
During the liberation of the sub-camps surrounding Dachau, advance scouts of the U.S. Army's 522nd Field Artillery Battalion, a segregated battalion consisting of Nisei, second-generation Japanese Americans, liberated the 3,000 prisoners of the "Kaufering IV Hurlach" slave labor camp. Persico describes an Office of Strategic Services (OSS) team (code name LUXE) leading Army Intelligence to a "Camp IV" on 29 April. "They found the camp afire and a stack of some four hundred bodies burning ... American soldiers then went into Landsberg and rounded up all the male civilians they could find and marched them out to the camp. The former commandant was forced to lie amidst a pile of corpses. The male population of Landsberg was then ordered to walk by, and ordered to spit on the commandant as they passed. The commandant was then turned over to a group of liberated camp survivors". The 522nd's personnel later discovered the survivors of a death march headed generally southwards from the Dachau main camp to Eurasburg, then eastwards towards the Austrian border on 2 May, just west of the town of Waakirchen.
Weather at the time of liberation was unseasonably cool and temperatures trended down through the first two days of May; on 2 May, the area received a snowstorm, with snow falling at nearby Munich. Proper clothing was still scarce, and film footage from the time (as seen in The World at War) shows naked, gaunt people either wandering in snow or dead under it.
Due to the number of sub-camps over a large area that comprised the Dachau concentration camp complex, many Allied units have been officially recognized by the United States Army Center of Military History and the United States Holocaust Memorial Museum as liberating units of Dachau, including:
the 4th Infantry Division, 36th Infantry Division, 42nd Infantry Division, 45th Infantry Division, 63rd Infantry Division, 99th Infantry Division, 103rd Infantry Division, 10th Armored Division, 12th Armored Division, 14th Armored Division, 20th Armored Division, and the 101st Airborne Division.
Killing of camp guards
American troops killed some of the camp guards after they had surrendered. The number is disputed, as some were killed in combat, some while attempting to surrender, and others after their surrender was accepted. In 1989, Felix L. Sparks, by then a brigadier general, who had commanded a battalion present at the liberation, stated:
An Inspector General report resulting from a US Army investigation conducted between 3 and 8 May 1945—titled "American Army Investigation of Alleged Mistreatment of German Guards at Dachau"—found that 21 plus "a number" of presumed SS men were killed, with others being wounded after their surrender had been accepted. In addition, 25 to 50 SS guards were estimated to have been killed by the liberated prisoners. Lee Miller visited the camp just after liberation, and photographed several guards who were killed by soldiers or prisoners.
According to Sparks, court-martial charges were drawn up against him and several other men under his command, but General George S. Patton, who had recently been appointed military governor of Bavaria, chose to dismiss the charges.
Colonel Charles L. Decker, an acting deputy judge advocate, concluded in late 1945 that, while war crimes had been committed at Dachau by Germany, "Certainly, there was no such systematic criminality among United States forces as pervaded the Nazi groups in Germany."
American troops also forced local citizens to the camp to see for themselves the conditions there and to help bury the dead. Many local residents were shocked about the experience and claimed no knowledge of the activities at the camp.
Post-liberation Easter
6 May 1945 (23 April on the Orthodox calendar) was the day of Pascha, Orthodox Easter. In a cell block used by Catholic priests to say daily Mass, several Greek, Serbian and Russian priests and one Serbian deacon, wearing makeshift vestments made from towels of the SS guard, gathered with several hundred Greek, Serbian and Russian prisoners to celebrate the Paschal Vigil. A prisoner named Rahr described the scene:
There is a Russian Orthodox chapel at the camp today, and it is well known for its icon of Christ leading the prisoners out of the camp gates.
After liberation
Authorities worked night and day to alleviate conditions at the camp immediately following the liberation as an epidemic of black typhus swept through the prisoner population. Two thousand cases had already been reported by 3 May.
By October 1945, the former camp was being used by the U.S. Army as a place of confinement for war criminals, the SS and important witnesses. It was also the site of the Dachau Trials for German war criminals, a site chosen for its symbolism. In 1948, the Bavarian government established housing for refugees on the site, and this remained for many years. Among those held in the Dachau internment camp set up under the U.S. Army were Elsa Ehrich, Maria Mandl, and Elisabeth Ruppert.
The Kaserne quarters and other buildings used by the guards and trainee guards were converted and served as the Eastman Barracks, an American military post. After the closure of the Eastman Barracks in 1974, these areas are now occupied by the Bavarian Bereitschaftspolizei (rapid response police unit).
Deportation of Soviet nationals
By January 1946, 18,000 members of the SS were being confined at the camp, along with an additional 12,000 persons, including deserters from the Russian army and a number who had been captured in German Army uniform. When 271 of the Russian deserters were to be loaded onto trains returning them to Russian-controlled lands, as agreed at the Yalta Conference, the occupants of two barracks rioted and barricaded themselves inside. The first barracks was cleared without too much trouble, but those in the second building set fire to it, tore off their clothing in an effort to frustrate the guards, and linked arms to resist being removed from the building. The American soldiers used tear gas before rushing the barracks, only to find that many of the inmates had killed themselves.
The American services newspaper Stars and Stripes reported:
“The GIs quickly cut down most of those who had hanged themselves from the rafters. Those still conscious were screaming in Russian, pointing first at the guns of the guards, then at themselves, begging us to shoot.”
Ten of the inmates killed themselves during the riot, while another 21 attempted suicide, apparently with razor blades. Many had "cracked heads" inflicted by the 500 American guards attempting to bring the situation under control. One of those injured later died in a hospital. The New York Times reported on the death with the headline, "Russian Traitor Dies of Wounds".
List of personnel
Commandants
SS-Standartenführer Hilmar Wäckerle (22 March 1933 – 26 June 1933)
SS-Gruppenführer Theodor Eicke (26 June 1933 – 4 July 1934)
SS-Oberführer (4 July 1934 – 22 October 1934)
SS-Brigadeführer Berthold Maack (22 October 1934 – 12 January 1935)
SS-Oberführer Heinrich Deubel (12 January 1935 – 31 March 1936)
SS-Oberführer Hans Loritz (31 March 1936 – 7 January 1939)
SS-Hauptsturmführer Alexander Piorkowski (7 January 1939 – 2 January 1942)
SS-Obersturmbannführer Martin Weiß (3 January 1942 – 30 September 1943)
SS-Hauptsturmführer Eduard Weiter (30 September 1943 – 26 April 1945)
SS-Obersturmbannführer Martin Weiß (26 April 1945 – 28 April 1945)
Other staff
Adolf Eichmann (29 January 1934 – October 1934)
Rudolf Höss (1934–1938)
Max Kögel (1937–1938)
SS-Untersturmführer Hans Steinbrenner (1905–1964), brutal guard who greeted new arrivals with his improvised "Welcome Ceremony".
SS-Obergruppenführer Gerhard Freiherr von Almey, half-brother of Ludolf von Alvensleben. Executed in 1955, in Moscow.
Johannes Heesters (visited the camp and entertained the SS officers; he was also given tours)
Otto Rahn (1937)
SS-Untersturmführer Johannes Otto
SS-Untersturmführer Heinrich Wicker, killed in the Dachau liberation reprisals
SS-Obersturmbannführer Johann Kantschuster was the arrest commandant in Dachau (1933–1939) and went on to become camp commandant at Fort Breendonk, Belgium
SS-Sturmbannführer Robert Erspenmüller, first warden of the guards and right-hand man of Hilmar Wäckerle. Disagreed with Eicke and was transferred away.
SS and civilian doctors
Dr. Werner Nuernbergk – First camp doctor, escaped charges for falsifying death certificates in 1933
SS-Untersturmführer Dr. Hans Eisele – (13 March 1912 – 3 May 1967) – Sentenced to death, but reprieved and released in 1952. Fled to Egypt after new accusations in 1958.
SS-Obersturmführer Dr. Fritz Hintermayer – (28 October 1911 – 29 May 1946) – Executed by the Allies
Dr. Ernst Holzlöhner – (23 February 1899 – 14 June 1945) – Committed suicide
SS-Hauptsturmführer Dr. Fridolin Karl Puhr – (30 April 1913 – 31 May 1957) – Sentenced to death, later commuted to 10 years' imprisonment
SS-Untersturmführer Dr. Sigmund Rascher – (12 February 1909 – 26 April 1945) – Executed by the SS
Dr. Claus Schilling – (25 July 1871 – 28 May 1946) – Executed by the Allies
SS-Sturmbannführer Dr. Horst Schumann – (11 May 1906 – 5 May 1983) – Escaped to Ghana, later extradited to West Germany
SS-Obersturmführer Dr. Helmuth Vetter – (21 March 1910 – 2 February 1949) – Executed by the Allies
SS-Sturmbannführer Dr. Wilhelm Witteler – (20 April 1909 – 13 May 1993) – Sentenced to death, later commuted to 20 years' imprisonment
SS-Sturmbannführer Dr. Waldemar Wolter – (19 May 1908 – 28 May 1947) – Executed by the Allies
Memorial
Between 1945 and 1948 when the camp was handed over to the Bavarian authorities, many accused war criminals and members of the SS were imprisoned at the camp. Owing to the severe refugee crisis mainly caused by the expulsions of ethnic Germans, the camp was used from late 1948 to house 2000 Germans from Czechoslovakia (mainly from the Sudetenland). This settlement was called Dachau-East and remained until the mid-1960s. During this time, former prisoners banded together to erect a memorial on the site of the camp. The display, which was reworked in 2003, follows the path of new arrivals to the camp. Two of the barracks have been rebuilt and one shows a cross-section of the entire history of the camp since the original barracks had to be torn down due to their poor condition when the memorial was built. The other 30 barracks are indicated by low cement curbs filled with pebbles.
In media
In his 2013 autobiography, Moose: Chapters from My Life, in the chapter entitled, "Dachau", author Robert B. Sherman chronicles his experiences as an American Army serviceman during the initial hours of Dachau's liberation.
In Lewis Black's first book, Nothing's Sacred, he mentions visiting the camp as part of his tour of Europe and how it looked all cleaned up and spiffy, "like some delightful holiday camp", and only the crematorium building showed any sign of the horror that went on there.
In Maus, Vladek describes his time interned at Dachau, among his time at other concentration camps. He describes the journey to Dachau in over-crowded trains, trading rations for other goods and favors to stay alive, and contracting typhus.
Frontline: "Memory of the Camps" (7 May 1985, Season 3, Episode 18), is a 56-minute television documentary that addresses Dachau and other Nazi concentration camps
See also
Karl von Eberstein
List of Nazi concentration camps
List of subcamps of Dachau
Notes
References
Bibliography
External links
Video Footage showing the Liberation of Dachau
Concentration camps of Nazi Germany: illustrated history on YouTube
Dachau camp prisoner testimonies page, 041940.pl
"The Angel of Dachau". – Pope Francis declares concentration camp priest a martyr – CNA
"Traces of Evil". Illustrative History of Dachau and Environs
Buildings and structures in Dachau (district)
Tourist attractions in Munich
1933 establishments in Germany
Tourist attractions in Bavaria
World War II museums in Germany
World War II sites in Germany
World War II memorials in Germany
Museums in Bavaria
Articles containing video clips
Medical experimentation on prisoners of war | Dachau concentration camp | [
"Chemistry",
"Biology"
] | 9,736 | [
"Medical experimentation on prisoners of war",
"Biological warfare"
] |
355,859 | https://en.wikipedia.org/wiki/Epoxide | In organic chemistry, an epoxide is a cyclic ether, where the ether forms a three-atom ring: two atoms of carbon and one atom of oxygen. This triangular structure has substantial ring strain, making epoxides highly reactive, more so than other ethers. They are produced on a large scale for many applications. In general, low molecular weight epoxides are colourless and nonpolar, and often volatile.
Nomenclature
A compound containing the epoxide functional group can be called an epoxy, epoxide, oxirane, and ethoxyline. Simple epoxides are often referred to as oxides. Thus, the epoxide of ethylene (C2H4) is ethylene oxide (C2H4O). Many compounds have trivial names; for instance, ethylene oxide is called "oxirane". Some names emphasize the presence of the epoxide functional group, as in the compound 1,2-epoxyheptane, which can also be called 1,2-heptene oxide.
A polymer formed from epoxide precursors is called an epoxy. However, few if any of the epoxy groups in the resin survive the curing process.
Synthesis
The dominant epoxides industrially are ethylene oxide and propylene oxide, which are produced respectively on the scales of approximately 15 and 3 million tonnes/year.
Aside from ethylene oxide, most epoxides are generated when peroxidized reagents donate a single oxygen atom to an alkene. Safety considerations weigh on these reactions because organic peroxides are prone to spontaneous decomposition or even combustion.
Both t-butyl hydroperoxide and ethylbenzene hydroperoxide can be used as oxygen sources during propylene oxidation (although a catalyst is required as well, and most industrial producers use dehydrochlorination instead).
Ethylene oxidation
The ethylene oxide industry generates its product from the reaction of ethylene and oxygen. Modified heterogeneous silver catalysts are typically employed. According to a reaction mechanism suggested in 1974, at least one ethylene molecule is totally oxidized for every six that are converted to ethylene oxide:

7 H2C=CH2 + 6 O2 → 6 C2H4O + 2 CO2 + 2 H2O
Only ethylene produces an epoxide during incomplete combustion. Other alkenes fail to react usefully, even propylene, though TS-1 supported Au catalysts can selectively epoxidize propylene.
Organic peroxides and metal catalysts
Metal complexes are useful catalysts for epoxidations involving hydrogen peroxide and alkyl hydroperoxides. Metal-catalyzed epoxidations were first explored using tert-butyl hydroperoxide (TBHP). Association of TBHP with the metal (M) generates the active metal peroxy complex containing the MOOR group, which then transfers an O center to the alkene.
Vanadium(II) oxide catalyzes epoxidation specifically at less-substituted alkenes.
Nucleophilic epoxidation
Electron-deficient olefins, such as enones and acryl derivatives can be epoxidized using nucleophilic oxygen compounds such as peroxides. The reaction is a two-step mechanism. First the oxygen performs a nucleophilic conjugate addition to give a stabilized carbanion. This carbanion then attacks the same oxygen atom, displacing a leaving group from it, to close the epoxide ring.
Transfer from peroxycarboxylic acids
Peroxycarboxylic acids, which are more electrophilic than other peroxides, convert alkenes to epoxides without the intervention of metal catalysts. In specialized applications, dioxirane reagents (e.g. dimethyldioxirane) perform similarly, but are more explosive.
Typical laboratory operations employ the Prilezhaev reaction. This approach involves the oxidation of the alkene with a peroxyacid such as mCPBA. Illustrative is the epoxidation of styrene with perbenzoic acid to styrene oxide:
The stereochemistry of the reaction is quite sensitive to the mechanism and to the geometry of the alkene starting material: depending on these, cis and/or trans epoxide diastereomers may be formed. In addition, if other stereocenters are present in the starting material, they can influence the stereochemistry of the epoxidation.
The reaction proceeds via what is commonly known as the "Butterfly Mechanism". The peroxide is viewed as an electrophile, and the alkene a nucleophile. The reaction is considered to be concerted. The butterfly mechanism allows ideal positioning of the sigma star orbital for π electrons to attack. Because two bonds are broken and formed to the epoxide oxygen, this is formally an example of a coarctate transition state.
Asymmetric epoxidations
Chiral epoxides can often be derived enantioselectively from prochiral alkenes. Many metal complexes give active catalysts, and the most important involve titanium, vanadium, or molybdenum.
Hydroperoxides are also employed in catalytic enantioselective epoxidations, such as the Sharpless epoxidation and the Jacobsen epoxidation. Together with the Shi epoxidation, these reactions are useful for the enantioselective synthesis of chiral epoxides. Oxaziridine reagents may also be used to generate epoxides from alkenes.
The Sharpless epoxidation reaction is one of the premier enantioselective chemical reactions. It is used to prepare 2,3-epoxyalcohols from primary and secondary allylic alcohols.
The above reactions all use electrophilic reagents, but some asymmetric nucleophilic epoxidations are possible.
Dehydrohalogenation and other γ eliminations
Halohydrins react with base to give epoxides. The reaction is spontaneous because the energetic cost of introducing the ring strain (13 kcal/mol) is offset by the larger bond enthalpy of the newly introduced C-O bond (when compared to that of the cleaved C-halogen bond).
Formation of epoxides from secondary halohydrins is predicted to occur faster than from primary halohydrins due to increased entropic effects in the secondary halohydrin, and tertiary halohydrins react (if at all) extremely slowly due to steric crowding.
Starting with propylene chlorohydrin, most of the world's supply of propylene oxide arises via this route.
An intramolecular epoxide formation reaction is one of the key steps in the Darzens reaction.
In the Johnson–Corey–Chaykovsky reaction epoxides are generated from carbonyl groups and sulfonium ylides. In this reaction, a sulfonium is the leaving group instead of chloride.
Biosynthesis
Epoxides are uncommon in nature. They usually arise via oxygenation of alkenes by the action of cytochrome P450 (but see also the short-lived epoxyeicosatrienoic acids, which act as signalling molecules, and the similar epoxydocosapentaenoic acids and epoxyeicosatetraenoic acids).
Arene oxides are intermediates in the oxidation of arenes by cytochrome P450. For prochiral arenes (naphthalene, toluene, benzoates, benzopyrene), the epoxides are often obtained in high enantioselectivity.
Reactions
Ring-opening reactions dominate the reactivity of epoxides.
Hydrolysis and addition of nucleophiles
Epoxides react with a broad range of nucleophiles, for example, alcohols, water, amines, thiols, and even halides. With two often-nearly-equivalent sites of attack, epoxides exemplify "ambident substrates". Ring-opening regioselectivity in asymmetric epoxides generally follows the SN2 pattern of attack at the least-substituted carbon, but can be affected by carbocation stability under acidic conditions. This class of reactions is the basis of epoxy glues and the production of glycols.
Lithium aluminium hydride or aluminium hydride both reduce epoxides through a simple nucleophilic addition of hydride (H−); they produce the corresponding alcohol.
Polymerization and oligomerization
Polymerization of epoxides gives polyethers. For example, ethylene oxide polymerizes to give polyethylene glycol, also known as polyethylene oxide. The reaction of an alcohol or a phenol with ethylene oxide, ethoxylation, is widely used to produce surfactants:
ROH + n C2H4O → R(OC2H4)nOH
With anhydrides, epoxides give polyesters.
Metallation and deoxygenation
Lithiation cleaves the ring to β-lithioalkoxides.
Epoxides can be deoxygenated using oxophilic reagents, with loss or retention of configuration. The combination of tungsten hexachloride and n-butyllithium gives the alkene.
When treated with thiourea, epoxides convert to the episulfide (thiiranes).
Other reactions
Epoxides undergo ring expansion reactions, illustrated by the insertion of carbon dioxide to give cyclic carbonates.
An epoxide adjacent to an alcohol can undergo the Payne rearrangement in base.
Uses
Ethylene oxide is widely used to generate detergents and surfactants by ethoxylation. Its hydrolysis affords ethylene glycol. It is also used for sterilisation of medical instruments and materials.
The reaction of epoxides with amines is the basis for the formation of epoxy glues and structural materials. A typical amine-hardener is triethylenetetramine (TETA).
Safety
Epoxides are alkylating agents, making many of them highly toxic.
See also
Epoxide hydrolase
Juliá–Colonna epoxidation
Further reading
References
Functional groups | Epoxide | [
"Chemistry"
] | 2,210 | [
"Functional groups"
] |
355,882 | https://en.wikipedia.org/wiki/Administratium | Administratium is a well-known in-joke in scientific circles and is a parody both on the bureaucracy of scientific establishments and on descriptions of newly discovered chemical elements.
In 1991, Thomas Kyle (the supposed discoverer of this element) was awarded an Ig Nobel Prize for physics, making him one of only three fictional people to have won the award.
A spoof article was written by William DeBuvitz in 1988 and first appeared in print in the January 1989 issue of The Physics Teacher. It spread rapidly among university campuses and research centers; many versions surfaced, often customized to the contributor's situation.
A similar joke concerns Administrontium which was referenced in print in 1993.
Another variation on the same joke is Bureaucratium. A commonly heard description characterizes it as "having a negative half-life": the more time passes, the more massive "Bureaucratium" becomes; it only grows larger and more sluggish. This alludes to the bureaucratic system, generally perceived as one in which procedures accumulate, so that whatever needs to get done takes increasingly long once it touches the bureaucracy.
See also
Unobtainium
Wishalloy
List of fictional elements, materials, isotopes and subatomic particles
References
1989 introductions
Fictional materials
Academic administration
In-jokes
Parodies
Satirical works
Political satire
Science and culture
Science writing | Administratium | [
"Physics"
] | 292 | [
"Materials",
"Fictional materials",
"Matter"
] |
355,887 | https://en.wikipedia.org/wiki/Homochronous | In telecommunications, the term homochronous describes the relationship between two signals
such that their corresponding significant instants are displaced by a constant interval of time.
Synchronization | Homochronous | [
"Engineering"
] | 37 | [
"Telecommunications engineering",
"Synchronization"
] |
355,908 | https://en.wikipedia.org/wiki/Poor%20Richard%27s%20Almanack | Poor Richard's Almanack (sometimes Almanac) was a yearly almanac published by Benjamin Franklin, who adopted the pseudonym of "Poor Richard" or "Richard Saunders" for this purpose. The publication appeared continually from 1732 to 1758. It sold exceptionally well for a pamphlet published in the Thirteen Colonies; print runs reached 10,000 per year.
Franklin, the American inventor, statesman, and accomplished publisher and printer, achieved success with Poor Richard's Almanack. Almanacks were very popular books in colonial America, offering a mixture of seasonal weather forecasts, practical household hints, puzzles, and other amusements. Poor Richard's Almanack was also popular for its extensive use of wordplay, and some of the witty phrases coined in the work survive in the contemporary American vernacular.
History
On December 28, 1732, Benjamin Franklin announced in The Pennsylvania Gazette that he had just printed and published the first edition of The Poor Richard, by Richard Saunders, Philomath. He continued to publish new editions for 25 years, and the almanack brought him much economic success and popularity, selling as many as 10,000 copies a year. In 1735, upon the death of Franklin's brother James, Franklin sent 500 copies of Poor Richard's to his widow for free, so that she could make money selling them.
Contents
The Almanack contained the calendar, weather, poems, sayings, and astronomical and astrological information that a typical almanac of the period would contain. Franklin also included the occasional mathematical exercise, and the Almanack from 1750 features an early example of demographics. It is chiefly remembered, however, for being a repository of Franklin's aphorisms and proverbs, many of which live on in American English. These maxims typically counsel thrift and courtesy, with a dash of cynicism.
In the spaces that occurred between noted calendar days, Franklin included proverbial sentences about industry and frugality. Several of these sayings were borrowed from an earlier writer, Lord Halifax, many of whose aphorisms sprang from "... [a] basic skepticism directed against the motives of men, manners, and the age." In 1757, Franklin made a selection of these and prefixed them to the almanac as the address of an old man to the people attending an auction. This was later published as The Way to Wealth and was popular in both America and England.
Poor Richard
Franklin borrowed the name "Richard Saunders" from the seventeenth-century author of Rider's British Merlin, a popular London almanac which continued to be published throughout the eighteenth century. Franklin created the Poor Richard persona based in part on Jonathan Swift's pseudonymous character, "Isaac Bickerstaff". In a series of three letters in 1708 and 1709, known as the Bickerstaff papers, "Bickerstaff" predicted the imminent death of astrologer and almanac maker John Partridge. Franklin's Poor Richard, like Bickerstaff, claimed to be a philomath and astrologer and, like Bickerstaff, predicted the deaths of actual astrologers who wrote traditional almanacs. In the early editions of Poor Richard's Almanack, predicting and falsely reporting the deaths of these astrologers—much to their dismay—was something of a running joke. However, Franklin's endearing character of "Poor" Richard Saunders, along with his wife Bridget, was ultimately used to frame (if comically) what was intended as a serious resource that people would buy year after year. To that end, the satirical edge of Swift's character is largely absent in Poor Richard. Richard was presented as distinct from Franklin himself, occasionally referring to the latter as his printer.
In later editions, the original Richard Saunders character gradually disappeared, replaced by a Poor Richard, who largely stood in for Franklin and his own practical scientific and business perspectives. By 1758, the original character was even more distant from the practical advice and proverbs of the almanac, which Franklin presented as coming from "Father Abraham," who in turn got his sayings from Poor Richard.
Serialization
One of the appeals of the Almanack was that it contained various "news stories" in serial format, so that readers would purchase it year after year to find out what happened to the protagonists. One of the earliest of these was the "prediction" that the author's "good Friend and Fellow-Student, Mr. Titan Leeds" would die on October 17 of that year, followed by the rebuttal of Mr. Leeds himself that he would die, not on the 17th, but on October 26. Appealing to his readers, Franklin urged them to purchase the next year or two or three or four editions to show their support for his prediction. The following year, Franklin expressed his regret that he was too ill to learn whether he or Leeds was correct. Nevertheless, the ruse had its desired effect: people purchased the Almanack to find out who was correct. (Later editions of the Almanack would claim that Leeds had died and that the person claiming to be Leeds was an impostor; Leeds, in fact, died in 1738, which prompted Franklin to applaud the supposed impostor for ending his ruse.)
Criticism
For some writers the content of the Almanack became inextricably linked with Franklin's character—and not always to favorable effect. Both Nathaniel Hawthorne and Herman Melville caricatured the Almanack—and Franklin by extension—in their writings, while James Russell Lowell, reflecting on the public unveiling in Boston of a statue to honor Franklin, wrote:
... we shall find out that Franklin was born in Boston, and invented being struck with lightning and printing and the Franklin medal, and that he had to move to Philadelphia because great men were so plenty in Boston that he had no chance, and that he revenged himself on his native town by saddling it with the Franklin stove, and that he discovered the almanac, and that a penny saved is a penny lost, or something of the kind.
The Almanack was also a reflection of the social norms and social mores of his times, rather than a philosophical document setting a path for new freedoms, as the works of Franklin's contemporaries Thomas Jefferson, John Adams, and Thomas Paine were. Historian Howard Zinn offers, as an example, the adage "Let thy maidservant be faithful, strong, and homely" as an indication of Franklin's belief in the legitimacy of controlling the sexual lives of servants for the economic benefit of their masters.
At least one modern biographer has published the claim that Franklin "stole", not borrowed, the name of Richard Saunders from the deceased astrologer-doctor. Franklin also "borrowed—apparently without asking—and adapted the title of an almanac his brother James Franklin was publishing at Newport: Poor Robin's Almanack (itself appropriated from a seventeenth-century almanac published under the same title in London)".
Cultural impact
Louis XVI of France gave a ship to John Paul Jones who renamed it after the Almanack author—Bonhomme Richard, or "Goodman (that is, a polite title of address for a commoner who is not a member of the gentry) Richard" (the first of several US warships so named). The Almanack was translated into Italian, along with the Pennsylvania State Constitution (which Franklin helped draft) at the establishment of the Cisalpine Republic. It was also twice translated into French, reprinted in Great Britain in broadside for ease of posting, and was distributed by members of the clergy to poor parishioners. It was the first work of English literature to be translated into Slovene, translated in 1812 by Janez Nepomuk Primic (1785–1823).
The Almanack also had a strong cultural and economic impact in the years following publication. In Pennsylvania, changes in monetary policy in regard to foreign expenses were evident for years after the issuing of the Almanack. Later writers such as Noah Webster were inspired by the almanac, and it went on to influence other publications of this type such as the Old Farmer's Almanac.
Sociologist Max Weber considered Poor Richard's Almanack and Franklin to reflect the "spirit of capitalism" in a form of "classical" purity. This is why he filled the pages of Chapter 2 of his 1905 book The Protestant Ethic and the Spirit of Capitalism with illustrative quotations from Franklin's almanacks.
Numerous farmer's almanacs trace their format and tradition to Poor Richard's Almanack; the Old Farmer's Almanac, for instance, has included a picture of Franklin on its cover since 1851.
In 1958, the United States mobilized its naval forces in response to an attack on Vice President Richard Nixon in Caracas, Venezuela. The operation was code-named "Poor Richard".
See also
The Papers of Benjamin Franklin
Citations
Bibliography
External links
(Click "find by author" and select "Franklin" for a complete list.)
1732 non-fiction books
18th-century books
1732 establishments in Pennsylvania
1758 disestablishments in the Thirteen Colonies
Publications established in 1732
Publications disestablished in 1758
Almanacs
Agriculture books
Astronomy books
Astrological texts
American non-fiction books
Proverbs
Works by Benjamin Franklin
Works about weather | Poor Richard's Almanack | [
"Physics",
"Astronomy"
] | 1,936 | [
"Physical phenomena",
"Weather",
"Astronomy books",
"Works about astronomy",
"Works about weather"
] |
355,968 | https://en.wikipedia.org/wiki/UPGMA | UPGMA (unweighted pair group method with arithmetic mean) is a simple agglomerative (bottom-up) hierarchical clustering method. It also has a weighted variant, WPGMA, and they are generally attributed to Sokal and Michener.
Note that the unweighted term indicates that all distances contribute equally to each average that is computed and does not refer to the math by which it is achieved. Thus the simple averaging in WPGMA produces a weighted result and the proportional averaging in UPGMA produces an unweighted result (see the working example).
Algorithm
The UPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise similarity matrix (or a dissimilarity matrix).
At each step, the nearest two clusters are combined into a higher-level cluster. The distance $D(\mathcal{A},\mathcal{B})$ between any two clusters $\mathcal{A}$ and $\mathcal{B}$, each of size (i.e., cardinality) $|\mathcal{A}|$ and $|\mathcal{B}|$, is taken to be the average of all distances $d(x,y)$ between pairs of objects $x$ in $\mathcal{A}$ and $y$ in $\mathcal{B}$, that is, the mean distance between elements of each cluster:

$$D(\mathcal{A},\mathcal{B}) = \frac{1}{|\mathcal{A}|\,|\mathcal{B}|} \sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} d(x,y)$$

In other words, at each clustering step, the updated distance between the joined clusters $\mathcal{A} \cup \mathcal{B}$ and a new cluster $X$ is given by the proportional averaging of the $D(\mathcal{A},X)$ and $D(\mathcal{B},X)$ distances:

$$D(\mathcal{A} \cup \mathcal{B}, X) = \frac{|\mathcal{A}| \cdot D(\mathcal{A},X) + |\mathcal{B}| \cdot D(\mathcal{B},X)}{|\mathcal{A}| + |\mathcal{B}|}$$
The UPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption - that is, it assumes an ultrametric tree in which the distances from the root to every branch tip are equal. When the tips are molecular data (i.e., DNA, RNA and protein) sampled at the same time, the ultrametricity assumption becomes equivalent to assuming a molecular clock.
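The procedure above lends itself to a direct, if inefficient, implementation. The following Python sketch is purely illustrative: the leaf labels and distance values are invented, and no attempt is made at the more efficient algorithms discussed later. It merges the closest pair of clusters under the unweighted average-distance rule until a single rooted tree remains.

```python
from itertools import combinations

def upgma(labels, dist):
    """Naive O(n^3) UPGMA: repeatedly merge the two clusters with the
    smallest average inter-cluster distance.

    labels : list of leaf names
    dist   : dict mapping frozenset({x, y}) -> distance between leaves x and y
    Returns nested tuples (left, right, height) for each internal node, where
    height is half the merge distance (the ultrametric constraint).
    """
    # Each active cluster is a pair: (subtree, set of leaf names it contains).
    clusters = [(name, {name}) for name in labels]

    def avg_dist(a, b):
        # Unweighted average over all leaf pairs -- the UPGMA definition.
        return sum(dist[frozenset((x, y))] for x in a[1] for y in b[1]) / (len(a[1]) * len(b[1]))

    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]))
        a, b = clusters[i], clusters[j]
        height = avg_dist(a, b) / 2          # tips are equidistant from the new node
        merged = ((a[0], b[0], height), a[1] | b[1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters[0][0]

# Hypothetical distances between four leaves, purely for illustration.
d = {frozenset(pair): value for pair, value in [
    (("a", "b"), 4.0), (("a", "c"), 8.0), (("a", "d"), 9.0),
    (("b", "c"), 8.0), (("b", "d"), 9.0), (("c", "d"), 6.0),
]}
print(upgma(["a", "b", "c", "d"], d))
```

Each internal node is annotated with its height (half the merge distance), so the two children of every node are equidistant from it, as the constant-rate assumption requires.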
Working example
This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis (a), Bacillus stearothermophilus (b), Lactobacillus viridescens (c), Acholeplasma modicum (d), and Micrococcus luteus (e).
First step
First clustering
Let us assume that we have five elements and the following matrix of pairwise distances between them :
In this example, is the smallest value of , so we join elements and .
First branch length estimation
Let denote the node to which and are now connected. Setting ensures that elements and are equidistant from . This corresponds to the expectation of the ultrametricity hypothesis.
The branches joining and to then have lengths (see the final dendrogram)
First distance matrix update
We then proceed to update the initial distance matrix into a new distance matrix (see below), reduced in size by one row and one column because of the clustering of with .
Bold values in correspond to the new distances, calculated by averaging distances between each element of the first cluster and each of the remaining elements:
Italicized values in are not affected by the matrix update as they correspond to distances between elements not involved in the first cluster.
Second step
Second clustering
We now reiterate the three previous steps, starting from the new distance matrix
Here, is the smallest value of , so we join cluster and element .
Second branch length estimation
Let denote the node to which and are now connected. Because of the ultrametricity constraint, the branches joining or to , and to are equal and have the following length:
We deduce the missing branch length:
(see the final dendrogram)
Second distance matrix update
We then proceed to update into a new distance matrix (see below), reduced in size by one row and one column because of the clustering of with . Bold values in correspond to the new distances, calculated by proportional averaging:
Thanks to this proportional average, the calculation of this new distance accounts for the larger size of the cluster (two elements) with respect to (one element). Similarly:
Proportional averaging therefore gives equal weight to the initial distances of matrix . This is the reason why the method is unweighted, not with respect to the mathematical procedure but with respect to the initial distances.
Third step
Third clustering
We again reiterate the three previous steps, starting from the updated distance matrix .
Here, is the smallest value of , so we join elements and .
Third branch length estimation
Let denote the node to which and are now connected.
The branches joining and to then have lengths (see the final dendrogram)
Third distance matrix update
There is a single entry to update, keeping in mind that the two elements and each have a contribution of in the average computation:
Final step
The final matrix is:
So we join clusters and .
Let denote the (root) node to which and are now connected.
The branches joining and to then have lengths:
We deduce the two remaining branch lengths:
The UPGMA dendrogram
The dendrogram is now complete. It is ultrametric because all tips ( to ) are equidistant from :
The dendrogram is therefore rooted by , its deepest node.
Comparison with other linkages
Alternative linkage schemes include single linkage clustering, complete linkage clustering, and WPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of the alternative single linkage clustering method - the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant to each other. Complete linkage tends to find compact clusters of approximately equal diameters.
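As a hedged illustration of that point, the distance-update rules for the common linkages can be written side by side; the function names are invented, and each takes the two previous cluster-to-cluster distances plus the sizes of the two merged clusters:

```python
def single_linkage_update(d_ac, d_bc, size_a, size_b):
    # nearest neighbour between clusters
    return min(d_ac, d_bc)

def complete_linkage_update(d_ac, d_bc, size_a, size_b):
    # farthest neighbour between clusters
    return max(d_ac, d_bc)

def wpgma_update(d_ac, d_bc, size_a, size_b):
    # simple average of the two previous distances (weighted result)
    return (d_ac + d_bc) / 2

def upgma_update(d_ac, d_bc, size_a, size_b):
    # proportional average, so every leaf pair contributes equally (unweighted result)
    return (size_a * d_ac + size_b * d_bc) / (size_a + size_b)
```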
Uses
In ecology, it is one of the most popular methods for the classification of sampling units (such as vegetation plots) on the basis of their pairwise similarities in relevant descriptor variables (such as species composition). For example, it has been used to understand the trophic interaction between marine bacteria and protists.
In bioinformatics, UPGMA is used for the creation of phenetic trees (phenograms). UPGMA was initially designed for use in protein electrophoresis studies, but is currently most often used to produce guide trees for more sophisticated algorithms. This algorithm is for example used in sequence alignment procedures, as it proposes one order in which the sequences will be aligned. Indeed, the guide tree aims at grouping the most similar sequences, regardless of their evolutionary rate or phylogenetic affinities, and that is exactly the goal of UPGMA.
In phylogenetics, UPGMA assumes a constant rate of evolution (molecular clock hypothesis) and that all sequences were sampled at the same time, and is not a well-regarded method for inferring relationships unless this assumption has been tested and justified for the data set being used. Notice that even under a 'strict clock', sequences sampled at different times should not lead to an ultrametric tree.
Time complexity
A trivial implementation of the algorithm to construct the UPGMA tree has $O(n^3)$ time complexity, and using a heap for each cluster to keep its distances from the other clusters reduces its time to $O(n^2 \log n)$. Fionn Murtagh presented an $O(n^2)$ time and space algorithm.
See also
Neighbor-joining
Cluster analysis
Single-linkage clustering
Complete-linkage clustering
Hierarchical clustering
Models of DNA evolution
Molecular clock
References
External links
UPGMA clustering algorithm implementation in Ruby (AI4R)
Example calculation of UPGMA using a similarity matrix
Example calculation of UPGMA using a distance matrix
Bioinformatics algorithms
Computational phylogenetics
Cluster analysis algorithms
Phylogenetics | UPGMA | [
"Biology"
] | 1,551 | [
"Genetics techniques",
"Computational phylogenetics",
"Bioinformatics algorithms",
"Taxonomy (biology)",
"Bioinformatics",
"Phylogenetics"
] |
356,158 | https://en.wikipedia.org/wiki/Annulus%20%28mathematics%29 | In mathematics, an annulus (: annuli or annuluses) is the region between two concentric circles. Informally, it is shaped like a ring or a hardware washer. The word "annulus" is borrowed from the Latin word anulus or annulus meaning 'little ring'. The adjectival form is annular (as in annular eclipse).
The open annulus is topologically equivalent to both the open cylinder and the punctured plane.
Area
The area of an annulus is the difference in the areas of the larger circle of radius $R$ and the smaller one of radius $r$:

$$A = \pi R^2 - \pi r^2 = \pi\left(R^2 - r^2\right)$$

The area of an annulus is determined by the length of the longest line segment within the annulus, which is the chord tangent to the inner circle, $2d$ in the accompanying diagram. That can be shown using the Pythagorean theorem since this line is tangent to the smaller circle and perpendicular to its radius at that point, so $d$ and $r$ are sides of a right-angled triangle with hypotenuse $R$, and the area of the annulus is given by

$$A = \pi\left(R^2 - r^2\right) = \pi d^2$$

The area can also be obtained via calculus by dividing the annulus up into an infinite number of annuli of infinitesimal width $d\rho$ and area $2\pi\rho \, d\rho$ and then integrating from $\rho = r$ to $\rho = R$:

$$A = \int_r^R 2\pi\rho \, d\rho = \pi\left(R^2 - r^2\right)$$

The area of an annulus sector (the region between two circular sectors with overlapping radii) of angle $\theta$, with $\theta$ measured in radians, is given by

$$A = \frac{\theta}{2}\left(R^2 - r^2\right)$$
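As a quick numerical check of these formulas (a toy example only), with $R = 5$ and $r = 3$ the tangent half-chord is $d = 4$, and a right-angle sector is one quarter of the full annulus:

```python
import math

def annulus_area(R, r):
    """Area between concentric circles of outer radius R and inner radius r."""
    if not 0 <= r <= R:
        raise ValueError("require 0 <= r <= R")
    return math.pi * (R**2 - r**2)

def annulus_sector_area(R, r, theta):
    """Area of an annulus sector of angle theta, in radians."""
    return theta / 2 * (R**2 - r**2)

print(annulus_area(5, 3))                      # 50.265..., equal to pi * 4**2
print(annulus_sector_area(5, 3, math.pi / 2))  # 12.566..., a quarter of the annulus
```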
Complex structure
In complex analysis an annulus $\operatorname{ann}(a; r, R)$ in the complex plane is an open region defined as

$$r < |z - a| < R$$

If $r = 0$, the region is known as the punctured disk (a disk with a point hole in the center) of radius $R$ around the point $a$.

As a subset of the complex plane, an annulus can be considered as a Riemann surface. The complex structure of an annulus depends only on the ratio $r/R$. Each annulus $\operatorname{ann}(a; r, R)$ can be holomorphically mapped to a standard one centered at the origin and with outer radius $1$ by the map

$$z \mapsto \frac{z - a}{R}$$

The inner radius is then $r/R < 1$.
The Hadamard three-circle theorem is a statement about the maximum value a holomorphic function may take inside an annulus.
The Joukowsky transform conformally maps an annulus onto an ellipse with a slit cut between foci.
See also
References
External links
Annulus definition and properties With interactive animation
Area of an annulus, formula With interactive animation
Circles
Elementary geometry
Geometric shapes
Planar surfaces | Annulus (mathematics) | [
"Mathematics"
] | 484 | [
"Geometric shapes",
"Planes (geometry)",
"Euclidean plane geometry",
"Mathematical objects",
"Planar surfaces",
"Elementary mathematics",
"Elementary geometry",
"Geometric objects",
"Circles",
"Pi"
] |
356,175 | https://en.wikipedia.org/wiki/Speed%20to%20fly | Speed to fly is a principle used by soaring pilots when flying between sources of lift, usually thermals, ridge lift and wave. The aim is to maximize the average cross-country speed by optimizing the airspeed in both rising and sinking air. The optimal airspeed is independent of the wind speed, because the fastest average speed achievable through the airmass corresponds to the fastest achievable average groundspeed.
The speed to fly is the optimum speed through sinking or rising air mass to achieve either the furthest glide, or fastest average cross-country speed.
Most speed to fly setups use units of either airspeed in kilometers per hour (km/h) and climb rate in meters per second (m/s), or airspeed in knots (kn) and climb rate in feet per minute (ft/min).
History
The first documented use of speed to fly theory was by Wolfgang Späte, who used a table of speeds to fly for different climb rates to help him win the 1938 Rhön competition flying a DFS Reiher. Späte is thought to have used a simplified form of the general theory that did not account for sinking air between thermals. In the same year two Poles, L. Swarzc and W. Kasprzyk, also published similar results, although there is some debate about whether this included the effect of air mass movement between the thermals. The simplified (no-sink) analysis was first published in English by Philip Wills in 1940, writing under the pen-name “Corunus”. The full solution incorporating sinking air between thermals was independently published in the June 1947 edition of Sailplane & Glider by two Cambridge University members, George Pirie, a graduate who had flown with Cambridge University Gliding Club, and Ernest Dewing, an undergraduate who flew at Dunstable after graduating. They both noticed, Pirie by direct argument and Dewing with mathematics, that the solution involved adding the average rate of climb in the thermal to the instantaneous rate of sink being experienced in the glide in order to find the corresponding best speed to fly. Karl Nickel and Paul MacCready published separate articles (in German) describing the same theory in Swiss Aero-Revue in 1949.
In 1954, Paul MacCready described an Optimum Airspeed Selector, that he had been using since 1947. According to MacCready, the crosscountry airspeed selector is "a simple device that indicates the optimum speed at which a sailplane should be flown between thermals. On a day with weak thermals and weak downcurrents, a pilot should fly between thermals at a velocity near that for best gliding angle of the sailplane...If the next thermal to be encountered is expected to be strong, the pilot should dive toward it at high velocity in order to reach it as fast as possible. Note the magnitude of the wind is of no concern when considering thermals which move with the air mass. For the derivation of the airspeed selector one minimizes the time for the sailplane to reach a thermal and regain the original height."
According to Bob Wander, "The principal advantage of making a rotatable speed-to-fly ring for your total energy variometer is that cross-country speeds in gliding can be optimized when we factor the strength of thermals into the speed-to-fly process. For instance, when thermals are weak, then it pays to fly conservatively...minimum sinking speed...We are able to cruise faster between thermals when lift is strong because it is so easy to get altitude back in strong lift".
Instrumentation
The minimal instrumentation required is an airspeed indicator and a variometer. The pilot will use the polar curve information for the particular glider to derive the exact speeds to fly, minimum sink or maximum L/D, depending on the lift and sink conditions in which the glider is flying. A speed to fly ring (known as a 'MacCready Ring'), which is fitted around the aircraft's variometer, will indicate the optimum airspeed to fly between thermals for maximum crosscountry performance. The ring is usually calibrated in either knots or meters per second and its markings are based on the aircraft's polar curve. During the glide between thermals, the index arrow is set at the rate of climb expected in the next thermal. On the speed ring, the variometer needle points to the optimum speed to fly between thermals.
Electronic versions of the MacCready Ring are built into glide computers that will give audible warnings to the pilot to speed up or slow down. Similar facilities can also be built into a PDA. The computer is connected to sensors that detect the aircraft's airspeed and rate of sink. If linked to a GPS, and using a computed or manual estimate of the windspeed, the glide computer can also calculate the speed and altitude necessary to glide to a particular destination. This glide is known as the final glide because no further lift should be necessary to reach the goal. During this glide, speed to fly information is needed to ensure that the remaining height is used efficiently.
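The computation such an instrument performs can be sketched in a few lines. In the example below the quadratic polar and its coefficients are invented and describe no real glider, and the grid search simply maximizes the classical MacCready average-speed expression; it is a toy model of the calculation, not the algorithm of any particular variometer or glide computer.

```python
import numpy as np

def sink_rate(v):
    """Hypothetical still-air polar: sink (m/s, positive down) vs airspeed v (km/h)."""
    return 0.0002 * (v - 80.0) ** 2 + 0.6

def speed_to_fly(mc, airmass_sink=0.0, v_min=70.0, v_max=250.0):
    """Cruise speed (km/h) that maximizes average cross-country speed.

    mc            expected climb rate in the next thermal (the MacCready setting), m/s
    airmass_sink  sink of the air between thermals, m/s (positive down)

    Average speed for cruise speed v is  v * mc / (mc + sink_rate(v) + airmass_sink):
    time is spent partly cruising and partly regaining the lost height at rate mc.
    """
    v = np.linspace(v_min, v_max, 1000)
    avg = v * mc / (mc + sink_rate(v) + airmass_sink)
    return v[np.argmax(avg)]

for mc in (0.5, 1.5, 3.0):
    print(f"MacCready {mc:.1f} m/s -> cruise at about {speed_to_fly(mc):.0f} km/h")
```

Stronger expected climbs (larger MacCready settings) push the optimum toward higher cruise speeds, which is exactly the behaviour the rotatable ring encodes mechanically.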
See also
Geoffrey H. Stephenson
ICAO recommendations on use of the International System of Units
References
External links
Performance Airspeeds for the Soaring Challenged by Jim D. Burch (mirror of defunct original page via avia.tion.ca)
MacCready Theory with Uncertain Lift and Limited Altitude Paper from Technical Soaring 23 (3) (July 1999) 88-96, by John H. Cochrane
Just a little faster, please (new version, 2007) paper by John H. Cochrane
The Price You Pay for McCready Speeds by Wil Schuemann, from the Proceedings of the 1972 Soaring Symposium
Competition Philosophy by Dick Johnson, from the Proceedings of the 1972 Soaring Symposium
Introduction to Cross Country Soaring by Kai Gersten, 1999 (Revised 2006)
This Brilliant Man Can Get You In Trouble – Misapply MacCready Theory At Your Own Peril by Clemens Ceipek, 2021
Aircraft aerodynamics
Airspeed
Gliding technology | Speed to fly | [
"Physics"
] | 1,242 | [
"Wikipedia categories named after physical quantities",
"Airspeed",
"Physical quantities"
] |
356,200 | https://en.wikipedia.org/wiki/Muller%27s%20ratchet | In evolutionary genetics, Muller's ratchet (named after Hermann Joseph Muller, by analogy with a ratchet effect) is a process which, in the absence of recombination (especially in an asexual population), results in an accumulation of irreversible deleterious mutations. This happens because in the absence of recombination, and assuming reverse mutations are rare, offspring bear at least as much mutational load as their parents. Muller proposed this mechanism as one reason why sexual reproduction may be favored over asexual reproduction, as sexual organisms benefit from recombination and consequent elimination of deleterious mutations. The negative effect of accumulating irreversible deleterious mutations may not be prevalent in organisms which, while they reproduce asexually, also undergo other forms of recombination. This effect has also been observed in those regions of the genomes of sexual organisms that do not undergo recombination.
Etymology
Although Muller discussed the advantages of sexual reproduction in his 1932 talk, it does not contain the word "ratchet". Muller first introduced the term "ratchet" in his 1964 paper, and the phrase "Muller's ratchet" was coined by Joe Felsenstein in his 1974 paper, "The Evolutionary Advantage of Recombination".
Explanation
Asexual reproduction compels genomes to be inherited as indivisible blocks so that once the least mutated genomes in an asexual population begin to carry at least one deleterious mutation, no genomes with fewer such mutations can be expected to be found in future generations (except as a result of back mutation). This results in an eventual accumulation of mutations known as genetic load. In theory, the genetic load carried by asexual populations eventually becomes so great that the population goes extinct. Also, laboratory experiments have confirmed the existence of the ratchet and the consequent extinction of populations in many organisms (under intense drift and when recombinations are not allowed) including RNA viruses, bacteria, and eukaryotes. In sexual populations, the process of genetic recombination allows the genomes of the offspring to be different from the genomes of the parents. In particular, progeny (offspring) genomes with fewer mutations can be generated from more highly mutated parental genomes by putting together mutation-free portions of parental chromosomes. Also, purifying selection, to some extent, unburdens a loaded population when recombination results in different combinations of mutations.
Among protists and prokaryotes, a plethora of supposedly asexual organisms exists. More and more are being shown to exchange genetic information through a variety of mechanisms. In contrast, the genomes of mitochondria and chloroplasts do not recombine and would undergo Muller's ratchet were they not as small as they are (see Birdsell and Wills [pp. 93–95]). Indeed, the probability that the least mutated genomes in an asexual population end up carrying at least one (additional) mutation depends heavily on the genomic mutation rate and this increases more or less linearly with the size of the genome (more accurately, with the number of base pairs present in active genes). However, reductions in genome size, especially in parasites and symbionts, can also be caused by direct selection to get rid of genes that have become unnecessary. Therefore, a smaller genome is not a sure indication of the action of Muller's ratchet.
In sexually reproducing organisms, nonrecombining chromosomes or chromosomal regions such as the mammalian Y chromosome (with the exception of multicopy sequences which do engage intrachromosomal recombination and gene conversion) should also be subject to the effects of Muller's ratchet. Such nonrecombining sequences tend to shrink and evolve quickly. However, this fast evolution might also be due to these sequences' inability to repair DNA damage via template-assisted repair, which is equivalent to an increase in the mutation rate for these sequences. Ascribing cases of genome shrinkage or fast evolution to Muller's ratchet alone is not easy.
Muller's ratchet relies on genetic drift, and turns faster in smaller populations because in such populations deleterious mutations have a better chance of fixation. Therefore, it sets the limits to the maximum size of asexual genomes and to the long-term evolutionary continuity of asexual lineages. However, some asexual lineages are thought to be quite ancient; Bdelloid rotifers, for example, appear to have been asexual for nearly 40 million years. However, rotifers were found to possess a substantial number of foreign genes from possible horizontal gene transfer events. Furthermore, a vertebrate fish, Poecilia formosa, seems to defy the ratchet effect, having existed for 500,000 generations. This has been explained by maintenance of genomic diversity through parental introgression and a high level of heterozygosity resulting from the hybrid origin of this species.
Calculation of the fittest class
In 1978, John Haigh used a Wright–Fisher model to analyze the effect of Muller's ratchet in an asexual population. If the ratchet is operating, the fittest class (the least loaded individuals) is small and prone to extinction by the effect of genetic drift. In his paper Haigh derives the equation that calculates the number of individuals carrying $k$ mutations once the population has reached its stationary distribution:

$$n_k = N e^{-u/s} \frac{(u/s)^k}{k!}$$

where $n_k$ is the number of individuals carrying $k$ mutations, $N$ is the population size, $u$ is the deleterious mutation rate and $s$ is the selection coefficient.

Thus, the size of the fittest class ($k = 0$) is:

$$n_0 = N e^{-u/s}$$
In an asexual population suffering from the ratchet, the fittest class is small and goes extinct after a few generations. This is called a click of the ratchet. Following each click, the rate of accumulation of deleterious mutations increases, ultimately resulting in the extinction of the population.
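A minimal Wright–Fisher-style simulation makes the clicks visible; all parameter values here are arbitrary and the model is a generic sketch, not Haigh's exact analysis. Because the expected size of the least-loaded class, $N e^{-u/s}$, is small for these parameters, drift repeatedly loses it and the minimum mutation count ratchets upward.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratchet(N=1000, u=0.1, s=0.02, generations=2000):
    """Asexual Wright-Fisher population with multiplicative selection.

    N: population size; u: genomic deleterious mutation rate per generation;
    s: cost per mutation (fitness = (1 - s) ** k for an individual with k mutations).
    Returns the minimum mutation count each generation; every increase is one
    click of the ratchet, since without recombination or back mutation the
    least-loaded class can never be reconstituted once lost.
    """
    k = np.zeros(N, dtype=int)                  # mutations carried by each individual
    least_loaded = []
    for _ in range(generations):
        fitness = (1.0 - s) ** k
        parents = rng.choice(N, size=N, p=fitness / fitness.sum())  # selection + drift
        k = k[parents] + rng.poisson(u, size=N)                     # inherit + new mutations
        least_loaded.append(int(k.min()))
    return least_loaded

trajectory = simulate_ratchet()
print("expected fittest-class size N*exp(-u/s):", round(1000 * np.exp(-0.1 / 0.02), 1))
print("clicks of the ratchet over the run:", trajectory[-1] - trajectory[0])
```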
The antiquity of recombination and Muller's ratchet
It has been argued that recombination was an evolutionary development as ancient as life on Earth. Early RNA replicators capable of recombination may have been the ancestral sexual source from which asexual lineages could periodically emerge. Recombination in the early sexual lineages may have provided a means for coping with genome damage. Muller's ratchet under such ancient conditions would likely have impeded the evolutionary persistence of the asexual lineages that were unable to undergo recombination.
Muller's ratchet and mutational meltdown
Since deleterious mutations are harmful by definition, accumulation of them would result in loss of individuals and a smaller population size. Small populations are more susceptible to the ratchet effect and more deleterious mutations would be fixed as a result of genetic drift. This creates a positive feedback loop which accelerates extinction of small asexual populations. This phenomenon has been called mutational meltdown. It appears that mutational meltdown due to Muller’s ratchet can be avoided by a little bit of sex as in the common apomictic asexual flowering plant Ranunculus auricomus.
See also
Evolution of sexual reproduction
Genetic hitchhiking
Hill–Robertson effect
References
External links
xkcd webcomic explaining Muller's ratchet and recombination through the evolution of Internet memes
Evolutionary biology concepts
Genetics concepts
Population genetics | Muller's ratchet | [
"Biology"
] | 1,541 | [
"Genetics concepts",
"Evolutionary biology concepts"
] |
356,369 | https://en.wikipedia.org/wiki/List%20of%20DOS%20commands | This article presents a list of commands used by MS-DOS compatible operating systems, especially as used on IBM PC compatibles. Many unrelated disk operating systems use the DOS acronym and are not part of the scope of this list.
In MS-DOS, many standard system commands are provided for common tasks such as listing files on a disk or moving files. Some commands are built into the command interpreter; others exist as external commands on disk. Over multiple generations, commands were added for additional functions. In Microsoft Windows, a command prompt window that uses many of the same commands, cmd.exe, can still be used.
Command processing
The command interpreter for DOS runs when no application programs are running. When an application exits, if the transient portion of the command interpreter in memory was overwritten, DOS will reload it from disk. Some commands are internal—built into COMMAND.COM; others are external commands stored on disk. When the user types a line of text at the operating system command prompt, COMMAND.COM will parse the line and attempt to match a command name to a built-in command or to the name of an executable program file or batch file on disk. If no match is found, an error message is printed, and the command prompt is refreshed.
External commands were too large to keep in the command processor, or were less frequently used. Such utility programs would be stored on disk and loaded just like regular application programs but were distributed with the operating system. Copies of these utility command programs had to be on an accessible disk, either on the current drive or on the command path set in the command interpreter.
In the list below, commands that can accept more than one file name, or a filename including wildcards (* and ?), are said to accept a filespec (file specification) parameter. Commands that can accept only a single file name are said to accept a filename parameter. Additionally, command line switches, or other parameter strings, can be supplied on the command line. Spaces and symbols such as a "/" or a "-" may be used to allow the command processor to parse the command line into filenames, file specifications, and other options.
The command interpreter preserves the case of whatever parameters are passed to commands, but the command names themselves and file names are case-insensitive.
Many commands are the same across many DOS systems, but some differ in command syntax or name.
DOS commands
A partial list of the most common commands for MS-DOS and IBM PC DOS follows below.
APPEND
Sets the path to be searched for data files or displays the current search path.
The APPEND command is similar to the PATH command that tells DOS where to search for program files (files with a .COM, .EXE, or .BAT file name extension).
The command is available in MS-DOS versions 3.2 and later.
ASSIGN
The command redirects requests for disk operations on one drive to a different drive. It can also display drive assignments or reset all drive letters to their original assignments.
The command is available in MS-DOS versions 3 through 5 and IBM PC DOS releases 2 through 5.
ATMDM
Lists connections and addresses seen by Windows ATM call manager.
ATTRIB
Attrib changes or views the attributes of one or more files. It defaults to display the attributes of all files in the current directory. The file attributes available include read-only, archive, system, and hidden attributes. The command has the capability to process whole folders and subfolders of files and also process all files.
The command is available in MS-DOS versions 3 and later.
BACKUP and RESTORE
These are commands to backup and restore files from an external disk. These appeared in version 2, and continued to PC DOS 5 and MS-DOS 6 (PC DOS 7 had a deversioned check). In DOS 6, these were replaced by commercial programs (CPBACKUP, MSBACKUP), which allowed files to be restored to different locations.
BASIC and BASICA
An implementation of the BASIC programming language for PCs. Implementing BASIC in this way was very common in operating systems on 8- and 16-bit machines made in the 1980s.
IBM computers had BASIC 1.1 in ROM, and IBM's versions of BASIC used code in this ROM-BASIC, which allowed for extra memory in the code area. BASICA last appeared in IBM PC DOS 5.02, and in OS/2 (2.0 and later), the version had ROM-BASIC moved into the program code.
Microsoft released GW-BASIC for machines with no ROM-BASIC. Some OEM releases had basic.com and basica.com as loaders for GW-BASIC.EXE.
BASIC was dropped after MS-DOS 4 and PC DOS 5.02. OS/2 (which uses PC DOS 5) has it, while MS-DOS 5 does not.
BREAK
This command is used to instruct DOS to check whether the Ctrl and Break keys have been pressed before carrying out a program request.
The command is available in MS-DOS versions 2 and later.
CALL
Starts a batch file from within another batch file and returns when that one ends.
The command is available in MS-DOS versions 3.3 and later.
CD and CHDIR
The CHDIR (or the alternative name CD) command either displays or changes the current working directory.
The command is available in MS-DOS versions 2 and later.
CHCP
The command either displays or changes the active code page used to display character glyphs in a console window. Similar functionality can be achieved with MODE CON: CP SELECT=.
The command is available in MS-DOS versions 3.3 and later.
CHKDSK
CHKDSK verifies a storage volume (for example, a hard disk, disk partition or floppy disk) for file system integrity. The command has the ability to fix errors on a volume and recover information from defective disk sectors of a volume.
The command is available in MS-DOS versions 1 and later.
CHOICE
The CHOICE command is used in batch files to prompt the user to select one item from a set of single-character choices. Choice was introduced as an external command with MS-DOS 6.0, Novell DOS 7, and PC DOS 7.0. Earlier versions of DR-DOS supported this function with the built-in switch command (for numeric choices) or by beginning a command with a question mark. This command was formerly called ync (yes-no-cancel).
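For example, a batch file fragment might prompt the user as follows (the choices and label names are illustrative):
choice /c:yn Continue with the installation
if errorlevel 2 goto end
if errorlevel 1 goto install
CHOICE sets the ERRORLEVEL to the position of the key pressed, which the batch file then tests in descending order.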
CLS
The CLS or CLRSCR command clears the terminal screen.
The command is available in MS-DOS versions 2 and later.
COMMAND
Start a new instance of the command interpreter.
The command is available in MS-DOS versions 1 and later.
COMP
Show differences between any two files, or any two sets of files.
The command is available in MS-DOS versions 3.3 through 5 and IBM PC DOS releases 1 through 5.
COPY
Makes copies of existing files.
The command is available in MS-DOS versions 1 and later.
CTTY
Defines the terminal device (for example, COM1) to use for input and output.
The command is available in MS-DOS versions 2 and later.
DATE
Displays the system date and prompts the user to enter a new date. Complements the TIME command.
The command is available in MS-DOS versions 1 and later.
DBLBOOT
(Not a command: This is a batch file added to DOS 6.X Supplemental Disks to help create DoubleSpace boot floppies.)
DBLSPACE
A disk compression utility supplied with MS-DOS version 6.0 (released in 1993) and version 6.2.
DEBUG
A very primitive assembler and disassembler.
DEFRAG
The command has the ability to analyze the file fragmentation on a disk drive or to defragment a drive. This command is called DEFRAG in MS-DOS/PC DOS and diskopt in DR-DOS.
The command is available in MS-DOS versions 6 and later.
DEL and ERASE
DEL (or the alternative form ERASE) is used to delete one or more files.
The command is available in MS-DOS versions 1 and later.
DELTREE
Deletes a directory along with all of the files and subdirectories that it contains. Normally, it will ask for confirmation of the potentially dangerous action. Since the RD (RMDIR) command can not delete a directory if the directory is not empty (except in Windows NT & 10), the DELTREE command can be used to delete the whole directory.
The deltree command is included in certain versions of Microsoft Windows and MS-DOS operating systems. It is specifically available only in versions of MS-DOS 6.0 and higher, and in Microsoft Windows 9x. In Windows NT, the equivalent functionality exists but is handled by the rd (rmdir) command with the /s switch, which has slightly different syntax. The deltree command is not present in Windows 7 and 8; in Windows 10, the equivalent is likewise rd /s or rmdir /s.
DIR
The DIR command displays the contents of a directory. The contents comprise the disk's volume label and serial number; one directory or filename per line, including the filename extension, the file size in bytes, and the date and time the file was last modified; and the total number of files listed, their cumulative size, and the free space (in bytes) remaining on the disk. The command is one of the few commands that exist from the first versions of DOS. The command can display files in subdirectories. The resulting directory listing can be sorted by various criteria and filenames can be displayed in a chosen format.
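For example:
dir *.txt /s /o:d /p
lists all .TXT files in the current directory and its subdirectories, sorted by date and shown one screen at a time (the switches shown are those of MS-DOS 5 and later).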
DISKCOMP
A command for comparing the complete contents of a floppy disk to another one.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 1 and later.
DISKCOPY
A command for copying the complete contents of a diskette to another diskette.
The command is available in MS-DOS versions 2 and later.
DOSKEY
A command that adds command history, macro functionality, and improved editing features to the command-line interpreter.
The command is available in MS-DOS versions 5 and later.
DOSSIZE
Displays how much memory various DOS components occupy.
DRVSPACE
A disk compression utility supplied with MS-DOS version 6.22.
ECHO
The ECHO command prints its own arguments back out to the DOS equivalent of the standard output stream. (Hence the name, ECHO) Usually, this means directly to the screen, but the output of echo can be redirected, like any other command, to files or devices. Often used in batch files to print text out to the user.
Another important use of the echo command is to toggle echoing of commands on and off in batch files. Traditionally batch files begin with the @echo off statement. This says to the interpreter that echoing of commands should be off during the whole execution of the batch file, thus resulting in a "tidier" output (the @ symbol declares that this particular command (echo off) should also be executed without echo.)
The command is available in MS-DOS versions 2 and later.
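A minimal batch file illustrating both uses (the file name and messages are illustrative):
@echo off
echo Backing up files...
echo Done. > result.txt
The first line suppresses command echoing, the second prints a message to the screen, and the third redirects the output of ECHO into a file.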
EDIT
EDIT is a full-screen text editor included with MS-DOS versions 5 and 6, OS/2, and Windows NT up to 4.0. The corresponding program in Windows 95 and later, and in Windows 2000 and later, is Edit v2.0. PC DOS 6 and later use the DOS E Editor, and DR-DOS used its own editor up to version 7.
EDLIN
DOS line editor. Like debug, it can be driven by a script file, which makes it of some use even today. The absence of a console editor in MS-DOS/PC DOS 1–4 created an after-market for third-party editors.
In DOS 5, an extra command "?" was added to give the user much-needed help.
DOS 6 was the last version to contain EDLIN; for MS-DOS 6, it is on the supplemental disks, while PC DOS 6 had it in the base install. 32-bit Windows NT and OS/2 also include Edlin.
EMM386
The EMM386 command enables or disables EMM386 expanded-memory support on a computer with an 80386 or higher processor.
The command is available in MS-DOS versions 5 and later.
ERASE
See: DEL and ERASE
EXE2BIN
Converts an executable (.exe) file into a binary file with the extension .com, which is a memory image of the program.
The size of the resident code and data sections combined in the input .exe file must be less than 64 KB. The file must also have no stack segment.
The command is available in MS-DOS versions 1 through 5. It is available separately for version 6 on the Supplemental Disk.
EXIT
Exits the current command processor. If EXIT is used in the primary command processor, it has no effect unless the session is a DOS window under Microsoft Windows, in which case the window is closed and the user returns to the desktop.
The command is available in MS-DOS versions 2 and later.
EXPAND
The Microsoft File Expansion Utility is used to uncompress one or more compressed cabinet files (.CAB). The command dates back to 1990 and was supplied on floppy disc for MS-DOS versions 5 and later.
FAKEMOUS
FAKEMOUS is an IBM PS/2 mouse utility used with AccessDOS. It is included on the MS-DOS 6 Supplemental Disk.
AccessDOS assists persons with disabilities.
FASTHELP
Provides information for MS-DOS commands.
FASTOPEN
A command that provides accelerated access to frequently-used files and directories.
The command is available in MS-DOS versions 3.3 and later.
FC
Show differences between any two files, or any two sets of files.
The command is available in MS-DOS versions 2 and later – primarily non-IBM releases.
FDISK
The FDISK command manipulates hard disk partition tables. The name derives from IBM's habit of calling hard drives fixed disks. FDISK has the ability to display information about, create, and delete DOS partitions or logical DOS drives. It can also install a standard master boot record on the hard drive.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS 2.0 releases and later.
FIND
The FIND command is a filter that finds lines in the input data stream containing or not containing a specified string and sends these to the output data stream. It may also be used as a filter in a pipe.
The command is available in MS-DOS versions 2 and later.
FINDSTR
The FINDSTR command is a GREP-oriented FIND-like utility. Among its uses is the logical-OR lacking in FIND.
For example, findstr "YES NO MAYBE" *.TXT would find all TXT files containing one or more of the listed words YES, NO, or MAYBE.
FOR
Iteration: repeats a command for each out of a specified set of files.
The FOR loop can be used to parse a file or the output of a command.
The command is available in MS-DOS versions 2 and later.
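For example, typed at the command prompt (the file specification is illustrative):
for %f in (*.txt) do type %f
displays every .TXT file in the current directory. Inside a batch file the variable must be written with two percent signs, as %%f.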
FORMAT
Deletes the FAT entries and the root directory of the drive/partition, and reformats it for MS-DOS. In most cases, this should only be used on floppy drives or other removable media. This command can potentially erase everything on a computer's drive.
The command is available in MS-DOS versions 1 and later.
GOTO
The GOTO command transfers execution to a specified label. Labels are specified at the beginning of a line, prefixed with a colon (:).
The command is available in MS-DOS versions 2 and later.
Used in Batch files.
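For example (the label name is illustrative):
goto skip
echo This line is never executed
:skip
echo Execution resumes here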
GRAFTABL
The GRAFTABL command enables the display of an extended character set in graphics mode.
The command is available in MS-DOS versions 3 through 5.
GRAPHICS
A TSR program that enables sending a graphical screen dump to a printer by pressing the Print Screen key.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 2 and later.
HELP
Gives help about DOS commands.
The command is available from MS-DOS version 5 through Windows XP. Full-screen command help is available in MS-DOS versions 6 and later. Beginning with Windows XP, the command processor offers built-in help for individual commands via the /? switch (e.g. DIR /?).
IF
IF is a conditional statement that allows branching of the program execution. It evaluates the specified condition, and only if it is true does it execute the remainder of the command line. Otherwise, it skips the remainder of the line and continues with the next command line.
Used in Batch files.
The command is available in MS-DOS versions 2 and later.
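For example, in a batch file (the file name is illustrative):
if exist report.txt echo The report is present
if not exist report.txt echo The report is missing
if "%1"=="" echo No parameter was supplied
The first two lines test for the existence of a file; the third compares the first command-line parameter with an empty string.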
INTERSVR and INTERLNK
These commands are named INTERSVR and INTERLNK in MS-DOS; the DR-DOS equivalent is FILELINK.
They network PCs using a null modem cable or LapLink cable. INTERSVR, the server-side counterpart of INTERLNK, also ties up the machine it is running on, as it is an active application (as opposed to a terminate-and-stay-resident program) that must be running for any transfer to take place. DR-DOS' filelink is executed on both the client and server.
New in PC DOS 5.02, MS-DOS 6.0.
JOIN
The JOIN command attaches a drive letter to a specified directory on another drive. The opposite can be achieved via the SUBST command.
The command is available in MS-DOS versions 3 through 5. It is available separately for versions 6.2 and later on the Supplemental Disk.
KEYB
The KEYB command is used to select a keyboard layout.
The command is available in MS-DOS versions 3.3 and later.
From DOS 3.0 through 3.21, there are instead per-country commands, namely KEYBFR, KEYBGR, KEYBIT, KEYBSP and KEYBUK.
LABEL
Changes the label on a logical drive, such as a hard disk partition or a floppy disk.
The command is available in MS-DOS versions 3.1 and later and IBM PC DOS releases 3 and later.
LINK4
Microsoft 8086 Object Linker
LOADFIX
Loads a program above the first 64K of memory, and runs the program. The command is available in MS-DOS versions 5 and later. It is included only in MS-DOS/PC DOS. DR-DOS used memmax, which opened or closed lower, upper, and video memory access, to block the lower 64K of memory.
LOADHIGH and LH
A command that loads a program into the upper memory area.
The command is available in MS-DOS versions 5 and later.
It is called hiload in DR-DOS.
MD or MKDIR
Makes a new directory. The parent of the directory specified will be created if it does not already exist.
The command is available in MS-DOS versions 2 and later.
MEM
Displays memory usage. It is capable of displaying program size and status, memory in use, and internal drivers. It is an external command.
The command is available in MS-DOS versions 4 and later and DR DOS releases 5.0 and later.
On earlier DOS versions the memory usage could be shown by running CHKDSK. In DR DOS the parameter /A could be used to only show the memory usage.
MEMMAKER
Starting with version 6, MS-DOS included the external program MemMaker which was used to free system memory (especially Conventional memory) by automatically reconfiguring the AUTOEXEC.BAT and CONFIG.SYS files. This was usually done by moving TSR programs and device drivers to the upper memory. The whole process required two system restarts. Before the first restart the user was asked whether to enable EMS Memory, since use of expanded memory required a reserved 64KiB region in upper memory. The first restart inserted the SIZER.EXE program which gauged the memory needed by each TSR or Driver. MemMaker would then calculate the optimal Driver and TSR placement in upper memory and modify the AUTOEXEC.BAT and CONFIG.SYS accordingly, and reboot the second time.
MEMMAKER.EXE and SIZER.EXE were developed for Microsoft by Helix Software Company and were eliminated starting in MS-DOS 7 (Windows 95); however, they could be obtained from Microsoft's FTP server as part of the OLDDOS.EXE package, alongside other tools.
PC DOS uses another program called RamBoost to optimize memory, working either with PC DOS's HIMEM/EMM386 or a third-party memory manager. RamBoost was licensed to IBM by Central Point Software.
MIRROR
The MIRROR command saves disk storage information that can be used to recover accidentally erased files.
The command is available in MS-DOS version 5. It is available separately for versions 6.2 and later on Supplemental Disk.
MODE
Configures system devices. Changes graphics modes, adjusts keyboard settings, prepares code pages, and sets up port redirection.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 1 and later.
MORE
The MORE command paginates text, so that one can view files containing more than one screen of text. MORE may also be used as a filter. While viewing MORE output, the Return key displays the next line and the space bar displays the next page.
The command is available in MS-DOS versions 2 and later.
MOVE
Moves files or renames directories.
The command is available in MS-DOS versions 6 and later.
DR-DOS used a separate command for renaming directories, rendir.
MSAV
A command that scans the computer for known viruses.
The command is available in MS-DOS versions 6 and later.
MSBACKUP
The MSBACKUP command is used to backup or restore one or more files from one disk to another.
The New York Times said that MSBACKUP "is much better and faster than the old BACKUP command used in earlier versions of DOS, but it does lack some of the advanced features found in backup software packages that are sold separately." There is another offering, named MWBACKUP, that is GUI-oriented; it was introduced with Windows for Workgroups (3.11).
The MSBACKUP command is available in MS-DOS versions 6 and later.
MSCDEX
MSCDEX is a driver executable which allows DOS programs to recognize, read, and control CD-ROMs.
The command is available in MS-DOS versions 6 and later.
MSD
The MSD command provides detailed technical information about the computer's hardware and software. MSD was new in MS-DOS 6; the PC DOS version of this command is QCONFIG. The utility first appeared with Microsoft Word 2.0 and then in Windows 3.10.
MSHERC
The MSHERC.COM (also QBHERC.COM) was a TSR graphics driver supplied with Microsoft QuickC, QuickBASIC, and the C Compiler, to allow use of the Hercules adapter high-resolution graphics capability (720 x 348, 2 colors).
NLSFUNC
Loads extended nationalization and localization support from COUNTRY.SYS, and changes the code page of drivers and system modules resident in RAM.
In later versions of DR-DOS 6, NLSFUNC relocated itself into the HiMem area, thereby freeing a portion of the valuable lower 640 KiB of "conventional" memory available to software.
The command is available in MS-DOS versions 3.3 and later.
PATH
Displays or sets a search path for executable files.
The command is available in MS-DOS versions 2 and later.
PAUSE
Suspends processing of a batch program and displays the message "Press any key to continue . . ." unless given other text to display.
The command is available in MS-DOS versions 1 and later.
PING
Allows the user to test the availability of a network connection to a specified host. Hostnames are usually resolved to IP addresses.
It is not included in many DOS versions; typically ones with network stacks will have it as a diagnostic tool.
POWER
The POWER command is used to turn power management on and off, report the status of power management, and set levels of power conservation. It is an external command implemented as POWER.EXE.
The command is available in MS-DOS versions 6 and later.
PRINT
The PRINT command adds or removes files in the print queue. This command was introduced in MS-DOS version 2. Before that there was no built-in support for background printing files. The user would usually use the copy command to copy files to LPT1.
PRINTFIX
PROMPT
The command allows the user to change the prompt in the command screen. The default prompt is $P$G (i.e. the current drive and path followed by a greater-than sign, such as C:\>), but it can be changed to anything. For example, PROMPT $D displays the current system date as the prompt. Type PROMPT /? in the cmd screen for help on this function.
The command is available in MS-DOS versions 2 and later and IBM PC DOS releases 2.1 and later.
PS
A utility inspired by the UNIX/XENIX ps command. It also provides a full-screen mode, similar to the top utility on UNIX systems.
QBASIC
An integrated development environment and BASIC interpreter.
The command is available in MS-DOS versions 5 and later.
RD or RMDIR
Remove a directory (delete a directory); by default the directories must be empty of files for the command to succeed.
The command is available in MS-DOS versions 2 and later.
The deltree command in some versions of MS-DOS and all versions of Windows 9x removes non-empty directories.
RECOVER
A primitive filesystem error recovery utility included in MS-DOS / IBM PC DOS.
The command is available in MS-DOS versions 2 through 5.
REM
Remark (comment) command, normally used within a batch file, and for DR-DOS, PC/MS-DOS 6 and above, in CONFIG.SYS. This command is processed by the command processor. Thus, its output can be redirected to create a zero-byte file. REM is useful in logged sessions or screen-captures. One might add comments by way of labels, usually starting with double-colon (::). These are not processed by the command processor.
REN
The REN command renames a file. Unlike the move command, this command cannot be used to rename subdirectories or to rename files across drives. Mass renames can be accomplished by using the wildcard characters asterisk (*) and question mark (?).
The command is available in MS-DOS versions 1 and later.
REPLACE
A command that is used to replace one or more existing computer files or add new files to a target directory.
The command is available in MS-DOS versions 3.2 and later.
RESTORE
See: BACKUP and RESTORE
SCANDISK
Disk diagnostic utility. Scandisk was a replacement for the chkdsk utility, starting with MS-DOS version 6.2. Its primary advantages over chkdsk are that it is more reliable and that it can run a surface scan which finds and marks bad clusters on the disk. It also provides a mouse-driven point-and-click TUI, allowing an interactive session to complement command-line batch runs.
On Windows NT-based operating systems, chkdsk is used again in this role, with surface scan and bad cluster detection functionality included.
SELECT
The SELECT command formats a disk and installs country-specific information and keyboard codes.
It was initially only available with IBM PC DOS. The version included with PC DOS 3.0 and 3.1 is hard-coded to transfer the operating system from A: to B:, while from PC DOS 3.2 onward the source and destination can be specified, so the command can also be used to install DOS to the hard disk.
The version included with MS-DOS 4 and PC DOS 4 is no longer a simple command-line utility, but a full-fledged installer.
The command is available in MS-DOS versions 3.3 and 4 and IBM PC DOS releases 3 through 4.
This command is no longer included in DOS Version 5 and later, where it has been replaced by SETUP.
SET
Sets environment variables.
The command is available in MS-DOS versions 2 and later.
cmd.exe in Windows NT and 2000, as well as 4DOS, 4OS2, 4NT, and a number of third-party solutions, allows direct entry of environment variables from the command prompt. From at least Windows 2000, the set command allows for the evaluation of strings into variables, thus providing, inter alia, a means of performing integer arithmetic.
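For example (the variable name and value are illustrative):
set BACKUPDIR=C:\TMP
set
The first line creates or changes the BACKUPDIR variable, and typing SET with no arguments lists all current environment variables. Under cmd.exe (Windows 2000 and later), set /a evaluates arithmetic expressions, e.g. set /a result=2+3.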
SETUP
The command is available in MS-DOS versions 5 and later.
This command runs the operating system's installation (setup) program; on computers running DOS versions 5 and later, it starts system installers such as the Windows 95 and Windows 98 setup programs.
SETVER
SetVer is a TSR program designed to return a different value to the version of DOS that is running. This allows programs that look for a specific version of DOS to run under a different DOS.
The command is available in MS-DOS versions 5 and later.
SHARE
Installs support for file sharing and locking capabilities.
The command is available in MS-DOS versions 3 and later.
SHIFT
The SHIFT command makes additional replaceable parameters beyond the standard ten available for use in batch files.
This is done by changing the position of the replaceable parameters: each parameter is replaced by the subsequent one (e.g. %0 by %1, %1 by %2, and so on).
The command is available in MS-DOS versions 2 and later.
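For example, a batch file can process any number of parameters with a loop such as (the label names are illustrative):
:next
if "%1"=="" goto done
echo Processing %1
shift
goto next
:done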
SIZER
The external command SIZER.EXE is not intended to be started directly from the command prompt. It is used by MemMaker during the memory-optimization process.
SMARTDRV
SMARTDrive is a disk caching utility that speeds up disk access by keeping recently read data (and, optionally, delayed writes) in memory.
The command is available in MS-DOS versions 6 and later.
SORT
A filter to sort lines in the input data stream and send them to the output data stream. Similar to the Unix command sort. Handles files up to 64k. This sort is always case insensitive.
The command is available in MS-DOS versions 2 and later.
SUBST
A utility to map a subdirectory to a drive letter. The opposite can be achieved via the JOIN command.
The command is available in MS-DOS versions 3.1 and later.
SYS
A utility to make a volume bootable. Sys rewrites the Volume Boot Code (the first sector of the partition that SYS is acting on) so that the code, when executed, will look for IO.SYS. SYS also copies the core DOS system files, IO.SYS, MSDOS.SYS, and COMMAND.COM, to the volume. SYS does not rewrite the Master Boot Record, contrary to widely held belief.
The command is available in MS-DOS versions 1 and later.
TELNET
The Telnet Client is a tool for developers and administrators to help manage and test network connectivity.
TIME
Displays the system time and waits for the user to enter a new time. Complements the DATE command.
The command is available in MS-DOS versions 1 and later.
TITLE
Enables a user to change the title of their MS-DOS window.
TREE
An external command that graphically displays the path of each directory and its subdirectories on the specified drive.
The command is available in MS-DOS versions 3.2 and later and IBM PC DOS releases 2 and later.
TRUENAME
Internal command that expands the name of a file, directory, or drive, and displays its absolute pathname as the result. It will expand relative pathnames, SUBST drives, and JOIN directories, to find the actual directory.
For example, in DOS 7.1, if the current directory is C:\WINDOWS\SYSTEM, then TRUENAME .. displays C:\WINDOWS and TRUENAME ..\..\DOS displays C:\DOS.
The argument does not need to refer to an existing file or directory: TRUENAME will output the absolute pathname as if it did. Also TRUENAME does not search in the PATH.
For example, in DOS 5, if the current directory is C:\TEMP, then TRUENAME command.com will display C:\TEMP\COMMAND.COM (which does not exist), not C:\DOS\COMMAND.COM (which does and is in the PATH).
This command displays the UNC pathnames of mapped network or local CD drives. This command is an undocumented DOS command. The help switch "/?" defines it as a "Reserved command name". It is available in MS-DOS version 5.00 and later, including the DOS 7 and 8 in Windows 95/98/ME. The C library function realpath performs this function. The Microsoft Windows NT command processors do not support this command, including the versions of command.com for NT.
TYPE
Displays a file. The MORE command is frequently used in conjunction with this command, e.g. type long-text-file | more. TYPE can also be used to concatenate files (for example, type file2 >> file1 appends file2 to file1); however, this won't work for large files; use the copy command instead.
The command is available in MS-DOS versions 1 and later.
UNDELETE
Restores files previously deleted with DEL. By default all recoverable files in the working directory are restored; options are used to change this behavior. If the MS-DOS mirror TSR program is used, deletion-tracking files are created and can be used by undelete.
The command is available in MS-DOS versions 5 and later.
UNFORMAT
MS-DOS version 5 introduced the quick format option (Format /Q) which removes the disk's file table without deleting any of the data. The same version also introduced the UNFORMAT command to undo the effects of a quick format, restoring the file table and making all the files accessible again.
UNFORMAT only works if invoked before any further changes have overwritten the drive's contents.
VER
An internal DOS command, that reports the DOS version presently running, and since MS-DOS 5, whether DOS is loaded high.
The command is available in MS-DOS versions 2 and later.
VERIFY
Enables or disables verification that files are written correctly to disk. Typing verify on enables the feature and verify off disables it; typing VERIFY without a parameter displays the current setting.
The command is available in MS-DOS versions 2 and later.
VOL
An internal command that displays the disk volume label and serial number.
The command is available in MS-DOS versions 2 and later.
VSAFE
A TSR program that continuously monitors the computer for viruses.
The command is available in MS-DOS versions 6 and later.
XCOPY
Copies entire directory trees. Xcopy is an extended version of the copy command that can copy files and whole directory trees from one location to another.
XCOPY usage and attributes can be obtained by typing XCOPY /? at the DOS command line.
The command is available in MS-DOS versions 3.2 and later.
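For example (the paths are illustrative):
xcopy c:\docs d:\backup /s /e
copies the directory C:\DOCS, including all of its subdirectories (even empty ones, because of /E), to D:\BACKUP.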
See also
:Category:Windows commands
COMMAND.COM
cmd.exe – command-line interpreter in various Windows and OS/2 systems
Command-line interface
List of CONFIG.SYS directives
Timeline of DOS operating systems
References
Further reading
External links
Command-Line Reference : Microsoft TechNet Database "Command-Line Reference"
The MS-DOS 6 Technical Reference on TechNet contains the official Microsoft MS-DOS 6 command reference documentation.
MDGx MS-DOS Undocumented + Hidden Secrets
MS-DOS v1.25 and v2.0 source code
There are several guides to DOS commands available that are licensed under the GNU Free Documentation License:
The FreeDOS Spec at SourceForge is a plaintext specification, written in 1999, for how DOS commands should work in FreeDOS
MS-DOS commands
Reference for windows commands with examples
A Collection of Undocumented and Obscure Features in Various MS-DOS Versions
DOS commands
DOS commands | List of DOS commands | [
"Technology"
] | 7,402 | [
"Computing-related lists",
"DOS commands",
"Computing commands",
"Microsoft lists"
] |
356,382 | https://en.wikipedia.org/wiki/Gene%20regulatory%20network | A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRN also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo).
The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory.
In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects.
In multicellular animals the same principle has been put in the service of gene cascades that control body-shape. Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. They also control and maintain adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs.
Overview
At one level, biological cells can be thought of as "partially mixed bags" of biological chemicals – in the discussion of gene regulatory networks, these chemicals are mostly the messenger RNAs (mRNAs) and proteins that arise from gene expression. These mRNA and proteins interact with each other with various degrees of specificity. Some diffuse around the cell. Others are bound to cell membranes, interacting with molecules in the environment. Still others pass through cell membranes and mediate long range signals to other cells in a multi-cellular organism. These molecules and their interactions comprise a gene regulatory network. A typical gene regulatory network looks something like this:
The nodes of this network can represent genes, proteins, mRNAs, protein/protein complexes or cellular processes. Nodes that are depicted as lying along vertical lines are associated with the cell/environment interfaces, while the others are free-floating and can diffuse. Edges between nodes represent interactions between the nodes, that can correspond to individual molecular reactions between DNA, mRNA, miRNA, proteins or molecular processes through which the products of one gene affect those of another, though the lack of experimentally obtained information often implies that some reactions are not modeled at such a fine level of detail. These interactions can be inductive (usually represented by arrowheads or the + sign), with an increase in the concentration of one leading to an increase in the other, inhibitory (represented with filled circles, blunt arrows or the minus sign), with an increase in one leading to a decrease in the other, or dual, when depending on the circumstances the regulator can activate or inhibit the target node. The nodes can regulate themselves directly or indirectly, creating feedback loops, which form cyclic chains of dependencies in the topological network. The network structure is an abstraction of the system's molecular or chemical dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. In practice, such GRNs are inferred from the biological literature on a given system and represent a distillation of the collective knowledge about a set of related biochemical reactions. To speed up the manual curation of GRNs, some recent efforts try to use text mining, curated databases, network inference from massive data, model checking and other information extraction technologies for this purpose.
Genes can be viewed as nodes in the network, with input being proteins such as transcription factors, and outputs being the level of gene expression. The value of the node depends on a function which depends on the value of its regulators in previous time steps (in the Boolean network described below these are Boolean functions, typically AND, OR, and NOT). These functions have been interpreted as performing a kind of information processing within the cell, which determines cellular behavior. The basic drivers within cells are concentrations of some proteins, which determine both spatial (location within the cell or tissue) and temporal (cell cycle or developmental stage) coordinates of the cell, as a kind of "cellular memory". The gene networks are only beginning to be understood, and it is a next step for biology to attempt to deduce the functions for each gene "node", to help understand the behavior of the system in increasing levels of complexity, from gene to signaling pathway, cell or tissue level.
Mathematical models of GRNs have been developed to capture the behavior of the system being modeled, and in some cases generate predictions corresponding with experimental observations. In some other cases, models have proven to make accurate novel predictions, which can be tested experimentally, thus suggesting new approaches to explore in an experiment that sometimes wouldn't be considered in the design of the protocol of an experimental laboratory. Modeling techniques include differential equations (ODEs), Boolean networks, Petri nets, Bayesian networks, graphical Gaussian network models, Stochastic, and Process Calculi. Conversely, techniques have been proposed for generating models of GRNs that best explain a set of time series observations. Recently it has been shown that ChIP-seq signal of histone modification are more correlated with transcription factor motifs at promoters in comparison to RNA level. Hence it is proposed that time-series histone modification ChIP-seq could provide more reliable inference of gene-regulatory networks in comparison to methods based on expression levels.
Structure and evolution
Global feature
Gene regulatory networks are generally thought to be made up of a few highly connected nodes (hubs) and many poorly connected nodes nested within a hierarchical regulatory regime. Thus gene regulatory networks approximate a hierarchical scale free network topology. This is consistent with the view that most genes have limited pleiotropy and operate within regulatory modules. This structure is thought to evolve due to the preferential attachment of duplicated genes to more highly connected genes. Recent work has also shown that natural selection tends to favor networks with sparse connectivity.
There are primarily two ways that networks can evolve, both of which can occur simultaneously. The first is that network topology can be changed by the addition or subtraction of nodes (genes) or parts of the network (modules) may be expressed in different contexts. The Drosophila Hippo signaling pathway provides a good example. The Hippo signaling pathway controls both mitotic growth and post-mitotic cellular differentiation. Recently it was found that the network the Hippo signaling pathway operates in differs between these two functions which in turn changes the behavior of the Hippo signaling pathway. This suggests that the Hippo signaling pathway operates as a conserved regulatory module that can be used for multiple functions depending on context. Thus, changing network topology can allow a conserved module to serve multiple functions and alter the final output of the network. The second way networks can evolve is by changing the strength of interactions between nodes, such as how strongly a transcription factor may bind to a cis-regulatory element. Such variation in strength of network edges has been shown to underlie between species variation in vulva cell fate patterning of Caenorhabditis worms.
Local feature
Another widely cited characteristic of gene regulatory network is their abundance of certain repetitive sub-networks known as network motifs. Network motifs can be regarded as repetitive topological patterns when dividing a big network into small blocks. Previous analysis found several types of motifs that appeared more often in gene regulatory networks than in randomly generated networks. As an example, one such motif is called feed-forward loops, which consist of three nodes. This motif is the most abundant among all possible motifs made up of three nodes, as is shown in the gene regulatory networks of fly, nematode, and human.
The enriched motifs have been proposed to follow convergent evolution, suggesting they are "optimal designs" for certain regulatory purposes. For example, modeling shows that feed-forward loops are able to coordinate the change in node A (in terms of concentration and activity) and the expression dynamics of node C, creating different input-output behaviors. The galactose utilization system of E. coli contains a feed-forward loop which accelerates the activation of galactose utilization operon galETK, potentially facilitating the metabolic transition to galactose when glucose is depleted. The feed-forward loop in the arabinose utilization systems of E.coli delays the activation of arabinose catabolism operon and transporters, potentially avoiding unnecessary metabolic transition due to temporary fluctuations in upstream signaling pathways. Similarly in the Wnt signaling pathway of Xenopus, the feed-forward loop acts as a fold-change detector that responds to the fold change, rather than the absolute change, in the level of β-catenin, potentially increasing the resistance to fluctuations in β-catenin levels. Following the convergent evolution hypothesis, the enrichment of feed-forward loops would be an adaptation for fast response and noise resistance. Recent research found that yeast grown in an environment of constant glucose developed mutations in glucose signaling pathways and growth regulation pathway, suggesting regulatory components responding to environmental changes are dispensable under constant environment.
On the other hand, some researchers hypothesize that the enrichment of network motifs is non-adaptive. In other words, gene regulatory networks can evolve to a similar structure without the specific selection on the proposed input-output behavior. Support for this hypothesis often comes from computational simulations. For example, fluctuations in the abundance of feed-forward loops in a model that simulates the evolution of gene regulatory networks by randomly rewiring nodes may suggest that the enrichment of feed-forward loops is a side-effect of evolution. In another model of gene regulator networks evolution, the ratio of the frequencies of gene duplication and gene deletion show great influence on network topology: certain ratios lead to the enrichment of feed-forward loops and create networks that show features of hierarchical scale free networks. De novo evolution of coherent type 1 feed-forward loops has been demonstrated computationally in response to selection for their hypothesized function of filtering out a short spurious signal, supporting adaptive evolution, but for non-idealized noise, a dynamics-based system of feed-forward regulation with different topology was instead favored.
Bacterial regulatory networks
Regulatory networks allow bacteria to adapt to almost every environmental niche on earth. A network of interactions among diverse types of molecules including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress. A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals.
One example of stress is when the environment suddenly becomes poor in nutrients. This triggers a complex adaptation process in bacteria, such as E. coli. After this environmental change, thousands of genes change expression level. However, these changes are predictable from the topology and logic of the gene network that is reported in RegulonDB. Specifically, on average, the response strength of a gene was predictable from the difference between the numbers of activating and repressing input transcription factors of that gene.
Modelling
Coupled ordinary differential equations
It is common to model such a network with a set of coupled ordinary differential equations (ODEs) or SDEs, describing the reaction kinetics of the constituent parts. Suppose that our regulatory network has $N$ nodes, and let $S_1(t), S_2(t), \ldots, S_N(t)$ represent the concentrations of the $N$ corresponding substances at time $t$. Then the temporal evolution of the system can be described approximately by

$\frac{dS_j}{dt} = f_j(S_1, S_2, \ldots, S_N)$
where the functions $f_j$ express the dependence of $S_j$ on the concentrations of other substances present in the cell. The functions $f_j$ are ultimately derived from basic principles of chemical kinetics or from simple expressions derived from these, e.g. Michaelis–Menten enzymatic kinetics. Hence, the functional forms of the $f_j$ are usually chosen as low-order polynomials or Hill functions that serve as an ansatz for the real molecular dynamics. Such models are then studied using the mathematics of nonlinear dynamics. System-specific information, like reaction rate constants and sensitivities, is encoded as constant parameters.
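As a minimal illustration (not drawn from any particular study), a two-gene mutual-repression network (a toggle switch) with Hill-type repression, maximal production rates $\alpha_1, \alpha_2$, Hill coefficients $n, m$, and unit degradation rates can be written as

$\frac{dS_1}{dt} = \frac{\alpha_1}{1 + S_2^{\,n}} - S_1, \qquad \frac{dS_2}{dt} = \frac{\alpha_2}{1 + S_1^{\,m}} - S_2.$

Depending on the parameters, this small system settles into either one or two stable steady states.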
By solving for the fixed points of the system:

$\frac{dS_j}{dt} = 0 \quad \text{for all } j,$

one obtains (possibly several) concentration profiles of proteins and mRNAs that are theoretically sustainable (though not necessarily stable). Steady states of kinetic equations thus correspond to potential cell types, and oscillatory solutions of the above equation correspond to naturally cyclic cell types. Mathematical stability of these attractors can usually be characterized by the sign of higher derivatives at critical points, and then corresponds to biochemical stability of the concentration profile. Critical points and bifurcations in the equations correspond to critical cell states in which small state or parameter perturbations could switch the system between one of several stable differentiation fates. Trajectories correspond to the unfolding of biological pathways, and transients of the equations correspond to short-term biological events. For a more mathematical discussion, see the articles on nonlinearity, dynamical systems, bifurcation theory, and chaos theory.
Boolean network
The following example illustrates how a Boolean network can model a GRN together with its gene products (the outputs) and the substances from the environment that affect it (the inputs). Stuart Kauffman was amongst the first biologists to use the metaphor of Boolean networks to model genetic regulatory networks.
Each gene, each input, and each output is represented by a node in a directed graph in which there is an arrow from one node to another if and only if there is a causal link between the two nodes.
Each node in the graph can be in one of two states: on or off.
For a gene, "on" corresponds to the gene being expressed; for inputs and outputs, "on" corresponds to the substance being present.
Time is viewed as proceeding in discrete steps. At each step, the new state of a node is a Boolean function of the prior states of the nodes with arrows pointing towards it.
The validity of the model can be tested by comparing simulation results with time series observations. A partial validation of a Boolean network model can also come from testing the predicted existence of a yet unknown regulatory connection between two particular transcription factors that each are nodes of the model.
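As a minimal sketch (the genes, input, and update rules here are invented purely for illustration), a toy Boolean network can be simulated in a few lines of C:

#include <stdio.h>

/* Toy three-gene Boolean network with one environmental input.
   Update rules (arbitrary for the example):
   a(t+1) = NOT c(t);  b(t+1) = a(t) AND input;  c(t+1) = a(t) OR b(t). */
int main(void) {
    int a = 1, b = 0, c = 0;    /* initial gene states: on/off */
    int input = 1;              /* substance present in the environment */
    for (int t = 0; t < 8; t++) {
        printf("t=%d  a=%d b=%d c=%d\n", t, a, b, c);
        int na = !c;            /* each new state is a Boolean function   */
        int nb = a && input;    /* of the prior states of the nodes with  */
        int nc = a || b;        /* arrows pointing towards it             */
        a = na; b = nb; c = nc; /* synchronous update: one discrete step  */
    }
    return 0;
}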
Continuous networks
Continuous network models of GRNs are an extension of the Boolean networks described above. Nodes still represent genes, and connections between them represent regulatory influences on gene expression. Genes in biological systems display a continuous range of activity levels, and it has been argued that using a continuous representation captures several properties of gene regulatory networks not present in the Boolean model. Formally, most of these approaches are similar to an artificial neural network, as inputs to a node are summed up and the result serves as input to a sigmoid function; proteins, however, often control gene expression in a synergistic, i.e. non-linear, way. There is now a continuous network model that allows grouping of inputs to a node, thus realizing another level of regulation. This model is formally closer to a higher-order recurrent neural network. The same model has also been used to mimic the evolution of cellular differentiation and even multicellular morphogenesis.
Stochastic gene networks
Experimental results
have demonstrated that gene expression is a stochastic process. Thus, many authors are now using the stochastic formalism, after the work by Arkin et al. Works on single gene expression and small synthetic genetic networks, such as the genetic toggle switch of Tim Gardner and Jim Collins, provided additional experimental data on the phenotypic variability and the stochastic nature of gene expression. The first versions of stochastic models of gene expression involved only instantaneous reactions and were driven by the Gillespie algorithm.
Since some processes, such as gene transcription, involve many reactions and could not be correctly modeled as an instantaneous reaction in a single step, it was proposed to model these reactions as single step multiple delayed reactions in order to account for the time it takes for the entire process to be complete.
From here, a set of reactions was proposed that allows generating GRNs. These are then simulated using a modified version of the Gillespie algorithm that can simulate multiple time-delayed reactions (chemical reactions where each of the products is provided a time delay that determines when it will be released into the system as a "finished product").
For example, basic transcription of a gene can be represented by the following single-step reaction (RNAP is the RNA polymerase, RBS is the RNA ribosome binding site, and Pro i is the promoter region of gene i), in which each product is released after its own time delay:

$\text{RNAP} + \text{Pro}_i \rightarrow \text{Pro}_i(\tau_1) + \text{RBS}_i(\tau_1) + \text{RNAP}(\tau_2)$
Furthermore, there seems to be a trade-off between the noise in gene expression, the speed with which genes can switch, and the metabolic cost associated with their functioning. More specifically, for any given level of metabolic cost, there is an optimal trade-off between noise and processing speed, and increasing the metabolic cost leads to better speed-noise trade-offs.
A recent work proposed a simulator (SGNSim, Stochastic Gene Networks Simulator) that can model GRNs in which transcription and translation are modeled as multiple time-delayed events, with dynamics driven by a stochastic simulation algorithm (SSA) able to deal with multiple time-delayed events.
The time delays can be drawn from several distributions and the reaction rates from complex
functions or from physical parameters. SGNSim can generate ensembles of GRNs within a set of user-defined parameters, such as topology. It can also be used to model specific GRNs and systems of chemical reactions. Genetic perturbations such as gene deletions, gene over-expression, insertions, and frame-shift mutations can also be modeled.
The GRN is created from a graph with the desired topology, imposing in-degree and out-degree distributions. Gene promoter activities are affected by other genes expression products that act as inputs, in the form of monomers or combined into multimers and set as direct or indirect. Next, each direct input is assigned to an operator site and different transcription factors can be allowed, or not, to compete for the same operator site, while indirect inputs are given a target. Finally, a function is assigned to each gene, defining the gene's response to a combination of transcription factors (promoter state). The transfer functions (that is, how genes respond to a combination of inputs) can be assigned to each combination of promoter states as desired.
In other recent work, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems.
Prediction
Other work has focused on predicting the gene expression levels in a gene regulatory network. The approaches used to model gene regulatory networks have been constrained to be interpretable and, as a result, are generally simplified versions of the network. For example, Boolean networks have been used due to their simplicity and ability to handle noisy data but lose data information by having a binary representation of the genes. Also, artificial neural networks omit using a hidden layer so that they can be interpreted, losing the ability to model higher order correlations in the data. Using a model that is not constrained to be interpretable, a more accurate model can be produced. Being able to predict gene expressions more accurately provides a way to explore how drugs affect a system of genes as well as for finding which genes are interrelated in a process. This has been encouraged by the DREAM competition which promotes a competition for the best prediction algorithms. Some other recent work has used artificial neural networks with a hidden layer.
Applications
Multiple sclerosis
There are three classes of multiple sclerosis: relapsing-remitting (RRMS), primary progressive (PPMS) and secondary progressive (SPMS). Gene regulatory networks (GRNs) play a vital role in understanding the disease mechanism across these three different multiple sclerosis classes.
See also
Body plan
Cis-regulatory module
Genenetwork (database)
Morphogen
Operon
Synexpression
Systems biology
Weighted gene co-expression network analysis
References
Further reading
External links
Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform
Open source web service for GRN analysis
BIB: Yeast Biological Interaction Browser
Graphical Gaussian models for genome data – Inference of gene association networks with GGMs
A bibliography on learning causal networks of gene interactions – regularly updated, contains hundreds of links to papers from bioinformatics, statistics, machine learning.
https://web.archive.org/web/20060907074456/http://mips.gsf.de/proj/biorel/ BIOREL is a web-based resource for quantitative estimation of the gene network bias in relation to available database information about gene activity/function/properties/associations/interactions.
Evolving Biological Clocks using Genetic Regulatory Networks – Information page with model source code and Java applet.
Engineered Gene Networks
Tutorial: Genetic Algorithms and their Application to the Artificial Evolution of Genetic Regulatory Networks
BEN: a web-based resource for exploring the connections between genes, diseases, and other biomedical entities
Global protein-protein interaction and gene regulation network of Arabidopsis thaliana
Gene expression
Networks
Systems biology
Evolutionary developmental biology | Gene regulatory network | [
"Chemistry",
"Biology"
] | 4,770 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Systems biology"
] |
356,415 | https://en.wikipedia.org/wiki/Computer-aided%20maintenance | Computer-aided maintenance (not to be confused with CAM which usually stands for Computer Aided Manufacturing) refers to systems that utilize software to organize planning, scheduling, and support of maintenance and repair. A common application of such systems is the maintenance of computers, either hardware or software, themselves. It can also apply to the maintenance of other complex systems that require periodic maintenance, such as reminding operators that preventive maintenance is due or even predicting when such maintenance should be performed based on recorded past experience.
Computer aided configuration
The first computer-aided maintenance software came from DEC in the 1980s to configure VAX computers. The software was built using the techniques of artificial intelligence expert systems, because the problem of configuring a VAX required expert knowledge. During the research, the software was called R1 and was renamed XCON when placed in service. Fundamentally, XCON was a rule-based configuration database written as an expert system using forward chaining rules. As one of the first expert systems to be pressed into commercial service it created high expectations, which did not materialize, as DEC lost commercial pre-eminence.
Help Desk software
Help desks frequently use help desk software that captures symptoms of a bug and relates them to fixes, in a fix database. One of the problems with this approach is that the understanding of the problem is embodied in a non-human way, so that solutions are not unified.
Strategies for finding fixes
The bubble-up strategy simply records pairs of symptoms and fixes. The most frequent set of pairs is then presented as a tentative solution, which is then attempted. If the fix works, that fact is further recorded, along with the configuration of the presenting system, into a solutions database.
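A minimal sketch of the bubble-up idea in C (the symptom and fix strings, and the success counts, are invented for illustration):

#include <stdio.h>
#include <string.h>

/* Each record pairs a reported symptom with a fix and a count of how often
   that fix has worked; the most successful pair "bubbles up" to the top. */
struct pair { const char *symptom; const char *fix; int successes; };

static struct pair db[] = {
    { "slow startup", "trim startup applications", 7 },
    { "slow startup", "reboot",                    3 },
    { "no network",   "reboot",                    5 },
};

/* Return the fix with the highest success count for a given symptom. */
static const char *best_fix(const char *symptom) {
    const char *best = NULL;
    int best_count = -1;
    for (size_t i = 0; i < sizeof db / sizeof db[0]; i++) {
        if (strcmp(db[i].symptom, symptom) == 0 && db[i].successes > best_count) {
            best = db[i].fix;
            best_count = db[i].successes;
        }
    }
    return best;    /* NULL if the symptom has never been recorded */
}

int main(void) {
    printf("Tentative solution: %s\n", best_fix("slow startup"));
    return 0;
}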
Oddly enough, shutting down and booting up again manages to 'fix,' or at least 'mask,' a bug in many computer-based systems; thus reboot is the remedy for distressingly many symptoms in a 'fix database.' The reason a reboot often works is that it causes the RAM to be flushed. However, the same set of actions is typically likely to recreate the same problem, demonstrating a need to refine the "startup" applications (which launch into memory) or install the latest fix/patch of the offending application.
Currently, most expertise in finding fixes lies in human domain experts, who simply sit at a replica of the computer-based system, and who then 'talk through' the problem with the client to duplicate the problem, and then relate the fix.
References
Help desk
Product lifecycle management
Computer systems | Computer-aided maintenance | [
"Technology",
"Engineering"
] | 522 | [
"Computer science",
"Computers",
"Computer engineering",
"Computer systems"
] |
356,457 | https://en.wikipedia.org/wiki/Lookup%20table | In computer science, a lookup table (LUT) is an array that replaces runtime computation of a mathematical function with a simpler array indexing operation, in a process termed as direct addressing. The savings in processing time can be significant, because retrieving a value from memory is often faster than carrying out an "expensive" computation or input/output operation. The tables may be precalculated and stored in static program storage, calculated (or "pre-fetched") as part of a program's initialization phase (memoization), or even stored in hardware in application-specific platforms. Lookup tables are also used extensively to validate input values by matching against a list of valid (or invalid) items in an array and, in some programming languages, may include pointer functions (or offsets to labels) to process the matching input. FPGAs also make extensive use of reconfigurable, hardware-implemented, lookup tables to provide programmable hardware functionality.
LUTs differ from hash tables in that, to retrieve a value v with key k, a hash table would store the value v in the slot h(k), where h is a hash function, i.e. k is used to compute the slot; in the case of a LUT, the value v is stored in slot k itself, and is thus directly addressable.
History
Before the advent of computers, lookup tables of values were used to speed up hand calculations of complex functions, such as in trigonometry, logarithms, and statistical density functions.
In ancient (499 AD) India, Aryabhata created one of the first sine tables, which he encoded in a Sanskrit-letter-based number system. In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times the row value; the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144". Modern school children are often taught to memorize "times tables" to avoid calculations of the most commonly used numbers (up to 9 × 9 or 12 × 12).
Early in the history of computers, input/output operations were particularly slow – even in comparison to processor speeds of the time. It made sense to reduce expensive read operations by a form of manual caching by creating either static lookup tables (embedded in the program) or dynamic prefetched arrays to contain only the most commonly occurring data items. Despite the introduction of systemwide caching that now automates this process, application level lookup tables can still improve performance for data items that rarely, if ever, change.
Lookup tables were one of the earliest functionalities implemented in computer spreadsheets, with the initial version of VisiCalc (1979) including a LOOKUP function among its original 20 functions. This has been followed by subsequent spreadsheets, such as Microsoft Excel, and complemented by specialized VLOOKUP and HLOOKUP functions to simplify lookup in a vertical or horizontal table. In Microsoft Excel the XLOOKUP function has been rolled out starting 28 August 2019.
Limitations
Although the performance of a LUT is a guaranteed constant time (O(1)) for a lookup operation, no two entities or values can have the same key k. When the size of the universe U, from which the keys are drawn, is large, it might be impractical or impossible to store the table in memory. Hence, in this case, a hash table would be a preferable alternative.
Examples
Trivial hash function
For a trivial hash function lookup, the unsigned raw data value is used directly as an index to a one-dimensional table to extract a result. For small ranges, this can be amongst the fastest lookup, even exceeding binary search speed with zero branches and executing in constant time.
Counting bits in a series of bytes
One discrete problem that is expensive to solve on many computers is that of counting the number of bits that are set to 1 in a (binary) number, sometimes called the population count (or popcount). For example, the decimal number "37" is "00100101" in binary, so it contains three bits that are set to binary "1".
A simple example of C code, designed to count the 1 bits in an int, might look like this:
int count_ones(unsigned int x) {
int result = 0;
while (x != 0) {
x = x & (x - 1); /* clears the lowest set bit of x */
result++;
}
return result;
}
The above implementation requires one loop iteration per set bit (up to 32 for a 32-bit value), and the branching involved can cost several clock cycles. It can be "unrolled" into a lookup table, which in turn uses a trivial hash function for better performance.
The bits array, bits_set, with 256 entries is constructed by giving the number of one bits set in each possible byte value (e.g. 0x00 = 0, 0x01 = 1, 0x02 = 1, and so on). Although a runtime algorithm could be used to generate the bits_set array, that is an inefficient use of clock cycles given the table's small size, so a precomputed table is used (although a compile-time script could be used to generate the table and append it to the source file). The sum of ones in each byte of the integer can then be calculated through a trivial hash function lookup on each byte, effectively avoiding branches and resulting in a considerable improvement in performance.
int count_ones(unsigned int input_value) {
/* Reinterpret the 32-bit value as four bytes; unsigned char keeps the
   byte values in the range 0-255 so they can be used directly as indices. */
union four_bytes {
unsigned int big_int;
unsigned char each_byte[4];
} operand = { input_value };
static const int bits_set[256] = {
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4,
2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3, 3, 4,
2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6,
4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5,
3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6,
4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8};
return (bits_set[operand.each_byte[0]] + bits_set[operand.each_byte[1]] +
bits_set[operand.each_byte[2]] + bits_set[operand.each_byte[3]]);
}
Lookup tables in image processing
In data analysis applications, such as image processing, a lookup table (LUT) can be used to transform the input data into a more desirable output format. For example, a grayscale picture of the planet Saturn could be transformed into a color image to emphasize the differences in its rings.
In image processing, lookup tables are often called LUTs (or 3DLUT), and give an output value for each of a range of index values. One common LUT, called the colormap or palette, is used to determine the colors and intensity values with which a particular image will be displayed. In computed tomography, "windowing" refers to a related concept for determining how to display the intensity of measured radiation.
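As a sketch of the colormap idea, the C code below maps each 8-bit grayscale value through a 256-entry palette to an RGB triple; the palette here is a made-up blue-to-red ramp, not any standard colormap:
#include <stdint.h>
#include <stdio.h>

struct rgb { uint8_t r, g, b; };

int main(void) {
    /* Build a 256-entry palette once: dark blue to red ramp (illustrative only). */
    struct rgb palette[256];
    for (int i = 0; i < 256; i++) {
        palette[i].r = (uint8_t)i;
        palette[i].g = 0;
        palette[i].b = (uint8_t)(255 - i);
    }

    uint8_t gray_pixels[4] = { 0, 64, 128, 255 };    /* toy grayscale "image" */
    for (int i = 0; i < 4; i++) {
        struct rgb c = palette[gray_pixels[i]];      /* one indexing operation per pixel */
        printf("gray %3u -> rgb(%3u, %3u, %3u)\n",
               gray_pixels[i], c.r, c.g, c.b);
    }
    return 0;
}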
Discussion
A classic example of reducing run-time computations using lookup tables is to obtain the result of a trigonometry calculation, such as the sine of a value. Calculating trigonometric functions can substantially slow a computing application. The same application can finish much sooner when it first precalculates the sine of a number of values, for example for each whole number of degrees (the table can be defined as static variables at compile time, reducing repeated run-time costs).
When the program requires the sine of a value, it can use the lookup table to retrieve the closest sine value from a memory address, and may also interpolate to the sine of the desired value, instead of calculating by mathematical formula. Lookup tables are thus used by mathematics coprocessors in computer systems. An error in a lookup table was responsible for Intel's infamous floating-point divide bug.
Functions of a single variable (such as sine and cosine) may be implemented by a simple array. Functions involving two or more variables require multidimensional array indexing techniques. The latter case may thus employ a two-dimensional array power[x][y] to replace a function that calculates x raised to the power y for a limited range of x and y values. Functions that have more than one result may be implemented with lookup tables that are arrays of structures.
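A minimal C sketch of the two-variable case, with an arbitrary range of 0 to 9 for both x and y: the table is filled once, and later lookups replace calls to pow() with plain array indexing.
#include <math.h>
#include <stdio.h>

#define RANGE 10

static double power_table[RANGE][RANGE];

/* Fill the table once, e.g. during program initialization. */
static void init_power_table(void) {
    for (int x = 0; x < RANGE; x++)
        for (int y = 0; y < RANGE; y++)
            power_table[x][y] = pow((double)x, (double)y);
}

int main(void) {
    init_power_table();
    /* Later lookups are plain array indexing instead of calling pow(). */
    printf("3^4 = %.0f\n", power_table[3][4]);   /* 81 */
    printf("2^8 = %.0f\n", power_table[2][8]);   /* 256 */
    return 0;
}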
As mentioned, there are intermediate solutions that use tables in combination with a small amount of computation, often using interpolation. Pre-calculation combined with interpolation can produce higher accuracy for values that fall between two precomputed values. This technique requires slightly more time to be performed but can greatly enhance accuracy in applications that require it. Depending on the values being precomputed, precomputation with interpolation can also be used to shrink the lookup table size while maintaining accuracy.
While often effective, employing a lookup table may nevertheless result in a severe penalty if the computation that the LUT replaces is relatively simple. Memory retrieval time and the complexity of memory requirements can increase application operation time and system complexity relative to what would be required by straight formula computation. The possibility of polluting the cache may also become a problem. Table accesses for large tables will almost certainly cause a cache miss. This phenomenon is increasingly becoming an issue as processors outpace memory. A similar issue appears in rematerialization, a compiler optimization. In some environments, such as the Java programming language, table lookups can be even more expensive due to mandatory bounds-checking involving an additional comparison and branch for each lookup.
There are two fundamental limitations on when it is possible to construct a lookup table for a required operation. One is the amount of memory that is available: one cannot construct a lookup table larger than the space available for the table, although it is possible to construct disk-based lookup tables at the expense of lookup time. The other is the time required to compute the table values in the first instance; although this usually needs to be done only once, if it takes a prohibitively long time, it may make the use of a lookup table an inappropriate solution. As previously stated however, tables can be statically defined in many cases.
Computing sines
Most computers only perform basic arithmetic operations and cannot directly calculate the sine of a given value. Instead, they use the CORDIC algorithm or a complex formula such as the following Taylor series to compute the value of sine to a high degree of precision:
sin(x) ≈ x - x^3/3! + x^5/5! - x^7/7! + ... (for x close to 0)
However, this can be expensive to compute, especially on slow processors, and there are many applications, particularly in traditional computer graphics, that need to compute many thousands of sine values every second. A common solution is to initially compute the sine of many evenly distributed values, and then to find the sine of x we choose the sine of the value closest to x through an array indexing operation. This will be close to the correct value because sine is a continuous function with a bounded rate of change. For example:
real array sine_table[-1000..1000]
for x from -1000 to 1000
sine_table[x] = sine(pi * x / 1000)
function lookup_sine(x)
return sine_table[round(1000 * x / pi)]
Unfortunately, the table requires quite a bit of space: if IEEE double-precision floating-point numbers are used, over 16,000 bytes would be required. We can use fewer samples, but then our precision will significantly worsen. One good solution is linear interpolation, which draws a line between the two points in the table on either side of the value and locates the answer on that line. This is still quick to compute, and much more accurate for smooth functions such as the sine function. Here is an example using linear interpolation:
function lookup_sine(x)
x1 = floor(x*1000/pi)
y1 = sine_table[x1]
y2 = sine_table[x1+1]
return y1 + (y2-y1)*(x*1000/pi-x1)
Linear interpolation provides an interpolated function that is continuous, but will not, in general, have continuous derivatives. For smoother table-lookup interpolation that is continuous and has a continuous first derivative, one should use a cubic Hermite spline.
When using interpolation, the size of the lookup table can be reduced by using nonuniform sampling, which means that where the function is close to straight, we use few sample points, while where it changes value quickly we use more sample points to keep the approximation close to the real curve. For more information, see interpolation.
Other usages of lookup tables
Caches
Storage caches (including disk caches for files, or processor caches for either code or data) also work like a lookup table. The table is built with very fast memory instead of being stored on slower external memory, and maintains two pieces of data for a sub-range of bits composing an external memory (or disk) address (notably the lowest bits of any possible external address):
one piece (the tag) contains the value of the remaining bits of the address; if these bits match those of the memory address being read or written, the entry holds the cached value for that address;
the other piece maintains the data associated with that address.
A single (fast) lookup is performed to read the tag in the lookup table at the index specified by the lowest bits of the desired external storage address, and to determine if the memory address is hit by the cache. When a hit is found, no access to external memory is needed (except for write operations, where the cached value may need to be updated asynchronously to the slower memory after some time, or if the position in the cache must be replaced to cache another address).
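The tag/data split described above can be sketched as a tiny direct-mapped cache in C; the sizes and the flat backing array standing in for slow memory are simplifications for illustration only:
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINES 256          /* indexed by the low 8 address bits */

struct cache_line {
    int      valid;
    uint32_t tag;                /* remaining high bits of the address */
    uint32_t data;               /* cached value for that address */
};

static struct cache_line cache[CACHE_LINES];
static uint32_t backing_store[1 << 16];          /* stand-in for slow memory */

uint32_t cached_read(uint32_t address) {
    uint32_t index = address & (CACHE_LINES - 1);    /* low bits pick the line */
    uint32_t tag   = address / CACHE_LINES;          /* high bits are the tag  */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;                    /* hit: no slow access */

    uint32_t value = backing_store[address];         /* miss: go to slow memory */
    cache[index] = (struct cache_line){ 1, tag, value };
    return value;
}

int main(void) {
    backing_store[0x1234] = 42;
    printf("%u\n", cached_read(0x1234));   /* miss: line is filled, prints 42 */
    printf("%u\n", cached_read(0x1234));   /* hit: prints 42 without "slow" access */
    return 0;
}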
Hardware LUTs
In digital logic, a lookup table can be implemented with a multiplexer whose select lines are driven by the address signal and whose inputs are the values of the elements contained in the array. These values can either be hard-wired, as in an ASIC whose purpose is specific to a function, or provided by D latches which allow for configurable values. (ROM, EPROM, EEPROM, or RAM.)
An n-bit LUT can encode any n-input Boolean function by storing the truth table of the function in the LUT. This is an efficient way of encoding Boolean logic functions, and LUTs with 4-6 bits of input are in fact the key component of modern field-programmable gate arrays (FPGAs) which provide reconfigurable hardware logic capabilities.
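As a sketch of this encoding, the C program below packs the 16-entry truth table of an arbitrarily chosen 4-input function into a 16-bit word and then evaluates the function with a single table lookup:
#include <stdint.h>
#include <stdio.h>

/* A 4-input LUT is a 16-entry truth table, stored here as one bit per entry
   in a 16-bit word. The stored function is f(a,b,c,d) = (a AND b) OR (c AND d),
   picked arbitrarily for illustration. */
int main(void) {
    uint16_t lut = 0;
    for (int index = 0; index < 16; index++) {
        int a = (index >> 3) & 1, b = (index >> 2) & 1;
        int c = (index >> 1) & 1, d = index & 1;
        int f = (a & b) | (c & d);
        lut |= (uint16_t)(f << index);       /* store one truth-table bit */
    }

    /* Evaluating the function is now a single table lookup. */
    int a = 1, b = 1, c = 0, d = 1;
    int index = (a << 3) | (b << 2) | (c << 1) | d;
    printf("f(%d,%d,%d,%d) = %d\n", a, b, c, d, (lut >> index) & 1);  /* 1 */
    return 0;
}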
Data acquisition and control systems
In data acquisition and control systems, lookup tables are commonly used to undertake the following operations (a sketch of the calibration case follows this list):
The application of calibration data, so as to apply corrections to uncalibrated measurement or setpoint values; and
Undertaking measurement unit conversion; and
Performing generic user-defined computations.
In some systems, polynomials may also be defined in place of lookup tables for these calculations.
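A minimal C sketch of the calibration case: a small table of raw-reading/calibrated-value pairs (the points below are invented) is searched and linearly interpolated to correct a measurement.
#include <stdio.h>

/* Calibration table: raw ADC reading -> engineering value, ascending by raw.
   The points are invented for illustration only. */
static const double raw_pts[] = {   0.0,  512.0, 1023.0 };
static const double cal_pts[] = {   0.0,   24.7,   51.2 };
#define N_PTS (sizeof raw_pts / sizeof raw_pts[0])

double calibrate(double raw) {
    if (raw <= raw_pts[0])         return cal_pts[0];          /* clamp low  */
    if (raw >= raw_pts[N_PTS - 1]) return cal_pts[N_PTS - 1];  /* clamp high */
    for (size_t i = 1; i < N_PTS; i++) {
        if (raw <= raw_pts[i]) {     /* interpolate between points i-1 and i */
            double t = (raw - raw_pts[i - 1]) / (raw_pts[i] - raw_pts[i - 1]);
            return cal_pts[i - 1] + t * (cal_pts[i] - cal_pts[i - 1]);
        }
    }
    return cal_pts[N_PTS - 1];       /* not reached */
}

int main(void) {
    printf("%.2f\n", calibrate(256.0));   /* halfway between 0 and 24.7: 12.35 */
    return 0;
}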
See also
Associative array
Branch table
Gal's accurate tables
Memoization
Memory-bound function
Nearest-neighbor interpolation
Shift register lookup table
Palette, a.k.a. color lookup table or CLUT – for the usage in computer graphics
3D lookup table – usage in film industry
References
External links
Fast table lookup using input character as index for branch table
Art of Assembly: Calculation via Table Lookups
"Bit Twiddling Hacks" (includes lookup tables) By Sean Eron Anderson of Stanford University
Memoization in C++ by Paul McNamee, Johns Hopkins University showing savings
"The Quest for an Accelerated Population Count" by Henry S. Warren Jr.
Arrays
Associative arrays
Computer performance
Software optimization
Articles with example C code | Lookup table | [
"Technology"
] | 3,782 | [
"Computer performance"
] |
356,493 | https://en.wikipedia.org/wiki/Polish%20mine%20detector | The Mine detector (Polish) Mark I was a metal detector for landmines developed during World War II. Initial design work had started in Poland, but after the German invasion of Poland in 1939 and then the Fall of France in mid-1940, the design was not completed until the winter of 1941–1942, by Polish lieutenant Józef Kosacki.
History
In the pre-war period, the Department of Artillery of Poland's Ministry of National Defence ordered the construction of a device that could be helpful in locating duds on artillery training grounds. The instrument was designed by the AVA Wytwórnia Radiotechniczna, but its implementation was prevented by the German invasion of Poland. Following the fall of Poland and the transfer of Polish HQ to France, work restarted on the device, this time intended as a mine detector. Little is known of this stage of construction as the work was stopped by the Battle of France and the need to evacuate the Polish personnel to Great Britain.
There in late 1941 Lieutenant Józef Kosacki devised a final version, based partially on the earlier designs. His invention was not patented; he gave it as a gift to the British Army. He was given a letter of thanks from the King for this act. His design was accepted and 500 mine detectors were immediately sent to El Alamein where they doubled the speed of the British Eighth Army. During the war more than 100,000 of this type were produced, together with several hundred thousands of further developments of the mine detector (Mk. II, Mk. III and Mk IV). The detector was used later during the Allied invasion of Sicily, the Allied invasion of Italy and the Invasion of Normandy. This type of detector was used by the British Army until 1995.
An attempt was made to mount a version of the mine detector on a vehicle so that sappers would be less vulnerable. To this end "Lulu" (on a Sherman tank) and subsequently "Bantu" (on a Staghound armoured car) were developed. The detector mechanism was in non-metallic rollers on arms held away from the vehicle. When the roller passed over a mine or a similar piece of metal it was indicated in the vehicle. Prototypes were built but never tried in combat.
Design
See also
Demining
Land mine
Notes
References
"The History of Landmines" by Mike Croll published in Great Britain in 1998 by Leo Cooper, Pen & Sword Books Ltd.
The Polish Contribution to The Ultimate Allied Victory in The Second World War Tadeusz Modelski, Worthing, England 1986, Page 221
Time Magazine/Canadian Edition, March 8, 1999, page 18
Mieczysław Borchólski "Z saperami generała Maczka", MON 1990,
External links
Polish mine detector (Time Magazine/Canadian Edition), March 8, 1999 page 18
MK. III "Polish" Mine Detector
World War II military equipment of Poland
Mine warfare countermeasures
Science and technology in Poland
World War II military equipment of the United Kingdom
Polish inventions
Metal detecting
Military equipment of Poland
Military equipment of World War II
Military equipment introduced from 1940 to 1944 | Polish mine detector | [
"Technology",
"Engineering"
] | 636 | [
"Measuring instruments",
"Metal detecting"
] |
356,729 | https://en.wikipedia.org/wiki/Corticotropin-releasing%20factor%20family | The corticotropin-releasing factor (CRF) family is a family of related neuropeptides in vertebrates. This family includes corticotropin-releasing hormone (also known as CRF), urotensin-I, urocortin, and sauvagine. The family can be grouped into two separate paralogous lineages, with urotensin-I, urocortin and sauvagine in one group and CRH forming the other group. Urocortin and sauvagine appear to represent orthologues of fish urotensin-I in mammals and amphibians, respectively. The peptides have a variety of physiological effects on stress and anxiety, vasoregulation, thermoregulation, growth and metabolism, metamorphosis and reproduction in various species, and are all released as prohormones.
Corticotropin-releasing hormone (CRH) is a releasing hormone found mainly in the paraventricular nucleus of the mammalian hypothalamus that regulates the release of corticotropin (ACTH) from the pituitary gland. The paraventricular nucleus transports CRH to the anterior pituitary, stimulating adrenocorticotropic hormone (ACTH) release via CRH type 1 receptors, thereby activating the hypothalamic-pituitary-adrenal axis (HPA) and, thus, glucocorticoid release.
CRH is evolutionarily related to a number of other active peptides. Urocortin acts in vitro to stimulate the secretion of adrenocorticotropic hormone. Urotensin is found in the teleost caudal neurosecretory system and may play a role in osmoregulation and as a corticotropin-releasing factor. Urotensin-I is released from the urophysis of fish, and produces ACTH and subsequent cortisol release in vivo. The nonhormonal portion of the prohormone is thought to be the urotensin binding protein. Sauvagine, isolated from frog skin, has a potent hypotensive and antidiuretic effect.
Subfamilies
Urocortin
Human proteins from this family
CRH; UCN;
References
Protein domains
Hormones | Corticotropin-releasing factor family | [
"Biology"
] | 503 | [
"Protein domains",
"Protein classification"
] |
356,741 | https://en.wikipedia.org/wiki/Opposed-piston%20engine | An opposed-piston engine is a piston engine in which each cylinder has a piston at both ends, and no cylinder head. Petrol and diesel opposed-piston engines have been used mostly in large-scale applications such as ships, military tanks, and factories. Current manufacturers of opposed-piston engines include Cummins, Achates Power and Fairbanks-Morse Defense (FMDefense).
Design
Compared to contemporary two-stroke engines, which used a conventional design of one piston per cylinder, the advantages of the opposed-piston engine have been recognized as:
Eliminating the cylinder head and valve-train, which reduces weight, complexity, cost, heat loss, and friction loss of the engine.
Creating a uniflow-scavenged movement of gas through the combustion chamber, which avoided the drawbacks associated with the contemporary crossflow-scavenged designs (however later advancements have provided methods for achieving uniflow scavenging in conventional piston engine designs).
A reduced height of the engine
The main drawback was that the two opposing pistons had to be geared together. This added weight and complexity when compared to conventional piston engines, which use a single crankshaft as the power output.
The most common layout was two crankshafts, with the crankshafts geared together (in either the same direction or opposing directions). The Koreyvo, Jumo, and Napier Deltic engines used one piston per cylinder to expose an intake port, and the other to expose an exhaust port. Each piston is referred to as either an intake piston or an exhaust piston, depending on its function in this regard. This layout gives superior scavenging, as gas flow through the cylinder is axial rather than radial, and simplifies design of the piston crowns. In the Jumo 205 and its variants, the upper crankshaft serves the exhaust pistons, and the lower crankshaft the intake pistons. In designs using multiple cylinder banks, each big end bearing serves one inlet and one exhaust piston, using a forked connecting rod for the exhaust piston.
History
1880s to 1930s
One of the first opposed-piston engines was the 1882 Atkinson differential engine, which has a power stroke on every rotation of the crankshaft (compared with every second rotation for the contemporary Otto cycle engine), but it was not a commercial success.
In 1898, an Oechelhäuser two-stroke opposed-piston engine producing was installed at the Hoerde ironworks. This design of engine was also produced under licence by manufacturers including Deutsche Kraftgas Gesellschaft in Germany and William Beardmore & Sons in the United Kingdom.
In 1901, the Kansas City Lightning Balanced Gas and Gasoline Engines were gasoline engines producing .
An early opposed-piston car engine was produced by the French company Gobron-Brillié around 1900. On 31 March 1904, a Gobron-Brillié car powered by the opposed-piston engine was the first car ever to exceed 150 km/h with a "World's Record Speed" of . On 17 July 1904, the Gobron-Brillié car became the first to exceed for the flying kilometre. The engine used a single crankshaft at one end of the cylinders and a crosshead for the opposing piston.
Another early opposed piston car engine was in the Scottish Arrol-Johnston car, which appears to have been first installed in their 10 hp buckboard c1900. The engine was described and illustrated in some detail in the account of their 12-15 hp car exhibited at the 1905 Olympia Motor-Show. The engine was a four-stroke with two cylinders (with opposed pistons in each) with the crankshaft underneath and the pistons connected by lever arms to the two-throw crankshaft.
The first diesel engine with opposed pistons was a prototype built at Kolomna Locomotive Works in Russia. The designer, Raymond A. Koreyvo, patented the engine in France on 6 November 1907 and displayed the engine at international exhibitions, but it did not reach production. The Kolomna design used a typical layout of two crankshafts connected by gearing.
In 1914, the Simpson's Balanced Two-Stroke motorcycle engine was another opposed-piston engine using a single crankshaft beneath the centre of the cylinders with both pistons connected by levers. This engine was a crankcase compression design, with one piston used to uncover the transfer port, and the other to open the exhaust port. The advantage of this design was to avoid the deflector crowns for pistons used by most two-stroke engines at that time.
Doxford Engine Works in the United Kingdom built large opposed-piston engines for marine use, with the first Doxford engine being installed in a ship in 1921. This diesel engine used a single crankshaft at one end of the cylinders and a crosshead for the opposing piston. After World War I, these engines were produced in a number of models, such as the P and J series, with outputs as high as . Production of Doxford engines in the UK ceased in 1980.
Later opposed-piston diesel engines include the 1932 Junkers Jumo 205 aircraft engine built in Germany, which had two crankshafts rather than the single-crankshaft layout of the 1900–1922 Gobron-Brillié engines.
1940s to present
The Fairbanks Morse 38 8-1/8 diesel engine, originally designed in Germany in the 1930s, was used in U.S. submarines in the 1940s and 1950s, and in boats from the 1930s to the present. It was also used in locomotives from 1944.
The latest (November 2021) version of the Fairbanks-Morse 38 8-1/8 is known as the FM 38D 8-1/8 Diesel and Dual Fuel. This two-stroke opposed-piston engine retains the same extra-heavy-duty design and has a rated in-service lifespan of more than 40 years, but now the optional capability of burning dual fuels (gaseous and liquid fuels, with automatic switchover to full diesel if the gas supply runs out) is available.
The Commer TS3 three-cylinder diesel truck engines, released in 1954, have a single crankshaft beneath the centre of the cylinders with both pistons connected by levers.
Also released in 1954 was the Napier Deltic engine for military boats. It uses three crankshafts, one at each corner, to form the three banks of double-ended cylinders arranged in an equilateral triangle. The Deltic engine was used in British Rail Class 55 and British Rail Class 23 locomotives and to power fast patrol boats and Royal Navy mine sweepers. Beginning in 1962, Gibbs invited Mack Trucks to take part in designing FDNY’s super pumper and its companion tender. DeLaval Turbine was commissioned to design a multistage centrifugal pump with a Napier-Deltic T18-37C diesel to power the pumps.
In 1959, the Leyland L60 six-cylinder diesel engine was introduced. The L60 was produced in the United Kingdom for use in the Chieftain tank.
The Soviet T-64 tank, produced from 1963–1987, also used an opposed-piston diesel engine developed by Malyshev Factory in Kharkiv. After the dissolution of the Soviet Union Malyshev Factory continued development and production of opposed-piston engines for armored vehicles, such as the three-cylinder used in BTR-4 Butsefal, various upgrades of the 5TD and the six-cylinder for T-64BM2, BM Oplot etc.
In 2014, Achates Power published a technical paper citing a 30% fuel economy improvement when its engine was benchmarked against a next-generation diesel engine equipped with advanced technologies.
Volvo filed for a patent in 2017.
The Diesel Air Dair 100 is a two-cylinder diesel aircraft engine, designed and produced by Diesel Air Ltd of Olney, Buckinghamshire for use in airships, home-built kitplanes, and light aircraft.
In July 2021, Cummins was awarded an $87M contract by the United States Army to complete the development of the Advanced Combat Engine (ACE), a modular and scalable diesel engine solution that uses opposed-piston technology.
Free-piston engine
A variation of the opposed-piston design is the free-piston engine, which was first patented in 1934. Free piston engines have no crankshaft, and the pistons are returned after each firing stroke by compression and expansion of air in a separate cylinder. Early applications were for use as an air compressor or as a gas generator for a gas turbine.
See also
Junk head
Michel engine
Split-single engine
References
Locomotive parts
Opposed piston engines
Piston engine configurations
Piston ported engines
Two-stroke diesel engines | Opposed-piston engine | [
"Technology"
] | 1,744 | [
"Piston ported engines",
"Engines"
] |
356,748 | https://en.wikipedia.org/wiki/Stelzer%20engine | The Stelzer engine is a two-stroke opposing-piston free-piston engine design proposed by Frank Stelzer. It uses conjoined pistons in a push-pull arrangement which allows for fewer moving parts and simplified manufacturing. An engine of the same design appeared on the cover of the February 1969 issue of Mechanix Illustrated magazine.
Operation
There are two combustion chambers and a central precompression chamber. Control of the air flow between the precompression chamber and the combustion chambers is made by stepped piston rods.
Applications
Applications envisaged for the engine include driving:
An air compressor
A hydraulic pump
A linear generator
Prototypes
A prototype engine was demonstrated in Frankfurt in 1983 and Opel was reported to be interested in it. In 1982, the Government of Ireland agreed to pay half the cost of a factory at Shannon Airport to manufacture the engines. A prototype car with a Stelzer engine and electric transmission was shown at a German motor show in 1983.
See also
Linear alternator
References
External links
Two-Stroke Internal Combustion Engine, 1983
Diagrams of Stelzer engine and linear alternator
Proposed engines
Free-piston engines | Stelzer engine | [
"Technology",
"Engineering"
] | 229 | [
"Proposed engines",
"Mechanical engineering stubs",
"Mechanical engineering",
"Engines"
] |
356,782 | https://en.wikipedia.org/wiki/Nominal%20watt | Nominal wattage is used to simplify the measurement of the efficiency of a loudspeaker.
The impedance of a loudspeaker varies with frequency. This means that if different sine wave tones are fed into the loudspeaker at the same voltage (or the same current), the amount of electric power consumed will vary.
By convention, loudspeakers are designed to generate the same sound pressure level (SPL) at the listener for the same voltage at varying frequencies, regardless of the variation in electric power. This permits a loudspeaker to be used with an amplifier having a low internal impedance, so that a flat frequency response is realized for the combined amplifier/loudspeaker system.
However, an amplifier with a low internal impedance delivers more electrical output power when the load impedance reduces (until the impedances become approximately matched). Such high power levels could cause damage to either the amplifier or the amplifier's power supply, or the circuit connected to the amplifier's output (including the loudspeaker).
Therefore, an additional convention exists whereby loudspeaker manufacturers specify a conservative estimate of the average impedance that the loudspeaker will present while playing typical music. This is called the nominal impedance. Amplifiers can therefore be safely specified to operate into a load that has this nominal impedance (or higher, but not lower).
Typical nominal impedances for speakers include 4, 6, 8 and 16Ω (ohms), with 4Ω being most common in in-car loudspeakers, and 8Ω being most common elsewhere. A loudspeaker with an 8Ω nominal impedance may exhibit actual impedances ranging from approximately 5 to 100Ω depending on frequency.
In this context, the nominal wattage is the theoretical electric power that would be transferred from amplifier to speaker if the loudspeaker was actually exhibiting its nominal impedance. The actual electric power may vary from about twice the nominal power down to less than one tenth.
Loudspeaker efficiency is measured with respect to nominal power in order to emulate the situation outlined above where a low internal impedance amplifier is used with a loudspeaker. The convention is to supply one nominal watt during testing. If the nominal impedance is 4 ohms, the voltage would be 2 volts. If the nominal impedance is 8Ω, the voltage would be 2.83 volts.
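Those test voltages follow from the standard relation P = V^2/Z, i.e. V = sqrt(P × Z); a small C sketch of that arithmetic:
#include <math.h>
#include <stdio.h>

int main(void) {
    double nominal_power = 1.0;                 /* one nominal watt */
    double impedances[] = { 4.0, 6.0, 8.0, 16.0 };

    for (int i = 0; i < 4; i++) {
        /* V = sqrt(P * Z): the voltage that delivers one watt into the
           nominal impedance (about 2.00 V for 4 ohms, 2.83 V for 8 ohms). */
        double volts = sqrt(nominal_power * impedances[i]);
        printf("%4.0f ohm nominal -> %.2f V\n", impedances[i], volts);
    }
    return 0;
}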
References
EIA RS-299-A, Loudspeakers, Dynamic, Magnet Structures and Impedance
IEC 60268-5, Sound System Equipment - Part 5: Loudspeakers
Loudspeaker technology
Units of power | Nominal watt | [
"Physics",
"Mathematics"
] | 545 | [
"Physical quantities",
"Quantity",
"Power (physics)",
"Units of power",
"Units of measurement"
] |
356,878 | https://en.wikipedia.org/wiki/Xcode | Xcode is Apple's integrated development environment (IDE) for macOS, used to develop software for macOS, iOS, iPadOS, watchOS, tvOS, and visionOS. It was initially released in late 2003; the latest stable release is version 16, released on September 16, 2024, and is available free of charge via the Mac App Store and the Apple Developer website. Registered developers can also download preview releases and prior versions of the suite through the Apple Developer website. Xcode includes command-line tools that enable UNIX-style development via the Terminal app in macOS. They can also be downloaded and installed without the GUI.
Before Xcode, Apple offered developers Project Builder and Interface Builder to develop Mac OS X applications.
Major features
Xcode supports source code for the programming languages: Swift, C++, Objective-C, Objective-C++, Java, AppleScript, Python, Ruby, ResEdit (Rez), and C, with a variety of programming models, including but not limited to Cocoa, Carbon, and Java. Third parties have added support for GNU Pascal, Free Pascal, Ada, C#, Go, Perl, and D.
Xcode can build fat binary (universal binary) files containing code for multiple architectures with the Mach-O executable format. These helped ease the transitions from 32-bit PowerPC to 64-bit PowerPC, from PowerPC to Intel x86, from 32-bit to 64-bit Intel, and most recently from Intel x86 to Apple silicon by allowing developers to distribute a single application to users and letting the operating system automatically choose the appropriate architecture at runtime. Using the iOS SDK, tvOS SDK, and watchOS SDK, Xcode can also be used to compile and debug applications for iOS, iPadOS, tvOS, and watchOS.
Xcode includes the GUI tool Instruments, which runs atop a dynamic tracing framework, DTrace, created by Sun Microsystems and released as part of OpenSolaris.
Xcode also integrates built-in support for source code management using the Git version control system and protocol, allowing the user to create and clone Git repositories (which can be hosted on source code repository hosting sites such as GitHub, Bitbucket, and Perforce, or self-hosted using open-source software such as GitLab), and to commit, push, and pull changes, all from within Xcode, automating tasks that would traditionally be performed by using Git from the command line.
Composition
The main application of the suite is the integrated development environment (IDE), also named Xcode. The Xcode suite includes most of Apple's developer documentation, and built-in Interface Builder, an application used to construct graphical user interfaces.
Up to Xcode 4.1, the Xcode suite included a modified version of the GNU Compiler Collection. In Xcode 3.1 up to Xcode 4.6.3, it included the LLVM-GCC compiler, with front ends from the GNU Compiler Collection and a code generator based on LLVM. In Xcode 3.2 and later, it included the Clang C/C++/Objective-C compiler, with newly-written front ends and a code generator based on LLVM, and the Clang static analyzer. Starting with Xcode 4.2, the Clang compiler became the default compiler; starting with Xcode 5.0, Clang was the only compiler provided.
Up to Xcode 4.6.3, the Xcode suite used the GNU Debugger (GDB) as the back-end for the IDE's debugger. Starting with Xcode 4.3, the LLDB debugger was also provided; starting with Xcode 4.5 LLDB replaced GDB as the default back-end for the IDE's debugger. Starting with Xcode 5.0, GDB was no longer supplied.
Playgrounds
The Playgrounds feature of Xcode provides an environment for rapid experimentation and development in the Swift programming language. The original version of the feature was announced and released by Apple Inc on June 2, 2014, during WWDC 2014.
Playgrounds provide a testing ground that renders developer code in real time. They have the capability of evaluating and displaying the results of single expressions as they are coded (in line or on a side bar), providing rapid feedback to the programmer. This type of development environment, known as a read-eval-print loop (or REPL) is useful for learning, experimenting and fast prototyping. Playgrounds was used by Apple to publish Swift tutorials and guided tours where the REPL advantages are noticeable.
The Playgrounds feature was developed by the Developer Tools department at Apple. According to Chris Lattner, the inventor of Swift Programming Language and Senior Director and Architect at the Developer Tools Department, Playgrounds was "heavily influenced by Bret Victor's ideas, by Light Table and by many other interactive systems". Playgrounds was announced by Apple Inc. on June 2, 2014, during WWDC 2014 as part of Xcode 6 and released in September.
In September 2016, the Swift Playgrounds application for iPad (also available on macOS starting in February 2020) was released, incorporating these ideas into an educational tool. Xcode's Playgrounds feature continued development, with a new step-by-step execution feature introduced in Xcode 10 at WWDC 2018.
Removed features
Formerly, Xcode supported distributing a product build process over multiple systems. One technology involved was named Shared Workgroup Build, which used the Bonjour protocol to automatically discover systems providing compiler services, and a modified version of the free software product distcc to facilitate the distribution of workloads. Earlier versions of Xcode provided a system named Dedicated Network Builds. These features are absent in the supported versions of Xcode.
Xcode also includes Apple's WebObjects tools and frameworks for building Java web applications and web services (formerly sold as a separate product). As of Xcode 3.0, Apple dropped WebObjects development inside Xcode; WOLips should be used instead. Xcode 3 still includes the WebObjects frameworks.
Version history
1.x series
Xcode 1.0 was released in fall 2003. Xcode 1.0 was based on Project Builder, but had an updated user interface (UI), ZeroLink, Fix & Continue, distributed build support, and Code Sense indexing.
The next significant release, Xcode 1.5, had better code completion and an improved debugger.
2.x series
Xcode 2.0 was released with Mac OS X v10.4 "Tiger". It included the Quartz Composer visual programming language, better Code Sense indexing for Java, and Ant support. It also included the Apple Reference Library tool, which allows searching and reading online documentation from Apple's website and documentation installed on a local computer.
Xcode 2.1 could create universal binary files. It supported shared precompiled headers, unit testing targets, conditional breakpoints, and watchpoints. It also had better dependency analysis.
The final version of Xcode for Mac OS X v10.4 was 2.5.
3.x series
Xcode 3.0 was released with Mac OS X v10.5 "Leopard". Notable changes since 2.1 include the DTrace debugging tool (now named Instruments), refactoring support, context-sensitive documentation, and Objective-C 2.0 with garbage collection. It also supports Project Snapshots, which provide a basic form of version control; Message Bubbles, which show build errors and debug values alongside code; and building four-architecture fat binaries (32 and 64-bit Intel and PowerPC).
Xcode 3.1 was an update release of the developer tools for Mac OS X, and was the same version included with the iPhone SDK. It could target non-Mac OS X platforms, including iPhone OS 2.0. It included the GCC 4.2 and LLVM GCC 4.2 compilers. Another new feature since Xcode 3.0 is that Xcode's SCM support now includes Subversion 1.5.
Xcode 3.2 was released with Mac OS X v10.6 "Snow Leopard" and installs on no earlier version of OS X. It supports static program analysis, among other features. It also drops official support for targeting versions earlier than iPhone OS 3.0. But it is still possible to target older versions, and the simulator supports iPhone OS 2.0 through 3.1. Also, Java support is "exiled" in 3.2 to the organizer.
Xcode 3.2.6 is the last version that can be downloaded for free for users of Mac OS X Snow Leopard (though it’s not the last version that supports Snow Leopard; 4.2 is). Downloading Xcode 3.2.6 requires a free registration at Apple's developer site.
4.x series
In June 2010, at the Apple Worldwide Developers Conference version 4 of Xcode was announced during the Developer Tools State of the Union address. Version 4 of the developer tools consolidates the Xcode editing tools and Interface Builder into one application, among other enhancements. Apple released the final version of Xcode 4.0 on March 9, 2011. The software was made available for free to all registered members of the $99 per year Mac Developer program and the $99 per year iOS Developer program. It was also sold for $4.99 to non-members on the Mac App Store (no longer available). Xcode 4.0 drops support for many older systems, including all PowerPC development and software development kits (SDKs) for Mac OS X 10.4 and 10.5, and all iOS SDKs older than 4.3. The deployment target can still be set to produce binaries for those older platforms, but for Mac OS platforms, one is then limited to creating x86 and x86-64 binaries. Later, Xcode was free to the general public. Before version 4.1, Xcode cost $4.99.
Xcode 4.1 was made available for free on July 20, 2011 (the day of Mac OS X Lion's release) to all users of Mac OS X Lion on the Mac App Store. On August 29, 2011, Xcode 4.1 was made available for Mac OS X Snow Leopard for members of the paid Mac or iOS developer programs. Xcode 4.1 was the last version to include GNU Compiler Collection (GCC) instead of only LLVM GCC or Clang.
On October 12, 2011, Xcode 4.2 was released concurrently with the release of iOS 5.0, and it included many more and improved features, such as storyboarding and automatic reference counting (ARC). Xcode 4.2 is the last version to support Mac OS X 10.6 "Snow Leopard", but is available only to registered developers with paid accounts; without a paid account, 3.2.6 is the latest download that appears for Snow Leopard.
Xcode 4.3, released on February 16, 2012, is distributed as one application bundle, Xcode.app, installed from the Mac App Store. Xcode 4.3 reorganizes the Xcode menu to include development tools. Xcode 4.3.1 was released on March 7, 2012 to add support for iOS 5.1. Xcode 4.3.2 was released on March 22, 2012 with enhancements to the iOS Simulator and a suggested move to the LLDB debugger versus the GDB debugger (which appear to be undocumented changes). Xcode 4.3.3, released in May 2012, featured an updated SDK for Mac OS X 10.7.4 "Lion" and a few bug fixes.
Xcode 4.4 was released on July 25, 2012.
It runs on both Mac OS X Lion (10.7) and OS X Mountain Lion (10.8) and is the first version of Xcode to contain the OS X 10.8 "Mountain Lion" SDK. Xcode 4.4 includes support for automatic synthesizing of declared properties, new Objective-C features such as literal syntax and subscripting, improved localization, and more. On August 7, 2012, Xcode 4.4.1 was released with a few bug fixes.
On September 19, 2012, iOS 6 and Xcode 4.5 were released. Xcode added support for iOS 6 and the 4-inch Retina Display on iPhone 5 and iPod Touch 5th generation. It also brought some new Objective-C features to iOS, simplified localization, and added auto-layout support for iOS. On October 3, 2012, Xcode 4.5.1 was released with bug fixes and stability improvements. Less than a month later, Xcode 4.5.2 was released, with support for iPad Mini and iPad with Retina Display, and bug fixes and stability improvements.
On January 28, 2013, iOS 6.1 and Xcode 4.6 were released.
5.x series
On June 10, 2013, at the Apple Worldwide Developers Conference, version 5 of Xcode was announced.
On September 18, 2013, Xcode 5.0 was released. It shipped with iOS 7 and OS X 10.8 Mountain Lion SDKs. However, support for OS X 10.9 Mavericks was only available in beta versions. Xcode 5.0 also added a version of Clang generating 64-bit ARM code for iOS 7. Apple removed support for building garbage collected Cocoa binaries in Xcode 5.1.
6.x series
On June 2, 2014, at the Worldwide Developers Conference, Apple announced version 6 of Xcode. One of the most notable features was support for Swift, an all-new programming language developed by Apple. Xcode 6 also included features like Playgrounds and live debugging tools. On September 17, 2014, at the same time, iOS 8 and Xcode 6 were released. Xcode could be downloaded on the Mac App Store.
7.x series
On June 8, 2015, at the Apple Worldwide Developers Conference, Xcode version 7 was announced. It introduced support for Swift 2, and Metal for OS X, and added support for deploying on iOS devices without an Apple Developer account. Xcode 7 was released on September 16, 2015.
8.x series
On June 13, 2016, at the Apple Worldwide Developers Conference, Xcode version 8 was announced; a beta version was released the same day. It introduced support for Swift 3. Xcode 8 was released on September 13, 2016.
9.x series
On June 5, 2017, at the Apple Worldwide Developers Conference, Xcode version 9 was announced; a beta version was released the same day. It introduced support for Swift 4 and Metal 2. It also introduced remote debugging on iOS and tvOS devices wirelessly, through Wi-Fi.
Xcode 9 was publicly released on September 19, 2017.
10.x series
On June 4, 2018, at the Apple Worldwide Developers Conference, Xcode version 10 was announced; a beta version was released the same day. Xcode 10 introduced support for the Dark Mode announced for macOS Mojave, the collaboration platforms Bitbucket and GitLab (in addition to already supported GitHub), training machine learning models from playgrounds, and the new features in Swift 4.2 and Metal 2.1, as well as improvements to the editor and the project build system. Xcode 10 also dropped support for building 32-bit macOS apps and no longer supports Subversion integration.
Xcode 10 was publicly released on September 17, 2018.
11.x series
On June 3, 2019, at the Apple Worldwide Developers Conference, Xcode version 11 was announced; a beta version was released the same day. Xcode 11 introduced support for the new features in Swift 5.1, as well as the new SwiftUI framework (although the interactive UI tools are available only when running under macOS 10.15). It also supports building iPad applications that run under macOS; includes integrated support for the Swift Package Manager; and contains further improvements to the editor, including a "minimap" that gives an overview of a source code file with quick navigation. Xcode 11 requires macOS 10.14 or later and Xcode 11.4 requires 10.15 or later.
Xcode 11 was publicly released on September 20, 2019.
12.x series
On June 22, 2020, at the Apple Worldwide Developers Conference, Xcode version 12 was announced; a beta version was released the same day. Xcode 12 introduced support for Swift 5.3 and requires macOS 10.15.4 or later. Xcode 12 dropped building apps for iOS 8 and the lowest version of iOS supported by Xcode 12 built apps is iOS 9. Xcode 12.1 also dropped support for building apps for Mac OS X 10.6 Snow Leopard. The minimum version of macOS supported by Xcode 12.1 built apps is OS X 10.9 Mavericks.
Xcode 12 was publicly released on September 16, 2020.
13.x series
On June 7, 2021, at the Apple Worldwide Developers Conference, Xcode version 13 was announced; a beta version was released the same day. The new version introduced support for Swift 5.5 and requires macOS 11.3 or later. Xcode 13 contains SDKs for iOS / iPadOS 15, macOS 12, watchOS 8, and tvOS 15. Xcode 13’s major features include the new concurrency model in Swift projects, improved support for version control providers (such as GitHub), including the ability to browse, view, and comment on pull requests right in the app interface, and support for Xcode Cloud, Apple’s newly-launched mobile CI/CD service (it also has a web version).
Xcode 13 was publicly released on September 20, 2021.
14.x series
On June 6, 2022, at the Apple Worldwide Developers Conference, Xcode version 14 was announced; a beta version was released the same day. Xcode 14 dropped support for building 32-bit iOS apps. Xcode 14 dropped support for building apps for iOS 9 and 10 (these versions of iOS supported 32-bit iOS apps) and the minimum version of iOS supported by Xcode 14 built apps is iOS 11. Xcode 14 also dropped building apps for macOS 10.12 Sierra. The minimum version of macOS supported by Xcode 14 built apps is macOS 10.13 High Sierra.
Xcode 14 was publicly released on September 12, 2022.
15.x series
On June 5, 2023, at the Apple Worldwide Developers Conference, Xcode version 15 was announced; a beta version was released the same day. Xcode 15 dropped support for building apps for iOS 11 and the minimum version of iOS supported by Xcode 15 built apps is iOS 12.
Xcode 15 was publicly released on September 18, 2023.
16.x series
On June 10, 2024, at the Apple Worldwide Developers Conference, Xcode version 16 was announced; a beta version was released the same day.
Xcode 16 was publicly released on September 16, 2024.
Version comparison table
Xcode 1.0 - Xcode 2.x (before iOS support)
Xcode 3.0 - Xcode 4.x
Xcode 5.0 - 6.x (since arm64 support)
Xcode 7.0 - 10.x (since Free On-Device Development)
Xcode 11.0 - 14.x (since SwiftUI framework)
Xcode 15.0 - (since visionOS support)
Toolchain versions
Xcode 1.0 - Xcode 2.x (before iOS support)
Xcode 3.0 - Xcode 4.x
Xcode 5.0 - 6.x (since arm64 support)
Xcode 7.0 - 10.x (since Free On-Device Development)
Xcode 11.0 - 14.x (since SwiftUI framework)
Xcode 15.0 - (since visionOS support)
See also
XcodeGhost
CodeWarrior
References
External links
Xcode – Mac App Store
Apple Developer Connection: Xcode tools and resources
Xcode Release Notes — Archive
Download Xcode
2003 software
Freeware
History of software
Integrated development environments
IOS
IOS development software
MacOS programming tools
MacOS text editors
MacOS-only software made by Apple Inc.
Software version histories
User interface builders | Xcode | [
"Technology"
] | 4,249 | [
"History of software",
"History of computing"
] |
356,904 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Ganymede | Ganymede is the largest moon in the Solar System, and has a hard surface with many craters. Most of them are named after figures from Egyptian, Mesopotamian, and other ancient Middle Eastern myths.
List
Dropped or not approved names
External links
USGS: Ganymede nomenclature
USGS: Ganymede Nomenclature: Craters
Ganymede | List of craters on Ganymede | [
"Astronomy"
] | 70 | [
"Astronomy-related lists",
"Lists of impact craters"
] |
356,910 | https://en.wikipedia.org/wiki/List%20of%20mountains%20on%20Io | More than 135 mountains have been identified on the surface of Jupiter's moon Io. Despite the extensive active volcanism taking place on Io, most mountains on Io are formed through tectonic processes. These structures average 6 km (4 mi) in height and reach a maximum of 17.5 ± 1.5 km (10.9 ± 1 mi) at South Boösaule Montes. Mountains often appear as large (the average mountain is 157 km (98 mi) long), isolated structures with no apparent global tectonic patterns outlined, in contrast to the situation on Earth. To support the tremendous topography observed at these mountains requires rock compositions consisting mostly of silicate, as opposed to sulfur.
Mountains on Io (generally, structures rising above the surrounding plains) have a variety of morphologies. Plateaus are most common. These structures resemble large, flat-topped mesas with rugged surfaces. Other mountains appear to be tilted crustal blocks, with a shallow slope from the formerly flat surface and a steep slope consisting of formerly sub-surface materials uplifted by compressive stresses. Both types of mountains often have steep scarps along one or more margins. Only a handful of mountains on Io appear to have a volcanic origin. These mountains resemble small shield volcanoes, with steep slopes (6–7°) near a small, central caldera and shallow slopes along their margins. These volcanic mountains are often smaller than the average mountain on Io, averaging only 1 to 2 km (0.6 to 1.2 mi) in height and 40 to 60 km (25 to 37 mi) wide. Other shield volcanoes with much shallower slopes are inferred from the morphology of several of Io's volcanoes, where thin flows radiate out from a central patera, such as at Ra Patera.
Some of Io's mountains have received official names from the International Astronomical Union. The names are a combination of a name of a person or place derived from the Greek mythological story of Io, Dante's Inferno, or from the name of a nearby feature on Io surface and an approved descriptive term. The descriptive terms, or categories, used for these mountains depends on their morphology, which is a reflection of the mountain's age, geologic origin (volcanic or tectonic), and mass wasting processes. Mountains consisting of massifs, ridges, or isolated peaks use the descriptive term, mons or the plural montes, the Latin term for mountain. These features are named after prominent locations from the Greek mythological travels of Io or places mentioned in Dante's Inferno. Plateaus are normally given the descriptive term mensa (pl. mensae), the Latin term for mesa, though some mountains with plateau morphology use mons. Ionian mensae are named after mythological figures associated with the Io myth, characters from Dante's Inferno. Like mountains, these features can also be named after nearby volcanoes. Some units of layered plains have names using the descriptive term planum (pl. plana). However other more mountainous structures, such as Danube Planum, use the term. Partly as a result of the inconsistent use of this term, planum has not been used since the Voyager era. Ionian plana are named after locations associated with the Io myth. Rare cases of volcanic mountains, such as the shield volcano Tsũi Goab Tholus, use the term tholus (plural: tholi). Ionian tholi are named after people associated with the Io myth or nearby features on Io's surface.
See also the list of volcanic features on Io and the list of regions on Io.
List of named Ionian mountains
The following table lists those positive topographic structures (mountains, plateaus, shield volcanoes, and layered plains) that have been given names by the International Astronomical Union. Coordinates and Length come from the USGS website that hosts that nomenclature list. Height information comes from Paul Schenk's 2001 paper, "The mountains of Io: Global and geological perspectives from Voyager and Galileo". When the name refers to multiple mountains, the tallest peak from Schenk et al. 2001 is listed. Those whose heights come from other sources are noted and sourced in the table. Height ranges result from uncertainties due to different methods used to determine the height of the mountain.
See also
List of mountains
List of mountain types
:Category:Lists of mountains
List of mountain ranges
List of highest mountains
List of peaks by prominence
List of tallest mountains in the Solar System
Mountaineering
References
External links
USGS: Io nomenclature
USGS: Io nomenclature: mountains
Io's Tall Mountains – Planetary Society article
Io Mountain Database, including those without official names
Io | List of mountains on Io | [
"Astronomy"
] | 945 | [
"Lists of extraterrestrial mountains",
"Astronomy-related lists"
] |
356,924 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Callisto | This is a list of named craters on Callisto, one of the many moons of Jupiter, the most heavily cratered natural satellite in the Solar System (for other features, see list of geological features on Callisto).
As of 2020, the Working Group for Planetary System Nomenclature has officially named a total of 142 craters on Callisto, more than on any other non-planetary object such as Ganymede (131), Rhea (128), Vesta (90), Ceres (90), Dione (73), Iapetus (58), Enceladus (53), Tethys (50) and Europa (41). Although some Callistoan craters refer to the nymph Callisto from Greek mythology, they are officially named after characters from myths and folktales of cultures of the Far North.
List of craters
See also
List of craters on the Moon
List of craters on Mars
List of craters on Mercury
List of craters on Venus
Note
References
External links
USGS: Callisto nomenclature
USGS: Callisto Nomenclature: Craters
Callisto Crater Database Lunar and Planetary Institute
Callisto
Impact craters on Jupiter's moons | List of craters on Callisto | [
"Astronomy"
] | 236 | [
"Astronomy-related lists",
"Lists of impact craters"
] |
357,027 | https://en.wikipedia.org/wiki/Automated%20analyser | An automated analyser is a medical laboratory instrument designed to measure various substances and other characteristics in a number of biological samples quickly, with minimal human assistance. These measured properties of blood and other fluids may be useful in the diagnosis of disease.
Photometry is the most common method for testing the amount of a specific analyte in a sample. In this technique, the sample undergoes a reaction to produce a color change. Then, a photometer measures the absorbance of the sample to indirectly measure the concentration of analyte present in the sample. The use of an ion-selective electrode (ISE) is another common analytical method that specifically measures ion concentrations. This typically measures the concentrations of sodium, calcium or potassium present in the sample.
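As an illustration of the photometric principle, the following short Python sketch applies the Beer–Lambert law (absorbance = molar absorptivity × path length × concentration) to back-calculate a concentration from a measured absorbance; the molar absorptivity and path length used here are illustrative assumptions, not values for any particular assay:

def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm=1.0):
    # Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)
    return absorbance / (molar_absorptivity * path_length_cm)

# Example: absorbance 0.45 with an assumed molar absorptivity of 6220 L/(mol*cm)
print(concentration_from_absorbance(0.45, 6220))  # about 7.2e-5 mol/L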
There are various methods of introducing samples into the analyser. Test tubes of samples are often loaded into racks. These racks can be inserted directly into some analysers or, in larger labs, moved along an automated track. More manual methods include inserting tubes directly into circular carousels that rotate to make the sample available. Some analysers require samples to be transferred to sample cups. However, the need to protect the health and safety of laboratory staff has prompted many manufacturers to develop analysers that feature closed tube sampling, preventing workers from direct exposure to samples. Samples can be processed singly, in batches, or continuously.
The automation of laboratory testing does not remove the need for human expertise (results must still be evaluated by medical technologists and other qualified clinical laboratory professionals), but it does ease concerns about error reduction, staffing concerns, and safety.
Routine biochemistry analysers
These are machines that process a large portion of the samples going into a hospital or private medical laboratory. Automation of the testing process has reduced testing time for many analytes from days to minutes. The history of discrete sample analysis for the clinical laboratory began with the "Robot Chemist", invented by Hans Baruch and introduced commercially in 1959[1].
The AutoAnalyzer is an early example of an automated chemistry analyzer using a special flow technique named "continuous flow analysis (CFA)", invented in 1957 by Leonard Skeggs, PhD and first made by the Technicon Corporation. The first applications were for clinical (medical) analysis. The AutoAnalyzer profoundly changed the character of the chemical testing laboratory by allowing significant increases in the numbers of samples that could be processed. Samples used in the analyzers include, but are not limited to, blood, serum, plasma, urine, cerebrospinal fluid, and other fluids from within the body. The design, based on separating a continuously flowing stream with air bubbles, largely replaced the slow, clumsy, and error-prone manual methods of analysis. The types of tests include enzyme levels (such as many of the liver function tests), ion levels (e.g. sodium and potassium), and other tell-tale chemicals (such as glucose, serum albumin, or creatinine).
Simple ions are often measured with ion selective electrodes, which let one type of ion through, and measure voltage differences. Enzymes may be measured by the rate they change one coloured substance to another; in these tests, the results for enzymes are given as an activity, not as a concentration of the enzyme. Other tests use colorimetric changes to determine the concentration of the chemical in question. Turbidity may also be measured.
Immuno-based analysers
Some analysers use antibodies to detect many substances by immunoassay and other methods based on antibody-antigen reactions.
When the concentration of these compounds is too low to cause a measurable increase in turbidity when bound to antibody, more specialised methods must be used.
Recent developments include automation for the immunohaematology lab, also known as transfusion medicine.
Hematology analysers
These are used to perform complete blood counts, erythrocyte sedimentation rates (ESRs), or coagulation tests.
Cell counters
Automated cell counters sample the blood, and quantify, classify, and describe cell populations using both electrical and optical techniques.
Electrical analysis involves passing a dilute solution of the blood through an aperture across which an electrical current is flowing. The passage of cells through the current changes the impedance between the terminals (the Coulter principle). A lytic reagent is added to the blood solution to selectively lyse the red cells (RBCs), leaving only the white cells (WBCs) and platelets intact. Then the solution is passed through a second detector. This allows the counts of RBCs, WBCs, and platelets to be obtained. The platelet count is easily separated from the WBC count by the smaller impedance spikes platelets produce in the detector due to their lower cell volumes.
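A rough sketch of how such impedance pulses might be split into platelet and white-cell counts by pulse size; the 30 fL threshold and the pulse values are invented for illustration, and real instruments use more sophisticated gating:

def count_by_impedance(pulse_volumes_fl, platelet_max_fl=30.0):
    # Smaller impedance spikes (lower displaced volume) are counted as platelets,
    # larger ones as white cells, mirroring the principle described above.
    platelets = sum(1 for v in pulse_volumes_fl if v <= platelet_max_fl)
    white_cells = sum(1 for v in pulse_volumes_fl if v > platelet_max_fl)
    return platelets, white_cells

pulses = [8.5, 11.2, 210.0, 9.7, 350.4, 12.1, 180.9]  # made-up pulse volumes in femtolitres
print(count_by_impedance(pulses))  # (4, 3)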
Optical detection may be utilised to gain a differential count of the populations of white cell types. A dilute suspension of cells is passed through a flow cell, which passes cells one at a time through a capillary tube past a laser beam. The reflectance, transmission and scattering of light from each cell is analysed by sophisticated software giving a numerical representation of the likely overall distribution of cell populations.
Some of the latest hematology instruments can report cell population data, consisting of leukocyte morphological information that may be used to flag cell abnormalities suggestive of certain diseases.
Reticulocyte counts can now be performed by many analysers, giving an alternative to time-consuming manual counts. Many automated reticulocyte counts, like their manual counterparts, employ the use of a supravital dye such as new methylene blue to stain the red cells containing reticulin prior to counting. Some analysers have a modular slide maker which is able to both produce a blood film of consistent quality and stain the film, which is then reviewed by a medical laboratory professional.
Coagulometers
Automated coagulation machines or Coagulometers measure the ability of blood to clot by performing any of several types of tests including Partial thromboplastin times, Prothrombin times (and the calculated INRs commonly used for therapeutic evaluation), Lupus anticoagulant screens, D dimer assays, and factor assays.
Coagulometers require blood samples that have been drawn in tubes containing sodium citrate as an anticoagulant. These are used because the mechanism behind the anticoagulant effect of sodium citrate is reversible. Depending on the test, different substances can be added to the blood plasma to trigger a clotting reaction. The progress of clotting may be monitored optically by measuring the absorbance of a particular wavelength of light by the sample and how it changes over time.
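A minimal sketch of optical clot detection, under the simplifying assumption that the clotting time is reported as the moment the absorbance has risen by half of its total change (real instruments use manufacturer-specific algorithms, and the trace below is invented):

def clotting_time(times_s, absorbances, fraction=0.5):
    # Clot formation increases turbidity; report the first time the absorbance
    # passes the chosen fraction of its overall rise.
    baseline, final = absorbances[0], absorbances[-1]
    target = baseline + fraction * (final - baseline)
    for t, a in zip(times_s, absorbances):
        if a >= target:
            return t
    return None

times = [0, 5, 10, 15, 20, 25, 30]                  # seconds (illustrative trace)
trace = [0.10, 0.11, 0.12, 0.25, 0.60, 0.85, 0.90]  # absorbance readings
print(clotting_time(times, trace))  # 20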
Other hematology apparatus
Automatic erythrocyte sedimentation rate (ESR) readers, while not strictly analysers, should preferably comply with the CLSI (Clinical and Laboratory Standards Institute) standard "Procedures for the Erythrocyte Sedimentation Rate Test: H02-A5", published in 2011, and with the ICSH (International Council for Standardization in Haematology) "ICSH review of the measurement of the erythrocyte sedimentation rate". Both documents indicate a single reference method, the Westergren method, which explicitly requires the use of blood diluted with sodium citrate in 200 mm pipettes with a 2.55 mm bore. After 30 or 60 minutes in a vertical position, with no draughts, vibration or direct sunlight allowed, an optical reader determines how far the red cells have fallen by detecting the level.
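As a rough sketch of how such a reader might turn a detected cell/plasma boundary into a reported value (the numbers are invented, and the simple linear scaling of a 30-minute reading to one hour is an assumption; real devices follow the cited CLSI/ICSH procedures):

def esr_mm_per_hour(boundary_mm, minutes):
    # Distance the red cells have fallen from the top of the column,
    # scaled (as a simplification) to the conventional one-hour reporting interval.
    return boundary_mm * (60.0 / minutes)

print(esr_mm_per_hour(12.0, 60))  # 12.0 mm/hr
print(esr_mm_per_hour(6.0, 30))   # 12.0 mm/hr when read at 30 minutes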
Miscellaneous analysers
Some tests and test categories are unique in their mechanism or scope, and require a separate analyser for only a few tests, or even for only one test. Other tests are esoteric in nature—they are performed less frequently than other tests, and are generally more expensive and time-consuming to perform. Even so, the current shortage of qualified clinical laboratory professionals has spurred manufacturers to develop automated systems for even these rarely performed tests.
Analysers that fall into this category include instruments that perform:
DNA labeling and detection
Osmolarity and osmolality measurement
Measurement of glycated haemoglobin (haemoglobin A1C), and
Aliquotting and routing of samples throughout the laboratory
See also
Comprehensive metabolic panel
Medical technologist
Notes
1. Rosenfeld, Louis. Four Centuries of Clinical Chemistry. Gordon and Breach Science Publishers, 1999. . Pp. 490–492
References
Laboratory equipment
Measuring instruments
Clinical pathology
Drugs developed by Hoffmann-La Roche
Articles containing video clips | Automated analyser | [
"Technology",
"Engineering"
] | 1,779 | [
"Measuring instruments"
] |
357,078 | https://en.wikipedia.org/wiki/Lake%20Powell | Lake Powell is a reservoir on the Colorado River in Utah and Arizona, United States. It is a major vacation destination visited by approximately two million people every year. It holds of water when full, second in the United States only to Lake Mead, though Lake Mead has fallen below Lake Powell in size several times during the 21st century in terms of volume of water, depth and surface area.
Lake Powell was created by the flooding of Glen Canyon by the Glen Canyon Dam, which also led to the 1972 creation of Glen Canyon National Recreation Area, a popular summer destination of public land managed by the National Park Service. The reservoir is named for John Wesley Powell, an American Civil War veteran who explored the river in three wooden boats in 1869. It lies primarily in southern Utah, with a small portion in northern Arizona.
Lake Powell is a water storage facility for the Upper Basin states of the Colorado River Compact (Colorado, Utah, Wyoming and New Mexico). The Compact specifies that the Upper Basin states are to provide a minimum annual flow of to the Lower Basin states (Arizona, Nevada, and California).
According to US Geological Survey and the Bureau of Reclamation report, in addition to water loss, Lake Powell faced an average annual loss in storage capacity of about 33,270 acre-feet, or 11 billion gallons, per year between 1963 and 2018 because of sediments flowing in from the Colorado and San Juan rivers. Those settle at the bottom of the reservoir and decrease the total amount of water the reservoir can hold. Environmentalists have pushed to drain Lake Powell and restore Glen Canyon to its natural, free-flowing state.
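As a quick check of the unit conversion quoted above (using the standard figure of roughly 325,851 US gallons per acre-foot):

GALLONS_PER_ACRE_FOOT = 325_851          # US gallons in one acre-foot
loss_acre_feet = 33_270                  # reported average annual storage loss
print(loss_acre_feet * GALLONS_PER_ACRE_FOOT / 1e9)  # about 10.8 billion gallons, consistent with the ~11 billion cited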
History
Planning
In the 1940s and early 1950s, the United States Bureau of Reclamation planned to construct a series of Colorado River dams in the rugged Colorado Plateau province of Colorado, Utah, and Arizona. Glen Canyon Dam was born of a controversial damsite the Bureau selected in Echo Park, in what is now Dinosaur National Monument in Colorado. A small but politically effective group of objectors, led by David Brower of the Sierra Club, succeeded in defeating the Bureau's bid, citing Echo Park's natural and scenic qualities as too valuable to submerge.
Glen Canyon Dam was built to solve the downstream delivery obligations of the Upper Basin states. Lake Powell is an "aquatic bank" built to fulfill the terms of the "Compact Calls" of Lower Basin.
Construction
Construction on Glen Canyon Dam began with a demolition blast keyed by the push of a button by President Dwight D. Eisenhower at his desk in the Oval Office on October 1, 1956, which started clearing tunnels for water diversion. On February 11, 1959, water flowed through the tunnels so dam construction could begin. Later that year, the bridge was completed, allowing trucks to deliver equipment and materials for the dam and also for the new town of Page, Arizona.
Concrete placement started around the clock on June 17, 1960. The last bucket of over 5 million cubic yards (4,000,000 m3) was poured on September 13, 1963. The dam is 710 feet (216 m) high and the surface elevation of the water at full-pool is approximately 3700 feet (1100 m). Construction cost $155 million, and 18 lives were lost. On September 22, 1966, Glen Canyon Dam was dedicated by Lady Bird Johnson. From 1970 to 1980, turbines and generators were installed for hydroelectricity.
Filling and operations
Upon completion of Glen Canyon Dam on September 13, 1963, the Colorado River began to back up, no longer being diverted through the tunnels. The newly flooded Glen Canyon formed Lake Powell. Sixteen years elapsed before the lake filled to the level on June 22, 1980. The lake level fluctuates considerably depending on the seasonal snow runoff from the Rocky Mountains. The all-time highest water level was reached on July 14, 1983, during one of the heaviest Colorado River floods in recorded history, in part influenced by a strong El Niño event. The lake rose to above sea level, with a water content of . It lies primarily in parts of Garfield, Kane, and San Juan counties in southern Utah, with a small portion in Coconino County in northern Arizona. The northern limits of the lake extend at least as far as the Hite Crossing Bridge.
21st century drought and push to drain
Colorado River flows have been below average since 2000 as a result of the southwestern North American megadrought, leading to lower lake levels. In winter 2005 (before the spring run-off) the lake reached its then-lowest level since filling, an elevation of above sea level, which was approximately below full pool. After 2005, the lake level slowly rebounded, although it has not filled completely since then. Summer 2011 saw the third largest June and the second largest July runoff since the closure of Glen Canyon Dam, and the water level peaked at nearly , 77 percent of capacity, on July 30. However, water years 2012 and 2013 were, respectively, the third and fourth-lowest runoff years recorded on the Colorado River. By April 9, 2014, the lake level had fallen to , largely erasing the gains made in 2011.
Colorado River levels returned to normal during water years 2014 and 2015 (pushing the lake to ) by the end of water year 2015. The Bureau of Reclamation in 2014 reduced the Lake Powell release from 8.23 to 7.48 million acre-feet, for the first time since the lake filled in 1980. This was done due to the "equalization" guideline which stipulates that an approximately equal amount of water must be retained in both Lake Powell and Lake Mead, in order to preserve hydro-power generation capacity at both lakes. This resulted in Lake Mead declining to the lowest level on record since the 1930s.
Long-term water level decline continued, forcing an emergency release of water from the Flaming Gorge Reservoir in July 2021, and by April 22, 2022, Lake Powell was at in elevation – just of capacity. This marks the lowest water level for Lake Powell since it was filled in 1963.
The capacity of Lake Powell has decreased by 7% since 1963 facing an average annual loss of 33,270 acre-feet of storage, due to the inflow of sediments from Colorado and San Juan rivers.
Peer-reviewed studies indicate that storing water in Lake Mead rather than in Lake Powell would yield a savings of 300,000 acre feet of water or more per year, leading to calls by environmentalists to drain Lake Powell and restore Glen Canyon to its natural, free-flowing state.
Climate
These data are for the Wahweap climate station on Lake Powell just south of the Utah-Arizona border (Years 1961 to 2012).
Geology
Glen Canyon was carved by differential erosion from the Colorado River over an estimated 5 million years. The Colorado Plateau, through which the canyon cuts, arose some 11 million years ago. Within that plateau lie layers of rock from over 300 million years ago to the relatively recent volcanic activity. Pennsylvanian and Permian formations can be seen in Cataract Canyon and San Juan Canyon. The Moenkopi Formation, which dates from 230 million years ago (Triassic Period), and the Chinle Formation are found at Lees Ferry and the Rincon. Both formations are the result of the ancient inland sea that covered the area. Once the sea drained, windblown sand invaded the area, creating what is known as Wingate Sandstone.
The more recent (Jurassic Period) formations include Kayenta Sandstone, which produces the trademark blue-black "desert varnish" that streaks down many walls of the canyons. Above this is Navajo Sandstone. Many of the arches, including Rainbow Bridge, lie at this transition point. This period also includes light yellow Entrada Sandstone, and the dark brown, almost purple Carmel Formation. These latter two can be seen on the tops of mesas around Wahweap, and the crown of Castle Rock and Tower Butte. Above these layers lie the sandstone, conglomerate and shale of the Straight Cliffs Formation that underlies the Kaiparowits Plateau and San Rafael Swell to the north of the lake.
The confluences of the Escalante, Dirty Devil and San Juan rivers with the Colorado lie within Lake Powell. The slower flow of the San Juan river has produced goosenecks where of river are contained within on a straight line.
Landmarks and features
The lake's main body stretches up Glen Canyon, but has also filled many (over 90) side canyons. The lake also stretches up the Escalante River and San Juan River where they merge into the main Colorado River. This provides access to many natural geographic points of interest as well as some remnants of the Anasazi culture.
Glen Canyon Dam, the dam that blocks the Colorado River and forms Lake Powell. (Arizona)
Rainbow Bridge, one of the world's largest natural bridges. (Utah)
Hite Crossing Bridge, the only bridge spanning Lake Powell. Although the bridge informally marks the upstream limit of the lake, when the lake is at its normal high water elevation, backwater can stretch up to upstream into Cataract Canyon.
Defiance House ruin (Anasazi)
Castle Rock
Cathedral in the Desert
San Juan goosenecks
Gregory Butte
Gunsight Butte
Lone Rock
Alstrom Point
Kaiparowits Plateau
Hole-in-the-Rock crossing
the Rincon
Three-Roof Ruin
Padre Bay
Waterpocket Fold
Antelope Island lies mostly in Arizona just north of Page in the southwest part of Lake Powell.
Images
Development
Access to the lake is limited to developed marinas because most of the lake is surrounded by steep sandstone walls:
Lee's Ferry
Page and Wahweap Marina
Antelope Point Marina
Halls Crossing, Utah Marina
Bullfrog Marina
Hite Marina
The following marinas are accessible only by boat:
Dangling Rope Marina
Rainbow Bridge National Monument
Escalante Subdistrict
Glen Canyon National Recreation Area draws more than two million visitors annually. Recreational activities include boating, fishing, waterskiing, jet-skiing, and hiking. Prepared campgrounds can be found at each marina, but many visitors choose to rent a houseboat or bring their own camping equipment, find a secluded spot somewhere in the canyons, and make their own camp (there are no restrictions on where visitors can stay).
The Castle Rock Cut is one of the most important navigational channels in the lake; it was blasted as early as the 1970s to allow boaters to bypass the winding canyons between the Glen Canyon Dam and reaches of Lake Powell further upstream – saving, on average, one hour of travel time. The cut has been deepened several times since then, to allow the use of the channel during droughts. During the protracted 21st-century drought, however, the lake has dropped so quickly on several occasions that the cut dried up during the summer tourist season, most recently in 2013. Continued deepening of the Castle Rock cut has been criticized for its high cost, but boaters and the National Park Service argue that it improves safety, saves millions of dollars in fuel, and improves emergency response time. In September 2021 the level of Lake Powell was 45 feet below the bottom of the Castle Rock cut.
Currently, most marinas on the lake do not have Automatic Identification System (AIS) monitoring stations that transmit boat positions to the AIS websites used by the boating community. A substantial number of vessels on the lake do not have AIS transponders, as there are currently no mandatory requirements for AIS usage on this body of water. Extra precautions must be taken with respect to boating safety, as the intricate, branching shape of the lake's shoreline can allow vessels with limited charting equipment to become easily lost.
The burying of human (and pet) waste in Glen Canyon National Recreation Area is prohibited. Anyone who camps farther than a quarter of a mile from a marina must bring a portable toilet. Pet waste must also be packed out.
The southwestern end of Lake Powell in Arizona can be accessed via U.S. Route 89 and State Route 98. State Route 95 and State Route 276 lead to the northeastern end of the lake in Utah.
Fish species
Some of these fish species are on the US Endangered Species List. Currently most native species on the Colorado River Basin are subject to ongoing restoration efforts of some kind.
Bass
Smallmouth bass
Largemouth bass
Striped bass
Carp, pike and others
Crappie
Sunfish
Channel catfish
Northern pike
Walleye
Common carp
Razorback sucker
Brown trout
Bonytail chub
Gizzard shad
Invasive species
Zebra and quagga mussels first appeared in the United States in the 1980s.
The mussels were initially brought to the United States through the ballast water of ships entering the Great Lakes. These aquatic invaders soon spread to many bodies of water in the Eastern United States and have even made their way to the western United States. In January 2008, zebra mussels were detected in several reservoirs along the Colorado River system, such as Lakes Mead, Mojave, and Havasu.
By the early 2000s, Arizona, California, Nebraska, Kansas, Colorado, Nevada and Utah had all confirmed the presence of larval zebra mussels in lakes and reservoirs.
Zebra and quagga mussels can be destructive to an ecosystem due to competition for resources with native species. The filtration of zooplankton by the mussels can negatively impact the feeding for some species of fish. Zebra and quagga mussels can attach to hard surfaces and build layers on underwater structures. The mussels are known to clog pipes including those in hydroelectric power systems, thus becoming a costly and time-consuming problem for water managers in the West.
Control policies have recently been introduced to alleviate the hydroelectric problems as well as the ecological problems caused by the Western infestation. Beginning in 1999, Lake Powell was visually monitored for the mussels.
In 2001 hot water boat decontamination sites were established at Wahweap, Bullfrog, and Halls Crossing marinas. In January 2007, zebra mussels were detected in Lake Mead and new action plans were announced to prevent the spread of mussels to Lake Powell. In August 2007, preliminary testing was positive for zebra or quagga larvae in Lake Powell. These tests were deemed false positives, but adult quagga mussels were found in 2013.
In August 2010, Lake Powell was declared mussel free. Lake Powell introduced a mandatory boat inspection for each watercraft entering the reservoir beginning in June 2009. Effective June 29, 2009, every vessel entering Lake Powell must have a mussel certificate, although boat owners were allowed to self-certify. These measures were intended to help prevent vessels from transporting Zebra mussels into Lake Powell.
Despite these measures, quagga mussel DNA was detected in 2012 and live mussels were found at a number of sites including the Wahweap Marina in Spring and Summer 2013. In June 2013 the NPS was attempting a diver-based eradication program to find and remove mussels before the lake became infested.
Pipeline proposal
The Washington County Water Conservancy District has proposed building the Lake Powell Pipeline, which would have the capacity to extract up to per year from Lake Powell for distribution to municipal drinking water systems in the county.
References
Bibliography
Martin, Russell, A Story That Stands Like a Dam: Glen Canyon and the Struggle for the Soul of the West, Henry Holt & Co, 1989
McPhee, John, "Encounters with the Archdruid," Farrar, Straus, and Giroux, 1971
Nichols, Tad, Glen Canyon: Images of a Lost World, Santa Fe: Museum of New Mexico Press, 2000
Abbey, Edward, Desert Solitaire, Ballantine Books, 1985
Farmer, Jared, Glen Canyon Dammed: Inventing Lake Powell and the Canyon Country, Tucson: The University of Arizona Press, 1999
Stiles, Jim, The Brief but Wonderful Return of Cathedral in the Desert, Salt Lake Tribune, June 7, 2005
Further reading
(1994) "Lake Powell" article in the Utah History Encyclopedia. The article was written by Robert S. McPherson and the Encyclopedia was published by the University of Utah Press. ISBN 9780874804256. Archived from the original on November 3, 2022 and retrieved on June 17, 2024.
External links
Water Level in Lake Powell, slide show of ten years of images from NASA's Landsat 5 satellite, showing dramatic fluctuations in water levels in Lake Powell.
Lake Powell Water Database – water level, basin snowpack, and other statistics
Lake Powell Resorts and Marinas
Friends of Lake Powell – organization opposed to decommissioning Glen Canyon Dam
Glen Canyon National Recreation Area (National Park Service)
Data visualization from Bureau of Reclamation (interactive)
Reclamation Information Sharing Environment (RISE) – Bureau of Reclamation database, with locations and time series on water levels and flows
Lake Powell historical water level data - Lake Powell water level data for the recent 25-year period 1997–2022, in machine-readable (CSV/Excel) formats
Powell
Powell
Landmarks in Arizona
Colorado River
Powell
Tourist attractions in Coconino County, Arizona
Powell
Powell
Powell
Buildings and structures in Garfield County, Utah
Buildings and structures in Kane County, Utah
Buildings and structures in San Juan County, Utah
Powell
Glen Canyon National Recreation Area
Tourist attractions in San Juan County, Utah
Colorado River Storage Project
1963 establishments in Utah
1963 establishments in Arizona | Lake Powell | [
"Engineering"
] | 3,495 | [
"Colorado River Storage Project",
"Lake Powell"
] |
357,107 | https://en.wikipedia.org/wiki/Blind%20carbon%20copy | A blind carbon copy (abbreviated Bcc) is a message copy sent to an additional recipient, without the primary recipient being made aware. This concept originally applied to paper correspondence and now also applies to email. "Bcc" can also stand for "blind courtesy copy" as a backronym of the original abbreviation.
In some circumstances, the typist creating a paper correspondence must ensure that multiple recipients of such a document do not see the names of other recipients. To achieve this, the typist can:
Add the names in a second step to each copy, without carbon paper;
Set the ribbon not to strike the paper, which leaves names off the top copy (but may leave letter impressions on the paper).
With email, recipients of a message are specified using addresses in any of these three fields:
To: Primary recipients
Cc: Carbon copy to secondary recipients
Bcc: Blind carbon copy to tertiary recipients who receive the message. The primary and secondary recipients cannot see the tertiary recipients. Depending on email software, the tertiary recipients may only see their own email address in Bcc, or they may see the email addresses of all primary and secondary recipients but will not see other tertiary recipients.
It is common practice to use the Bcc: field when addressing a very long list of recipients, or a list of recipients who should not (necessarily) know each other, e.g. in mailing lists.
SMTP Mechanism for Email
BCC in email is handled by the Simple Mail Transfer Protocol (SMTP). All recipients, whether listed in the To, Cc, or Bcc fields, are specified in the SMTP "envelope" using the RCPT TO command, with no distinction between the fields. Only the To and Cc recipients, however, appear in the message headers that email clients display. Provided SMTP servers respect this, Bcc addresses remain hidden from the other recipients: every recipient still receives the message, because all of them are included in the SMTP delivery commands, but the Bcc addresses are omitted from the header information shown in email clients.
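A short Python sketch of this mechanism using the standard smtplib and email modules; the addresses and server name are hypothetical, and real mail setups will differ:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "primary@example.com"
msg["Cc"] = "secondary@example.com"
msg["Subject"] = "Status update"
msg.set_content("The Bcc recipient receives this, but never appears in the headers.")

bcc = ["hidden@example.com"]  # deliberately NOT added to the message headers

with smtplib.SMTP("mail.example.com") as server:
    # The envelope (RCPT TO) list includes the Bcc address even though the
    # headers do not, which is exactly the separation described above.
    server.send_message(msg, to_addrs=["primary@example.com", "secondary@example.com"] + bcc)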
Benefits
There are a number of reasons for using this feature:
Bcc is often used to prevent an accidental "Reply All" from sending a reply intended for only the originator of the message to the entire recipient list. Using Bcc can prevent an email storm from happening.
To send a copy of one's correspondence to a third party (for example, a colleague) when one does not want to let the recipient know that this is being done (or when one does not want the recipient to know the third party's e-mail address, assuming the other recipient is in the To: or Cc: fields).
To send a message to multiple parties with none of them knowing the other recipients. This can be accomplished by addressing a message to oneself (or, in some email clients, leaving the To: field empty) and filling in the actual intended recipients in the Bcc: field.
To tighten the focus of an existing email correspondence. By "moving people to BCC," a sender can remove non-essential parties from the recipient list so that future reply-all's will not include them. It is customary to include a parenthetical note indicating that certain recipients have been moved to BCC. This can be done out of courtesy to uninterested parties, or as a way of politely cutting off non-essential parties from the thread going forward.
To prevent the spread of computer viruses, spam, and malware by avoiding the accumulation of block-list e-mail addresses available to all Bcc: recipients, which often occurs in the form of chain letters.
Disadvantages
In some cases, the use of blind carbon copy may be viewed as mildly unethical. The original addressee of the mail (To: address) is left under the impression that communication is proceeding between the known parties, and is knowingly kept unaware of others participating in the primary communication.
A related risk is that by (unintentional) use of "reply to all" functionality by someone on Bcc, the original addressee is (inadvertently) made aware of this participation. For this reason, it is in some cases better to separately forward the original e-mail.
Depending on the particular email software used, the recipient may or may not know that the message has been sent via Bcc. In some cases, 'undisclosed recipients' placed in the To: line (by the software) shows that Bcc has been used. In other cases, the message appears identical to one sent to a single addressee. The recipient does not necessarily see the email address (and real name, if any) originally placed in the To: line.
When it is useful for the recipients to know who else has received a Bcc message,
their real names, but not their email addresses, can be listed in the body of the message, or
a meaningful substitute for the names can be placed in the body of the message, e.g. '[To General Manager and members of Remunerations Committee]', or '[To the whole Bloggs family]'.
References
External links
US-CERT Cyber Security Tip ST04-008 , "Benefits of BCC"
Email
Business terms
Typing
Computing terminology
Computing acronyms
Office administration
de:Header (E-Mail)#BCC
fr:Courrier électronique#Système de copie et de copie invisible | Blind carbon copy | [
"Technology"
] | 1,134 | [
"Computing terminology",
"Computing acronyms"
] |
357,125 | https://en.wikipedia.org/wiki/Imaginary%20friend | Imaginary friends (also known as pretend friends, invisible friends or made-up friends) are a psychological and a social phenomenon where a friendship or other interpersonal relationship takes place in the imagination rather than physical reality.
Although they may seem real to their creators, children usually understand that their imaginary friends are not real.
The first studies focusing on imaginary friends are believed to have been conducted during the 1890s. There is little research about the concept of imaginary friends in children's imaginations. Klausen and Passman (2007) report that imaginary companions were originally described as being supernatural creatures and spirits that were thought to connect people with their past lives. Adults in history have had entities such as household gods, guardian angels, and muses that functioned as imaginary companions to provide comfort, guidance and inspiration for creative work. It is possible the phenomenon appeared among children in the mid-19th century when childhood was emphasized as an important time for play and imagination.
Description
In some studies, imaginary friends are defined as children impersonating a specific character (imagined by them), or objects or toys that are personified. However, some psychologists will define an imaginary friend only as a separate created character. Imaginary friends can be people, but they can also take the shape of other characters such as animals or other abstract ideas such as ghosts, monsters, robots, aliens or angels. These characters can be created at any point during a lifetime, though Western culture suggests they are most acceptable in preschool- and school-age children.
Most research agrees that girls are more likely than boys to develop imaginary friends. Once children reach school age, boys and girls are equally likely to have an imaginary companion. Research has often reiterated that there is not a specific "type" of child that creates an imaginary friend. Whenever children engage in fantasy, they may come to believe that an imaginary world exists in another universe, or they may create an imaginary world for their imaginary friends to live in.
Research has shown that imaginary friends are a normative part of childhood and even adulthood. Additionally, some psychologists suggest that imaginary friends are much like a fictional character created by an author.
As Eileen Kennedy-Moore points out, "Adult fiction writers often talk about their characters taking on a life of their own, which may be an analogous process to children’s invisible friends." In addition, Marjorie Taylor and her colleagues have found that fiction writers are more likely than average to have had imaginary friends as children.
There is a difference between the common imaginary friends that many children create, and the imaginary voices of psychopathology. Often when there’s a psychological disorder and any inner voices are present, they add negativity to the conversation. The person with the disorder may sometimes believe that the imagined voices are physically real, not an imagined inner dialog.
Imaginary friends can serve various functions. Playing with imaginary friends enables children to enact behaviors and events they have not yet experienced. Imaginary play allows children to use their imagination to construct knowledge of the world. In addition, imaginary friends might also fulfill children's innate desire to connect with others before actual play among peers is common. According to psychologist Lev Vygotsky, cultural tools and interaction with people mediate psychological functioning and cognitive development. Imaginary friends, perceived as real beings, could teach children how to interact with others along with many other social skills. Vygotsky's sociocultural view of child development includes the notion of children's “zone of proximal development,” which is the difference between what children can do with and without help. Imaginary friends can aid children in learning things about the world that they could not learn without help, such as appropriate social behavior, and thus can act as a scaffold for children to achieve slightly above their social capability.
In addition, imaginary friends also serve as a means for children to experiment with and explore the world. In this sense, imaginary companions also relate to Piaget's theory of child development because they are completely constructed by the child. According to Piaget, children are scientific problem solvers who self-construct experiences and build internal mental structures based on experimentation. The creation of and interaction with imaginary companions helps children to build such mental structures. The relationship between a child and their imaginary friend can serve as a catalyst for the formation of real relationships in later development and thus provides a head start to practising real-life interaction.
Research
It has been theorized that children with imaginary friends may develop language skills and retain knowledge faster than children without them, which may be because these children get more linguistic practice than their peers as a result of carrying out "conversations" with their imaginary friends.
Kutner (n.d.) reported that 65% of 7-year-old children report they have had an imaginary companion at some point in their lives. He further reported:
Imaginary friends are an integral part of many children's lives. They provide comfort in times of stress, companionship when they're lonely, someone to boss around when they feel powerless, and someone to blame for the broken lamp in the living room. Most important, an imaginary companion is a tool young children use to help them make sense of the adult world.
Taylor, Carlson & Gerow (c2001: p. 190) hold that:
despite some results suggesting that children with imaginary friends might be superior in intelligence, it is not true that all intelligent children create them.
If imaginary friends can assist children in developing their social skills, they must play important roles in the lives of children. Hoff (2004 – 2005) was interested in finding out the roles and functions of imaginary friends and how they impacted the lives of children. The results of her study have provided some significant insight into the roles of imaginary friends. Many of the children reported their imaginary friends as being sources of comfort in times of boredom and loneliness. Another interesting result was that imaginary friends served as mentors for children in their academics. They were encouraging, provided motivation, and increased the self-esteem of children when they did well in school. Finally, imaginary friends were reported as being moral guides for children. Many of the children reported that their imaginary friends served as a conscience and helped them to make the correct decision in times where morality was questioned.
Other professionals such as Marjorie Taylor feel imaginary friends are common among school-age children and are part of normal social-cognitive development. Part of the reason people believed children gave up imaginary companions earlier than has been observed is related to Piaget's stages of cognitive development. Piaget suggested that imaginary companions disappeared once children entered the concrete operational stage of development. Marjorie Taylor identified middle school children with imaginary friends and followed up six years later as they were completing high school. At follow-up, those who had imaginary friends in middle school displayed better coping strategies but a "low social preference for peers." She suggested that imaginary friends may directly benefit children's resiliency and positive adjustment.
Because imagination play with a character involves the child often imagining how another person (or character) would act, research has been done to determine if having an imaginary companion has a positive effect on theory of mind development. In a previous study, Taylor & Carlson (1997) found that 4-year-old children who had imaginary friends scored higher on emotional understanding measures and that having a theory of mind would predict higher emotional understanding later on in life. When children develop the realization that other people have different thoughts and beliefs other than their own, they are able to grow in their development of theory of mind as they begin to have better understandings of emotions.
Positive psychology
The article "Pretend play and positive psychology: Natural companions" defined many great tools that are seen in children who engage pretend play. These five areas include creativity, coping, emotion regulation, empathy/emotional understanding and hope. Hope seems to be the underlying tool children use in motivation. Children become more motivated when they believe in themselves, therefore children will not be discouraged to come up with different ways of thinking because they will have confidence. Imaginary companionship displays immense creativity helping them to develop their social skills and creativity is frequently discussed term amongst positive psychology.
An imaginary companion can be considered the product of the child's creativity whereas the communication between the imaginary friend and the child is considered to be the process.
Adolescence
"Imaginary companions in adolescence: sign of a deficient or positive development?" explores the extent to which adolescents create imaginary companions. The researchers explored the prevalence of imaginary companions in adolescence by investigating the diaries of adolescents age 12-17. In addition they looked at the characteristics of these imaginary companions and did a content analysis of the data obtained in the diaries. There were three hypotheses tested: (1) the deficit hypothesis, (2) the giftedness hypothesis, (3) the egocentrism hypothesis. The results of their study concluded that creative and socially competent adolescents with great coping skills were particularly prone to the creation of these imaginary friends. These findings did not support the deficit hypothesis or egocentrism hypothesis, further suggesting that these imaginary companions were not created with the aim to replace or substitute a real-life family member or friend, but they simply created another "very special friend". This is surprising because it is usually assumed that children who create imaginary companions have deficits of some sort, and it is unheard of for an adolescent to have an imaginary companion.
Tulpa
Following the popularizing and secularizing of the concept of the tulpa in the Western world, practitioners, calling themselves "tulpamancers", report an improvement to their personal lives through the practice, as well as new and unusual sensory experiences. Some practitioners use the tulpa for sexual and romantic interactions, though the practice is considered taboo. A survey of the community with 118 respondents on the explanation of tulpas found 8.5% support a metaphysical explanation, 76.5% support a neurological or psychological explanation, and 14% "other" explanations. Nearly all practitioners consider the tulpa a real or somewhat-real person. The number of active participants in these online communities is in the low hundreds, and few meetings in person have taken place.
Birth order
To uncover the origin of imaginary companions and learn more about the children who create them, it is necessary to seek out children who have created imaginary companions. Unfortunately young children cannot accurately self-report, therefore the most effective way to gather information about children and their imaginary companions is by interviewing the people who spend the most time with them. Often mothers are the primary caretakers who spend the most time with a child. Therefore, for this study 78 mothers were interviewed and asked whether their child had an imaginary friend. If the mother revealed that their child did not have an imaginary companion then the researcher asked about the child's tendency to personify objects.
In order to convey the meaning of personified objects the researchers explained to the mothers that it is common for children to choose a specific toy or object that they are particularly attached to or fond of. For the object to qualify as a personified object the child had to treat it as animate. Furthermore, it is necessary to reveal what children consider an imaginary friend or pretend play. In order to distinguish a child having or not having an imaginary companion, the friend had to be in existence for at least one month. In order to examine the developmental significance of preschool children and their imaginary companions the mothers of children were interviewed. The major conclusion from the study was that there is a significant distinction between invisible companions and personified objects.
A significant finding in this study was the role of the child's birth order in the family in terms of having an imaginary companion or not. The results of the interviews with mothers indicated that children with imaginary friends were more likely to be first-born children when compared to children who did not have an imaginary companion at all. This study further supports the idea that children may create imaginary friends to work on social development. The finding that a first-born child is more likely to have an imaginary friend sheds some light on the idea that the child needs to socialize and therefore creates the imaginary friend to develop their social skills. This is an extremely creative way for children to develop their social skills, and creativity is a frequently discussed term in positive psychology. An imaginary companion can be considered the product of creativity, whereas the communication between the imaginary friend and the child is the process.
With regard to birth order, there is also research on children who do not have any siblings at all. The research in this area further investigates the notion that children create imaginary companions due to the absence of peer relationships. A study that examined differences in self-talk frequency as a function of age, only-child status, and imaginary childhood companion status provides insight into the commonalities of children with imaginary companions. The researchers collected information from college students who were asked if they ever had an imaginary friend as a child (Brinthaupt & Dove, 2012). There were three trials in the study, and the researchers found that there were significant differences in self-talk between different age groupings.
Their first trial indicated that only children who create imaginary companions actually engage in high levels of positive self-talk had more positive social development. They also found that women were more likely than men to have had an imaginary companion. Their findings were consistent with other research which suggests that it is more common for females to have imaginary companions. The researchers suggested that women may be more likely to have imaginary companions because they are more likely to rely on feedback from persons other than themselves, thus supporting the theory that men have more self reinforcing self-talk.
Furthermore, other research has concluded that women seek more social support than men, which could be another possibility for creating these imaginary companions. The second trial found that children without siblings reported more self-talk than children with siblings; the third trial found that the students who reported having an imaginary friend also reported more self-talk than the other students who did not have imaginary friends. When self-talk is negative, it is associated with effects such as increased anxiety and depression. The researchers concluded that "individuals with higher levels of social-assessment and critical self-talk reported lower self-esteem and more frequent automatic negative self-statements." When self-talk is positive, however, the study found that "people with higher levels of self-reinforcing self-talk reported more positive self-esteem and more frequent automatic positive self-statements".
See also
References
Further reading
Gleason, T. (2009). 'Imaginary companions.' In Harry T. Reis & Susan Sprecher (Eds.), Encyclopedia of Human Relationships (pp. 833–834). Thousand Oaks, CA: Sage.
Hall, E. (1982). 'The fearful child's hidden talents [Interview with Jerome Kagan].' Psychology Today, 16 (July), 50–59.
Partington, J., & Grant, C. (1984). 'Imaginary playmates and other useful fantasies.' In P. Smith (Ed.), Play in animals and humans (pp. 217–240). New York: Basil Blackwell.
Imaginary Friends with Dr Evan Kidd podcast interview with Dr Evan Kidd of La Trobe University
Children's games
Developmental psychology
Interpersonal relationships
Friend
Stock characters
Fantasy tropes
Science fiction themes
Hallucinations
Nonexistent things | Imaginary friend | [
"Biology"
] | 3,092 | [
"Behavior",
"Developmental psychology",
"Behavioural sciences",
"Interpersonal relationships",
"Human behavior"
] |
357,138 | https://en.wikipedia.org/wiki/Behram%20Kur%C5%9Funo%C4%9Flu | Behram Kurşunoğlu (14 March 1922 – 25 October 2003) was a Turkish physicist and the founder and the director of the Center for Theoretical Studies, University of Miami. He was best known for his works on unified field theory, energy and global issues.
He participated in the discovery of two different types of neutrinos in the late 1950s. During his University of Miami career, he hosted several Nobel Prize laureates, including Paul Dirac, Lars Onsager and Robert Hofstadter. He wrote several books on diverse aspects of physics, the most notable of which is Modern Quantum Theory (1962).
Early life and education
Behram Kurşunoğlu was born in Çaykara district of Trabzon. While he was a third year student in the Department of Mathematics and Astronomy of İstanbul Yüksek Öğretmen Okulu, he was sent to University of Edinburgh through a scholarship of the Turkish Ministry of Education, in 1945.
After graduating from the University of Edinburgh, he completed his doctorate degree in physics at the University of Cambridge. During the period of 1956–1958, he served as the dean of the Faculty of Nuclear Sciences and Technology at Middle East Technical University and a counselor to the office of Turkish General Staff. He held teaching positions at several universities in the United States, and starting from 1958, professorship at the University of Miami.
Career
University of Miami
In 1965, he acted as one of the founders of the Center for Theoretical Studies of the University of Miami, of which he became the first director. During this period, he also worked in counseling positions for several research organizations and laboratories in Europe. With the invitation of Russian Academy of Sciences, he worked as a visiting professor in the USSR during 1968.
He continued his work at the Center for Theoretical Studies of the University of Miami until 1992, after which he became the director of the Global Foundation research organization.
Kurşunoğlu died on October 25, 2003, of a heart attack, shortly before that year's Coral Gables Conference, which was a festschrift for Paul Frampton combined with a memorial for Kurşunoğlu in a conference series he had been organizing since 1964. He had three children, İsmet, Sevil and Ayda, with his wife Sevda Arif.
Publications
In Physical Review Letters:
1951 On Einstein's Unified Field Theory
1953 Derivation and Renormalization of the Tamm-Dancoff Equations
1953 Expectations from a Unified Field Theory
1953 Unified Field Theory and Born-Infeld Electrodynamics
In Physical Review:
1952 Gravitation and Electrodynamics
1954 Tamm-Dancoff Methods and Nuclear Forces
1956 Transformation of Relativistic Wave Equations
1957 Proton Bremsstrahlung
1963 Brownian Motion in a Magnetic Field
1964 New Symmetry Group for Elementary Particles. I. Generalization of Lorentz Group Via Electrodynamics
1967 Space-Time and Origin of Internal Symmetries
1968 Dynamical Theory of Hadrons and Leptons
In Physical Review D:
1970 Theory of Relativistic Supermultiplets. I. Baryon Spectroscopy
1970 Theory of Relativistic Supermultiplets. II. Periodicities in Hadron Spectroscopy
1974 Gravitation and magnetic charge
1975 Erratum: Gravitation and magnetic charge
1976 Consequences of nonlinearity in the generalized theory of gravitation
1976 Velocity of light in generalized theory of gravitation
In Journal of Mathematical Physics:
1961 Complex Orthogonal and Antiorthogonal Representation of Lorentz Group
1967 Unitary Representations of U(2, 2) and Massless Fields
In Reviews of Modern Physics:
1957 Correspondence in the Generalized Theory of Gravitation
Awards
Fellow of the American Physical Society (1965)
TÜBİTAK Science Award (1972)
Award of Phi Kappa Phi honor society
Award of the Sigma Xi scientific research society
Sigma Pi Sigma award
"Science is Guidance" Award of the Atatürk Society of America (2001)
Further reading
References
External links
The Work of Behram Kursunoglu, talk presented at the 2003 Coral Gables conference by Philip D. Mannheim.
La Belle Epoque of High Energy Physics and Cosmology , webpage for the 2003 Coral Gables conference.
People from Çaykara
Turkish emigrants to the United States
Turkish physicists
1922 births
2003 deaths
Alumni of the University of Edinburgh
University of Miami faculty
Academic staff of Middle East Technical University
Fellows of the American Physical Society
Theoretical physicists
Recipients of TÜBİTAK Science Award
Members of the Turkish Academy of Sciences
American academics of Turkish descent
People from Bayburt
Members of Phi Kappa Phi | Behram Kurşunoğlu | [
"Physics"
] | 930 | [
"Theoretical physics",
"Theoretical physicists"
] |
357,170 | https://en.wikipedia.org/wiki/Ralph%20Appelbaum%20Associates | Ralph Appelbaum Associates (RAA) is one of the world's longest-established and largest museum exhibition design firms with offices in New York City, London, Beijing, Berlin, Moscow, and Dubai.
Overview
The firm was founded in 1978 by Ralph Appelbaum (born 1942), a graduate of Pratt Institute and former Peace Corps volunteer (in Peru). Appelbaum currently directs RAA's undertakings, and retains daily involvement in selected commissions.
The New York Times reported in 1999 that the firm was composed of "architects, designers, editors, model builders, historians, childhood specialists, one poet, one painter and one astrophysicist."
The company's best-known project is the United States Holocaust Memorial Museum in Washington, D.C., which is the United States' official memorial to the Holocaust. Established in 1993, the museum has been described as a "turning point in museology".
Major projects
Selected works
See also
Local Projects, U.S. firm
Event Communications, U.K. firm
Gallagher & Associates, U.S. firm
Xenario, Shanghai/U.S. firm
Cultural tourism
Exhibit design
Exhibition designer
References
External links
Ralph Appelbaum Associates website
1978 establishments in New York (state)
Design companies established in 1978
Companies based in New York City
Museum companies
Museum designers
Exhibition designers
Environmental design
American companies established in 1978 | Ralph Appelbaum Associates | [
"Engineering"
] | 281 | [
"Environmental design",
"Design"
] |
357,190 | https://en.wikipedia.org/wiki/Kondo%20effect | In physics, the Kondo effect describes the scattering of conduction electrons in a metal due to magnetic impurities, resulting in a characteristic change i.e. a minimum in electrical resistivity with temperature.
The cause of the effect was first explained by Jun Kondo, who applied third-order perturbation theory to the problem to account for scattering of s-orbital conduction electrons off d-orbital electrons localized at impurities (Kondo model). Kondo's calculation predicted that the scattering rate and the resulting part of the resistivity should increase logarithmically as the temperature approaches 0 K. Extended to a lattice of magnetic impurities, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium, praseodymium, and ytterbium, and actinide elements such as uranium. The Kondo effect has also been observed in quantum dot systems.
Theory
The dependence of the resistivity $\rho$ on temperature $T$, including the Kondo effect, is written as
$$\rho(T) = \rho_0 + aT^2 + c_m \ln\frac{\mu}{T} + bT^5,$$
where $\rho_0$ is the residual resistivity, the term $aT^2$ shows the contribution from the Fermi liquid properties, and the term $bT^5$ is from the lattice vibrations; $a$, $b$, and $c_m$ are constants independent of temperature. Jun Kondo derived the third term with its logarithmic dependence on temperature and the experimentally observed concentration dependence.
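A minimal numerical sketch of this formula, with purely illustrative constants (not fitted to any material), shows how the competition between the decreasing logarithmic term and the increasing power-law terms produces a resistance minimum at a finite temperature:

```python
# Numerical check of the resistivity minimum implied by the formula above;
# rho0, a, b, c_m and mu are assumed values, not measured ones.
import numpy as np

rho0, a, b, c_m, mu = 1.0, 1e-4, 1e-8, 0.01, 100.0
T = np.linspace(0.5, 40.0, 400)            # temperature in kelvin
rho = rho0 + a*T**2 + c_m*np.log(mu/T) + b*T**5

T_min = T[np.argmin(rho)]
print(f"resistivity minimum near T = {T_min:.1f} K")
```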
History
In 1930, Walther Meissner and B. Voigt observed that the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. Similar results were discovered in other metals. Kondo described the three puzzling aspects that frustrated previous researchers who tried to explain the effect:
The resistivity of a truly pure metal is expected to decrease monotonically, because with lower temperature, the probability of electron-phonon scattering decreases.
The resistivity should rapidly plateau when the temperature drops below the Debye temperature of the phonons, corresponding with the highest allowed mode of vibration of the metal. However, in the AuFe alloy, the resistivity continues to rise sharply below 0.01 K, yet there seemed to be no energy gap in the AuFe alloy that small.
The phenomenon is universal, so any explanation should apply in general.
Experiments in the 1960s by Myriam Sarachik at Bell Laboratories showed that the phenomenon was caused by magnetic impurities in nominally pure metals. When Kondo sent a preview of his paper to Sarachik, Sarachik confirmed that the data fit the theory.
Kondo's solution was derived using perturbation theory resulting in a divergence as the temperature approaches 0 K, but later methods used non-perturbative techniques to refine his result. These improvements produced a finite resistivity but retained the feature of a resistance minimum at a non-zero temperature. One defines the Kondo temperature as the energy scale limiting the validity of the Kondo results. The Anderson impurity model and accompanying Wilsonian renormalization theory were an important contribution to understanding the underlying physics of the problem. Based on the Schrieffer–Wolff transformation, it was shown that the Kondo model lies in the strong coupling regime of the Anderson impurity model. The Schrieffer–Wolff transformation projects out the high energy charge excitations in the Anderson impurity model, obtaining the Kondo model as an effective Hamiltonian.
The Kondo effect can be considered as an example of asymptotic freedom, i.e. a situation where the coupling becomes non-perturbatively strong at low temperatures and low energies. In the Kondo problem, the coupling refers to the interaction between the localized magnetic impurities and the itinerant electrons.
Examples
Extended to a lattice of magnetic ions, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium, praseodymium, and ytterbium, and actinide elements such as uranium. In heavy fermion materials, the non-perturbative growth of the interaction leads to quasi-electrons with masses up to thousands of times the free electron mass, i.e., the electrons are dramatically slowed by the interactions. In a number of instances they are superconductors. It is believed that a manifestation of the Kondo effect is necessary for understanding the unusual metallic delta-phase of plutonium.
The Kondo effect has been observed in quantum dot systems. In such systems, a quantum dot with at least one unpaired electron behaves as a magnetic impurity, and when the dot is coupled to a metallic conduction band, the conduction electrons can scatter off the dot. This is completely analogous to the more traditional case of a magnetic impurity in a metal.
Band-structure hybridization and flat band topology in Kondo insulators have been imaged in angle-resolved photoemission spectroscopy experiments.
In 2012, Beri and Cooper proposed a topological Kondo effect could be found with Majorana fermions, while it has been shown that quantum simulations with ultracold atoms may also demonstrate the effect.
In 2017, teams from the Vienna University of Technology and Rice University conducted experiments into the development of new materials made from the metals cerium, bismuth and palladium in specific combinations and theoretical work experimenting with models of such structures, respectively. The results of the experiments were published in December 2017 and, together with the theoretical work, lead to the discovery of a new state, a correlation-driven Weyl semimetal. The team dubbed this new quantum material Weyl-Kondo semimetal.
References
Further reading
Kondo Effect - 40 Years after the Discovery - special issue of the Journal of the Physical Society of Japan
. Monograph by Kondo himself.
Monograph on newer versions of the Kondo effect in non-magnetic contexts especially
Electrical resistance and conductance
Correlated electrons
Electric and magnetic fields in matter
Physical phenomena | Kondo effect | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,228 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Correlated electrons",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
357,328 | https://en.wikipedia.org/wiki/Gr%C3%B6bner%20basis | In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring over a field . A Gröbner basis allows many important properties of the ideal and the associated algebraic variety to be deduced easily, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and computing the images of algebraic varieties under projections or rational maps.
Gröbner basis computation can be seen as a multivariate, non-linear generalization of both Euclid's algorithm for computing polynomial greatest common divisors, and
Gaussian elimination for linear systems.
Gröbner bases were introduced by Bruno Buchberger in his 1965 Ph.D. thesis, which also included an algorithm to compute them (Buchberger's algorithm). He named them after his advisor Wolfgang Gröbner. In 2007, Buchberger received the Association for Computing Machinery's Paris Kanellakis Theory and Practice Award for this work.
However, the Russian mathematician Nikolai Günther had introduced a similar notion in 1913, published in various Russian mathematical journals. These papers were largely ignored by the mathematical community until their rediscovery in 1987 by Bodo Renschuch et al. An analogous concept for multivariate power series was developed independently by Heisuke Hironaka in 1964, who named them standard bases. This term has been used by some authors to also denote Gröbner bases.
The theory of Gröbner bases has been extended by many authors in various directions. It has been generalized to other structures such as polynomials over principal ideal rings or polynomial rings, and also some classes of non-commutative rings and algebras, like Ore algebras.
Tools
Polynomial ring
Gröbner bases are primarily defined for ideals in a polynomial ring over a field . Although the theory works for any field, most Gröbner basis computations are done either when is the field of rationals or the integers modulo a prime number.
In the context of Gröbner bases, a nonzero polynomial in is commonly represented as a sum where the are nonzero elements of , called coefficients, and the are monomials (called power products by Buchberger and some of his followers) of the form where the are nonnegative integers. The vector is called the exponent vector of the monomial. When the list of the variables is fixed, the notation of monomials is often abbreviated as
Monomials are uniquely defined by their exponent vectors, and, when a monomial ordering (see below) is fixed, a polynomial is uniquely represented by the ordered list of the ordered pairs formed by an exponent vector and the corresponding coefficient. This representation of polynomials is especially efficient for Gröbner basis computation in computers, although it is less convenient for other computations such as polynomial factorization and polynomial greatest common divisor.
If is a finite set of polynomials in the polynomial ring , the ideal generated by is the set of linear combinations of elements of with coefficients in ; that is the set of polynomials that can be written with
Monomial ordering
All operations related to Gröbner bases require the choice of a total order on the monomials, with the following properties of compatibility with multiplication. For all monomials , , ,
.
A total order satisfying these conditions is sometimes called an admissible ordering.
These conditions imply that the order is a well-order, that is, every strictly decreasing sequence of monomials is finite.
Although Gröbner basis theory does not depend on a particular choice of an admissible monomial ordering, three monomial orderings are especially important for the applications:
Lexicographical ordering, commonly called lex or plex (for pure lexical ordering).
Total degree reverse lexicographical ordering, commonly called degrevlex.
Elimination ordering, lexdeg.
Gröbner basis theory was initially introduced for the lexicographical ordering. It was soon realised that the Gröbner basis for degrevlex is almost always much easier to compute, and that it is almost always easier to compute a lex Gröbner basis by first computing the degrevlex basis and then using a "change of ordering algorithm". When elimination is needed, degrevlex is not convenient; both lex and lexdeg may be used but, again, many computations are relatively easy with lexdeg and almost impossible with lex.
Basic operations
Leading term, coefficient and monomial
Once a monomial ordering is fixed, the terms of a polynomial (product of a monomial with its nonzero coefficient) are naturally ordered by decreasing monomials (for this order). This makes the representation of a polynomial as a sorted list of pairs coefficient–exponent vector a canonical representation of the polynomials (that is, two polynomials are equal if and only if they have the same representation).
The first (greatest) term of a polynomial for this ordering and the corresponding monomial and coefficient are respectively called the leading term, leading monomial and leading coefficient and denoted, in this article, and .
Most polynomial operations related to Gröbner bases involve the leading terms. So, the representation of polynomials as sorted lists makes these operations particularly efficient (reading the first element of a list takes a constant time, independently of the length of the list).
Polynomial operations
The other polynomial operations involved in Gröbner basis computations are also compatible with the monomial ordering; that is, they can be performed without reordering the result:
The addition of two polynomials consists in a merge of the two corresponding lists of terms, with a special treatment in the case of a conflict (that is, when the same monomial appears in the two polynomials).
The multiplication of a polynomial by a scalar consists of multiplying each coefficient by this scalar, without any other change in the representation.
The multiplication of a polynomial by a monomial consists of multiplying each monomial of the polynomial by . This does not change the term ordering by definition of a monomial ordering.
Divisibility of monomials
Let and be two monomials, with exponent vectors and
One says that divides , or that is a multiple of , if for every ; that is, if is componentwise not greater than . In this case, the quotient is defined as In other words, the exponent vector of is the componentwise subtraction of the exponent vectors of and .
The greatest common divisor of and is the monomial whose exponent vector is the componentwise minimum of and . The least common multiple is defined similarly with instead of .
One has
Reduction
The reduction of a polynomial by other polynomials with respect to a monomial ordering is central to Gröbner basis theory. It is a generalization of both row reduction occurring in Gaussian elimination and division steps of the
Euclidean division of univariate polynomials. When completed as much as possible, it is sometimes called multivariate division although its result is not uniquely defined.
Lead-reduction is a special case of reduction that is easier to compute. It is fundamental for Gröbner basis computation, since general reduction is needed only at the end of a Gröbner basis computation, for getting a reduced Gröbner basis from a non-reduced one.
Let an admissible monomial ordering be fixed, to which refers every monomial comparison that will occur in this section.
A polynomial is lead-reducible by another polynomial if the leading monomial is a multiple of . The polynomial is reducible by if some monomial of is a multiple . (So, if is lead-reducible by , it is also reducible, but may be reducible without being lead-reducible.)
Suppose that is reducible by , and let be a term of such that the monomial is a multiple of . A one-step reduction of by consists of replacing by
This operation removes the monomial from without changing the terms with a monomial greater than (for the monomial ordering). In particular, a one step lead-reduction of produces a polynomial all of whose monomials are smaller than .
Given a finite set of polynomials, one says that is reducible or lead-reducible by if it is reducible or lead-reducible, respectively, by at least one element of . In this case, a one-step reduction (resp. one-step lead-reduction) of by is any one-step reduction (resp. one-step lead-reduction) of by an element of .
The (complete) reduction (resp. lead-reduction) of by consists of iterating one-step reductions (resp. one-step lead-reductions) until getting a polynomial that is irreducible (resp. lead-irreducible) by . It is sometimes called a normal form of by . In general this form is not uniquely defined because there are, in general, several elements of that can be used for reducing ; this non-uniqueness is the starting point of Gröbner basis theory.
The definition of the reduction shows immediately that, if is a normal form of by , one has
where is irreducible by and the are polynomials such that In the case of univariate polynomials, if consists of a single element , then is the remainder of the Euclidean division of by , and is the quotient. Moreover, the division algorithm is exactly the process of lead-reduction. For this reason, some authors use the term multivariate division instead of reduction.
Non uniqueness of reduction
In the example that follows, there are exactly two complete lead-reductions that produce two very different results. The fact that the results are irreducible (not only lead-irreducible) is specific to the example, although this is rather common with such small examples.
In this two variable example, the monomial ordering that is used is the lexicographic order with and we consider the reduction of , by with
For the first reduction step, either the first or the second term of may be reduced. However, the reduction of a term amounts to removing this term at the cost of adding new lower terms; if it is not the first reducible term that is reduced, it may occur that a further reduction adds a similar term, which must be reduced again. It is therefore always better to reduce first the largest (for the monomial order) reducible term; that is, in particular, to lead-reduce first until getting a lead-irreducible polynomial.
The leading term of is reducible by and not by So the first reduction step consists of multiplying by and adding the result to :
The leading term of is a multiple of the leading monomials of both and So, one has two choices for the second reduction step. If one chooses one gets a polynomial that can be reduced again by
No further reduction is possible, so is a complete reduction of .
One gets a different result with the other choice for the second step:
Again, the result is irreducible, although only lead reductions were done.
In summary, the complete reduction of can result in either or
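The same phenomenon can be reproduced with a computer algebra system; the following sketch uses SymPy's reduced (multivariate division) on an assumed pair of divisors that is not a Gröbner basis, so the remainder may change when the divisors are listed in a different order:

```python
# A minimal sketch of multivariate division with SymPy; g1, g2 are
# illustrative divisors that do not form a Gröbner basis, so the remainder
# can depend on the order in which they are tried.
from sympy import symbols, reduced

x, y = symbols('x y')
f  = x**2*y + x*y**2 + y**2
g1 = x*y - 1
g2 = y**2 - 1

_, r1 = reduced(f, [g1, g2], x, y, order='lex')
_, r2 = reduced(f, [g2, g1], x, y, order='lex')
print(r1)   # remainder when dividing by [g1, g2]
print(r2)   # possibly a different remainder when dividing by [g2, g1]
```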
It is for dealing with the problems set by this non-uniqueness that Buchberger introduced Gröbner bases and -polynomials. Intuitively, may be reduced to This implies that belongs to the ideal generated by . So, this ideal is not changed by adding to , and this allows more reductions. In particular, can be reduced to by and this restores the uniqueness of the reduced form.
Here Buchberger's algorithm for Gröbner bases would begin by adding to the polynomial
This polynomial, called -polynomial by Buchberger, is the difference of the one-step reductions of the least common multiple of the leading monomials of and , by and respectively:
.
In this example, one has This does not complete Buchberger's algorithm, as gives different results, when reduced by or
S-polynomial
Given monomial ordering, the S-polynomial or critical pair of two polynomials and is the polynomial
;
where denotes the least common multiple of the leading monomials of and .
Using the definition of , this translates to:
Using the property that relates the and the , the S-polynomial can also be written as:
where denotes the greatest common divisor of the leading monomials of and .
As the monomials that are reducible by both and are exactly the multiples of , one can deal with all cases of non-uniqueness of the reduction by considering only the S-polynomials. This is a fundamental fact for Gröbner basis theory and all algorithms for computing them.
For avoiding fractions when dealing with polynomials with integer coefficients, the S-polynomial is often defined as
This does not change anything in the theory, since the two polynomials are associates.
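As a concrete illustration, the S-polynomial can be computed directly from the leading terms; this sketch uses SymPy's LM and LT helpers with illustrative polynomials (not those of the article's example):

```python
# A minimal sketch of the S-polynomial construction; the leading terms of f
# and g cancel by construction. The polynomials f and g are illustrative.
from sympy import symbols, expand, lcm, LM, LT

x, y = symbols('x y')
order = 'grevlex'

def s_polynomial(f, g):
    """S-polynomial of f and g for the chosen monomial ordering."""
    m = lcm(LM(f, x, y, order=order), LM(g, x, y, order=order))
    return expand(m/LT(f, x, y, order=order)*f - m/LT(g, x, y, order=order)*g)

f = x**3*y**2 - x**2*y**3 + x
g = 3*x**4*y + y**2
print(s_polynomial(f, g))
```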
Definition
Let be a polynomial ring over a field . In this section, we suppose that an admissible monomial ordering has been fixed.
Let be a finite set of polynomials in that generates an ideal . The set is a Gröbner basis (with respect to the monomial ordering), or, more precisely, a Gröbner basis of if
the ideal generated by the leading monomials of the polynomials in equals the ideal generated by the leading monomials of ,
or, equivalently,
There are many characterizing properties, which can each be taken as an equivalent definition of Gröbner bases. For conciseness, in the following list, the notation "one-word/another word" means that one can take either "one-word" or "another word" for having two different characterizations of Gröbner bases. All the following assertions are characterizations of Gröbner bases:
Counting the above definition, this provides 12 characterizations of Gröbner bases. The fact that so many characterizations are possible makes Gröbner bases very useful. For example, condition 3 provides an algorithm for testing ideal membership; condition 4 provides an algorithm for testing whether a set of polynomials is a Gröbner basis and forms the basis of Buchberger's algorithm for computing Gröbner bases; conditions 5 and 6 allow computing in in a way that is very similar to modular arithmetic.
Existence
For every admissible monomial ordering and every finite set of polynomials, there is a Gröbner basis that contains and generates the same ideal. Moreover, such a Gröbner basis may be computed with Buchberger's algorithm.
This algorithm uses condition 4, and proceeds roughly as follows: for any two elements of , compute the complete reduction by of their S-polynomial, and add the result to if it is not zero; repeat this operation with the new elements of included until, eventually, all reductions produce zero.
The algorithm always terminates because of Dickson's lemma or because polynomial rings are Noetherian (Hilbert's basis theorem). Condition 4 ensures that the result is a Gröbner basis, and the definitions of S-polynomials and reduction ensure that the generated ideal is not changed.
The above method is an algorithm for computing Gröbner bases; however, it is very inefficient. Many improvements of the original Buchberger's algorithm, and several other algorithms, have been proposed and implemented, which dramatically improve the efficiency. See below.
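A minimal sketch of this basic (unoptimized) loop, written with SymPy primitives and illustrative input polynomials, is shown below; SymPy's built-in groebner function, which implements an optimized variant, is called at the end for comparison:

```python
# A naive Buchberger loop: reduce each S-polynomial by the current basis and
# append any nonzero remainder until every S-polynomial reduces to zero.
# The generators f1, f2 are illustrative, and no reduced basis is computed.
from sympy import symbols, expand, lcm, reduced, LM, LT, groebner

x, y = symbols('x y')
order = 'lex'

def s_polynomial(f, g):
    m = lcm(LM(f, x, y, order=order), LM(g, x, y, order=order))
    return expand(m/LT(f, x, y, order=order)*f - m/LT(g, x, y, order=order)*g)

def buchberger(F):
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        sp = s_polynomial(G[i], G[j])
        if sp == 0:
            continue
        _, r = reduced(sp, G, x, y, order=order)
        if r != 0:
            pairs += [(len(G), k) for k in range(len(G))]
            G.append(r)
    return G

f1 = x**3 - 2*x*y              # illustrative generators
f2 = x**2*y - 2*y**2 + x
print(buchberger([f1, f2]))                          # a (non-reduced) Gröbner basis
print(list(groebner([f1, f2], x, y, order=order)))   # SymPy's reduced basis
```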
Reduced Gröbner bases
A Gröbner basis is minimal if all leading monomials of its elements are irreducible by the other elements of the basis. Given a Gröbner basis of an ideal , one gets a minimal Gröbner basis of by removing the polynomials whose leading monomials are multiples of the leading monomial of another element of the Gröbner basis. However, if two polynomials of the basis have the same leading monomial, only one must be removed. So, every Gröbner basis contains a minimal Gröbner basis as a subset.
All minimal Gröbner bases of a given ideal (for a fixed monomial ordering) have the same number of elements, and the same leading monomials, and the non-minimal Gröbner bases have more elements than the minimal ones.
A Gröbner basis is reduced if every polynomial in it is irreducible by the other elements of the basis, and has 1 as leading coefficient. So, every reduced Gröbner basis is minimal, but a minimal Gröbner basis need not be reduced.
Given a Gröbner basis of an ideal , one gets a reduced Gröbner basis of by first removing the polynomials that are lead-reducible by other elements of the basis (for getting a minimal basis); then replacing each element of the basis by the result of the complete reduction by the other elements of the basis; and, finally, by dividing each element of the basis by its leading coefficient.
All reduced Gröbner bases of an ideal (for a fixed monomial ordering) are equal. It follows that two ideals are equal if and only if they have the same reduced Gröbner basis.
Sometimes, reduced Gröbner bases are defined without the condition on the leading coefficients. In this case, the uniqueness of reduced Gröbner bases is true only up to the multiplication of polynomials by a nonzero constant.
When working with polynomials over the field of the rational numbers, it is useful to work only with polynomials with integer coefficients. In this case, the condition on the leading coefficients in the definition of a reduced basis may be replaced by the condition that all elements of the basis are primitive polynomials with integer coefficients, with positive leading coefficients. This restores the uniqueness of reduced bases.
Special cases
For every monomial ordering, the empty set of polynomials is the unique Gröbner basis of the zero ideal.
For every monomial ordering, a set of polynomials that contains a nonzero constant is a Gröbner basis of the unit ideal (the whole polynomial ring). Conversely, every Gröbner basis of the unit ideal contains a nonzero constant. The reduced Gröbner basis of the unit ideal is formed by the single polynomial 1.
In the case of polynomials in a single variable, there is a unique admissible monomial ordering, the ordering by the degree. The minimal Gröbner bases are the singletons consisting of a single polynomial. The reduced Gröbner bases are the monic polynomials.
Example and counterexample
Let be the ring of bivariate polynomials with rational coefficients and consider the ideal generated by the polynomials
,
.
By reducing by , one obtains a new polynomial such that
None of and is reducible by the other, but is reducible by , which gives another polynomial in :
Under lexicographic ordering with we have
As and belong to , and none of them is reducible by the others, none of and is a Gröbner basis of .
On the other hand, is a Gröbner basis of , since the S-polynomials
can be reduced to zero by and .
The method that has been used here for finding and , and proving that is a Gröbner basis, is a direct application of Buchberger's algorithm. So, it can be applied mechanically to any similar example, although, in general, there are many polynomials and S-polynomials to consider, and the computation is generally too large to be done without a computer.
Properties and applications of Gröbner bases
Unless explicitly stated, all the results that follow are true for any monomial ordering (see that article for the definitions of the different orders that are mentioned below).
It is a common misconception that the lexicographical order is needed for some of these results. On the contrary, the lexicographical order is, almost always, the most difficult to compute, and using it makes impractical many computations that are relatively easy with graded reverse lexicographic order (grevlex), or, when elimination is needed, the elimination order (lexdeg) which restricts to grevlex on each block of variables.
Equality of ideals
Reduced Gröbner bases are unique for any given ideal and any monomial ordering. Thus two ideals are equal if and only if they have the same (reduced) Gröbner basis (Gröbner basis software usually produces reduced Gröbner bases).
Membership and inclusion of ideals
The reduction of a polynomial f by the Gröbner basis G of an ideal yields 0 if and only if f is in . This allows testing the membership of an element in an ideal. Another method consists in verifying that the Gröbner basis of G∪{f} is equal to G.
To test if the ideal generated by f1, ..., fk is contained in the ideal , it suffices to test that every is in . One may also test the equality of the reduced Gröbner bases of and ∪ {f1, ...,fk}.
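A minimal sketch of this membership test with SymPy, using an assumed ideal: the element belongs to the ideal exactly when its remainder on division by a Gröbner basis is zero.

```python
# Ideal membership via reduction by a Gröbner basis; F generates an
# illustrative ideal, not one taken from the article.
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
F = [x**2 + y**2 - 1, x*y - 1]
G = list(groebner(F, x, y, order='grevlex'))

def in_ideal(f):
    _, r = reduced(f, G, x, y, order='grevlex')
    return r == 0

print(in_ideal(x**3 + 2*x*y**2 - x - y))   # True: equals x*F[0] + y*F[1]
print(in_ideal(x + y))                     # expected False
```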
Solutions of a system of algebraic equations
Any set of polynomials may be viewed as a system of polynomial equations by equating the polynomials to zero. The set of the solutions of such a system depends only on the generated ideal, and, therefore does not change when the given generating set is replaced by the Gröbner basis, for any ordering, of the generated ideal. Such a solution, with coordinates in an algebraically closed field containing the coefficients of the polynomials, is called a zero of the ideal. In the usual case of rational coefficients, this algebraically closed field is chosen as the complex field.
An ideal does not have any zero (the system of equations is inconsistent) if and only if 1 belongs to the ideal (this is Hilbert's Nullstellensatz), or, equivalently, if its Gröbner basis (for any monomial ordering) contains 1, or, also, if the corresponding reduced Gröbner basis is [1].
Given the Gröbner basis G of an ideal I, the ideal has only a finite number of zeros if and only if, for each variable x, G contains a polynomial with a leading monomial that is a power of x (without any other variable appearing in the leading term). If this is the case, then the number of zeros, counted with multiplicity, is equal to the number of monomials that are not multiples of any leading monomial of G. This number is called the degree of the ideal.
When the number of zeros is finite, the Gröbner basis for a lexicographical monomial ordering provides, theoretically, a solution: the first coordinate of a solution is a root of the greatest common divisor of polynomials of the basis that depend only on the first variable. After substituting this root in the basis, the second coordinate of this solution is a root of the greatest common divisor of the resulting polynomials that depend only on the second variable, and so on. This solving process is only theoretical, because it implies GCD computation and root-finding of polynomials with approximate coefficients, which are not practicable because of numeric instability. Therefore, other methods have been developed to solve polynomial systems through Gröbner bases (see System of polynomial equations for more details).
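For zero-dimensional systems, this is automated in computer algebra systems; for instance, SymPy's solve_poly_system (which relies on Gröbner basis computations internally) solves the following assumed system, the intersection of a circle and a line:

```python
# A minimal sketch: solving a small zero-dimensional polynomial system.
from sympy import symbols, solve_poly_system

x, y = symbols('x y')
print(solve_poly_system([x**2 + y**2 - 1, x - y], x, y))
```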
Dimension, degree and Hilbert series
The dimension of an ideal I in a polynomial ring R is the Krull dimension of the ring R/I and is equal to the dimension of the algebraic set of the zeros of I. It is also equal to the number of hyperplanes in general position which are needed to have an intersection with the algebraic set, which is a finite number of points. The degree of the ideal and of its associated algebraic set is the number of points of this finite intersection, counted with multiplicity. In particular, the degree of a hypersurface is equal to the degree of its defining polynomial.
Both degree and dimension depend only on the set of the leading monomials of the Gröbner basis of the ideal for any monomial ordering.
The dimension is the maximal size of a subset S of the variables such that there is no leading monomial depending only on the variables in S. Thus, if the ideal has dimension 0, then for each variable x there is a leading monomial in the Gröbner basis that is a power of x.
Both dimension and degree may be deduced from the Hilbert series of the ideal, which is the series , where is the number of monomials of degree i that are not multiples of any leading monomial in the Gröbner basis. The Hilbert series may be summed into a rational fraction
where d is the dimension of the ideal and is a polynomial such that is the degree of the ideal.
Although the dimension and the degree do not depend on the choice of the monomial ordering, the Hilbert series and the polynomial change when one changes the monomial ordering.
Most computer algebra systems that provide functions to compute Gröbner bases provide also functions for computing the Hilbert series, and thus also the dimension and the degree.
Elimination
The computation of Gröbner bases for an elimination monomial ordering allows computational elimination theory. This is based on the following theorem.
Consider a polynomial ring in which the variables are split into two subsets X and Y. Let us also choose an elimination monomial ordering "eliminating" X, that is a monomial ordering for which two monomials are compared by comparing first the X-parts, and, in case of equality only, considering the Y-parts. This implies that a monomial containing an X-variable is greater than every monomial independent of X.
If G is a Gröbner basis of an ideal I for this monomial ordering, then is a Gröbner basis of (this ideal is often called the elimination ideal). Moreover, consists exactly of the polynomials of whose leading terms belong to (this makes the computation of very easy, as only the leading monomials need to be checked).
This elimination property has many applications, some described in the next sections.
Another application, in algebraic geometry, is that elimination realizes the geometric operation of projection of an affine algebraic set into a subspace of the ambient space: with the above notation, the Zariski closure of the projection of the algebraic set defined by the ideal I into the Y-subspace is defined by the ideal
The lexicographical ordering such that is an elimination ordering for every partition. Thus a Gröbner basis for this ordering carries much more information than is usually necessary. This may explain why Gröbner bases for the lexicographical ordering are usually the most difficult to compute.
Intersecting ideals
If and are two ideals generated respectively by {f1, ..., fm} and {g1, ..., gk}, then a single Gröbner basis computation produces a Gröbner basis of their intersection . For this, one introduces a new indeterminate t, and one uses an elimination ordering such that the first block contains only t and the other block contains all the other variables (this means that a monomial containing t is greater than every monomial that does not contain t). With this monomial ordering, a Gröbner basis of the intersection consists of the polynomials that do not contain t in the Gröbner basis of the ideal
In other words, is obtained by eliminating t in K.
This may be proven by observing that the ideal K consists of the polynomials such that and . Such a polynomial is independent of t if and only if , which means that
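A minimal sketch of this construction with SymPy, using assumed generators: the auxiliary variable t is eliminated with a lex ordering in which t is the largest variable.

```python
# Intersection of two ideals via the auxiliary variable t: form t*I + (1-t)*J
# and keep the basis elements that do not involve t. Generators are illustrative.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
F = [x**2 - y]          # generators of the first ideal
H = [x*y - 1]           # generators of the second ideal
K = [t*f for f in F] + [(1 - t)*h for h in H]

GB = groebner(K, t, x, y, order='lex')     # lex with t > x > y eliminates t
print([p for p in GB if t not in p.free_symbols])
```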
Implicitization of a rational curve
A rational curve is an algebraic curve that has a set of parametric equations of the form
where and are univariate polynomials for 1 ≤ i ≤ n. One may (and will) suppose that and are coprime (they have no non-constant common factors).
Implicitization consists in computing the implicit equations of such a curve. In case of n = 2, that is for plane curves, this may be computed with the resultant. The implicit equation is the following resultant:
Elimination with Gröbner bases allows one to implicitize for any value of n, simply by eliminating t in the ideal
If n = 2, the result is the same as with the resultant, if the map is injective for almost every t. In the other case, the resultant is a power of the result of the elimination.
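A minimal sketch with an assumed parametrization: eliminating t from the ideal of x − t² and y − t³ recovers the implicit equation of the cuspidal cubic.

```python
# Implicitization by elimination: the parametric curve x = t**2, y = t**3.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
GB = groebner([x - t**2, y - t**3], t, x, y, order='lex')
print([p for p in GB if t not in p.free_symbols])   # expected: [x**3 - y**2]
```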
Saturation
When modeling a problem by polynomial equations, it is often assumed that some quantities are non-zero, so as to avoid degenerate cases. For example, when dealing with triangles, many properties become false if the triangle degenerates to a line segment, i.e. the length of one side is equal to the sum of the lengths of the other sides. In such situations, one cannot deduce relevant information from the polynomial system unless the degenerate solutions are ignored. More precisely, the system of equations defines an algebraic set which may have several irreducible components, and one must remove the components on which the degeneracy conditions are everywhere zero.
This is done by saturating the equations by the degeneracy conditions, which may be done via the elimination property of Gröbner bases.
Definition of the saturation
The localization of a ring consists in adjoining to it the formal inverses of some elements. This section concerns only the case of a single element, or equivalently a finite number of elements (adjoining the inverses of several elements is equivalent to adjoining the inverse of their product). The localization of a ring R by an element f is the ring where t is a new indeterminate representing the inverse of f. The localization of an ideal of R is the ideal of When R is a polynomial ring, computing in is not efficient because of the need to manage the denominators. Therefore, localization is usually replaced by the operation of saturation.
The saturation with respect to f of an ideal in R is the inverse image of under the canonical map from R to It is the ideal consisting of all elements of R whose product with some power of f belongs to .
If is the ideal generated by and 1−ft in R[t], then It follows that, if R is a polynomial ring, a Gröbner basis computation eliminating t produces a Gröbner basis of the saturation of an ideal by a polynomial.
The important property of the saturation, which ensures that it removes from the algebraic set defined by the ideal the irreducible components on which the polynomial f is zero, is the following: The primary decomposition of consists of the components of the primary decomposition of I that do not contain any power of f.
Computation of the saturation
A Gröbner basis of the saturation by f of a polynomial ideal generated by a finite set of polynomials F, may be obtained by eliminating t in that is by keeping the polynomials independent of t in the Gröbner basis of for an elimination ordering eliminating t.
Instead of using F, one may also start from a Gröbner basis of F. Which method is most efficient depends on the problem. However, if the saturation does not remove any component, that is if the ideal is equal to its saturated ideal, computing first the Gröbner basis of F is usually faster. On the other hand, if the saturation removes some components, the direct computation may be dramatically faster.
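A minimal sketch of this computation with SymPy, on an assumed ideal: the ideal generated by xy defines the two coordinate axes, and saturating by x removes the component contained in the hyperplane x = 0.

```python
# Saturation by elimination: add 1 - f*t to the generators and eliminate t.
# The ideal <x*y> and the polynomial f = x are illustrative.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
F = [x*y]
f = x
GB = groebner(F + [1 - f*t], t, x, y, order='lex')
print([p for p in GB if t not in p.free_symbols])   # expected: [y]
```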
If one wants to saturate with respect to several polynomials or with respect to a single polynomial which is a product there are three ways to proceed which give the same result but may have very different computation times (it depends on the problem which is the most efficient).
Saturating by in a single Gröbner basis computation.
Saturating by then saturating the result by and so on.
Adding to F or to its Gröbner basis the polynomials and eliminating the in a single Gröbner basis computation.
Effective Nullstellensatz
Hilbert's Nullstellensatz has two versions. The first one asserts that a set of polynomials has no common zeros over an algebraic closure of the field of the coefficients, if and only if 1 belongs to the generated ideal. This is easily tested with a Gröbner basis computation, because 1 belongs to an ideal if and only if 1 belongs to the Gröbner basis of the ideal, for any monomial ordering.
The second version asserts that the set of common zeros (in an algebraic closure of the field of the coefficients) of an ideal is contained in the hypersurface of the zeros of a polynomial f, if and only if a power of f belongs to the ideal. This may be tested by saturating the ideal by f; in fact, a power of f belongs to the ideal if and only if the saturation by f provides a Gröbner basis containing 1.
Implicitization in higher dimension
By definition, an affine rational variety of dimension k may be described by parametric equations of the form
where are n+1 polynomials in the k variables (parameters of the parameterization) Thus the parameters and the coordinates of the points of the variety are zeros of the ideal
One could guess that it suffices to eliminate the parameters to obtain the implicit equations of the variety, as it has been done in the case of curves. Unfortunately this is not always the case. If the have a common zero (sometimes called base point), every irreducible component of the non-empty algebraic set defined by the is an irreducible component of the algebraic set defined by I. It follows that, in this case, the direct elimination of the provides an empty set of polynomials.
Therefore, if k>1, two Gröbner basis computations are needed to implicitize:
Saturate by to get a Gröbner basis
Eliminate the from to get a Gröbner basis of the ideal (of the implicit equations) of the variety.
Algorithms and implementations
Buchberger's algorithm is the oldest algorithm for computing Gröbner bases. It was devised by Bruno Buchberger together with Gröbner basis theory. It is straightforward to implement, but it soon appeared that raw implementations can solve only trivial problems. The main issues are the following ones:
Even when the resulting Gröbner basis is small, the intermediate polynomials can be huge. As a result, most of the computing time may be spent on memory management. So, specialized memory management algorithms may be a fundamental part of an efficient implementation.
The integers occurring during a computation may be sufficiently large to make fast multiplication algorithms and multimodular arithmetic useful. For this reason, most optimized implementations use the GMP library. Also, modular arithmetic, the Chinese remainder theorem and Hensel lifting are used in optimized implementations.
The choice of the S-polynomials to reduce and of the polynomials used for reducing them relies on heuristics. As in many computational problems, heuristics cannot detect most hidden simplifications, and better heuristic choices can dramatically improve the efficiency of the algorithm.
In most cases, most S-polynomials that are computed are reduced to zero; that is, most of the computing time is spent computing reductions to zero.
The monomial ordering that is most often needed for the applications (pure lexicographic) is not the ordering that leads to the easiest computation, which is generally degrevlex.
For solving issue 3, many improvements, variants and heuristics were proposed before the introduction of the F4 and F5 algorithms by Jean-Charles Faugère. As these algorithms are designed for integer coefficients or for coefficients in the integers modulo a prime number, Buchberger's algorithm remains useful for more general coefficients.
Roughly speaking, the F4 algorithm solves issue 3 by replacing many S-polynomial reductions by the row reduction of a single large matrix for which advanced methods of linear algebra can be used. This partially solves issue 4, as reductions to zero in Buchberger's algorithm correspond to relations between rows of the matrix to be reduced, and the zero rows of the reduced matrix correspond to a basis of the vector space of these relations.
The F5 algorithm improves on F4 by introducing a criterion that allows reducing the size of the matrices to be reduced. This criterion is almost optimal, since the matrices to be reduced have full rank in sufficiently regular cases (in particular, when the input polynomials form a regular sequence). Tuning F5 for general use is difficult, since its performance depends on the order of the input polynomials and on a balance between incrementing the working polynomial degree and incrementing the number of input polynomials that are considered. To date (2022), there is no distributed implementation that is significantly more efficient than F4, but, over modular integers, F5 has been used successfully for several cryptographic challenges; for example, for breaking the HFE challenge.
Issue 5 has been solved by the discovery of basis conversion algorithms that start from the Gröbner basis for one monomial ordering and compute a Gröbner basis for another monomial ordering. The FGLM algorithm is such a basis conversion algorithm; it works only in the zero-dimensional case (where the polynomials have a finite number of complex common zeros) and has a polynomial complexity in the number of common zeros. A basis conversion algorithm that works in the general case is the Gröbner walk algorithm. In its original form, FGLM may be the critical step for solving systems of polynomial equations, because FGLM does not take into account the sparsity of the involved matrices. This has been fixed by the introduction of sparse FGLM algorithms.
Most general-purpose computer algebra systems have implementations of one or several algorithms for Gröbner bases, often also embedded in other functions, such as for solving systems of polynomial equations or for simplifying trigonometric functions; this is the case, for example, of CoCoA, GAP, Macaulay 2, Magma, Maple, Mathematica, SINGULAR, SageMath and SymPy. When F4 is available, it is generally much more efficient than Buchberger's algorithm. The implementation techniques and algorithmic variants are not always documented, although they may have a dramatic effect on efficiency.
Implementations of F4 and (sparse) FGLM are included in the library Msolve. Besides Gröbner algorithms, Msolve contains fast algorithms for real-root isolation, and combines all these functions in an algorithm for computing the real solutions of systems of polynomial equations that dramatically outperforms the other software for this problem (Maple and Magma). Msolve is available on GitHub, and is interfaced with Julia, Maple and SageMath; this means that Msolve can be used directly from within these software environments.
Complexity
The complexity of Gröbner basis computations is commonly evaluated in terms of the number of variables and the maximal degree of the input polynomials.
In the worst case, the main parameter of the complexity is the maximal degree of the elements of the resulting reduced Gröbner basis. More precisely, if the Gröbner basis contains an element of a large degree , this element may contain nonzero terms whose computation requires a time of On the other hand, if all polynomials in the reduced Gröbner basis of a homogeneous ideal have a degree of at most , the Gröbner basis can be computed by linear algebra on the vector space of polynomials of degree less than , which has a dimension So, the complexity of this computation is
The worst-case complexity of a Gröbner basis computation is doubly exponential in . More precisely, the complexity is upper bounded by a polynomial in Using little o notation, it is therefore bounded by On the other hand, examples have been given of reduced Gröbner bases containing polynomials of degree
or containing elements. As every algorithm for computing a Gröbner basis must write its result, this provides a lower bound of the complexity.
The computation of Gröbner bases is EXPSPACE-complete.
Generalizations
The concept and algorithms of Gröbner bases have been generalized to submodules of free modules over a polynomial ring. In fact, if is a free module over a ring , then one may consider the direct sum as a ring by defining the product of two elements of to be . This ring may be identified with , where is a basis of L. This allows identifying a submodule of generated by with the ideal of generated by and the products , . If is a polynomial ring, this reduces the theory and the algorithms of Gröbner bases of modules to the theory and the algorithms of Gröbner bases of ideals.
The concept and algorithms of Gröbner bases have also been generalized to ideals over various rings, commutative or not, like polynomial rings over a principal ideal ring or Weyl algebras.
Areas of applications
Error-Correcting Codes
Gröbner basis has been applied in the theory of error-correcting codes for algebraic decoding. By using Gröbner basis computation on various forms of error-correcting equations, decoding methods were developed for correcting errors of cyclic codes, affine variety codes, algebraic-geometric codes and even general linear block codes. Applying Gröbner basis in algebraic decoding is still a research area of channel coding theory.
See also
Bergman's diamond lemma, an extension of Gröbner bases to non-commutative rings
Graver basis
Janet basis
Regular chains, an alternative way to represent algebraic sets
References
Further reading
[This is Buchberger's thesis inventing Gröbner bases.]
(This is the journal publication of Buchberger's thesis.)
(translated from Sibirsk. Mat. Zh. Siberian Mathematics Journal 3 (1962), 292–296).
(on infinite dimensional Gröbner bases for polynomial rings in infinitely many indeterminates).
External links
Faugère's own implementation of his F4 algorithm
Comparative Timings Page for Gröbner Bases Software
Prof. Bruno Buchberger
Gröbner basis introduction on Scholarpedia
Algebraic geometry
Commutative algebra
Computer algebra
Invariant theory
Rewriting systems | Gröbner basis | [
"Physics",
"Mathematics",
"Technology"
] | 8,637 | [
"Symmetry",
"Group actions",
"Algebra",
"Computer algebra",
"Computational mathematics",
"Fields of abstract algebra",
"Computer science",
"Algebraic geometry",
"Commutative algebra",
"Invariant theory"
] |
357,339 | https://en.wikipedia.org/wiki/Correctness%20%28computer%20science%29 | In theoretical computer science, an algorithm is correct with respect to a specification if it behaves as specified. Best explored is functional correctness, which refers to the input-output behavior of the algorithm: for each input it produces an output satisfying the specification.
Within the latter notion, partial correctness, requiring that if an answer is returned it will be correct, is distinguished from total correctness, which additionally requires that an answer is eventually returned, i.e. the algorithm terminates. Correspondingly, to prove a program's total correctness, it is sufficient to prove its partial correctness, and its termination. The latter kind of proof (termination proof) can never be fully automated, since the halting problem is undecidable.
For example, when successively searching through the integers 1, 2, 3, … to see if we can find an example of some phenomenon—say an odd perfect number—it is quite easy to write a partially correct program (see the sketch below). But to say this program is totally correct would be to assert something currently not known in number theory.
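A minimal Python sketch of such a program: if it ever returns, its answer satisfies the specification, but whether it terminates is an open question, so only partial correctness can be asserted.

```python
# A partially correct search: any value returned is an odd perfect number,
# but termination (and hence total correctness) is not known.
def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors."""
    return n == sum(d for d in range(1, n) if n % d == 0)

def first_odd_perfect_number() -> int:
    n = 1
    while True:
        if n % 2 == 1 and is_perfect(n):
            return n        # if reached, the specification is met
        n += 1
```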
A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally. In particular it is not expected to be a correctness assertion for a given program implementing the algorithm on a given machine. That would involve such considerations as limitations on computer memory.
A deep result in proof theory, the Curry–Howard correspondence, states that a proof of functional correctness in constructive logic corresponds to a certain program in the lambda calculus. Converting a proof in this way is called program extraction.
Hoare logic is a specific formal system for reasoning rigorously about the correctness of computer programs. It uses axiomatic techniques to define programming language semantics and argue about the correctness of programs through assertions known as Hoare triples.
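A Hoare triple {P} C {Q} asserts that if the precondition P holds before the command C is executed, the postcondition Q holds afterwards. The sketch below (an illustrative program, not part of any standard library) mimics the triple {x = n} x := x + 1 {x = n + 1} with run-time assertions, whereas Hoare logic would establish it for all inputs by a static proof.

```python
# Run-time illustration of the Hoare triple {x == n} x := x + 1 {x == n + 1}.
def increment(x: int) -> int:
    n = x                 # ghost variable recording the initial value
    assert x == n         # precondition P
    x = x + 1             # command C
    assert x == n + 1     # postcondition Q
    return x

print(increment(41))      # 42
```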
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.
See also
Formal verification
Design by contract
Program analysis
Model checking
Compiler correctness
Program derivation
Notes
References
"Human Language Technology. Challenges for Computer Science and Linguistics." Google Books. N.p., n.d. Web. 10 April 2017.
"Security in Computing and Communications." Google Books. N.p., n.d. Web. 10 April 2017.
"The Halting Problem of Alan Turing - A Most Merry and Illustrated Explanation." The Halting Problem of Alan Turing - A Most Merry and Illustrated Explanation. N.p., n.d. Web. 10 April 2017.
Turner, Raymond, and Nicola Angius. "The Philosophy of Computer Science." Stanford Encyclopedia of Philosophy. Stanford University, 20 August 2013. Web. 10 April 2017.
Dijkstra, E. W. "Program Correctness". U of Texas at Austin, Departments of Mathematics and Computer Sciences, Automatic Theorem Proving Project, 1970. Web.
Formal methods terminology
Theoretical computer science
Software quality | Correctness (computer science) | [
"Mathematics"
] | 713 | [
"Theoretical computer science",
"Applied mathematics",
"Formal methods terminology"
] |
357,353 | https://en.wikipedia.org/wiki/Large%20Hadron%20Collider | The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories across more than 100 countries. It lies in a tunnel in circumference and as deep as beneath the France–Switzerland border near Geneva.
The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.5 TeV per beam (13.0 TeV total collision energy). At the end of 2018, it was shut down for maintenance and further upgrades, and was reopened over three years later, in April 2022.
The collider has four crossing points where the accelerated particles collide. Nine detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions, such as in lead–lead collisions and proton–lead collisions.
The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and studying other unresolved questions in particle physics.
Background
The term hadron refers to subatomic composite particles composed of quarks held together by the strong force (analogous to the way that atoms and molecules are held together by the electromagnetic force). The best-known hadrons are the baryons such as protons and neutrons; hadrons also include mesons such as the pion and kaon, which were discovered during cosmic ray experiments in the late 1940s and early 1950s.
A collider is a type of a particle accelerator that brings two opposing particle beams together such that the particles collide. In particle physics, colliders, though harder to construct, are a powerful research tool because they reach a much higher center of mass energy than fixed target setups. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. Many of these byproducts are produced only by high-energy collisions, and they decay after very short periods of time. Thus many of them are hard or nearly impossible to study in other ways.
Purpose
Many physicists hope that the Large Hadron Collider will help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among elementary particles and the deep structure of space and time, particularly the interrelation between quantum mechanics and general relativity.
These high-energy particle experiments can provide data to support different scientific models. For example, the Standard Model and Higgsless model required high-energy particle experiment data to validate their predictions and allow further theoretical development. The Standard Model was completed by detection of the Higgs boson by the LHC in 2012.
LHC collisions have explored other questions, including:
Do all known particles have supersymmetric partners, as part of supersymmetry in an extension of the Standard Model and Poincaré symmetry?
Are there extra dimensions, as predicted by various models based on string theory, and can we detect them?
What is the nature of the dark matter, a hypothetical form of matter which appears to account for 27% of the mass-energy of the universe?
Other open questions that may be explored using high-energy particle collisions include:
It is already known that electromagnetism and the weak nuclear force are different manifestations of a single force called the electroweak force. The LHC may clarify whether the electroweak force and the strong nuclear force are similarly just different manifestations of one universal unified force, as predicted by various Grand Unification Theories.
Why is the fourth fundamental force (gravity) so many orders of magnitude weaker than the other three fundamental forces? See also Hierarchy problem.
Are there additional sources of quark flavour mixing beyond those already present within the Standard Model?
Why are there apparent violations of the symmetry between matter and antimatter? See also CP violation.
What are the nature and properties of quark–gluon plasma, thought to have existed in the early universe and in certain compact and strange astronomical objects today? This will be investigated by heavy ion collisions, mainly in ALICE, but also in CMS, ATLAS and LHCb. First observed in 2010, findings published in 2012 confirmed the phenomenon of jet quenching in heavy-ion collisions.
Design
The collider is contained in a circular tunnel, with a circumference of 26.7 kilometres (16.6 mi), at a depth ranging from 50 to 175 metres (164 to 574 ft) underground. The variation in depth was deliberate, to reduce the amount of tunnel that lies under the Jura Mountains and to avoid having to excavate a vertical access shaft there. A tunnel was chosen to avoid having to purchase expensive land on the surface and to take advantage of the shielding against background radiation that the Earth's crust provides.
The 3.8-metre (12 ft) wide concrete-lined tunnel, constructed between 1983 and 1988, was formerly used to house the Large Electron–Positron Collider. The tunnel crosses the border between Switzerland and France at four points, with most of it in France. Surface buildings hold ancillary equipment such as compressors, ventilation equipment, control electronics and refrigeration plants.
The collider tunnel contains two adjacent parallel beamlines (or beam pipes), each containing a beam, which travel in opposite directions around the ring. The beams intersect at four points around the ring, which is where the particle collisions take place. Some 1,232 dipole magnets keep the beams on their circular path (see image), while an additional 392 quadrupole magnets are used to keep the beams focused, with stronger quadrupole magnets close to the intersection points in order to maximize the chances of interaction where the two beams cross. Magnets of higher multipole orders are used to correct smaller imperfections in the field geometry. In total, about 10,000 superconducting magnets are installed, with the dipole magnets having a mass of over 27 tonnes. About 96 tonnes of superfluid helium-4 is needed to keep the magnets, made of copper-clad niobium-titanium, at their operating temperature of 1.9 K (−271.25 °C), making the LHC the largest cryogenic facility in the world at liquid helium temperature. The LHC uses 470 tonnes of Nb–Ti superconductor.
During LHC operations, the CERN site draws roughly 200 MW of electrical power from the French electrical grid, which, for comparison, is about one-third of the power consumption of the city of Geneva; the LHC accelerator and detectors draw about 120 MW of this. Each day of operation generates 140 terabytes of data.
When running at an energy of 6.5 TeV per proton, once or twice a day, as the protons are accelerated from 450 GeV to 6.5 TeV, the field of the superconducting dipole magnets is increased from 0.54 to 7.7 tesla (T). The protons each have an energy of 6.5 TeV, giving a total collision energy of 13 TeV. At this energy, the protons have a Lorentz factor of about 6,930 and move at about 0.999999990 c, or about 3.1 m/s slower than the speed of light (c). It takes less than 90 microseconds for a proton to travel 26.7 km around the main ring. This results in 11,245 revolutions per second for protons whether the particles are at low or high energy in the main ring, since the speed difference between these energies is beyond the fifth decimal.
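These figures follow directly from relativistic kinematics. The sketch below is a rough cross-check only; the proton rest energy (about 938 MeV) and the 26,659 m ring circumference it uses are standard values assumed here rather than quoted from this article.

```python
import math

PROTON_REST_ENERGY_GEV = 0.938272   # standard value, assumed
RING_CIRCUMFERENCE_M = 26_659       # assumed ring length in metres
C = 299_792_458                     # speed of light, m/s

def beam_kinematics(beam_energy_tev: float):
    gamma = beam_energy_tev * 1000 / PROTON_REST_ENERGY_GEV  # Lorentz factor
    speed_deficit = C / (2 * gamma ** 2)          # c - v, for v very close to c
    revolutions_per_s = C / RING_CIRCUMFERENCE_M  # protons move at ~c
    return gamma, speed_deficit, revolutions_per_s

gamma, deficit, f_rev = beam_kinematics(6.5)
print(f"Lorentz factor ~ {gamma:,.0f}")             # ~6,930
print(f"slower than light by ~ {deficit:.1f} m/s")  # ~3.1 m/s
print(f"revolutions per second ~ {f_rev:,.0f}")     # ~11,245
```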
Rather than having continuous beams, the protons are bunched together into up to 2,808 bunches, with 115 billion protons in each bunch, so that interactions between the two beams take place at discrete intervals, mainly 25 nanoseconds apart, providing a bunch collision rate of 40 MHz. The collider was operated with fewer bunches in the first years. The design luminosity of the LHC is 10^34 cm^−2 s^−1, which was first reached in June 2016. By 2017, twice this value was achieved.
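The bunch spacing and the event rate implied by a given luminosity can be estimated with one-line arithmetic. In the sketch below, the ~80 millibarn inelastic proton–proton cross-section is an assumed ballpark figure, not a number from this article, so the resulting rates are order-of-magnitude only.

```python
BUNCH_RATE_HZ = 40e6          # 40 MHz bunch-crossing rate
LUMINOSITY = 1e34             # design luminosity, cm^-2 s^-1
SIGMA_INELASTIC_CM2 = 80e-27  # ~80 mb inelastic pp cross-section (assumed)

bunch_spacing_ns = 1e9 / BUNCH_RATE_HZ            # 25 ns between bunches
event_rate = LUMINOSITY * SIGMA_INELASTIC_CM2     # interactions per second
events_per_crossing = event_rate / BUNCH_RATE_HZ  # average "pile-up"

print(bunch_spacing_ns)            # 25.0
print(f"{event_rate:.1e}")         # ~8e+08 collisions per second
print(round(events_per_crossing))  # ~20 overlapping events per crossing
```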
Before being injected into the main accelerator, the particles are prepared by a series of systems that successively increase their energy. The first system is the linear particle accelerator Linac4, generating 160 MeV negative hydrogen ions (H− ions), which feeds the Proton Synchrotron Booster (PSB). There, both electrons are stripped from the hydrogen ions, leaving only the nucleus containing one proton. Protons are then accelerated to 2 GeV and injected into the Proton Synchrotron (PS), where they are accelerated to 26 GeV. Finally, the Super Proton Synchrotron (SPS) is used to increase their energy further to 450 GeV before they are at last injected (over a period of several minutes) into the main ring. Here, the proton bunches are accumulated, accelerated (over a period of about 20 minutes) to their peak energy, and finally circulated for 5 to 24 hours while collisions occur at the four intersection points.
The LHC physics programme is mainly based on proton–proton collisions. However, during shorter running periods, typically one month per year, heavy-ion collisions are included in the programme. While lighter ions are considered as well, the baseline scheme deals with lead ions (see A Large Ion Collider Experiment). The lead ions are first accelerated by the linear accelerator LINAC 3, and the Low Energy Ion Ring (LEIR) is used as an ion storage and cooler unit. The ions are then further accelerated by the PS and SPS before being injected into the LHC ring, where they reach an energy of 2.3 TeV per nucleon (or 522 TeV per ion), higher than the energies reached by the Relativistic Heavy Ion Collider. The aim of the heavy-ion programme is to investigate quark–gluon plasma, which existed in the early universe.
Detectors
Nine detectors have been built in large caverns excavated at the LHC's intersection points. Two of them, the ATLAS experiment and the Compact Muon Solenoid (CMS), are large general-purpose particle detectors. ALICE and LHCb have more specialized roles, while the other five—TOTEM, MoEDAL, LHCf, SND and FASER—are much smaller and are for very specialized research. The ATLAS and CMS experiments discovered the Higgs boson, which is strong evidence that the Standard Model has the correct mechanism of giving mass to elementary particles.
Computing and analysis facilities
Data produced by the LHC, as well as LHC-related simulations, were estimated at 200 petabytes per year.
The LHC Computing Grid was constructed as part of the LHC design, to handle the massive amounts of data expected from its collisions. It is an international collaborative project that consists of a grid-based computer network infrastructure initially connecting 140 computing centres in 35 countries (over 170 in more than 40 countries by 2012). It was designed by CERN to handle the significant volume of data produced by LHC experiments, incorporating both private fibre optic cable links and existing high-speed portions of the public Internet to enable data transfer from CERN to academic institutions around the world. The LHC Computing Grid consists of global federations across Europe, Asia Pacific and the Americas.
The distributed computing project LHC@home was started to support the construction and calibration of the LHC. The project uses the BOINC platform, enabling anybody with an Internet connection and a computer running Mac OS X, Windows or Linux to use their computer's idle time to simulate how particles will travel in the beam pipes. With this information, the scientists are able to determine how the magnets should be calibrated to gain the most stable "orbit" of the beams in the ring. In August 2011, a second application (Test4Theory) went live which performs simulations against which to compare actual test data, to determine confidence levels of the results.
By 2012, data from over 6 quadrillion (6×10^15) LHC proton–proton collisions had been analysed. The LHC Computing Grid had become the world's largest computing grid in 2012, comprising over 170 computing facilities in a worldwide network across more than 40 countries.
Operational history
The LHC first went operational on 10 September 2008, but initial testing was delayed for 14 months from 19 September 2008 to 20 November 2009, following a magnet quench incident that caused extensive damage to over 50 superconducting magnets, their mountings, and the vacuum pipe.
During its first run (2010–2013), the LHC collided two opposing particle beams of either protons at up to 4 teraelectronvolts (0.64 microjoules) per beam, or lead nuclei (574 TeV per nucleus, or 2.76 TeV per nucleon). Its first-run discoveries included the long-sought Higgs boson, several composite particles (hadrons) such as the χb (3P) bottomonium state, the first creation of a quark–gluon plasma, and the first observations of the very rare decay of the Bs meson into two muons (Bs0 → μ+μ−), which challenged the validity of existing models of supersymmetry.
Construction
Operational challenges
The size of the LHC constitutes an exceptional engineering challenge with unique operational issues on account of the amount of energy stored in the magnets and the beams. While operating, the total energy stored in the magnets is about 10 GJ and the total energy carried by the two beams reaches about 724 MJ.
Loss of only one ten-millionth part (10^−7) of the beam is sufficient to quench a superconducting magnet, while each of the two beam dumps must absorb about 362 MJ. These energies are carried by very little matter: under nominal operating conditions (2,808 bunches per beam, 1.15×10^11 protons per bunch), the beam pipes contain 1.0×10^−9 gram of hydrogen, which, in standard conditions for temperature and pressure, would fill the volume of one grain of fine sand.
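A minimal sketch of where the beam-energy figure comes from, multiplying the nominal bunch parameters quoted above by the design energy of 7 TeV per proton (the per-beam figure is what each beam dump must be able to absorb):

```python
E_PER_PROTON_TEV = 7.0                 # design beam energy per proton
PROTONS_PER_BUNCH = 1.15e11
BUNCHES_PER_BEAM = 2808
TEV_TO_JOULE = 1.602176634e-19 * 1e12  # 1 TeV in joules

protons_per_beam = PROTONS_PER_BUNCH * BUNCHES_PER_BEAM
beam_energy_j = protons_per_beam * E_PER_PROTON_TEV * TEV_TO_JOULE

print(f"{beam_energy_j / 1e6:.0f} MJ per beam")           # ~362 MJ
print(f"{2 * beam_energy_j / 1e6:.0f} MJ in both beams")  # ~724 MJ
```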
Cost
With a budget of €7.5 billion (about $9bn or £6.19bn), the LHC is one of the most expensive scientific instruments ever built. The total cost of the project is expected to be of the order of SFr 4.6bn (about $4.4bn, €3.1bn, or £2.8bn) for the accelerator and SFr 1.16bn (about $1.1bn, €0.8bn, or £0.7bn) for the CERN contribution to the experiments.
The construction of LHC was approved in 1995 with a budget of SFr 2.6bn, with another SFr 210M toward the experiments. However, cost overruns, estimated in a major review in 2001 at around SFr 480M for the accelerator, and SFr 50M for the experiments, along with a reduction in CERN's budget, pushed the completion date from 2005 to April 2007. The superconducting magnets were responsible for SFr 180M of the cost increase. There were also further costs and delays owing to engineering difficulties encountered while building the cavern for the Compact Muon Solenoid, and also due to magnet supports which were insufficiently strongly designed and failed their initial testing (2007) and damage from a magnet quench and liquid helium escape (inaugural testing, 2008). Because electricity costs are lower during the summer, the LHC normally does not operate over the winter months, although exceptions over the 2009/10 and 2012/2013 winters were made to make up for the 2008 start-up delays and to improve precision of measurements of the new particle discovered in 2012, respectively.
Construction accidents and delays
On 25 October 2005, José Pereira Lages, a technician, was killed in the LHC when a switchgear that was being transported fell on top of him.
On 27 March 2007, a cryogenic magnet support designed and provided by Fermilab and KEK broke during an initial pressure test involving one of the LHC's inner triplet (focusing quadrupole) magnet assemblies. No one was injured. Fermilab director Pier Oddone stated "In this case we are dumbfounded that we missed some very simple balance of forces". The fault had been present in the original design, and remained during four engineering reviews over the following years. Analysis revealed that its design, made as thin as possible for better insulation, was not strong enough to withstand the forces generated during pressure testing. Details are available in a statement from Fermilab, with which CERN is in agreement. Repairing the broken magnet and reinforcing the eight identical assemblies used by LHC delayed the start-up date, then planned for November 2007.
On 19 September 2008, during initial testing, a faulty electrical connection led to a magnet quench (the sudden loss of a superconducting magnet's superconducting ability owing to warming or electric field effects). Six tonnes of supercooled liquid helium—used to cool the magnets—escaped, with sufficient force to break 10-ton magnets nearby from their mountings, and caused considerable damage and contamination of the vacuum tube. Repairs and safety checks caused a delay of around 14 months.
Two vacuum leaks were found in July 2009, and the start of operations was further postponed to mid-November 2009.
Exclusion of Russia
With the 2022 Russian invasion of Ukraine, the participation of Russians with CERN was called into question. About 8% of the workforce are of Russian nationality. In June 2022, CERN said the governing council "intends to terminate" CERN's cooperation agreements with Belarus and Russia when they expire, respectively in June and December 2024. CERN said it would monitor developments in Ukraine and remains prepared to take additional steps as warranted. CERN further said that it would reduce the Ukrainian contribution to CERN for 2022 to the amount already remitted to the Organization, thereby waiving the second installment of the contribution.
Initial lower magnet currents
In both of its runs (2010 to 2012 and 2015), the LHC was initially run at energies below its planned operating energy, and ramped up to just 2 x 4 TeV energy on its first run and 2 x 6.5 TeV on its second run, below the design energy of 2 x 7 TeV. This is because massive superconducting magnets require considerable magnet training to handle the high currents involved without losing their superconducting ability, and the high currents are necessary to allow a high proton energy. The "training" process involves repeatedly running the magnets with lower currents to provoke any quenches or minute movements that may result. It also takes time to cool down magnets to their operating temperature of around 1.9 K (close to absolute zero). Over time the magnet "beds in" and ceases to quench at these lesser currents and can handle the full design current without quenching; CERN media describe the magnets as "shaking out" the unavoidable tiny manufacturing imperfections in their crystals and positions that had initially impaired their ability to handle their planned currents. The magnets, over time and with training, gradually become able to handle their full planned currents without quenching.
Inaugural tests (2008)
The first beam was circulated through the collider on the morning of 10 September 2008. CERN successfully fired the protons around the tunnel in stages, three kilometres at a time. The particles were fired in a clockwise direction into the accelerator and successfully steered around it at 10:28 local time. The LHC successfully completed its major test: after a series of trial runs, two white dots flashed on a computer screen showing the protons travelled the full length of the collider. It took less than one hour to guide the stream of particles around its inaugural circuit. CERN next successfully sent a beam of protons in an anticlockwise direction, taking slightly longer at one and a half hours owing to a problem with the cryogenics, with the full circuit being completed at 14:59.
Quench incident
On 19 September 2008, a magnet quench occurred in about 100 bending magnets in sectors 3 and 4, where an electrical fault vented about six tonnes of liquid helium (the magnets' cryogenic coolant) into the tunnel. The escaping vapour expanded with explosive force, damaging 53 superconducting magnets and their mountings, and contaminating the vacuum pipe, which also lost vacuum conditions.
Shortly after the incident, CERN reported that the most likely cause of the problem was a faulty electrical connection between two magnets. It estimated that repairs would take at least two months, owing to the time needed to warm up the affected sectors and then cool them back down to operating temperature. CERN released an interim technical report and preliminary analysis of the incident on 15 and 16 October 2008 respectively, and a more detailed report on 5 December 2008. The analysis of the incident by CERN confirmed that an electrical fault had indeed been the cause. The faulty electrical connection had led (correctly) to a failsafe power abort of the electrical systems powering the superconducting magnets, but had also caused an electric arc (or discharge) which damaged the integrity of the supercooled helium's enclosure and vacuum insulation, causing the coolant's temperature and pressure to rapidly rise beyond the ability of the safety systems to contain it, and leading to a temperature rise of about 100 degrees Celsius in some of the affected magnets. Energy stored in the superconducting magnets and electrical noise induced in other quench detectors also played a role in the rapid heating. Around two tonnes of liquid helium escaped explosively before detectors triggered an emergency stop, and a further four tonnes leaked at lower pressure in the aftermath. A total of 53 magnets were damaged in the incident and were repaired or replaced during the winter shutdown. This accident was thoroughly discussed in a 22 February 2010 Superconductor Science and Technology article by CERN physicist Lucio Rossi.
In the original schedule for LHC commissioning, the first "modest" high-energy collisions at a centre-of-mass energy of 900 GeV were expected to take place before the end of September 2008, and the LHC was expected to be operating at 10 TeV by the end of 2008. However, owing to the delay caused by the incident, the collider was not operational until November 2009. Despite the delay, LHC was officially inaugurated on 21 October 2008, in the presence of political leaders, science ministers from CERN's 20 Member States, CERN officials, and members of the worldwide scientific community.
Most of 2009 was spent on repairs and reviews from the damage caused by the quench incident, along with two further vacuum leaks identified in July 2009; this pushed the start of operations to November of that year.
Run 1: first operational run (2009–2013)
On 20 November 2009, low-energy beams circulated in the tunnel for the first time since the incident, and shortly after, on 30 November, the LHC achieved 1.18 TeV per beam to become the world's highest-energy particle accelerator, beating the Tevatron's previous record of 0.98 TeV per beam held for eight years.
The early part of 2010 saw the continued ramp-up of beam in energies and early physics experiments towards 3.5 TeV per beam and on 30 March 2010, LHC set a new record for high-energy collisions by colliding proton beams at a combined energy level of 7 TeV. The attempt was the third that day, after two unsuccessful attempts in which the protons had to be "dumped" from the collider and new beams had to be injected. This also marked the start of the main research programme.
The first proton run ended on 4 November 2010. A run with lead ions started on 8 November 2010, and ended on 6 December 2010, allowing the ALICE experiment to study matter under extreme conditions similar to those shortly after the Big Bang.
CERN originally planned that the LHC would run through to the end of 2012, with a short break at the end of 2011 to allow for an increase in beam energy from 3.5 to 4 TeV per beam. At the end of 2012, the LHC was planned to be temporarily shut down until around 2015 to allow upgrade to a planned beam energy of 7 TeV per beam. In late 2012, in light of the July 2012 discovery of the Higgs boson, the shutdown was postponed for some weeks into early 2013, to allow additional data to be obtained before shutdown.
Long Shutdown 1 (2013–2015)
The LHC was shut down on 13 February 2013 for its two-year upgrade called Long Shutdown 1 (LS1), which was to touch on many aspects of the LHC: enabling collisions at 14 TeV, enhancing its detectors and pre-accelerators (the Proton Synchrotron and Super Proton Synchrotron), as well as replacing its ventilation system and the cabling impaired by high-energy collisions from its first run. The upgraded collider began its long start-up and testing process in June 2014, with the Proton Synchrotron Booster starting on 2 June 2014, the final interconnection between magnets completing and the Proton Synchrotron circulating particles on 18 June 2014, and the first section of the main LHC supermagnet system reaching its operating temperature of 1.9 K (−271.25 °C) a few days later. Due to the slow progress with "training" the superconducting magnets, it was decided to start the second run with a lower energy of 6.5 TeV per beam, corresponding to a current of 11,000 amperes in the magnets. The first of the main LHC magnets were reported to have been successfully trained by 9 December 2014, while training of the other magnet sectors was finished in March 2015.
Run 2: second operational run (2015–2018)
On 5 April 2015, the LHC restarted after a two-year break, during which the electrical connectors between the bending magnets were upgraded to safely handle the current required for 7 TeV per beam (14 TeV collision energy). However, the bending magnets were only trained to handle up to 6.5 TeV per beam (13 TeV collision energy), which became the operating energy for 2015 to 2018. The energy was first reached on 10 April 2015. The upgrades culminated in colliding protons together with a combined energy of 13 TeV. On 3 June 2015, the LHC started delivering physics data after almost two years offline. In the following months, it was used for proton–proton collisions, while in November, the machine switched to collisions of lead ions and in December, the usual winter shutdown started.
In 2016, the machine operators focused on increasing the luminosity for proton–proton collisions. The design value was first reached on 29 June, and further improvements increased the collision rate to 40% above the design value. The total number of collisions in 2016 exceeded the number from Run 1 – at a higher energy per collision. The proton–proton run was followed by four weeks of proton–lead collisions.
In 2017, the luminosity was increased further and reached twice the design value. The total number of collisions was higher than in 2016 as well.
The 2018 physics run began on 17 April and stopped on 3 December, including four weeks of lead–lead collisions.
Long Shutdown 2 (2018–2022)
Long Shutdown 2 (LS2) started on 10 December 2018. The LHC and the whole CERN accelerator complex were maintained and upgraded. The goal of the upgrades was to implement the High Luminosity Large Hadron Collider (HL-LHC) project, which will increase the luminosity by a factor of 10. LS2 ended in April 2022. A further Long Shutdown 3 (LS3) will take place in the 2020s before the HL-LHC project is completed.
Run 3: third operational run (2022)
The LHC became operational again on 22 April 2022 with a new maximum beam energy of 6.8 TeV (13.6 TeV collision energy), which was first achieved on 25 April. It officially commenced its Run 3 physics season on 5 July 2022, and the run is expected to continue until 2026. In addition to the higher energy, the LHC is expected to reach a higher luminosity, which should increase further with the upgrade to the HL-LHC after Run 3.
Timeline of operations
Findings and discoveries
An initial focus of research was to investigate the possible existence of the Higgs boson, a key part of the Standard Model of physics which was predicted by theory but had never been observed, owing to its high mass and elusive nature. CERN scientists estimated that, if the Standard Model were correct, the LHC would produce several Higgs bosons every minute, allowing physicists to finally confirm or disprove the Higgs boson's existence. In addition, the LHC allowed the search for supersymmetric particles and other hypothetical particles that would open up unknown areas of physics. Some extensions of the Standard Model predict additional particles, such as the heavy W' and Z' gauge bosons, which were also estimated to be within reach of the LHC to discover.
First run (data taken 2009–2013)
The first physics results from the LHC, involving 284 collisions which took place in the ALICE detector, were reported on 15 December 2009. The results of the first proton–proton collisions at energies higher than Fermilab's Tevatron proton–antiproton collisions were published by the CMS collaboration in early February 2010, yielding greater-than-predicted charged-hadron production.
After the first year of data collection, the LHC experimental collaborations started to release their preliminary results concerning searches for new physics beyond the Standard Model in proton–proton collisions. No evidence of new particles was detected in the 2010 data. As a result, bounds were set on the allowed parameter space of various extensions of the Standard Model, such as models with large extra dimensions, constrained versions of the Minimal Supersymmetric Standard Model, and others.
On 24 May 2011, it was reported that quark–gluon plasma (the densest matter thought to exist besides black holes) had been created in the LHC.
Between July and August 2011, results of searches for the Higgs boson and for exotic particles, based on the data collected during the first half of the 2011 run, were presented in conferences in Grenoble and Mumbai. In the latter conference, it was reported that, despite hints of a Higgs signal in earlier data, ATLAS and CMS excluded with 95% confidence level (using the CLs method) the existence of a Higgs boson with the properties predicted by the Standard Model over most of the mass region between 145 and 466 GeV. The searches for new particles did not yield signals either, making it possible to further constrain the parameter space of various extensions of the Standard Model, including its supersymmetric extensions.
On 13 December 2011, CERN reported that the Standard Model Higgs boson, if it exists, is most likely to have a mass constrained to the range 115–130 GeV.
Both the CMS and ATLAS detectors have also shown intensity peaks in the 124–125 GeV range, consistent with either background noise or the observation of the Higgs boson.
On 22 December 2011, it was reported that a new composite particle had been observed, the χb (3P) bottomonium state.
On 4 July 2012, both the CMS and ATLAS teams announced the discovery of a boson in the mass region around 125–126 GeV, with a statistical significance at the level of 5 sigma each. This meets the formal level required to announce a new particle. The observed properties were consistent with the Higgs boson, but scientists were cautious as to whether it is formally identified as actually being the Higgs boson, pending further analysis. On 14 March 2013, CERN announced confirmation that the observed particle was indeed the predicted Higgs boson.
On 8 November 2012, the LHCb team reported on an experiment seen as a "golden" test of supersymmetry theories in physics, measuring the very rare decay of the Bs meson into two muons (Bs0 → μ+μ−). The results, which match those predicted by the non-supersymmetrical Standard Model rather than the predictions of many branches of supersymmetry, show the decays are less common than some forms of supersymmetry predict, though they could still match the predictions of other versions of supersymmetry theory. The results as initially drafted fell short of proof but reached a relatively high 3.5 sigma level of significance. The result was later confirmed by the CMS collaboration.
In August 2013, the LHCb team revealed an anomaly in the angular distribution of B meson decay products which could not be predicted by the Standard Model; this anomaly had a statistical certainty of 4.5 sigma, just short of the 5 sigma needed to be officially recognized as a discovery. It is unknown what the cause of this anomaly would be, although the Z' boson has been suggested as a possible candidate.
On 19 November 2014, the LHCb experiment announced the discovery of two new heavy subatomic particles, Ξ′b− and Ξ∗b−. Both of them are baryons composed of one bottom, one down, and one strange quark; they are excited states of the bottom Xi baryon.
The LHCb collaboration has observed multiple exotic hadrons, possibly pentaquarks or tetraquarks, in the Run 1 data.
On 4 April 2014, the collaboration confirmed the existence of the tetraquark candidate Z(4430) with a significance of over 13.9 sigma. On 13 July 2015, results consistent with pentaquark states in the decay of bottom Lambda baryons (Λb0) were reported.
On 28 June 2016, the collaboration announced four tetraquark-like particles decaying into a J/ψ and a φ meson; only one of them, the X(4140), was well established before, while the X(4274), X(4500) and X(4700) were newly observed.
In December 2016, ATLAS presented a measurement of the W boson mass, approaching the precision of analyses done at the Tevatron.
Second run (2015–2018)
At the conference EPS-HEP 2015 in July, the collaborations presented first cross-section measurements of several particles at the higher collision energy.
On 15 December 2015, the ATLAS and CMS experiments both reported a number of preliminary results for Higgs physics, supersymmetry (SUSY) searches and exotics searches using 13 TeV proton collision data. Both experiments saw a moderate excess around 750 GeV in the two-photon invariant mass spectrum, but the experiments did not confirm the existence of the hypothetical particle in an August 2016 report.
In July 2017, many analyses based on the large dataset collected in 2016 were shown. The properties of the Higgs boson were studied in more detail and the precision of many other results was improved.
As of March 2021, the LHC experiments have discovered 59 new hadrons in the data collected during the first two runs.
Third run (2022 – present)
The third run of the LHC began in July 2022, after more than three years of upgrades, and is planned to last until July 2026.
On 5 July 2022, LHCb reported the discovery of a new type of pentaquark made up of a charm quark and a charm antiquark and an up, a down and a strange quark, observed in an analysis of decays of charged B mesons. The first ever pair of tetraquarks was also reported.
On 18 September 2024, ATLAS reported the first observation of quantum entanglement between quarks, with it also being the highest-energy observation of entanglement so far.
Future plans
"High-luminosity" upgrade
After some years of running, any particle physics experiment typically begins to suffer from diminishing returns: as the key results reachable by the device begin to be completed, later years of operation discover proportionately less than earlier years. A common response is to upgrade the devices involved, typically in collision energy, luminosity, or improved detectors. In addition to a possible increase to 14 TeV collision energy, a luminosity upgrade of the LHC, called the High Luminosity Large Hadron Collider, began in June 2018; it will boost the accelerator's potential for new discoveries in physics, starting in 2027. The upgrade aims at increasing the luminosity of the machine by a factor of 10, up to 10^35 cm^−2 s^−1, providing a better chance to see rare processes and improving statistically marginal measurements.
Proposed Future Circular Collider
CERN has several preliminary designs for a Future Circular Collider (FCC)—which would be the most powerful particle accelerator ever built—with different types of collider ranging in cost from around €9 billion (US$10.2 billion) to €21 billion. It would use the LHC ring as preaccelerator, similar to how the LHC uses the smaller Super Proton Synchrotron. It is CERN's opening bid in a priority-setting process called the European Strategy for Particle Physics Update, and will affect the field's future well into the second half of the century. As of 2023, no fixed plan exists and it is unknown if the construction will be funded.
Safety of particle collisions
The experiments at the Large Hadron Collider sparked fears that the particle collisions might produce doomsday phenomena, involving the production of stable microscopic black holes or the creation of hypothetical particles called strangelets. Two CERN-commissioned safety reviews examined these concerns and concluded that the experiments at the LHC present no danger and that there is no reason for concern, a conclusion endorsed by the American Physical Society.
The reports also noted that the physical conditions and collision events that exist in the LHC and similar experiments occur naturally and routinely in the universe without hazardous consequences, including ultra-high-energy cosmic rays observed to impact Earth with energies far higher than those in any human-made collider, like the Oh-My-God particle which had 320 million TeV of energy, and a collision energy tens of times more than the most energetic collisions produced in the LHC.
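That comparison can be checked with fixed-target kinematics: for an ultra-high-energy proton striking a proton at rest, the available centre-of-mass energy is roughly sqrt(2·E·m_p·c²). The cosmic-ray energy and proton rest energy below are standard values assumed for illustration, not figures taken from this article.

```python
import math

M_P_GEV = 0.938            # proton rest energy, GeV (assumed standard value)
E_CR_GEV = 3.2e11          # Oh-My-God particle, ~320 million TeV (assumed)
E_LHC_COM_GEV = 13_000     # LHC proton-proton collision energy (13 TeV)

# Fixed-target kinematics: sqrt(s) ~ sqrt(2 * E * m_p) when E >> m_p
sqrt_s_cosmic = math.sqrt(2 * E_CR_GEV * M_P_GEV)
print(f"{sqrt_s_cosmic / 1000:.0f} TeV available")        # ~775 TeV
print(f"{sqrt_s_cosmic / E_LHC_COM_GEV:.0f}x the LHC")    # roughly 60x
```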
Popular culture
The Large Hadron Collider gained a considerable amount of attention from outside the scientific community and its progress is followed by most popular science media. The LHC has also inspired works of fiction including novels, TV series, video games and films.
CERN employee Katherine McAlpine's "Large Hadron Rap" surpassed 8 million YouTube views as of 2022.
The band Les Horribles Cernettes was founded by women from CERN. The name was chosen so as to have the same initials as the LHC.
National Geographic Channel's World's Toughest Fixes, Season 2 (2010), Episode 6 "Atom Smasher" features the replacement of the last superconducting magnet section in the repair of the collider after the 2008 quench incident. The episode includes actual footage from the repair facility to the inside of the collider, and explanations of the function, engineering, and purpose of the LHC.
The song "Munich" on the 2012 studio album Scars & Stories by The Fray is inspired by the Large Hadron Collider. Lead singer Isaac Slade said in an interview with The Huffington Post, "There's this large particle collider out in Switzerland that is kind of helping scientists peel back the curtain on what creates gravity and mass. Some very big questions are being raised, even some things that Einstein proposed, that have just been accepted for decades are starting to be challenged. They're looking for the God Particle, basically, the particle that holds it all together. That song is really just about the mystery of why we're all here and what's holding it all together, you know?"
The Large Hadron Collider was the focus of the 2012 student film Decay, with the movie being filmed on location in CERN's maintenance tunnels.
Fiction
The novel Angels & Demons, by Dan Brown, involves antimatter created at the LHC to be used in a weapon against the Vatican. In response, CERN published a "Fact or Fiction?" page discussing the accuracy of the book's portrayal of the LHC, CERN, and particle physics in general. The movie version of the book has footage filmed on-site at one of the experiments at the LHC; the director, Ron Howard, met with CERN experts in an effort to make the science in the story more accurate.
The novel FlashForward, by Robert J. Sawyer, involves the search for the Higgs boson at the LHC. CERN published a "Science and Fiction" page interviewing Sawyer and physicists about the book and the TV series based on it.
See also
List of accelerators in particle physics
Accelerator projects
Circular Electron Positron Collider
Compact Linear Collider
Future Circular Collider
International Linear Collider
Very Large Hadron Collider
References
External links
Overview of the LHC at CERN's public webpage
CERN Courier magazine
LHC Portal Web portal
Full documentation for design and construction of the LHC and its six detectors (2008).
Video
Animation of LHC in collision production mode (June 2015)
News
Eight Things To Know As The Large Hadron Collider Breaks Energy Records
Buildings and structures in Ain
Buildings and structures in the canton of Geneva
CERN accelerators
E-Science
International science experiments
Laboratories in France
Laboratories in Switzerland
Particle physics facilities
Physics beyond the Standard Model
Underground laboratories
CERN facilities
Government buildings completed in 2008 | Large Hadron Collider | [
"Physics"
] | 8,669 | [
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
357,366 | https://en.wikipedia.org/wiki/Heuristic%20evaluation | A heuristic evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface design. It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics"). These evaluation methods are now widely taught and practiced in the new media sector, where user interfaces are often designed in a short space of time on a budget that may restrict the amount of money available to provide for other types of interface testing.
Introduction
The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces. Usability consultants Rolf Molich and Jakob Nielsen developed this method on the basis of several years of experience in teaching and consulting about usability engineering. Heuristic evaluations are one of the most informal methods of usability inspection in the field of human–computer interaction. There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the same aspects of user interface design. Quite often, usability problems that are discovered are categorized—often on a numeric scale—according to their estimated impact on user performance or acceptance. Often the heuristic evaluation is conducted in the context of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to be compatible with the intended users' needs and preferences.
The simplicity of heuristic evaluation is beneficial at the early stages of design and prior to user-based testing. This usability inspection method does not rely on test users, which can otherwise be burdensome due to the need for recruiting, scheduling, a place to perform the evaluation, and payment for participant time. In the originally published report, Nielsen stated that four experiments showed that individual evaluators were "mostly quite bad" at doing heuristic evaluations and suggested that multiple evaluators were needed, with their results aggregated, to produce an acceptable review. Most heuristic evaluations can be accomplished in a matter of days. The time required varies with the size of the artifact, its complexity, the purpose of the review, the nature of the usability issues that arise in the review, and the competence of the reviewers. Using heuristic evaluation prior to user testing often serves to identify areas to be included in the later evaluation or to eliminate perceived design issues prior to user-based evaluation.
Although heuristic evaluation can uncover many major usability issues in a short period of time, a criticism that is often leveled is that results are highly influenced by the knowledge of the expert reviewer(s). This "one-sided" review repeatedly produces results different from those of software performance testing, with each type of testing uncovering a different set of problems.
Methodology
Heuristic evaluations are conducted in a variety of ways, depending on the scope and type of project. As a general rule of thumb, researched frameworks are used to reduce bias and maximize findings within an evaluation. Heuristic evaluation has various pros and cons, many of which depend on the amount of resources and time available.
Pros: Because the evaluator works through a very detailed list of criteria, the process is thorough and provides good feedback on areas that could be improved. In addition, since it is done by several people, the designer gets feedback from multiple perspectives. As it is a relatively straightforward process, there are fewer ethical and logistical concerns related to organizing and executing the evaluation.
Cons: Since there is a specific set of criteria, the quality of the evaluation is limited by the skill and knowledge of the people who conduct it. This leads to another issue: finding experts and people qualified enough to conduct the evaluation. Where such experts are readily available, this is less of a problem. In addition, because the evaluations are essentially personal observations, there is no hard data in the results; the designer has to interpret the feedback with these limitations in mind.
Number of Evaluators
According to Nielsen, three to five evaluators are recommended within a study. Having more than five evaluators does not necessarily increase the number of insights, and may add more cost than benefit to the overall evaluation.
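The "three to five" guideline is often illustrated with the problem-discovery model reported by Nielsen and Landauer, in which the proportion of usability problems found by i independent evaluators is modelled as 1 − (1 − λ)^i. The sketch below assumes λ ≈ 0.31, the average single-evaluator detection rate Nielsen reported in some studies; both the formula's applicability and that value are assumptions here, not figures from this article.

```python
# Problem-discovery model: share of all usability problems found by i
# independent evaluators, each finding a fraction lam on their own.
def proportion_found(i: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** i

for i in (1, 3, 5, 10, 15):
    print(i, f"{proportion_found(i):.0%}")
# 1 -> 31%, 3 -> 67%, 5 -> 84%, 10 -> 98%, 15 -> ~100% (diminishing returns)
```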
Individual and Group Process
Heuristic evaluation should start with individual assessments before results are aggregated, in order to reduce group confirmation bias: each evaluator examines the prototype independently before entering group discussions where the insights are pooled.
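As a rough illustration of the aggregation step (the data layout and the 0–4 severity scale below are hypothetical conventions, not part of any standard tooling), individual findings can be merged by heuristic and issue before the group discussion:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: (evaluator, heuristic violated, issue, severity 0-4)
findings = [
    ("eval_A", "Visibility of system status", "No progress bar on upload", 3),
    ("eval_B", "Visibility of system status", "No progress bar on upload", 4),
    ("eval_B", "Error prevention", "Delete has no confirmation", 4),
    ("eval_C", "Error prevention", "Delete has no confirmation", 3),
]

merged = defaultdict(list)
for _evaluator, heuristic, issue, severity in findings:
    merged[(heuristic, issue)].append(severity)

# One line per unique issue: how many evaluators saw it, and mean severity.
for (heuristic, issue), severities in sorted(merged.items()):
    print(f"{heuristic}: {issue} "
          f"(found by {len(severities)}, mean severity {mean(severities):.1f})")
```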
Observer Trade-offs
There are costs and benefits associated with adding an observer to an evaluation session.
In a session without an observer, evaluators would need to formalize their individual observations within a written report as they interact with the product/prototype. This option would require more time and effort from the evaluators, and this would also require further time for the conductors of the study to interpret individual reports. However, this option is less costly because it reduces the overhead costs associated with hiring observers.
With an observer, evaluators can provide their analysis verbally while observers transcribe and interpret the evaluators' findings. This option reduces the amount of workload from the evaluators and the amount of time needed to interpret findings from multiple evaluators.
Nielsen's heuristics
Jakob Nielsen's heuristics are probably the most-used usability heuristics for user interface design. An early version of the heuristics appeared in two papers by Nielsen and Rolf Molich published in 1989-1990. Nielsen published an updated set in 1994, and the final set still in use today was published in 2005:
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Gerhardt-Powals' cognitive engineering principles
While Nielsen is considered the leading figure in heuristic evaluation, Jill Gerhardt-Powals also developed a set of cognitive engineering principles for enhancing human-computer performance. These heuristics, or principles, are similar to Nielsen's but take a more holistic approach to evaluation. Gerhardt-Powals' principles are listed below.
Automate unwanted workload: Eliminate mental calculations, estimations, comparisons, and any unnecessary thinking, to free cognitive resources for high-level tasks.
Reduce uncertainty: Display data in a manner that is clear and obvious to reduce decision time and error.
Fuse data: Bring together lower level data into a higher level summation to reduce cognitive load.
Present new information with meaningful aids to interpretation: New information should be presented within familiar frameworks (e.g., schemas, metaphors, everyday terms) so that information is easier to absorb.
Use names that are conceptually related to function: Display names and labels should be context-dependent, which will improve recall and recognition.
Group data in consistently meaningful ways: Within a screen, data should be logically grouped; across screens, it should be consistently grouped. This will decrease information search time.
Limit data-driven tasks: Use color and graphics, for example, to reduce the time spent assimilating raw data.
Include in the displays only that information needed by the user at a given time: Exclude extraneous information that is not relevant to current tasks so that the user can focus attention on critical data.
Provide multiple coding of data when appropriate: The system should provide data in varying formats and/or levels of detail in order to promote cognitive flexibility and satisfy user preferences.
Practice judicious redundancy: Principle 10 was devised by the first two authors to resolve the possible conflict between Principles 6 and 8, that is, in order to be consistent, it is sometimes necessary to include more information than may be needed at a given time.
Shneiderman's Eight Golden Rules of Interface Design
Ben Shneiderman's book Designing the User Interface: Strategies for Effective Human-Computer Interaction (1986), published a few years before Nielsen's heuristics, covered his popular list of "Eight Golden Rules".
Strive for consistency: Consistent sequences of actions should be required in similar situations ...
Enable frequent users to use shortcuts: As the frequency of use increases, so do the user's desires to reduce the number of interactions ...
Offer informative feedback: For every operator action, there should be some system feedback ...
Design dialog to yield closure: Sequences of actions should be organized into groups with a beginning, middle, and end ...
Offer simple error handling: As much as possible, design the system so the user cannot make a serious error ...
Permit easy reversal of actions: This feature relieves anxiety, since the user knows that errors can be undone ...
Support internal locus of control: Experienced operators strongly desire the sense that they are in charge of the system and that the system responds to their actions. Design the system to make users the initiators of actions rather than the responders.
Reduce short-term memory load: The limitation of human information processing in short-term memory requires that displays be kept simple, multiple page displays be consolidated, window-motion frequency be reduced, and sufficient training time be allotted for codes, mnemonics, and sequences of actions.
Weinschenk and Barker classification
In 2000, Susan Weinschenk and Dean Barker created a categorization of heuristics and guidelines used by several major providers into the following twenty types:
User Control: The interface will allow the user to perceive that they are in control and will allow appropriate control.
Human Limitations: The interface will not overload the user’s cognitive, visual, auditory, tactile, or motor limits.
Modal Integrity: The interface will fit individual tasks within whatever modality is being used: auditory, visual, or motor/kinesthetic.
Accommodation: The interface will fit the way each user group works and thinks.
Linguistic Clarity: The interface will communicate as efficiently as possible.
Aesthetic Integrity: The interface will have an attractive and appropriate design.
Simplicity: The interface will present elements simply.
Predictability: The interface will behave in a manner such that users can accurately predict what will happen next.
Interpretation: The interface will make reasonable guesses about what the user is trying to do.
Accuracy: The interface will be free from errors.
Technical Clarity: The interface will have the highest possible fidelity.
Flexibility: The interface will allow the user to adjust the design for custom use.
Fulfillment: The interface will provide a satisfying user experience.
Cultural Propriety: The interface will match the user’s social customs and expectations.
Suitable Tempo: The interface will operate at a tempo suitable to the user.
Consistency: The interface will be consistent.
User Support: The interface will provide additional assistance as needed or requested.
Precision: The interface will allow the users to perform a task exactly.
Forgiveness: The interface will make actions recoverable.
Responsiveness: The interface will inform users about the results of their actions and the interface’s status.
Domain or culture-specific heuristic evaluation
For an application tied to a specific domain or culture, the heuristics mentioned above may fail to identify some potential usability problems, because they cannot take the domain- and culture-specific features of the application into account. This limitation has led to the introduction of domain-specific and culture-specific heuristic evaluations.
See also
Usability inspection
Progressive disclosure
Cognitive bias
Cognitive dimensions, a framework for evaluating the design of notations, user interfaces and programming languages
References
Further reading
External links
Original article of the 10 Usability Heuristics of User Interface Design, updated and modified in Nov 2020 by Jakob Nielsen.
A list of Nielsen Norman's Heuristic Evaluation Articles & Videos – Including fundamental points, methodologies and benefits.
Heuristic Evaluation at Usability.gov
Usability inspection
User interfaces | Heuristic evaluation | [
"Technology"
] | 2,832 | [
"User interfaces",
"Interfaces"
] |
357,371 | https://en.wikipedia.org/wiki/Dickson%27s%20lemma | In mathematics, Dickson's lemma states that every set of -tuples of natural numbers has finitely many minimal elements. This simple fact from combinatorics has become attributed to the American algebraist L. E. Dickson, who used it to prove a result in number theory about perfect numbers. However, the lemma was certainly known earlier, for example to Paul Gordan in his research on invariant theory.
Example
Let K be a fixed natural number, and let S be the set of pairs (x, y) of numbers whose product is at least K. When defined over the positive real numbers, S has infinitely many minimal elements of the form (x, K/x), one for each positive number x; this set of points forms one of the branches of a hyperbola. The pairs on this hyperbola are minimal, because it is not possible for a different pair that belongs to S to be less than or equal to (x, K/x) in both of its coordinates. However, Dickson's lemma concerns only tuples of natural numbers, and over the natural numbers there are only finitely many minimal pairs. Every minimal pair (x, y) of natural numbers has x ≤ K and y ≤ K, for if x were greater than K then (x − 1, y) would also belong to S, contradicting the minimality of (x, y), and symmetrically if y were greater than K then (x, y − 1) would also belong to S. Therefore, over the natural numbers, S has at most K² minimal elements, a finite number.
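A brute-force illustration of the finite case (the function name and search strategy are mine, assumed for the sketch): since every minimal pair has both coordinates at most K, it suffices to search that box.

```python
def minimal_pairs(K: int):
    """Minimal elements of S = {(x, y) in N^2 : x*y >= K} under the
    coordinatewise (product) order; minimal pairs satisfy x <= K and y <= K."""
    box = [(x, y) for x in range(K + 1) for y in range(K + 1) if x * y >= K]
    return sorted(p for p in box
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in box))

print(minimal_pairs(6))   # [(1, 6), (2, 3), (3, 2), (6, 1)]
```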
Formal statement
Let ℕ be the set of non-negative integers (natural numbers), let n be any fixed constant, and let ℕ^n be the set of n-tuples of natural numbers. These tuples may be given a pointwise partial order, the product order, in which (a_1, …, a_n) ≤ (b_1, …, b_n) if and only if a_i ≤ b_i for every i.
The set of tuples that are greater than or equal to some particular tuple forms a positive orthant with its apex at the given tuple.
With this notation, Dickson's lemma may be stated in several equivalent forms:
In every non-empty subset S of ℕ^n there is at least one, but no more than a finite number of, elements that are minimal elements of S for the pointwise partial order.
For every infinite sequence x_1, x_2, x_3, … of n-tuples of natural numbers, there exist two indices i < j such that x_i ≤ x_j holds with respect to the pointwise order.
The partially ordered set (ℕ^n, ≤) contains neither infinite antichains nor infinite (strictly) descending sequences of n-tuples.
The partially ordered set (ℕ^n, ≤) is a well partial order.
Every subset S of ℕ^n may be covered by a finite set of positive orthants, whose apexes all belong to S.
Generalizations and applications
Dickson used his lemma to prove that, for any given number n, there can exist only a finite number of odd perfect numbers that have at most n prime factors. However, it remains open whether there exist any odd perfect numbers at all.
The divisibility relation among the P-smooth numbers, the natural numbers whose prime factors all belong to the finite set P, gives these numbers the structure of a partially ordered set isomorphic to ℕ^|P|. Thus, for any set S of P-smooth numbers, there is a finite subset of S such that every element of S is divisible by one of the numbers in this subset. This fact has been used, for instance, to show that there exists an algorithm for classifying the winning and losing moves from the initial position in the game of Sylver coinage, even though the algorithm itself remains unknown.
The tuples (a_1, …, a_n) in ℕ^n correspond one-for-one with the monomials x_1^{a_1} x_2^{a_2} ⋯ x_n^{a_n} over a set of n variables x_1, …, x_n. Under this correspondence, Dickson's lemma may be seen as a special case of Hilbert's basis theorem stating that every polynomial ideal has a finite basis, for the ideals generated by monomials. Indeed, Paul Gordan used this restatement of Dickson's lemma in 1899 as part of a proof of Hilbert's basis theorem.
See also
Gordan's lemma
Notes
References
Combinatorics
Lemmas
Wellfoundedness | Dickson's lemma | [
"Mathematics"
] | 800 | [
"Lemmas",
"Discrete mathematics",
"Mathematical theorems",
"Wellfoundedness",
"Combinatorics",
"Mathematical problems",
"Order theory",
"Mathematical induction"
] |
357,416 | https://en.wikipedia.org/wiki/Monomial | In mathematics, a monomial is, roughly speaking, a polynomial which has only one term. Two definitions of a monomial may be encountered:
A monomial, also called a power product or primitive monomial, is a product of powers of variables with nonnegative integer exponents, or, in other words, a product of variables, possibly with repetitions. For example, x^2·y·z^3 = x·x·y·z·z·z is a monomial. The constant 1 is a primitive monomial, being equal to the empty product and to x^0 for any variable x. If only a single variable x is considered, this means that a monomial is either 1 or a power x^n of x, with n a positive integer. If several variables are considered, say x, y and z, then each can be given an exponent, so that any monomial is of the form x^a y^b z^c with a, b, c non-negative integers (taking note that any exponent 0 makes the corresponding factor equal to 1).
A monomial in the first sense multiplied by a nonzero constant, called the coefficient of the monomial. A primitive monomial is a special case of a monomial in this second sense, where the coefficient is 1. For example, in this interpretation −7x^5 and (3 − 4i)x^4yz^13 are monomials (in the second example, the variables are x, y and z, and the coefficient 3 − 4i is a complex number).
In the context of Laurent polynomials and Laurent series, the exponents of a monomial may be negative, and in the context of Puiseux series, the exponents may be rational numbers.
In mathematical analysis, it is common to consider polynomials written in terms of a shifted variable x − c for some constant c, rather than in terms of a variable x alone, as in the study of Taylor series. By a slight abuse of notation, monomials of shifted variables, for instance (x − c)^n, may be called monomials in the sense of shifted monomials or centered monomials, where c is the center, or shift.
Since the word "monomial", as well as the word "polynomial", comes from the late Latin word "binomium" (binomial), by changing the prefix "bi-" (two in Latin), a monomial should theoretically be called a "mononomial". "Monomial" is a syncope by haplology of "mononomial".
Comparison of the two definitions
With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication.
Both uses of this notion can be found, and in many cases the distinction is simply ignored; see for instance examples for the first and second meaning. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often definitely needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring, or a monomial ordering of that basis. An argument in favor of the first meaning is that no obvious other notion is available to designate these values, though primitive monomial is in use and does make the absence of constants clear.
The remainder of this article assumes the first meaning of "monomial".
Monomial basis
The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, called the monomial basis - a fact of constant implicit use in mathematics.
Number
The number of monomials of degree d in n variables is the number of multicombinations of d elements chosen among the n variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient. This can also be expressed as a binomial coefficient, as a polynomial expression in d, or using a rising factorial power of d + 1: C(n + d − 1, d) = (d + 1)(d + 2) ⋯ (d + n − 1) / (n − 1)!.
The latter forms are particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed n, the number of monomials of degree d is a polynomial expression in d of degree n − 1 with leading coefficient 1/(n − 1)!.
For example, the number of monomials in three variables (n = 3) of degree d is (d + 1)(d + 2)/2; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers.
The Hilbert series is a compact way to express the number of monomials of a given degree: the number of monomials of degree d in n variables is the coefficient of degree d of the formal power series expansion of 1/(1 − t)^n.
The number of monomials of degree at most d in n variables is C(n + d, d). This follows from the one-to-one correspondence between the monomials of degree d in n + 1 variables and the monomials of degree at most d in n variables, which consists in substituting the extra variable by 1.
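These counting formulas are easy to sanity-check numerically. The short sketch below (the helper names are hypothetical) compares the binomial-coefficient formulas against direct enumeration of multicombinations:

```python
from math import comb
from itertools import combinations_with_replacement

def monomials_of_degree(n: int, d: int) -> int:
    """Number of monomials of degree exactly d in n variables: C(n+d-1, d)."""
    return comb(n + d - 1, d)

def monomials_up_to_degree(n: int, d: int) -> int:
    """Number of monomials of degree at most d in n variables: C(n+d, d)."""
    return comb(n + d, d)

def brute_force(n: int, d: int) -> int:
    # Choose d variables with repetition from n of them (order irrelevant).
    return sum(1 for _ in combinations_with_replacement(range(n), d))

for d in range(6):
    assert monomials_of_degree(3, d) == brute_force(3, d)
    print(d, monomials_of_degree(3, d))   # 1, 3, 6, 10, 15, 21 (triangular numbers)

assert monomials_up_to_degree(3, 5) == sum(monomials_of_degree(3, d) for d in range(6))
```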
Multi-index notation
The multi-index notation is often useful for having a compact notation, especially when there are more than two or three variables. If the variables being used form an indexed family like x_1, x_2, x_3, …, one can set x = (x_1, x_2, x_3, …) and α = (a_1, a_2, a_3, …). Then the monomial x_1^{a_1} x_2^{a_2} x_3^{a_3} ⋯ can be compactly written as x^α.
With this notation, the product of two monomials is simply expressed by using the addition of exponent vectors: x^α · x^β = x^{α+β}.
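As an illustration of the exponent-vector view (the dictionary representation used here is just a convenient convention for the sketch, not a standard API), multiplying monomials amounts to adding their exponent vectors:

```python
from collections import Counter

# A monomial (first meaning) as a map from variable name to exponent.
def multiply(m1: dict, m2: dict) -> dict:
    # x^alpha * x^beta = x^(alpha + beta): add exponents variable by variable.
    return dict(Counter(m1) + Counter(m2))

m1 = {"x": 2, "y": 1}      # x^2 y
m2 = {"y": 3, "z": 5}      # y^3 z^5
print(multiply(m1, m2))    # {'x': 2, 'y': 4, 'z': 5}  i.e.  x^2 y^4 z^5
```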
Degree
The degree of a monomial is defined as the sum of all the exponents of the variables, including the implicit exponents of 1 for the variables which appear without an exponent; for example, the degree of x·y·z^2 is 1 + 1 + 2 = 4. The degree of a nonzero constant is 0. For example, the degree of −7 is 0.
The degree of a monomial is sometimes called order, mainly in the context of series. It is also called total degree when it is needed to distinguish it from the degree in one of the variables.
Monomial degree is fundamental to the theory of univariate and multivariate polynomials. Explicitly, it is used to define the degree of a polynomial and the notion of homogeneous polynomial, as well as for graded monomial orderings used in formulating and computing Gröbner bases. Implicitly, it is used in grouping the terms of a Taylor series in several variables.
Geometry
In algebraic geometry the varieties defined by monomial equations x^α = 0 for some set of exponent vectors α have special properties of homogeneity. This can be phrased in the language of algebraic groups, in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices). This area is studied under the name of torus embeddings.
See also
Monomial representation
Monomial matrix
Homogeneous polynomial
Homogeneous function
Multilinear form
Log-log plot
Power law
Sparse polynomial
References
Homogeneous polynomials
Algebra | Monomial | [
"Mathematics"
] | 1,370 | [
"Algebra"
] |