| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
4,204,944 | https://en.wikipedia.org/wiki/Edixa | Edixa is a brand of the camera manufacturer Wirgin Kamerawerk, which was based in Wiesbaden, West Germany. The product line included several 35mm cameras and the 16mm Edixa 16 subminiature cameras designed by Heinz Waaske from the 1950s to the 1970s.
35mm cameras
Edixa Reflex, with Steinheil Quinon 1.9/55mm, Isco Travegar 2.8/50mm
Edixa-MAT REFLEX
Edixa REX TTL
Universal edixamat cd
Edixa Stereo
Edixa Electronica
Edixa motoric
16mm subminiature cameras
Edixa 16, with Isco Travegar 2.8/25mm lens
Edixa 16M, with Schneider-Kreuznach Xenar 2.8/25mm lens
Edixa 16MB, black model of Edixa 16M
Edixa 16U
Franka 16
Alka 16
Body: Aluminium body with plastic trim.
Lenses: the high-end Edixa 16MB/Edixa 16M use a Schneider-Kreuznach Xenar 25mm f/2.8 Tessar-type 4-element, 3-group lens; the mid-range Edixa 16 uses a Travegar 25mm f/2.8 Tessar-type lens; the rest use a Trinar Cooke-triplet lens.
Focusing dial: unit lens movement focusing, 40mm to infinity.
Shutter: four-leaf shutter in front of the lens; speeds B, 1/30, 1/60, 1/150.
Film
The Edixa 16 series uses the Rollei 16-type Rada cartridge, loaded with unperforated 16mm film; frame format 14×21mm, 20 exposures per cartridge.
Accessories
Chain
Genuine leather case
Lens hood
Color filter set
1m close up attachment lens
0.5m close up lens
0.25m close up lens
AG1 flash
Selenium exposure meter coupled to the shutter
Development tank
Slide projector
References
Jörg Eikmann, Ulrich Vogt: Kameras für Millionen – Heinz Waaske, Konstrukteur. Wittig Fachbuch
External links
Site with photos and history section
German cameras
Single-lens reflex cameras
Subminiature cameras
Wirgin cameras | Edixa | [
"Technology"
] | 474 | [
"System cameras",
"Single-lens reflex cameras"
] |
4,205,059 | https://en.wikipedia.org/wiki/EuropaBio | EuropaBio ("The European Association for Bioindustries") is Europe's largest and most influential biotech industry group, whose members include leading large-size healthcare and industrial biotechnology companies. EuropaBio is located in Brussels, Belgium. The organisation was initiated in 1996 to represent the interests of the biotechnology industry at the European level, and therefore influence legislation that serves the interests of biotechnology companies in Europe.
Activity and goals
EuropaBio is engaged in dialogue with the European Parliament, the European Commission, and the Council of Ministers to influence legislation on biotechnology.
EuropaBio represents two sectors of the biotech industry.
White or industrial biotechnology is the application of biotechnology for industrial purposes, including manufacturing, alternative energy (or "bioenergy"), biofuels, and biomaterials.
Red or healthcare biotechnology is the application of biotechnology for the production of medicines and therapies.
EuropaBio's stated goals are:
promoting an innovative, coherent, and dynamic biotechnology-based industry in Europe;
advocating free and open markets and the removal of barriers to competitiveness with other areas of the world;
committing to an open, transparent, and informed dialogue with all stakeholders about the ethical, social, and economic aspects of biotechnology and its benefits;
championing the socially responsible use of biotechnology to ensure that its potential is fully used to the benefit of humans and their environment.
EuropaBio's primary focus is the European Union but because of the global character of the biotech business, it also represents its members in transatlantic and worldwide forums.
Organisation
EuropaBio has a board of management made up of representatives from among its industry members. Since 2023, Dr. Sarah Reisinger, representing dsm-firmenich, has been chair of the board.
The board is assisted by sectoral councils representing the main segments of EuropaBio – healthcare (red biotech), and industrial (white biotech).
Additionally, National Associations are represented through the National Associations Council.
Experts from member companies and national associations participate in EuropaBio's working groups which cover a very wide range of issues and areas of concern of biotech enterprises.
Since November 2020, EuropaBio's Director General has been Dr. Claire Skentelbery.
Members
In 2021, the association represented 79 corporate and associate members and BioRegions, and 17 national biotechnology associations in turn representing over 1,800 biotech SMEs.
See also
CropLife International
European Federation of Biotechnology (EFB)
European Federation of Pharmaceutical Industries and Associations (EFPIA)
Genetically modified food controversies
Regulation of the release of genetically modified organisms
Citations
References
Transforming Europe’s position on GM food - ambassadors programme executive summary The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Biotech group bids to recruit high-profile GM 'ambassadors' John Vidal and Hanna Gersmann, The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Draft letter from EuropaBio to potential GM ambassadors The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
External links
EuropaBio
Biotechnology in the EU
Biotech Informa
BIO
GMO Compass
Lobbying organizations in Europe
Pan-European biotechnology organisations
Organizations established in 1996
Organisations based in Brussels
1996 establishments in Belgium | EuropaBio | [
"Engineering",
"Biology"
] | 633 | [
"Biotechnology organizations",
"Pan-European biotechnology organisations"
] |
1,580,989 | https://en.wikipedia.org/wiki/Celatone | The celatone was a device invented by Galileo Galilei to observe Jupiter's moons with the purpose of finding longitude on Earth. It took the form of a piece of headgear with a telescope taking the place of an eyehole.
Modern versions
In 2013, Matthew Dockrey created a replica celatone, using notes from a version created by Samuel Parlour. From April 2014 to January 2015, Dockrey's celatone was on display in the Royal Observatory, Greenwich in east London.
See also
Longitude prize
Galilean moons
References
External links
Video animation of a Celatone and its use in discovering the longitude for marine navigation
Dockrey celatone
"Apparatus to render a telescope manageable on shipboard"
Astronomical instruments | Celatone | [
"Astronomy"
] | 153 | [
"Astronomical instruments"
] |
1,581,104 | https://en.wikipedia.org/wiki/Ross%20River%20virus | Ross River virus (RRV) is a small encapsulated single-strand RNA Alphavirus endemic to Australia, Papua New Guinea and other islands in the South Pacific. It is responsible for a type of mosquito-borne, non-lethal but extremely debilitating tropical disease known as Ross River fever, previously termed "epidemic polyarthritis". There is no known cure, and it can last in the host's system for up to 20 years. The virus is suspected to be enzootic in populations of various native Australian mammals, and has been found on occasion in horses.
Classification and morphology
Taxonomically, Ross River virus belongs to the virus genus Alphavirus, which is part of the family Togaviridae. The alphaviruses are a group of small enveloped single-strand positive-sense RNA viruses. RRV belongs to a subgroup of "Old World" (Eurasian-African-Australasian) alphaviruses, and belongs to the SF antigenic complex of the genus Alphavirus.
The virions (virus particles) themselves contain their genome in a protein capsid 700 Å in diameter. They are characterised by the presence of two glycoproteins (E1 and E2) embedded as trimeric dimers in a host-derived lipid envelope.
Because RRV is transmitted by mosquitos, it is considered an arbovirus, a non-taxonomic term for viruses borne by arthropod vectors.
History
In 1928, an outbreak of acute febrile arthritis was recorded in Narrandera and Hay in New South Wales, Australia. In 1943, several outbreaks of arthralgia and arthritis were described in the Northern Territory, Queensland and the Schouten Islands, off the northern coast of Papua New Guinea. The name "epidemic polyarthritis" was coined for the disease. In 1956, an epidemic occurred in the Murray Valley which was compared to "acute viral polyarthritis" caused by the Chikungunya virus. The Australian disease seemed to progress in a milder fashion. In 1956, serological testing suggested an unknown new species of alphavirus (group A arbovirus) was the likely culprit.
In July and August of 1956 and 1957, a virus was recovered from mosquitoes collected near Tokyo, Japan, and was dubbed Sagiyama virus. For a time, it was thought to be a separate species, but is now considered conspecific with Ross River virus.
In 1959, a new alphavirus was identified in samples from a mosquito (Aedes vigilax) trapped in the Ross River, located in Townsville, Queensland, Australia. Further serological testing showed that patients who had suffered "epidemic polyarthritis" in Queensland had antibodies to the virus. The new virus was named Ross River virus, and the disease Ross River fever.
The virus itself was first isolated in 1972 using suckling mice. It was found that RRV isolated from human serum could kill mice. However, the serum containing the virus that was used had come from an Aboriginal boy from Edward River, North Queensland. The child had a fever and a rash but no arthritis, making the link between RRV and Ross River fever less than concrete.
The largest-ever outbreak of the virus was in 1979–1980 and occurred in the western Pacific. The outbreak involved the islands of Fiji, Samoa, the Cook Islands, and New Caledonia. However, RRV was later isolated in humans following a series of epidemic polyarthritis outbreaks in Fiji, Samoa and the Cook Islands during 1979. RRV was isolated in an Australian patient suffering from Ross River fever in 1985.
In 2010, Ross River virus was found to have made its way to the Aundh area in Pune, India, and spread to other parts of the city. The RRV infection is characterised by inflammation and pain to multiple joints. Hydration by sufficient fluid intake is recommended, to ensure that the fever does not rise to very dangerous levels. It is also recommended that a doctor be consulted immediately as regular paracetamol gives only temporary reprieve from the fever.
Ecology
In rural and regional areas of Australia, the continued prevalence of Ross River virus is thought to be supported by natural reservoirs such as large marsupial mammals. Antibodies to Ross River virus have been found in a wide variety of placental and marsupial mammals, and also in a few bird species. It is not presently known what reservoir hosts support Ross River virus in metropolitan areas such as Brisbane.
The southern saltmarsh mosquito (Aedes camptorhynchus), which is known to carry the Ross River virus, was discovered in Napier, New Zealand, in 1998. Due to an 11-year program by the New Zealand Ministry of Health, and later the Ministry of Agriculture & Fisheries, the species was declared completely eradicated from New Zealand in July 2010. As of September 2006, there has never been a report of a case of Ross River virus acquired within New Zealand.
Different mosquito species act as vectors in different areas and seasonal/geographical locations. In southern and northern regions, the Aedes group (A. camptorhynchus and A. vigilax) are the main RRV carriers, while inland Culex annulirostris is the main carrier, with Aedes mosquitoes becoming active during wet seasons.
Western Australia
Due to expansion and housing demand in the south west of Western Australia, residential development is occurring closer to wetlands despite the ecosystem being known for mosquito breeding, particularly in the Peel region, where living near water is desirable for its aesthetic value. Over the decade to June 2012 the population increased by 44,000 residents, an average growth rate of 4.5 per cent per annum. In June 2013 the Peel region accounted for approximately five per cent of the State's population and was predicted to account for around 6.7 per cent of Western Australia's population by 2031.
A study compared the risk of contracting Ross River virus (RRV) with the distance of dwellings from Muddy Lakes. Within a one-kilometre buffer zone there were approximately 1,550 mosquitoes in traps per night, 89% of them Ae. camptorhynchus, decreasing to approximately 450 mosquitoes (57% Ae. camptorhynchus) at the six-kilometre buffer zone, although the two-kilometre buffer zone showed a rise to 3,700 mosquitoes (94% Ae. camptorhynchus). The study suggests a significantly higher risk of contracting RRV when living closer to Muddy Lakes.
The same study found a similar trend in the Peel region, with fewer mosquitoes at greater buffer distances.
In 1995–96 Leschenault and Capel-Busselton were affected by an outbreak of 524 cases of RRV disease. Although this outbreak occurred around a decade earlier, analysis of total RRV cases per 1,000 persons for each 500 m buffer zone showed an elevated risk of contracting the disease for those living in close proximity to the Leschenault Estuary, with the strongest disease-risk gradient within 2 km.
Evidence shows a strong correlation between contracting RRV and living in close proximity to wetlands in the south west of Western Australia. Because residential areas around these wetlands continue to grow and develop, problems with RRV disease are expected to continue.
Risks
There are several factors that can contribute to an individual's risk for Ross River virus in Australia. These risks were examined in a study conducted in tropical Australia, which illustrates that factors such as camping, light-coloured clothing, exposure to certain flora and fauna, and specific protective measures can increase or decrease the likelihood of contracting the virus. Increasing the frequency of camping increased an individual's risk eight-fold, suggesting that greater exposure to wildlife increases risk (shown by the study's narrow 95% confidence interval of 1.07–4.35). An individual's exposure to kangaroos, wallabies and bromeliad plants also increased risk, suggesting that these act as reservoirs for infection, breeding sites for mosquitoes and potential sources of the virus. Ross River virus antibodies have been found in captive populations of tammar wallabies and wallaroos in urban areas in New South Wales, Australia, which are therefore potential reservoirs for the virus. Although these settings carry a higher risk for the virus, people can still enjoy the wildlife but should treat preventive measures as increasingly important while camping.
Prevention
Ross River virus can be prevented through small behavioural measures, which should be of high importance in tropical areas and during outdoor activities. Firstly, insect repellent should be used rigorously to prevent bites from insects, particularly mosquitoes, which are the vectors that carry the disease. A study in tropical Australia shows a narrow 95% confidence interval of 0.20–1.00 for a decrease in Ross River virus risk with increased use of insect repellent, suggesting a strong correlation between the two. Burning citronella candles works on the same principle of repelling insect vectors, and also shows a strong correlation with decreased Ross River virus risk in the same study, with a narrow 95% confidence interval of 0.10–0.78. Secondly, wearing light-coloured clothing decreases the risk of Ross River virus three-fold; this again relies on repelling vectors such as mosquitoes through the use of light colours. Lastly, risk in high-risk areas should be minimised by preventive measures applied within households: screens should be fitted to windows and doors to prevent entry of insects carrying the virus, and potential breeding areas such as open water containers or water-holding plants should be removed. Specific climatic environments should therefore be assessed for high-risk factors and the appropriate precautions taken in response.
Lab research
The study of RRV has been recently facilitated by a mouse model. Inbred mice infected with RRV develop hind-limb arthritis/arthralgia. The disease in mice, similar to humans, is characterised by an inflammatory infiltrate including macrophages which are immunopathogenic and exacerbate disease. Furthermore, recent data indicate that the serum component, C3, directly contributes to disease since mice deficient in the C3 protein do not suffer from severe disease following infection.
Symptoms
Ross River virus can cause multiple symptoms in someone who is infected, the most common being arthritis or joint pain. Other symptoms include a rash on the limbs, which often appears roughly 10 days after the arthritis begins. Lymph nodes may enlarge, most commonly in the armpits or groin; rarely, a feeling of 'pins and needles' occurs in the person's hands and feet, but only in a small number of people. The virus also causes moderate symptoms in horses.
The symptoms of Ross River virus are important to recognise for early diagnosis and therefore early treatment. They have been illustrated in a case report of an infected Thuringian traveller returning from south-east Australia. This case showed flu-like symptoms including fever, chills, headache and body aches. Additionally, joint pain arose, with some joints becoming swollen and joint stiffness particularly noticeable. Clinical examination of the infected individual showed a significant decrease of specific antibodies despite normal blood count levels. A rash is likely to occur and is a good indication of infection, but it usually disappears after ten days. Awareness of these symptoms allows early treatment to be administered before the illness worsens. The time between catching the disease and experiencing symptoms is anywhere between three days and three weeks, usually about 1–2 weeks. A person can be tested for Ross River virus by a blood test; other illnesses may need to be excluded before diagnosis.
Diagnosis
Testing for Ross River virus should occur in patients who are experiencing acute polyarthritis, tiredness and/or rashes (~90%) and have a history of travel to areas prone to infection with the virus. Serology (blood testing) is the appropriate way to diagnose Ross River virus. Within 7 days of infection, the immune response produces Immunoglobulin M (IgM) against the virus, and its detection gives a presumptive positive diagnosis. IgM may persist for months or even years, and false positives may be triggered by Barmah Forest virus, rubella, Q fever or rheumatoid factor. To confirm Ross River virus infection, a second serology test must be conducted 10–14 days after the first. The patient may then be declared positive for Ross River virus infection if there is a 4-fold increase of the IgM antibody count.
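As a plain restatement of the decision rule described above, the small Python helper below encodes it; the function name, argument names and return strings are illustrative assumptions, and actual diagnosis of course rests on laboratory standards and clinical judgement rather than a simple threshold.

```python
def rrv_serology_interpretation(igm_first_positive: bool,
                                titre_first: float,
                                titre_second: float) -> str:
    """Interpret paired Ross River virus serology per the rule sketched above.

    igm_first_positive : IgM detected on the first sample (within ~7 days of onset)
    titre_first/second : antibody counts from samples taken 10-14 days apart
    """
    if not igm_first_positive:
        return "negative / consider other causes"
    # IgM alone is only presumptive: it can persist for months or years and
    # cross-react (Barmah Forest virus, rubella, Q fever, rheumatoid factor).
    if titre_second >= 4 * titre_first:
        return "positive for RRV infection (>= 4-fold rise)"
    return "presumptive only - repeat testing or seek an alternative diagnosis"
```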
Ross River fever
Ross River fever is also known as Ross River virus infection or Ross River virus disease. Ross River virus is named after the Ross River in Townsville, which is the place where it was first identified. Ross River fever is the most common mosquito-borne disease in Australia, and nearly 5000 people are reported to be infected with the virus each year.
References
External links
Ross River & Barmah Forest University of Sydney, Department of Medical Entomology
Animal viruses
Human viruses
Species described in 1959
Viruses described in the 20th century
Arboviruses
Alphaviruses | Ross River virus | [
"Biology"
] | 2,702 | [
"Viruses",
"Arboviruses"
] |
1,581,131 | https://en.wikipedia.org/wiki/Electrostatic%20motor | An electrostatic motor or capacitor motor is a type of electric motor based on the attraction and repulsion of electric charge.
An alternative type of electrostatic motor is the spacecraft electrostatic ion drive thruster where forces and motion are created by electrostatically accelerating ions.
Overview
An electrostatic motor is based on the attraction and repulsion of electric charge. Usually, electrostatic motors are the dual of conventional coil-based motors. They typically require a high voltage power supply, although very small motors employ lower voltages. Conventional electric motors instead employ magnetic attraction and repulsion, and require high current at low voltages. In the 1740s and 1750s, the first electrostatic motors were developed by Andrew Gordon and by Benjamin Franklin. Today the electrostatic motor finds frequent use in micro-mechanical (MEMS) systems where their drive voltages are below 100 volts, and where moving, charged plates are far easier to fabricate than coils and iron cores.
Corona-discharge motor
The corona-discharge motor, also known as corona motor, has been known for centuries.
Nanotube nanomotor
In 2004, researchers at University of California, Berkeley, developed rotational bearings based upon multiwall carbon nanotubes. By attaching a gold plate (with dimensions of the order of 100 nm) to the outer shell of a suspended multiwall carbon nanotube (like nested carbon cylinders), they are able to electrostatically rotate the outer shell relative to the inner core. These bearings are very robust; devices have been oscillated thousands of times with no indication of wear. These nanoelectromechanical systems (NEMS) represent a promising direction in miniaturization and may find their way into commercial applications in the future.
Electrostatic ion drive
Electric motors, in general, produce motion when powered by electric currents. The common type of spacecraft ion thruster uses electrostatic forces to accelerate ions, generating thrust to create motion, and thus can be considered an unconventional electric motor.
Gridded electrostatic ion thrusters commonly utilize xenon gas. This gas has no charge and is ionized by bombarding it with energetic electrons. These electrons can be provided from a hot-filament cathode and accelerated in the electrical field of the cathode fall to the anode (Kaufman type ion thruster). Alternatively, the electrons can be accelerated by the oscillating electric field induced by an alternating magnetic field of a coil, which results in a self-sustaining discharge and omits any cathode (radiofrequency ion thruster).
Patents
The prime classifications of electrostatic motors by the USPTO are:
Class 310 ELECTRICAL GENERATOR OR MOTOR STRUCTURE
300 NON-DYNAMOELECTRIC
308 Charge accumulating
309 Electrostatic
-- J. Gallegos -- "Static electric Machine"
-- E. Thomson -- "Electrostatic motor"
-- Harold B. Smith -- "Apparatus for transforming electrical energy into mechanical energy"
-- W. G. Cady -- "Electromechanical System"
-- T. T. Brown -- "Electrostatic motor" (1934-09-25)
-- B. Bollee -- "Electrostatic Motor" (ed. Electrostatics from Atmospheric Electricity)
-- B. Bollee -- "Electrostatic Motor"
-- MITSUBISHI CHEM CORP -- "Electrostatic actuator"
-- Robert, et al. -- "Electrostatic Motor"
See also
Electrostatic generator
Nanomotor
Oxford Electric Bell
References
External articles and further reading
de Queiroz, Antonio Carlos M., "An Electrostatic Linear Motor". 24 January 2002.
William J. Beaty, "Simple Electrostatic Motor".
"ElectrostaticMotor", tm.net.
Fast and Flexible Electrostatic Motors at Univ. Tokyo.
Heavy Lifting Electrostatic Motors at Univ. Tokyo.
E. Sarajlic et al., 3-Phase Electrostatic Stepper Micromotors
Electrostatics
Electric motors | Electrostatic motor | [
"Technology",
"Engineering"
] | 829 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,581,163 | https://en.wikipedia.org/wiki/Piezoelectric%20motor | A piezoelectric motor or piezo motor is a type of electric motor based on the change in shape of a piezoelectric material when an electric field is applied, as a consequence of the converse piezoelectric effect. An electrical circuit makes acoustic or ultrasonic vibrations in the piezoelectric material, most often lead zirconate titanate and occasionally lithium niobate or other single-crystal materials, which can produce linear or rotary motion depending on their mechanism. Examples of types of piezoelectric motors include inchworm motors, stepper and slip-stick motors as well as ultrasonic motors which can be further categorized into standing wave and travelling wave motors. Piezoelectric motors typically use a cyclic stepping motion, which allows the oscillation of the crystals to produce an arbitrarily large motion, as opposed to most other piezoelectric actuators where the range of motion is limited by the static strain that may be induced in the piezoelectric element.
The growth and forming of piezoelectric crystals is a well-developed industry, yielding very uniform and consistent distortion for a given applied potential difference. This, combined with the minute scale of the distortions, gives the piezoelectric motor the ability to make very fine steps. Manufacturers claim precision to the nanometer scale. High response rate and fast distortion of the crystals also let the steps happen at very high frequencies—upwards of 5 MHz. This provides a maximum linear speed of approximately 800 mm per second, or nearly 2.9 km/h.
A unique capability of piezoelectric motors is their ability to operate in strong magnetic fields. This extends their usefulness to applications that cannot use traditional electromagnetic motors—such as inside nuclear magnetic resonance antennas. The maximum operating temperature is limited by the Curie temperature of the used piezoelectric ceramic and can exceed +250 °C.
The main benefits of piezoelectric motors are the high positioning precision, stability of position while unpowered, and the ability to be fabricated at very small sizes or in unusual shapes such as thin rings. Common applications of piezoelectric motors include focusing systems in camera lenses as well as precision motion control in specialised applications such as microscopy.
Resonant motor types
Ultrasonic motor
Ultrasonic motors differ from other piezoelectric motors in several ways, though both typically use some form of piezoelectric material. The most obvious difference is the use of resonance to amplify the vibration of the stator in contact with the rotor in ultrasonic motors.
Two different ways are generally available to control the friction along the stator-rotor contact interface, traveling-wave vibration and standing-wave vibration. Some of the earliest versions of practical motors in the 1970s, by Sashida, for example, used standing-wave vibration in combination with fins placed at an angle to the contact surface to form a motor, albeit one that rotated in a single direction. Later designs by Sashida and researchers at Matsushita, ALPS, Xeryon and Canon made use of traveling-wave vibration to obtain bi-directional motion, and found that this arrangement offered better efficiency and less contact interface wear. An exceptionally high-torque 'hybrid transducer' ultrasonic motor uses circumferentially-poled and axially-poled piezoelectric elements together to combine axial and torsional vibration along the contact interface, representing a driving technique that lies somewhere between the standing and traveling-wave driving methods.
Non-resonant motor types
Inchworm motor
The inchworm motor uses piezoelectric ceramics to push a stator using a walking-type motion. These piezoelectric motors use three groups of crystals—two 'locking', and one 'motive' that permanently connects to either the motor's casing or stator (not both). The motive group, sandwiched between the other two, provides the motion.
The non-powered behaviour of this piezoelectric motor is one of two options: 'normally locked' or 'normally free'. A normally free type allows free movement when unpowered but can still be locked by applying a voltage.
Inchworm motors can achieve nanometre-scale positioning by varying the voltage applied to the motive crystal while one set of locking crystals is engaged.
Stepping actions
The actuation process of the inchworm motor is a multistep cyclical process:
First, one group of 'locking' crystals is activated to lock one side and unlock other side of the 'sandwich' of piezo crystals.
Next, the 'motive' crystal group is triggered and held. The expansion of this group moves the unlocked 'locking' group along the motor path. This is the only stage where the motor moves.
Then the 'locking' group triggered in stage one releases (in 'normally locking' motors, in the other it triggers).
Then the 'motive' group releases, retracting the 'trailing locking' group.
Finally, both 'locking' groups return to their default states.
Stepper or walk-drive motor
Not to be confused with the similarly named electromagnetic stepper motor, these motors are similar to the inchworm motor, however, the piezoelectric elements can be bimorph actuators which bend to feed the slider rather than using a separate expanding and contracting element.
Slip-stick motor
The mechanism of slip-stick motors relies on inertia in combination with the difference between static and dynamic friction. The stepping action consists of a slow extension phase, during which static friction is not overcome, followed by a rapid contraction phase in which static friction is overcome and the point of contact between the motor and the moving part changes.
Direct drive motors
The direct drive piezoelectric motor creates movement through continuous ultrasonic vibration. Its control circuit applies a two-channel sinusoidal or square wave to the piezoelectric elements that matches the bending resonant frequency of the threaded tube—typically an ultrasonic frequency of 40 kHz to 200 kHz. This creates orbital motion that drives the screw.
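As a rough illustration of how a two-channel drive can produce orbital motion, the Python sketch below generates two sinusoids at an assumed resonant frequency, offset in phase so that two orthogonally acting piezo elements would move the tube tip in an orbit. The 100 kHz frequency, the amplitude and the 90° offset are illustrative assumptions, not details of any particular motor or manufacturer's drive scheme.

```python
import numpy as np

def orbital_drive(f_res=100e3, fs=10e6, cycles=5, v_amp=1.0, phase=np.pi / 2):
    """Two-channel sinusoidal drive for a bending-mode piezo tube (sketch).

    f_res : assumed bending resonant frequency of the threaded tube (Hz)
    fs    : waveform-generator sample rate (Hz)
    phase : offset between the channels; ~90 deg gives a circular (orbital) tip path
    """
    t = np.arange(0, cycles / f_res, 1 / fs)
    ch_x = v_amp * np.sin(2 * np.pi * f_res * t)          # drives bending in x
    ch_y = v_amp * np.sin(2 * np.pi * f_res * t + phase)  # drives bending in y
    # If tip deflection is proportional to drive voltage, the point (ch_x, ch_y)
    # traces a circle: the orbital motion that turns the screw.
    return t, ch_x, ch_y
```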
A second drive type, the squiggle motor, uses piezoelectric elements bonded orthogonally to a nut. Their ultrasonic vibrations rotate a central lead screw.
Single action
Very simple single-action stepping motors can be made with piezoelectric crystals. For example, with a hard and rigid rotor-spindle coated with a thin layer of a softer material (like a polyurethane rubber), a series of angled piezoelectric transducers can be arranged (see Fig. 2). When the control circuit triggers one group of transducers, they push the rotor one step. This design cannot make steps as small or precise as more complex designs, but can reach higher speeds and is cheaper to manufacture.
Patents
The first U.S. patent to disclose a vibrationally-driven motor may be "Method and Apparatus for Delivering Vibratory Energy" (U.S. Pat. No. 3,184,842, Maropis, 1965). The Maropis patent describes a "vibratory apparatus wherein longitudinal vibrations in a resonant coupling element are converted to torsional vibrations in a toroid type resonant terminal element." The first practical piezomotors were designed and produced by V. Lavrinenko, starting in 1964, in the Piezoelectronic Laboratory of the Kyiv Polytechnic Institute, USSR. Other important patents in the early development of this technology include:
"Electrical motor", V. Lavrinenko, M. Nekrasov, Patent USSR # 217509, priority May 10, 1965.
"Piezoelectric motor structures" (U.S. Pat. No. 4,019,073, Vishnevsky, et al., 1977)
"Piezoelectrically driven torsional vibration motor" (U.S. Pat. No. 4,210,837, Vasiliev, et al., 1980)
See also
Ultrasonic motor
Ultrasonic Motor Drive as used in the Canon EF Mount
Ultrasonic homogenizer
References
Electric motors | Piezoelectric motor | [
"Technology",
"Engineering"
] | 1,653 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,581,406 | https://en.wikipedia.org/wiki/Congolese%20spotted%20lion | A Congolese spotted lion, also known by the portmanteau lijagulep, is the hybrid of a male lion and female jaguar-leopard hybrid (a jagulep or lepjag). Several lijaguleps have been bred, but only one appears to have been exhibited as a Congolese spotted lion. It was most likely given that name by a showman because the public were more interested in exotic captured animals than in captive-bred hybrids.
The story
The Times (April 15, 1908) pg. 6: A Strange Animal From The Congo: Mr. J. D. Hamlyn, the animal dealer of St. George St., E., who obtained two or three new monkeys from the Congo, has just received from the same region a very curious feline animal nearly as large as an adult lioness, which it resembles in build, but irregularly spotted. There is no trace of a mane or ruff, nor is the tail tufted as in the lion. The general hue is tawny, but with a rufous tinge, reminding one of the coat of a cheetah rather than of the leopard, and the inner sides of the limbs are yellowish white, with dark spots. The markings on the upper surface differ greatly in size and character; on the hind limbs they are large; toward the forequarters and head they diminish in size, but increase greatly in number, and the face is, so to speak, stippled with black, except on the nose. There is a black mark on each side of the lower jaw, and a black stripe on the posterior side of each ear; and along the spine, from the root of the tail to about the centre of the back is a row of dark markings, somewhat like disconnected links of a chain. The hue of the tail for the greater part of its length corresponds to that of the body, but the terminal portion is banded with black and white. The animal, a female, is in excellent condition and fairly quiet. The obvious suggestion is that the animal is a wild-bred hybrid, with a lioness for dam and a spotted cat for sire. Lion-tiger hybrids were bred in this country by Atkins, the proprietor of a famous travelling menagerie; among Continental breeders Carl Hagenbeck has been most successful. A cross between a puma and leopard has also been obtained, but wild-bred hybrids between the larger cats are exceedingly rare. Today the animal will be sent to the Zoological Gardens, where the question of its parentage will be scientifically investigated.
In The Field No 2887, April 25, 1908, the editor wrote: An illustration reproduced from a drawing by Mr F W Frohawk, of the supposed lion-leopard at the Zoological Gardens is now presented, and it will be interesting to compare it with the picture which accompanies the letter on feline hybrids by Mr Scherren. Mr R I Pocock's remarks on this interesting animal which appeared in last week's "Field" leave little more to be said at present. It would certainly appear to be either a hybrid lion-leopard, or else a new species of large leopard, a supposition strengthened by several points of closer resemblance to a leopard than to a lion, and the pattern of the larger rosette markings which are like those of the snow leopard (Felis uncia). It may be well to note in further detail the colouring and markings of the beast. The ground colour is a pale tawny-buff, blending into creamy-white on the undersurface of the body; chin, throat, chest, inside of legs and undersurface of end of tail white; the whole surface of the body and legs is spotted similarly to a leopard and snow leopard, the head and neck being less plainly marked; all the markings on the upper parts are pale, dusky, developing into black below, and deep black on both surfaces of the legs. An important feature is the pattern of the larger rosette markings, which are similar to those of the snow leopard, being composed of smaller spots, but forming larger rosettes than is usual in the ordinary leopard. Excepting the crown, the spots on the head, and especially on the neck, are small, and more or less indistinct. The fore paws, rump and basal three-fourths of the tail are much like those of a lioness in form, but the end of the tail, although less ample, is marked like a snow leopard's. The black angle of the mouth is similar to that of a leopard; the nose is dull-pink and eyes pale ochreous, like most leopards; but the general squareness of the head and rather large ears are more lion like. At times, when the animal is standing slackly, she is hollow backed, but usually the back is as shown in the drawing. The idea that this animal may possibly prove to be a new species of great cat will not be generally entertained; but it must be remembered that a far more conspicuous creature, to wit, the okapi, has only been made known to us of late years, and that it is possible that such an animal as that now in question frequenting, as it would undoubtedly do, dense forest regions, and being of nocturnal habits, might have escaped observation.
The possibility was considered that the Congolese spotted lion may have been part cheetah or a new species of leopard. In The Field No 2887, April 25, 1908, Henry Scherren wrote: In all probability the interesting animal now in the lion house of the Zoological Gardens is the only feline hybrid yet exhibited for which the claim has been advanced that it was wild bred. The story of its origin as told to Mr Hamlyn, and given by Mr Pocock in his letter, is of considerable interest; but, in my opinion, the interest will be heightened when the gentleman by whom the animal was consigned to Mr Hamlyn gives us full particulars. On the view that this animal was bred in the Congo, with a lioness for dam, there could have been but two possible sires: the leopard or the cheetah. Mr Pocock has given his reasons for accepting the former, but I think he will admit that there is a superficial resemblance to the latter. Conclusive evidence against the supposition that a cheetah had any share in the parentage - though it occurred to me when I first saw the animal in her travelling box - is afforded by the size of the head, the massive forelimbs, and the retractile claws. Certain difficulties, however, present themselves with regard to the story that the animal was wild bred. One was pointed out by Mr Pocock in his remark that 'if representatives of the two species were to meet the encounter would be more likely to end in the death of the leopard than in the establishment of friendly relations between them.' Next, I cannot equate the appearance of the animal in regard to age and development with the scanty details that have been given to Mr Hamlyn. Granting that it was even two years old when brought by the natives to the French trading settlement, the two years spent in confinement in Africa and the time occupied in the passage to Europe do not, in my opinion, account for the whole span of its existence. And from what I saw of the hybrid before she was unpacked, and afterwards in one of the spacious dens in the lion house, I should have come to the conclusion that she was well used to being exhibited but for the assurance from Mr Hamlyn that this was not the case. Everybody who has seen the animal will agree with Mr Pocock that it is of the highest interest, and it is to be hoped that it will remain in its present quarters.
The Times (Monday May 4, 1908) pg. 12: Sale Of A Supposed Congo Hybrid. After having been on view in the lion house at the Zoological Garden for about a fortnight the feline hybrid described in The Times of April 15 was sold by auction at Aldridge's on Saturday. The attendance was very large; among those present were a good many showmen. Bidding began at 100 guineas, and eventually the animal was knocked down by Mr. Bostock at 1,030 guineas. One of the conditions of sale appears somewhat strange for it disclaimed any guarantee as to the animal's breeding, age, or any other description. This would seem to show that the story of the animal, as told to Mr. Hamlyn by the original owner, has not been verified. The story was that the hybrid had been brought as a cub by natives to a French trading settlement somewhere up country from the Gabon, and kept in captivity for about two years before being transported to the West Coast and shipped to Europe in a French boat. At any rate the responsible officials at the Zoological Society do not appear to have been convinced by the story, or they would probably have made some offer for the animal, for which about 500 pounds was asked a fortnight ago. The keepers in the lion house maintain a strong opinion that it was bred in a menagerie; the responsible officials are more reticent on the subject and prefer to wait for evidence as to the place of birth, and the species from which the creature was bred.
The real story
Three jaguar/leopardess hybrids were bred in Chicago, United States, possibly at Lincoln Park Zoo. These were sold to a traveling menagerie and one was displayed at London Zoo and White City (in London). The female jaguleps had refused to mate with a leopard, but one female was mated to a lion and produced several litters. One of the offspring was exhibited in London in 1908 and was claimed to be a type of lion. It was the size of a lioness and had brown rosettes or spots. It is not noted whether the other lijagulep cubs survived to adulthood.
In The Field No 2889, May 9, 1908, R I Pocock wrote: Sir - Since you were good enough to publish in the Field of April 18 my description of the supposed lion-leopard hybrid with its history as issued to the press by Mr Hamlyn, I should like to give what I am convinced is the true story of its origin and antecedents, so as to lay at rest once and for all the idea that it was a natural product of the French Congo. Some years ago, three hybrids, one male, two female, were bred in Chicago from a male jaguar and an Indian leopardess, and were bought by the proprietor of an American traveling show of performing animals. The male was killed by a lion, but the females lived, grew to the size of a jaguar, and when adult were mated with a young lion, choosing him, it is said, in preference to male leopards. Several litters were born, each consisting of two cubs. These resembled a lion in general colour, but were spotted. In the case of every litter the spots of one cub were like those of the jaguar, and the others like those of a leopard. The males were without mane. At the end of last year, some of these animals, then about three and a half years old, were alive in the United States. These facts I can vouch for on first hand authority. My conviction that the animal recently exhibited in the Zoological Gardens is one of those hybrids with jaguar-like spots is a conclusion deduced from a combination of circumstances, partly from a knowledge of the recent importation by Mr Bostock from America of a number of animals for the exhibition at Earl's Court, partly from a clue supplied to me by Mr Carl Hagenbeck, who predicted almost to the letter the outcome of the sale, partly from overheard remarks let drop at the auction at Aldridge's, and finally from the fact that the animal was knocked down to Mr Bostock for a sum representing ten times its market value. With the above-mentioned facts before them, your readers will be able to piece in the details of the entire transaction without further comment on my part. From a scientific standpoint the animal gains interest from a knowledge of its true nature. Not suspecting three species to be involved in its parentage, I was not quite right in determining it as a lion-leopard hybrid, although the spots, as I stated, obviously recall those of a jaguar. I dismissed that species in considering its pedigree on account of its comparatively slender build and the great length of the tail. The elimination of the shortness of the tail and of the sturdiness in shape of the jaguar is not surprising, however, seeing that these characters are only found in one out of the three parent forms.
The male lijagulep hybrid was said to have been killed by a lion while on display in Glasgow. In his comparison of a leopon with a lijagulep, R I Pocock wrote in The Field (2 November 1912): The nearest approach to [a lion-leopard] hybrid hitherto reported is the one bred at Chicago between a male lion and a female cross between a jaguar and a leopard, the true story of which, accompanied by a good figure by Mr Frohawk, may be found in the Field for April 18 and 25, and May 9, 1908. The final episode in the history of that animal has, I believe, not yet been told. After being exhibited in the Zoological Gardens and at the White City it went to Glasgow, where, according to a sensational Press notice, it was killed by a lion, which broke down the partition between the cages and made short work of its opponent. That this story was of a piece with the original account of the hybrid given out when it first appeared on the market may be inferred from the condition of the dressed skin, which had no sign of a tear or scratch upon it in London shortly after the alleged tragedy. The chief difference between this hybrid of three species and the lion-leopard born at Kolhapur lies in the size of the spots, those of the [lijagulep] being large and jaguar-like, as might be expected, while those of the [leopon] are small and more leopard-like.
The skin of the killed lijagulep went on sale in London shortly after the alleged tragedy and as noted in 1968 by German cat specialist Dr Helmut Hemmer, it appears to be this skin, mounted in a standing pose very closely corresponding with the illustration by Frohawk, that is displayed at France's National Museum of Natural History. In addition, there is a mounted jaguar-lion hybrid, preserved in a lying-down pose with head raised, at the Walter Rothschild Zoological Museum, Tring, England.
Contemporary comparison with lion and leopard hybrids
To put the Congolese spotted lion into its proper context as a hybrid, lions had been hybridized with several big cat species, several of which are mentioned in the media accounts of the Congolese spotted lion, supporting the theory that the animal was a hybrid.
with leopards to produce leopons and lipards - RI Pocock compared the appearance of the lijagulep to that of a leopon in The Field of 2 November 1912.
with tigers to produce ligers and tigons - The Times article of April 15, 1908 mentions these as part of its report on the Congolese spotted lion
with jaguars to produce jaglions - described by H Hemmer in his analysis of the skin
A mounted specimen labelled as a jaguar-lion hybrid is displayed at the Rothschild Museum in Tring, England. The Paris specimen is the closest we have to an impression of how the Congolese spotted lion may have looked when alive. The age and pose of this specimen suggests it is the skin of the female lijagulep killed in Glasgow. Hemmer identified it as being either lion x jaguar or being lion x (leopard x jaguar).
Leopards have been crossbred with jaguars to produce jaguleps (also known as leguars or lepjags), one such was the dam of the lijagulep. As mentioned in the quote from The Times as evidence in favour of the cat being a hybrid, leopards had also been crossed with pumas (see pumapards).
Fertility and breeding
In general, male big cat hybrids are sterile while female big cat hybrids are fertile and may be bred back to one of the parental species or to another big cat species, as was the case with the Congolese spotted lion (a 3-species complex hybrid).
In general, hybrids are no longer bred by zoos as the current emphasis is on conservation of pure species. The only hybrid big cats commonly and deliberately bred in recent times are ligers. It is unlikely that further lijaguleps will be bred. In theory, if such a hybrid were produced again and proved fertile, crossing it with a tiger could yield an interesting four-species hybrid.
See also
Marozi
References
R. I. Pocock: 1908. "Hybrid Lion and Leopard". The Field, April 18, 1908.
R. I. Pocock: The Field no. 2889, May 9, 1908
R. I. Pocock: "The Supposed Lion and Leopard Hybrid". 1908
Henry Scherren. The Field no. 2887, April 25, 1908.
The Field (letters): April 25, 1908.
R. I. Pocock: (letter), The Field, 2 November 1912.
C.A.W. Guggisberg (1975) Wild Cats Of The World. Taplinger Pub Co.
Helmut Hemmer: "Report on a Hybrid Between Lion x Jaguar x Leopard - Panthera leo x Panthera onca x Panthera pardus" (Saeugetierkundliche-Mitteilungen, 1968; 16(2): 179-182)
Dr. Karl Shuker (1989) Mystery Cats of the World. Robert Hale: London. page 173.
External links
Jaguar & Leopard Hybrids (licensed under GFDL).
Detailed information on hybridisation in big cats. Includes tigons, ligers, leopons and others.
Panthera hybrids
Second-generation hybrids | Congolese spotted lion | [
"Biology"
] | 3,726 | [
"Second-generation hybrids",
"Hybrid organisms"
] |
1,581,427 | https://en.wikipedia.org/wiki/Bereitschaftspotential | In neurology, the Bereitschaftspotential or BP (German for "readiness potential"), also called the pre-motor potential or readiness potential (RP), is a measure of activity in the motor cortex and supplementary motor area of the brain leading up to voluntary muscle movement. The BP is a manifestation of cortical contribution to the pre-motor planning of volitional movement. It was first recorded and reported in 1964 by Hans Helmut Kornhuber and Lüder Deecke at the University of Freiburg in Germany. In 1965 the full publication appeared after many control experiments.
Discovery
In the spring of 1964 Hans Helmut Kornhuber (then docent and chief physician at the department of neurology, head Professor Richard Jung, university hospital Freiburg im Breisgau) and Lüder Deecke (his doctoral student) went for lunch to the 'Gasthaus zum Schwanen' at the foot of the Schlossberg hill in Freiburg. Sitting alone in the beautiful garden they discussed their frustration with the passive brain research prevailing worldwide and their desire to investigate self-initiated action of the brain and the will. Consequently, they decided to look for cerebral potentials in man related to volitional acts and to take voluntary movement as their research paradigm.
The possibility of doing research on electrical brain potentials preceding voluntary movements came with the advent of the 'computer of average transients' (CAT computer), invented by Manfred Clynes, the first, still simple, instrument of this kind available at that time in the Freiburg laboratory. In the electroencephalogram (EEG) little is to be seen preceding actions, except for an inconstant diminution of the α- (or μ-) rhythm. The young researchers stored the electroencephalogram and electromyogram of self-initiated movements (fast finger flexions) on tape and analyzed the cerebral potentials preceding the movements in reversed time, with the start of the movement as the trigger, literally turning the tape over for analysis since they had no reversal playback or programmable computer. A potential preceding human voluntary movement was discovered and published in the same year. After detailed investigation and control experiments, such as passive finger movements, the Citation Classic with the term Bereitschaftspotential was published.
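A minimal sketch of the averaging principle behind the discovery: epochs of EEG are aligned to movement onset (taken from the EMG) and averaged, so that the small readiness potential emerges from the much larger background activity. The Python/NumPy code below is illustrative only; the sampling rate, epoch window and baseline choice are assumptions, not the parameters of the original Freiburg recordings.

```python
import numpy as np

def average_bp(eeg, emg_onsets, fs=250, pre=1.5, post=0.5):
    """Average EEG epochs time-locked to movement onset (EMG trigger).

    eeg        : 1-D array, continuous EEG from one electrode, in microvolts
    emg_onsets : sample indices of EMG onset for each self-initiated movement
    fs         : sampling rate in Hz (illustrative value)
    pre, post  : seconds before/after onset to include in each epoch
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in emg_onsets:
        if onset - n_pre < 0 or onset + n_post > len(eeg):
            continue  # skip movements too close to the recording edges
        epoch = eeg[onset - n_pre: onset + n_post]
        epoch = epoch - epoch[: n_pre // 3].mean()  # baseline on the earliest part
        epochs.append(epoch)
    # Averaging cancels activity not time-locked to the movement,
    # leaving the slow negative shift (the Bereitschaftspotential).
    avg = np.mean(epochs, axis=0)
    t = np.arange(-n_pre, n_post) / fs  # time axis, 0 = movement onset
    return t, avg

# Usage with synthetic data: 10 minutes of noise plus 40 hypothetical movement onsets
rng = np.random.default_rng(0)
eeg = rng.normal(0, 10, 250 * 600)
onsets = rng.integers(1000, len(eeg) - 1000, size=40)
t, bp = average_bp(eeg, onsets)
```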
Mechanism
The BP is ten to a hundred times smaller than the α-rhythm of the EEG; it becomes apparent only by averaging, with the electrical potentials time-locked to the onset of the movement. The figure shows the typical slow shifts of the cortical DC potential, called the Bereitschaftspotential, preceding volitional, rapid flexions of the right index finger. The vertical line indicates the instant of triggering t = 0 (first activity in the EMG of the agonist muscle). Recording positions are left precentral (L prec, C3), right precentral (R prec, C4), mid-parietal (Pz); these are unipolar recordings with linked ears as reference. The difference between the BP in C3 and in C4 is displayed in the lowest graph (L/R prec). Superimposed are the results of eight experiments obtained in the same subject (B.L.) on different days; see Deecke, L.; Grözinger, B.; Kornhuber, H.H. (1976).
Note that the BP has two components, the early one (BP1) lasting from about −1.2 to −0.5; the late component (BP2) from −0.5 to shortly before 0 sec. The pre-motion positivity is even smaller, and the motor-potential which starts about fifty to sixty milliseconds before the onset of movement and has its maximum over the contralateral precentral hand area is still smaller. Thus, it takes great care to see these potentials: exact triggering by the real onset of movement is important, which is especially difficult preceding speech movements. Furthermore, artifacts due to head-, eye-, lid-, mouth-movements and respiration have to be eliminated before averaging because such artifacts may be of a magnitude which makes it difficult to render them negligible even after hundreds of sweeps. In the case of eye movements eye muscle potentials have to be distinguished from cerebral potentials. In some cases animal experiments were necessary to clarify the origin of potentials such as the R-wave. Therefore, it took many years until some of the other laboratories were able to confirm the details of Kornhuber & Deecke's results. In addition to the finger or eye movements as mentioned above, the BP has been recorded accompanying willful movements of the wrist, arm, shoulder, hip, knee, foot and toes. It was also recorded prior to speaking, writing and also swallowing.
The magnetoencephalographic (MEG) equivalent of the Bereitschaftspotential (BP), 'Bereitschafts(magnetic)field' (BF), or readiness field (RF) was first recorded in Hal Weinberg's laboratory at Simon Fraser University Burnaby B.C. Canada in 1982. It was confirmed that the early component, BP 1 or BF1, respectively was generated by the supplementary motor area (SMA), including the pre-SMA, while the late component, BP2 or BF2, was generated by the primary motor area, MI.
A very similar event-related potential (ERP) component had earlier been discovered by the British neurophysiologist William Grey Walter in 1962 and published in 1964. It is the contingent negative variation (CNV). The CNV also comprises two waves: the initial wave (i.e., O wave) and the terminal wave (i.e., E wave). The terminal CNV has similar characteristics to the BP, and many researchers have claimed that the BP and the terminal CNV are the same component. At least there is a consensus that both indicate a preparation of the brain for a following behavior.
Outcomes
The Bereitschaftspotential was received with great interest by the scientific community, as reflected by Sir John Eccles's comment: "There is a delightful parallel between these impressively simple experiments and the experiments of Galileo Galilei who investigated the laws of motion of the universe with metal balls on an inclined plane". The interest was even greater in psychology and philosophy because volition is traditionally associated with human freedom (cf. Kornhuber 1984). The spirit of the time, however, was hostile to freedom in those years; it was believed that freedom is an illusion. The tradition of behaviourism and Freudism was deterministic. While will and volition were frequently leading concepts in psychological research papers before and after the first world war and even during the second war, after the end of the second world war this declined, and by the mid-sixties these key words completely disappeared and were abolished in the thesaurus of the American Psychological Association. The BP is an electrical sign of participation of the supplementary motor area (SMA) prior to volitional movement, which starts activity prior to the primary motor area. The BP has precipitated a worldwide discussion about free will (cf. the closing chapter in the book "The Bereitschaftspotential").
As said above, the activity of the SMA generates the early component of the Bereitschaftspotential (BP1 or BP early). The SMA has the starting function of the movement or action. The role of the SMA was further substantiated by Cunnington et al. 2003, showing that SMA proper and pre-SMA are active prior to volitional movement or action, as well as the cingulate motor area (CMA). This is now called ‘anterior mid-cingulate cortex (aMCC)’. Recently it has been shown by integrating simultaneously acquired EEG and fMRI that SMA and aMCC have strong reciprocal connections that act to sustain each other’s activity, and that this interaction is mediated during movement preparation according to the Bereitschaftspotential amplitude.
EEGs and EMGs are used in combination with Bayesian inference to construct Bayesian networks that attempt to predict general patterns of motor-intent neuron action-potential firing. Researchers attempting to develop non-intrusive brain–computer interfaces are interested in this, as are researchers in systems analysis, operations research, and epistemology (e.g. the Smith predictor has been suggested in the discussion).
BP and free will
In a series of neuroscience-of-free-will experiments in the 1980s, Benjamin Libet studied the relationship between the conscious experience of volition and the BP, and found that the BP started about 0.35 sec earlier than the subject's reported conscious awareness that "now he or she feels the desire to make a movement." Libet concluded that we have no free will in the initiation of our movements; though, since subjects were able to prevent an intended movement at the last moment, we do have the ability to veto these actions ("free won't").
These studies have provoked widespread debate.
In 2016, a group around John-Dylan Haynes in Berlin (Germany) determined the time window after the BP in which an intended motion could still be cancelled upon command. The authors tested whether human volunteers could win a "duel" against a BCI (brain–computer interface) designed to predict their movements in real time from observations of their EEG activity (the BP). They aimed to determine the exact time after which cancellation (veto) of a movement was no longer possible (the point of no return). The computer was trained to predict, by means of the BP, when a participant would move. The point of no return was at 200 ms before the movement. However, even after that, when a pedal was already set in motion, the subjects were able to reschedule their action by not completing the already started behavior. The authors pointed out in their report that cancellation of self-initiated movements had already been reported by Libet in 1985. Thus, the new achievement was a more precise determination of the point of no return.
Applications
An interesting use of the Bereitschaftspotential is in brain–computer interface (BCI) applications; this signal feature can be identified from scalp recording (even from single-trial measurements) and interpreted for various uses, for example control of computer displays or control of peripheral motor units in spinal cord injuries. The most important BCI application is the 'mental' steering of artificial limbs in amputees.
See also
C1 and P1
Contingent negative variation
Difference due to memory
Early left anterior negativity
Epiphenomenalism
Error-related negativity
Late positive component
Lateralized readiness potential
Mismatch negativity
N2pc
N100
N170
N200
N400
P3a
P3b
P200
P300 (neuroscience)
P600
Somatosensory evoked potential
Visual N1
References
Further reading
Brunia CHM, van Boxtel GJM, Böcker KBE: Negative Slow Waves as Indices of Anticipation: The Bereitschaftspotential, the Contingent Negative Variation, and the Stimulus-Preceding Negativity. In: Steven J. Luck, Emily S. Kappenman (Eds.): The Oxford Handbook of Event-Related Potential Components. Oxford University Press, USA 2012, pp. 189–207.
Deecke, L.; Kornhuber, H.H. (2003). Human freedom, reasoned will, and the brain. The Bereitschaftspotential story. In: M Jahanshahi, M Hallett (Eds.): The Bereitschaftspotential, movement-related cortical potentials. Kluwer Academic / Plenum Publishers pp. 283–320.
Kornhuber HH; Deecke L (2012) The Will and Its Brain: An Appraisal of Reasoned Free Will. University Press of America, Lanham MD USA
Wise SP: Movement selection, preparation, and the decision to act: neurophysiological studies in nonhuman primates. In: Marjan Jahanshahi, Mark Hallett (Eds.): The Bereitschaftspotential: Movement-Related Cortical Potentials. Kluwer Academic / Plenum Publishers, New York 2003, pp. 249–268.
Nann M, Cohen LG, Deecke L & Soekadar SR: To jump or not to jump – The Bereitschaftspotential required to jump into 192-meter abyss. Scientific Reports (2019) 9:2243 https://doi.org/10.1038/s41598-018-38447-w
External links
http://www.cmds.canterbury.ac.nz/documents/huckabee_swallowing.pdf
http://www.cs.washington.edu/homes/rao/shenoy_rao05.pdf
Somatic motor system
History of neuroscience
Brain–computer interface
Motor control
Electroencephalography
Evoked potentials | Bereitschaftspotential | [
"Biology"
] | 2,710 | [
"Behavior",
"Motor control"
] |
1,581,463 | https://en.wikipedia.org/wiki/Yakalo | The yakalo is a cross of the yak (Bos grunniens) and the American bison (Bison bison, known as a buffalo in North America). It was produced by hybridisation experiments in the 1920s, when crosses were made between yak bulls and both pure bison cows and bison–cattle hybrid cows. As with many other inter-specific crosses, only female hybrids were found to be fertile (Haldane's rule). Few of the hybrids survived, and the experiments were discontinued in 1928.
See also
Beefalo
Dzo
Żubroń
Footnotes
1920s introductions
Bovid hybrids
Intergeneric hybrids
Yaks
American bison | Yakalo | [
"Biology"
] | 132 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
1,581,568 | https://en.wikipedia.org/wiki/PSR%20J0737%E2%88%923039 | PSR J0737−3039 is the first known double pulsar. It consists of two neutron stars emitting electromagnetic waves in the radio wavelength in a relativistic binary system. The two pulsars are known as PSR J0737−3039A and PSR J0737−3039B. It was discovered in 2003 at Australia's Parkes Observatory by an international team led by the Italian radio astronomer Marta Burgay during a high-latitude pulsar survey.
Pulsars
A pulsar is a neutron star which produces pulsating radio emission due to a strong magnetic field. A neutron star is the ultra-compact remnant of a massive star which exploded as a supernova. Neutron stars have masses greater than the Sun's, yet are only around twenty kilometers across. These extremely dense objects rotate on their axes, producing focused beams of electromagnetic waves which sweep around the sky and briefly point toward Earth in a lighthouse effect, at rates that can reach a few hundred pulses per second.
Although double neutron star systems were known before its discovery, PSR J0737−3039 is the first and only known system where both neutron stars are pulsars – hence, a "double pulsar" system. The object is similar to PSR B1913+16, which was discovered in 1974 by Russell Hulse and Joseph Taylor, who won the 1993 Nobel Prize in Physics for the discovery. Objects of this kind enable precise testing of Einstein's theory of general relativity, because the precise and consistent timing of the pulsar pulses allows relativistic effects to be seen when they would otherwise be too small. While many known pulsars have a binary companion, and many of those are believed to be neutron stars, J0737−3039 is the first case where both components are known to be not just neutron stars but pulsars.
Discovery
PSR J0737−3039A was discovered in 2003, along with its partner, at Australia's 64 m antenna of the Parkes Radio Observatory; J0737−3039B was not identified as a pulsar until a second observation. The system was originally observed by an international team during a high-latitude multibeam survey organized in order to discover more pulsars in the night sky.
Initially, this star system was thought to be an ordinary pulsar detection. The first detection showed one pulsar with a period of 23 milliseconds in orbit around a neutron star. Only after follow up observations was a weaker second pulsar detected with a pulse of 2.8 seconds from the companion star.
Physical characteristics
The orbital period of J0737−3039 (2.4 hours) is one of the shortest known for such an object (one-third that of the Taylor–Hulse binary), which enables the most precise tests yet. In 2005, it was announced that measurements had shown an excellent agreement between general relativity theory and observation. In particular, the predictions for energy loss due to gravitational waves appear to match the theory.
As a result of energy loss due to gravitational waves, the common orbit shrinks by roughly 7 mm per day. The two components will coalesce in about 85 million years.
Property         Pulsar A               Pulsar B
Spin period      22.699 milliseconds    2.773 seconds
Mass             1.337 solar masses     1.250 solar masses
Orbital period   2.454 hours (8834.53499 seconds), common to both pulsars
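The quoted shrinkage rate and coalescence time can be checked, to leading order, with the Peters (1964) quadrupole formulae and the masses and period from the table above. The orbital eccentricity is not listed in the table, so the published value of about 0.088 is assumed; this is a rough sketch, not a substitute for the full relativistic timing analysis.

```python
import math

G, c = 6.674e-11, 2.998e8               # SI units
M_sun = 1.989e30
m1, m2 = 1.337 * M_sun, 1.250 * M_sun   # pulsar masses from the table above
P = 8834.53499                          # orbital period in seconds, from the table
e = 0.088                               # eccentricity: not in the table, published value assumed

# Kepler's third law: semi-major axis of the relative orbit
a = (G * (m1 + m2) * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Peters (1964) quadrupole formulae with the leading-order eccentricity factor
enh = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5
shrink = (64 / 5) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a**3) * enh      # m/s
t_merge = (5 / 256) * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2)) / enh    # s

print(f"separation        ~ {a / 1e3:,.0f} km")
print(f"orbit shrinks by  ~ {shrink * 86400 * 1e3:.1f} mm per day")
print(f"coalescence in    ~ {t_merge / 3.156e7 / 1e6:.0f} million years")
```

Run as written, this gives a separation of roughly 880,000 km, a shrinkage of about 7 mm per day, and a coalescence time of roughly 85 million years, consistent with the figures quoted above.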
Due to relativistic spin precession, the pulses from Pulsar B are no longer detectable but are expected to reappear in 2035 due to precession back into view.
Use as a test of general relativity
Observations of 16 years of timing data were reported in 2021 to be in agreement with general relativity, based on the loss of orbital energy to gravitational waves. The orbital decay and the corresponding speed-up of the orbital motion were found to follow the quadrupole formula to a precision of 0.013%, mainly because of the unique characteristics of the system, which contains two pulsars, is nearby, and has an inclination close to 90°.
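The quadrupole-formula prediction referred to here is, to leading order, the standard Peters–Mathews expression for the decay of the orbital period (a sketch; $m_A$ and $m_B$ are the two pulsar masses in the same unit system as $G$ and $c$, and $e$ is the orbital eccentricity):

$$\dot{P}_b \;=\; -\,\frac{192\pi\, G^{5/3}}{5\,c^{5}} \left(\frac{P_b}{2\pi}\right)^{-5/3} \frac{1 + \tfrac{73}{24}e^{2} + \tfrac{37}{96}e^{4}}{\left(1 - e^{2}\right)^{7/2}}\; \frac{m_A\, m_B}{\left(m_A + m_B\right)^{1/3}}$$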
Unique origin
In addition to the importance of this system to tests of general relativity, Piran and Shaviv have shown that the young pulsar in this system must have been born with no mass ejection, implying a new process of neutron star formation that does not involve a supernova. Whereas the standard supernova model predicts that the system will have a proper motion of more than a hundred km/s, they predicted that this system would not show any significant proper motion. Their prediction was later confirmed by pulsar timing.
Eclipses
Another discovery from the double pulsar is the observation of an eclipse that occurs when the two pulsars pass through conjunction as seen from Earth. This happens when the doughnut-shaped magnetosphere of the weaker pulsar, which is filled with absorbing plasma, blocks the stronger pulsar's light. The blockage, lasting more than 30 s, is not complete, due to the orientation of the plane of rotation of the binary system relative to Earth and the limited size of the weaker pulsar's magnetosphere; some of the stronger pulsar's light can still be detected during the eclipse.
Other binary systems
In addition to the double pulsar system, a whole range of differing two-body systems is known in which only one member of the system is a pulsar. Known examples are variations on a binary star:
A pulsar–white dwarf system; e.g., PSR B1620−26.
A pulsar–neutron star system; e.g., PSR B1913+16.
A pulsar and a normal star; e.g., PSR J0045−7319, a system composed of a pulsar and a main-sequence B star.
Theoretically, a pulsar-black hole system is possible and would be of enormous scientific interest but no such system has yet been identified. A pulsar has recently been detected very near the super-massive black hole at the core of our galaxy, but its motion has not yet been officially confirmed as a capture orbit of Sgr A*. A pulsar–black hole system could be an even stronger test of Einstein's theory of general relativity, due to the immense gravitational forces exerted by both celestial objects.
Also of great scientific interest is PSR J0337+1715, a pulsar-white dwarf binary system that has a third white dwarf star in a more distant orbit circling around both of the other two. This unique arrangement is being used to explore the strong equivalence principle of physics, a fundamental assumption upon which all of general relativity rests.
The Square Kilometre Array, a radio telescope due to be completed in the late 2020s, will both further observe known and detect new binary pulsar systems in order to test general relativity.
See also
Radio astronomy
References
External links
Puppis
Pulsars
Double neutron star systems | PSR J0737−3039 | [
"Astronomy"
] | 1,516 | [
"Puppis",
"Constellations"
] |
1,581,694 | https://en.wikipedia.org/wiki/Methyl%20jasmonate | Methyl jasmonate (abbreviated MeJA) is a volatile organic compound used in plant defense and many diverse developmental pathways such as seed germination, root growth, flowering, fruit ripening, and senescence. Methyl jasmonate is derived from jasmonic acid and the reaction is catalyzed by S-adenosyl--methionine:jasmonic acid carboxyl methyltransferase.
Description
Plants produce jasmonic acid and methyl jasmonate in response to many biotic and abiotic stresses (in particular, herbivory and wounding), which build up in the damaged parts of the plant. The methyl jasmonate can be used to signal the original plant's defense systems or it can be spread by physical contact or through the air to produce a defensive reaction in unharmed plants. The unharmed plants absorb the airborne MeJA through either the stomata or diffusion through the leaf cell cytoplasm. An herbivorous attack on a plant causes it to produce MeJA both for internal defense and for a signaling compound to other plants.
Defense chemicals
MeJA can induce the plant to produce multiple different types of defense chemicals such as phytoalexins (antimicrobial), nicotine or protease inhibitors. The protease inhibitors interfere with the insect digestive process and discourage the insect from eating the plant again.
MeJA has been used to stimulate traumatic resin duct production in Norway spruce trees. This can be used as a defense against many insect attackers as a type of vaccine.
Experiments
External application of methyl jasmonate has been shown to induce plant defensive responses against both biotic and abiotic stressors. When treatments of methyl jasmonate were applied to Picea abies (Norway spruce), the accumulation of monoterpene and sesquiterpene compounds doubled in the spruce needle tissues, a response that normally is only triggered when the tissue is damaged.
In an experiment testing the effect of methyl jasmonate treatments on drought tolerance, strawberry plants were shown to alter their metabolism and were better able to withstand water stress and drought conditions by lowering the amount of transpiration, and membrane-lipid peroxidation.
External application of methyl jasmonate has also shown a propensity for inducing an increased resistance to insect herbivory in some agricultural crops, such as brassicas and tobacco. Plants treated with methyl jasmonate and exposed to insect herbivores had significantly lower levels of herbivory, and the insect herbivores had slower development, when compared to untreated plants.
In recent experiments, methyl jasmonate has been shown to be effective at preventing bacterial growth in plants when applied in a spray to the leaves. The antibacterial effect is thought to be because of methyl jasmonate inducing resistance.
MeJA is also a plant hormone involved in tendril coiling, flowering, and seed and fruit maturation. An increase of the hormone affects flowering time, flower morphology and the number of open flowers. MeJA induces ethylene-forming enzyme activity, which increases the amount of ethylene to the level necessary for fruit maturation.
Increased amounts of methyl jasmonate in plant roots have shown to inhibit their growth. It is predicted that the higher amounts of MeJA activate previously unexpressed genes within the roots to cause the growth inhibition.
Cancer cells
Methyl jasmonate induces cytochrome C release in the mitochondria of cancer cells, leading to cell death, but does not harm normal cells. Specifically, it can cause cell death in B-cell chronic lymphocytic leukemia cells taken from human patients with this disease and then treated in tissue culture with methyl jasmonate. Treatment of isolated normal human blood lymphocytes did not result in cell death.
See also
Jasmonate
Methyl dihydrojasmonate
References
External links
General information about methyl jasmonate
Jasmonate: pharmaceutical composition for treatment of cancer. US Patent Issued on October 22, 2002
Plant stress hormones suppress the proliferation and induce apoptosis in human cancer cells, Leukemia, Nature, April 2002, Volume 16, Number 4, Pages 608–616
Jasmonates induce nonapoptotic death in high-resistance mutant p53-expressing B-lymphoma cells, British Journal of Pharmacology (2005) 146, 800–808. ; published online 19 September 2005
Acetate esters
Plant hormones
Ketones
Alkene derivatives
Methyl esters | Methyl jasmonate | [
"Chemistry"
] | 924 | [
"Ketones",
"Functional groups"
] |
1,581,752 | https://en.wikipedia.org/wiki/Protein%20design | Protein design is the rational design of new protein molecules to design novel activity, behavior, or purpose, and to advance basic understanding of protein function. Proteins can be designed from scratch (de novo design) or by making calculated variants of a known protein structure and its sequence (termed protein redesign). Rational protein design approaches make protein-sequence predictions that will fold to specific structures. These predicted sequences can then be validated experimentally through methods such as peptide synthesis, site-directed mutagenesis, or artificial gene synthesis.
Rational protein design dates back to the mid-1970s. Recently, however, there were numerous examples of successful rational design of water-soluble and even transmembrane peptides and proteins, in part due to a better understanding of different factors contributing to protein structure stability and development of better computational methods.
Overview and history
The goal in rational protein design is to predict amino acid sequences that will fold to a specific protein structure. Although the number of possible protein sequences is vast, growing exponentially with the size of the protein chain, only a subset of them will fold reliably and quickly to one native state. Protein design involves identifying novel sequences within this subset. The native state of a protein is the conformational free energy minimum for the chain. Thus, protein design is the search for sequences that have the chosen structure as a free energy minimum. In a sense, it is the reverse of protein structure prediction. In design, a tertiary structure is specified, and a sequence that will fold to it is identified. Hence, it is also termed inverse folding. Protein design is then an optimization problem: using some scoring criteria, an optimized sequence that will fold to the desired structure is chosen.
When the first proteins were rationally designed during the 1970s and 1980s, the sequence for these was optimized manually based on analyses of other known proteins, the sequence composition, amino acid charges, and the geometry of the desired structure. The first designed proteins are attributed to Bernd Gutte, who designed a reduced version of a known catalyst, bovine ribonuclease, and tertiary structures consisting of beta-sheets and alpha-helices, including a binder of DDT. Urry and colleagues later designed elastin-like fibrous peptides based on rules on sequence composition. Richardson and coworkers designed a 79-residue protein with no sequence homology to a known protein. In the 1990s, the advent of powerful computers, libraries of amino acid conformations, and force fields developed mainly for molecular dynamics simulations enabled the development of structure-based computational protein design tools. Following the development of these computational tools, great success has been achieved over the last 30 years in protein design. The first protein successfully designed completely de novo was done by Stephen Mayo and coworkers in 1997, and, shortly after, in 1999 Peter S. Kim and coworkers designed dimers, trimers, and tetramers of unnatural right-handed coiled coils. In 2003, David Baker's laboratory designed a full protein to a fold never seen before in nature. Later, in 2008, Baker's group computationally designed enzymes for two different reactions. In 2010, one of the most powerful broadly neutralizing antibodies was isolated from patient serum using a computationally designed protein probe. Due to these and other successes (e.g., see examples below), protein design has become one of the most important tools available for protein engineering. There is great hope that the design of new proteins, small and large, will have uses in biomedicine and bioengineering.
Underlying models of protein structure and function
Protein design programs use computer models of the molecular forces that drive proteins in in vivo environments. In order to make the problem tractable, these forces are simplified by protein design models. Although protein design programs vary greatly, they have to address four main modeling questions: What is the target structure of the design, what flexibility is allowed on the target structure, which sequences are included in the search, and which force field will be used to score sequences and structures.
Target structure
Protein function is heavily dependent on protein structure, and rational protein design uses this relationship to design function by designing proteins that have a target structure or fold. Thus, by definition, in rational protein design the target structure or ensemble of structures must be known beforehand. This contrasts with other forms of protein engineering, such as directed evolution, where a variety of methods are used to find proteins that achieve a specific function, and with protein structure prediction where the sequence is known, but the structure is unknown.
Most often, the target structure is based on a known structure of another protein. However, novel folds not seen in nature have been made increasingly possible. Peter S. Kim and coworkers designed trimers and tetramers of unnatural coiled coils, which had not been seen before in nature. The protein Top7, developed in David Baker's lab, was designed completely using protein design algorithms, to a completely novel fold. More recently, Baker and coworkers developed a series of principles to design ideal globular-protein structures based on protein folding funnels that bridge between secondary structure prediction and tertiary structures. These principles, which build on both protein structure prediction and protein design, were used to design five different novel protein topologies.
Sequence space
In rational protein design, proteins can be redesigned from the sequence and structure of a known protein, or completely from scratch in de novo protein design. In protein redesign, most of the residues in the sequence are maintained as their wild-type amino-acid while a few are allowed to mutate. In de novo design, the entire sequence is designed anew, based on no prior sequence.
Both de novo designs and protein redesigns can establish rules on the sequence space: the specific amino acids that are allowed at each mutable residue position. For example, the composition of the surface of the RSC3 probe to select HIV-broadly neutralizing antibodies was restricted based on evolutionary data and charge balancing. Many of the earliest attempts on protein design were heavily based on empiric rules on the sequence space. Moreover, the design of fibrous proteins usually follows strict rules on the sequence space. Collagen-based designed proteins, for example, are often composed of Gly-Pro-X repeating patterns. The advent of computational techniques allows designing proteins with no human intervention in sequence selection.
Structural flexibility
In protein design, the target structure (or structures) of the protein are known. However, a rational protein design approach must model some flexibility on the target structure in order to increase the number of sequences that can be designed for that structure and to minimize the chance of a sequence folding to a different structure. For example, in a protein redesign of one small amino acid (such as alanine) in the tightly packed core of a protein, very few mutants would be predicted by a rational design approach to fold to the target structure, if the surrounding side-chains are not allowed to be repacked.
Thus, an essential parameter of any design process is the amount of flexibility allowed for both the side-chains and the backbone. In the simplest models, the protein backbone is kept rigid while some of the protein side-chains are allowed to change conformations. However, side-chains can have many degrees of freedom in their bond lengths, bond angles, and χ dihedral angles. To simplify this space, protein design methods use rotamer libraries that assume ideal values for bond lengths and bond angles, while restricting χ dihedral angles to a few frequently observed low-energy conformations termed rotamers.
Rotamer libraries are derived from the statistical analysis of many protein structures. Backbone-independent rotamer libraries describe all rotamers. Backbone-dependent rotamer libraries, in contrast, describe the rotamers as how likely they are to appear depending on the protein backbone arrangement around the side chain. Most protein design programs use one conformation (e.g., the modal value for rotamer dihedrals in space) or several points in the region described by the rotamer; the OSPREY protein design program, in contrast, models the entire continuous region.
Although rational protein design must preserve the general backbone fold a protein, allowing some backbone flexibility can significantly increase the number of sequences that fold to the structure while maintaining the general fold of the protein. Backbone flexibility is especially important in protein redesign because sequence mutations often result in small changes to the backbone structure. Moreover, backbone flexibility can be essential for more advanced applications of protein design, such as binding prediction and enzyme design. Some models of protein design backbone flexibility include small and continuous global backbone movements, discrete backbone samples around the target fold, backrub motions, and protein loop flexibility.
Energy function
Rational protein design techniques must be able to discriminate sequences that will be stable under the target fold from those that would prefer other low-energy competing states. Thus, protein design requires accurate energy functions that can rank and score sequences by how well they fold to the target structure. At the same time, however, these energy functions must consider the computational challenges behind protein design. One of the most challenging requirements for successful design is an energy function that is both accurate and simple for computational calculations.
The most accurate energy functions are those based on quantum mechanical simulations. However, such simulations are too slow and typically impractical for protein design. Instead, many protein design algorithms use either physics-based energy functions adapted from molecular mechanics simulation programs, knowledge based energy-functions, or a hybrid mix of both. The trend has been toward using more physics-based potential energy functions.
Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations, and experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify physical energy function and make them pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding the pairwise energy between each atom pair, which makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise electrostatics coulombic term between non-bonded atoms.
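Schematically, such a pairwise term between rotamers $r_i$ and $r_j$ sums a Lennard-Jones and a Coulomb contribution over their atom pairs (a sketch; the exact functional forms and the parameters $\varepsilon$, $\sigma$, partial charges $q$, and dielectric treatment differ between force fields such as AMBER and CHARMM):

$$E(r_i, r_j) \;=\; \sum_{a \in r_i}\sum_{b \in r_j}\left[\, 4\varepsilon_{ab}\!\left( \left(\frac{\sigma_{ab}}{d_{ab}}\right)^{12} - \left(\frac{\sigma_{ab}}{d_{ab}}\right)^{6} \right) + \frac{q_a q_b}{4\pi\varepsilon_0\, \varepsilon_r\, d_{ab}} \,\right]$$

where $d_{ab}$ is the distance between atoms $a$ and $b$.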
Statistical potentials, in contrast to physics-based potentials, have the advantage of being fast to compute, of accounting implicitly of complex effects and being less sensitive to small changes in the protein structure. These energy functions are based on deriving energy values from frequency of appearance on a structural database.
Protein design, however, has requirements that can sometimes be limited in molecular mechanics force-fields. Molecular mechanics force-fields, which have been used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, but protein design searches through many conformations of many sequences. Thus, molecular mechanics force-fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical terms and physics-based terms. For example, the Rosetta energy function, one of the most-used energy functions, incorporates physics-based energy terms originating in the CHARMM energy function, and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories, and specifically tailored for every design.
Challenges for effective design energy functions
Water makes up most of the molecules surrounding proteins and is the main driver of protein structure. Thus, modeling the interaction between water and protein is vital in protein design. The number of water molecules that interact with a protein at any given time is huge and each one has a large number of degrees of freedom and interaction partners. Instead, protein design programs model most of such water molecules as a continuum, modeling both the hydrophobic effect and solvation polarization.
Individual water molecules can sometimes have a crucial structural role in the core of proteins, and in protein–protein or protein–ligand interactions. Failing to model such waters can result in mispredictions of the optimal sequence of a protein–protein interface. As an alternative, water molecules can be added to rotamers.
As an optimization problem
The goal of protein design is to find a protein sequence that will fold to a target structure. A protein design algorithm must, thus, search all the conformations of each sequence, with respect to the target fold, and rank sequences according to the lowest-energy conformation of each one, as determined by the protein design energy function. Thus, a typical input to the protein design algorithm is the target fold, the sequence space, the structural flexibility, and the energy function, while the output is one or more sequences that are predicted to fold stably to the target structure.
The number of candidate protein sequences, however, grows exponentially with the number of protein residues; for example, there are 20^100 protein sequences of length 100. Furthermore, even if amino acid side-chain conformations are limited to a few rotamers (see Structural flexibility), this results in an exponential number of conformations for each sequence. Thus, in our 100 residue protein, and assuming that each amino acid has exactly 10 rotamers, a search algorithm that searches this space will have to search over 200^100 protein conformations.
The most common energy functions can be decomposed into pairwise terms between rotamers and amino acid types, which casts the problem as a combinatorial one, and powerful optimization algorithms can be used to solve it. In those cases, the total energy of each conformation belonging to each sequence can be formulated as a sum of individual and pairwise terms between residue positions. If a designer is interested only in the best sequence, the protein design algorithm only requires the lowest-energy conformation of the lowest-energy sequence. In these cases, the amino acid identity of each rotamer can be ignored and all rotamers belonging to different amino acids can be treated the same. Let r_i be a rotamer at residue position i in the protein chain, and E(r_i) the potential energy between the internal atoms of the rotamer. Let E(r_i, r_j) be the potential energy between r_i and rotamer r_j at residue position j. Then, we define the optimization problem as one of finding the conformation of minimum energy, E_T.
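Written out in the standard pairwise form implied by these definitions, the objective is:

$$E_T \;=\; \sum_{i} E(r_i) \;+\; \sum_{i}\sum_{j>i} E(r_i, r_j)$$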
The problem of minimizing E_T is an NP-hard problem. Even though the class of problems is NP-hard, in practice many instances of protein design can be solved exactly or optimized satisfactorily through heuristic methods.
Algorithms
Several algorithms have been developed specifically for the protein design problem. These algorithms can be divided into two broad classes: exact algorithms, such as dead-end elimination, that lack runtime guarantees but guarantee the quality of the solution; and heuristic algorithms, such as Monte Carlo, that are faster than exact algorithms but have no guarantees on the optimality of the results. Exact algorithms guarantee that the optimization process produced the optimal according to the protein design model. Thus, if the predictions of exact algorithms fail when these are experimentally validated, then the source of error can be attributed to the energy function, the allowed flexibility, the sequence space or the target structure (e.g., if it cannot be designed for).
Some protein design algorithms are listed below. Although these algorithms address only the most basic formulation of the protein design problem (the minimization of E_T defined above), when the optimization goal changes because designers introduce improvements and extensions to the protein design model, such as improvements to the structural flexibility allowed (e.g., protein backbone flexibility) or the inclusion of sophisticated energy terms, many of the extensions on protein design that improve modeling are built atop these algorithms. For example, Rosetta Design incorporates sophisticated energy terms, and backbone flexibility using Monte Carlo as the underlying optimizing algorithm. OSPREY's algorithms build on the dead-end elimination algorithm and A* to incorporate continuous backbone and side-chain movements. Thus, these algorithms provide a good perspective on the different kinds of algorithms available for protein design.
In 2020 scientists reported the development of an AI-based process using genome databases for evolution-based designing of novel proteins. They used deep learning to identify design-rules. In 2022, a study reported deep learning software that can design proteins that contain prespecified functional sites.
With mathematical guarantees
Dead-end elimination
The dead-end elimination (DEE) algorithm reduces the search space of the problem iteratively by removing rotamers that can be provably shown not to be part of the global minimum energy conformation (GMEC). On each iteration, the dead-end elimination algorithm compares all possible pairs of rotamers at each residue position, and removes each rotamer r′_i that can be shown to always be of higher energy than another rotamer r_i and is thus not part of the GMEC.
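In its original single-rotamer form, the elimination criterion is that r′_i can be pruned whenever some competing rotamer r_i at the same position satisfies:

$$E(r'_i) + \sum_{j \neq i} \min_{r_j} E(r'_i, r_j) \;>\; E(r_i) + \sum_{j \neq i} \max_{r_j} E(r_i, r_j)$$

that is, the best case achievable by r′_i is still worse than the worst case for r_i.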
Other powerful extensions to the dead-end elimination algorithm include the pairs elimination criterion, and the generalized dead-end elimination criterion. This algorithm has also been extended to handle continuous rotamers with provable guarantees.
Although the Dead-end elimination algorithm runs in polynomial time on each iteration, it cannot guarantee convergence. If, after a certain number of iterations, the dead-end elimination algorithm does not prune any more rotamers, then either rotamers have to be merged or another search algorithm must be used to search the remaining search space. In such cases, the dead-end elimination acts as a pre-filtering algorithm to reduce the search space, while other algorithms, such as A*, Monte Carlo, Linear Programming, or FASTER are used to search the remaining search space.
Branch and bound
The protein design conformational space can be represented as a tree, where the protein residues are ordered in an arbitrary way, and the tree branches at each of the rotamers in a residue. Branch and bound algorithms use this representation to efficiently explore the conformation tree: At each branching, branch and bound algorithms bound the conformation space and explore only the promising branches.
A popular search algorithm for protein design is the A* search algorithm. A* computes a lower-bound score on each partial tree path that lower bounds (with guarantees) the energy of each of the expanded rotamers. Each partial conformation is added to a priority queue and at each iteration the partial path with the lowest lower bound is popped from the queue and expanded. The algorithm stops once a full conformation has been enumerated and guarantees that the conformation is the optimal.
The A* score f in protein design consists of two parts, f=g+h. g is the exact energy of the rotamers that have already been assigned in the partial conformation. h is a lower bound on the energy of the rotamers that have not yet been assigned. Each is designed as follows, where d is the index of the last assigned residue in the partial conformation.
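A common choice for these two terms, in the style of the Leach–Lemon bound widely used in protein design (a sketch; n is the number of designed residues and the notation follows the energy terms defined above):

$$g \;=\; \sum_{i=1}^{d}\left( E(r_i) + \sum_{j=1}^{i-1} E(r_i, r_j) \right)$$

$$h \;=\; \sum_{j=d+1}^{n}\; \min_{r_j}\left( E(r_j) + \sum_{i=1}^{d} E(r_i, r_j) + \sum_{k=j+1}^{n} \min_{r_k} E(r_j, r_k) \right)$$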
Integer linear programming
The problem of optimizing E_T (the minimization problem defined above) can be formulated as an integer linear program (ILP). One of the most powerful formulations uses binary variables to represent the presence of a rotamer and of edges (rotamer pairs) in the final solution, and constrains the solution to have exactly one rotamer for each residue and one pairwise interaction for each pair of residues.
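A common way to write this program (a sketch, using binary variables q_{i,r_i} for "rotamer r_i is chosen at residue i" and q_{i,r_i,j,r_j} for "the pair (r_i, r_j) is chosen"; notation follows the energy terms above):

$$\min \;\sum_{i}\sum_{r_i} E(r_i)\, q_{i,r_i} \;+\; \sum_{i}\sum_{j>i}\sum_{r_i}\sum_{r_j} E(r_i, r_j)\, q_{i,r_i,j,r_j}$$

subject to

$$\sum_{r_i} q_{i,r_i} = 1 \quad \forall\, i, \qquad \sum_{r_j} q_{i,r_i,j,r_j} = q_{i,r_i} \quad \forall\, i<j,\ \forall\, r_i, \qquad \sum_{r_i} q_{i,r_i,j,r_j} = q_{j,r_j} \quad \forall\, i<j,\ \forall\, r_j,$$

$$q_{i,r_i} \in \{0,1\}, \qquad q_{i,r_i,j,r_j} \in \{0,1\}.$$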
ILP solvers, such as CPLEX, can compute the exact optimal solution for large instances of protein design problems. These solvers use a linear programming relaxation of the problem, where qi and qij are allowed to take continuous values, in combination with a branch and cut algorithm to search only a small portion of the conformation space for the optimal solution. ILP solvers have been shown to solve many instances of the side-chain placement problem.
Message-passing based approximations to the linear programming dual
ILP solvers depend on linear programming (LP) algorithms, such as the Simplex or barrier-based methods to perform the LP relaxation at each branch. These LP algorithms were developed as general-purpose optimization methods and are not optimized for the protein design problem (Equation ()). In consequence, the LP relaxation becomes the bottleneck of ILP solvers when the problem size is large. Recently, several alternatives based on message-passing algorithms have been designed specifically for the optimization of the LP relaxation of the protein design problem. These algorithms can approximate both the dual or the primal instances of the integer programming, but in order to maintain guarantees on optimality, they are most useful when used to approximate the dual of the protein design problem, because approximating the dual guarantees that no solutions are missed. Message-passing based approximations include the tree reweighted max-product message passing algorithm, and the message passing linear programming algorithm.
Optimization algorithms without guarantees
Monte Carlo and simulated annealing
Monte Carlo is one of the most widely used algorithms for protein design. In its simplest form, a Monte Carlo algorithm selects a residue at random, and at that residue a randomly chosen rotamer (of any amino acid) is evaluated. The new energy of the protein, E_new, is compared against the old energy, E_old, and the new rotamer is accepted with a probability that depends on the energy difference.
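A standard Metropolis-type form of this acceptance rule, written with the symbols defined in the following sentence, is:

$$P_{\text{accept}} \;=\; \min\!\left(1,\; e^{\,(E_{\text{old}} - E_{\text{new}})/(\beta T)}\right)$$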
where β is the Boltzmann constant and the temperature T can be chosen such that in the initial rounds it is high and it is slowly annealed to overcome local minima.
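A minimal sketch of this procedure over a fixed rotamer library is shown below, assuming the one-body energies E1[i][r] and pairwise energies E2[(i, j)][r][s] have been precomputed; a real implementation would update the energy incrementally rather than recomputing it at every step.

```python
import math
import random

def anneal_rotamers(E1, E2, n_rot, n_steps=20000, T_start=10.0, T_end=0.1, seed=0):
    """Simulated-annealing search over rotamer assignments.

    E1[i][r]         : one-body energy of rotamer r at design position i
    E2[(i, j)][r][s] : pairwise energy of rotamer r at i with rotamer s at j (i < j)
    n_rot[i]         : number of rotamers available at position i
    """
    rng = random.Random(seed)
    n = len(n_rot)
    conf = [rng.randrange(n_rot[i]) for i in range(n)]   # random starting assignment

    def total_energy(c):
        e = sum(E1[i][c[i]] for i in range(n))
        e += sum(E2[(i, j)][c[i]][c[j]] for i in range(n) for j in range(i + 1, n))
        return e

    e_cur = total_energy(conf)
    best, e_best = list(conf), e_cur
    for step in range(n_steps):
        # geometric cooling schedule from T_start down to T_end
        T = T_start * (T_end / T_start) ** (step / max(1, n_steps - 1))
        i = rng.randrange(n)
        r_old, r_new = conf[i], rng.randrange(n_rot[i])
        if r_new == r_old:
            continue
        conf[i] = r_new
        e_new = total_energy(conf)                       # naive full recomputation
        if e_new <= e_cur or rng.random() < math.exp(-(e_new - e_cur) / T):
            e_cur = e_new                                # accept the move
            if e_new < e_best:
                best, e_best = list(conf), e_new
        else:
            conf[i] = r_old                              # reject and revert
    return best, e_best
```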
FASTER
The FASTER algorithm uses a combination of deterministic and stochastic criteria to optimize amino acid sequences. FASTER first uses DEE to eliminate rotamers that are not part of the optimal solution. Then, a series of iterative steps optimize the rotamer assignment.
Belief propagation
In belief propagation for protein design, the algorithm exchanges messages that describe the belief that each residue has about the probability of each rotamer at neighboring residues. The algorithm updates messages on every iteration and iterates until convergence or until a fixed number of iterations. Convergence is not guaranteed in protein design. The message m_{i→j}(r_j) that a residue i sends to every rotamer r_j at a neighboring residue j is defined below.
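One standard form of this update (the min-sum version, equivalent to the max-product rule in the log/energy domain; N(i) denotes the residues interacting with i) is:

$$m_{i \to j}(r_j) \;=\; \min_{r_i}\left( E(r_i) + E(r_i, r_j) + \sum_{k \in N(i)\setminus\{j\}} m_{k \to i}(r_i) \right)$$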
Both max-product and sum-product belief propagation have been used to optimize protein design.
Applications and examples of designed proteins
Enzyme design
The design of new enzymes is a use of protein design with huge bioengineering and biomedical applications. In general, designing a protein structure can be different from designing an enzyme, because the design of enzymes must consider many states involved in the catalytic mechanism. However protein design is a prerequisite of de novo enzyme design because, at the very least, the design of catalysts requires a scaffold in which the catalytic mechanism can be inserted.
Great progress in de novo enzyme design, and redesign, was made in the first decade of the 21st century. In three major studies, David Baker and coworkers de novo designed enzymes for the retro-aldol reaction, a Kemp-elimination reaction, and for the Diels-Alder reaction. Furthermore, Stephen Mayo and coworkers developed an iterative method to design the most efficient known enzyme for the Kemp-elimination reaction. Also, in the laboratory of Bruce Donald, computational protein design was used to switch the specificity of one of the protein domains of the nonribosomal peptide synthetase that produces Gramicidin S, from its natural substrate phenylalanine to other noncognate substrates including charged amino acids; the redesigned enzymes had activities close to those of the wild-type.
Semi-rational design
Semi-rational design is a purposeful modification method based on a certain understanding of the sequence, structure, and catalytic mechanism of enzymes. This method lies between irrational design and rational design. It uses known information and means to perform evolutionary modification of the specific functions of the target enzyme. The characteristic of semi-rational design is that it does not rely solely on random mutation and screening, but combines the concept of directed evolution. It creates a library of random mutants with diverse sequences through mutagenesis, error-prone PCR, DNA recombination, and site-saturation mutagenesis. At the same time, it uses the understanding of enzymes and design principles to purposefully screen out mutants with desired characteristics.
The methodology of semi-rational design emphasizes the in-depth understanding of enzymes and the control of the evolutionary process. It allows researchers to use known information to guide the evolutionary process, thereby improving efficiency and success rate. This method plays an important role in protein function modification because it can combine the advantages of irrational design and rational design, and can explore unknown space and use known knowledge for targeted modification.
Semi-rational design has a wide range of applications, including but not limited to enzyme optimization, modification of drug targets, evolution of biocatalysts, etc. Through this method, researchers can more effectively improve the functional properties of proteins to meet specific biotechnology or medical needs. Although this method has high requirements for information and technology and is relatively difficult to implement, with the development of computing technology and bioinformatics, the application prospects of semi-rational design in protein engineering are becoming more and more broad.
Design for affinity
Protein–protein interactions are involved in most biotic processes. Many of the hardest-to-treat diseases, such as Alzheimer's, many forms of cancer (e.g., TP53), and human immunodeficiency virus (HIV) infection involve protein–protein interactions. Thus, to treat such diseases, it is desirable to design protein or protein-like therapeutics that bind one of the partners of the interaction and, thus, disrupt the disease-causing interaction. This requires designing protein-therapeutics for affinity toward its partner.
Protein–protein interactions can be designed using protein design algorithms because the principles that rule protein stability also rule protein–protein binding. Protein–protein interaction design, however, presents challenges not commonly present in protein design. One of the most important challenges is that, in general, the interfaces between proteins are more polar than protein cores, and binding involves a tradeoff between desolvation and hydrogen bond formation. To overcome this challenge, Bruce Tidor and coworkers developed a method to improve the affinity of antibodies by focusing on electrostatic contributions. They found that, for the antibodies designed in the study, reducing the desolvation costs of the residues in the interface increased the affinity of the binding pair.
Scoring binding predictions
Protein design energy functions must be adapted to score binding predictions because binding involves a trade-off between the lowest-energy conformations of the free proteins (E_P and E_L) and the lowest-energy conformation of the bound complex (E_PL).
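In the simplest form, the quantity being scored is the energy difference between the bound and free states (a sketch; with this sign convention, more negative values indicate tighter binding):

$$\Delta E_{\text{bind}} \;=\; E_{PL} - E_{P} - E_{L}$$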
The K* algorithm approximates the binding constant by including conformational entropy in the free energy calculation. It considers only the lowest-energy conformations of the free and bound species (denoted by the sets P, L, and PL) to approximate the partition function of each one.
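A sketch of the K* approximation, with q_P, q_L, and q_PL denoting the partition functions computed over the conformation sets P, L, and PL, R the gas constant, and T the temperature:

$$K^{*} \;=\; \frac{q_{PL}}{q_{P}\, q_{L}}, \qquad q_X \;=\; \sum_{c \in X} e^{-E_c / RT}$$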
Design for specificity
The design of protein–protein interactions must be highly specific because proteins can interact with a large number of proteins; successful design requires selective binders. Thus, protein design algorithms must be able to distinguish between on-target (or positive design) and off-target binding (or negative design). One of the most prominent examples of design for specificity is the design of specific bZIP-binding peptides by Amy Keating and coworkers for 19 out of the 20 bZIP families; 8 of these peptides were specific for their intended partner over competing peptides. Further, positive and negative design was also used by Anderson and coworkers to predict mutations in the active site of a drug target that conferred resistance to a new drug; positive design was used to maintain wild-type activity, while negative design was used to disrupt binding of the drug. Recent computational redesign by Costas Maranas and coworkers was also capable of experimentally switching the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.
Protein resurfacing
Protein resurfacing consists of designing a protein's surface while preserving the overall fold, core, and boundary regions of the protein intact. Protein resurfacing is especially useful to alter the binding of a protein to other proteins. One of the most important applications of protein resurfacing was the design of the RSC3 probe to select broadly neutralizing HIV antibodies at the NIH Vaccine Research Center. First, residues outside of the binding interface between the gp120 HIV envelope protein and the formerly discovered b12 antibody were selected to be designed. Then, the sequence space was selected based on evolutionary information, solubility, similarity with the wild-type, and other considerations. Then the RosettaDesign software was used to find optimal sequences in the selected sequence space. RSC3 was later used to discover the broadly neutralizing antibody VRC01 in the serum of a long-term HIV-infected non-progressor individual.
Design of globular proteins
Globular proteins are proteins that contain a hydrophobic core and a hydrophilic surface. Globular proteins often assume a stable structure, unlike fibrous proteins, which have multiple conformations. The three-dimensional structure of globular proteins is typically easier to determine through X-ray crystallography and nuclear magnetic resonance than both fibrous proteins and membrane proteins, which makes globular proteins more attractive for protein design than the other types of proteins. Most successful protein designs have involved globular proteins. Both RSD-1, and Top7 were de novo designs of globular proteins. Five more protein structures were designed, synthesized, and verified in 2012 by the Baker group. These new proteins serve no biotic function, but the structures are intended to act as building-blocks that can be expanded to incorporate functional active sites. The structures were found computationally by using new heuristics based on analyzing the connecting loops between parts of the sequence that specify secondary structures.
Design of membrane proteins
Several transmembrane proteins have been successfully designed, along with many other membrane-associated peptides and proteins. Recently, Costas Maranas and his coworkers developed an automated tool to redesign the pore size of Outer Membrane Porin Type-F (OmpF) from E. coli to any desired sub-nm size and assembled the redesigned porins in membranes to perform precise angstrom-scale separation.
Other applications
One of the most desirable uses for protein design is for biosensors, proteins that will sense the presence of specific compounds. Some attempts in the design of biosensors include sensors for unnatural molecules including TNT. More recently, Kuhlman and coworkers designed a biosensor of the PAK1.
In a sense, protein design is a subset of protein engineering.
See also
References
Further reading
Protein engineering
Protein structure | Protein design | [
"Chemistry"
] | 6,287 | [
"Protein structure",
"Structural biology"
] |
1,581,831 | https://en.wikipedia.org/wiki/Vark | Vark (also varak Waraq or warq) is a fine filigree foil sheet of pure metal, typically silver but sometimes gold, used to decorate Indian sweets and food. The silver and gold are edible, though flavorless. Vark is made by pounding silver into sheets less than one micrometre (μm) thick, typically 0.2–0.8 μm. The silver sheets are typically packed between layers of paper for support; this paper is peeled away before use. It is fragile and breaks into smaller pieces if handled with direct skin contact. Leaf that is 0.2 μm thick tends to stick to skin if handled directly.
Vark sheets are laid or rolled over some Indian sweets, confectionery, dry fruits and spices. It is also placed onto mounds of saffron rice on platters.
For safety and ethical reasons, the Government of India has issued food safety and product standards guidelines for manufacturers of silver foil.
History
Etymology
Varaka means cloth, cloak or a thing that covers something else. Vark is sometimes spelled Varaq, varq, vark, varkh, varakh, varkha, or waraq. In Persian, varaqa or barga means a sheet, leaf or foil.
Product
Manufacturing
Vark is made by placing the pure metal dust between parchment sheets, then pounding the sheets until the metal dust molds into a foil, usually less than one micrometre (μm) thick, typically 0.2–0.8 μm. The sheets are typically packed with paper for support; this paper is peeled away before use. It generally takes about two hours to pound the silver particles into foils.
Particles were traditionally manually pounded between the layers of ox gut or cow hide. It is easier to separate the silver leaf from the animal tissue than to separate it from the paper. Due to the concerns of the vegetarian population of India, manufacturers have switched to the modern technologies that have evolved for the production of silver leaves in India, Germany, Russia and China. Modern technologies include beating over sheets of black special treated paper or polyester sheets coated with food grade calcium powder (nicknamed "German plastic") are used instead of ox-guts or cow hide. Old City in Hyderabad used to be the hub of traditional manual manufacturing, where it is a dying trade. Delhi is a new hub of vark manufacturing in India.
Usage as food
The silver is edible, though flavourless. It is also commonly used in India, Pakistan, and Bangladesh as coating on sweets, dry fruits, and in sugar balls, betel nuts, cardamom, and other spices. Estimated consumption of vark is 275 tons annually.
Using edible silver and gold foils on sweets, confectionery and desserts is not unique to the Indian subcontinent; other regions such as Japan and Europe have also been using precious metal foils as food cover and decoration, including specialty drinks such as Goldwasser and Goldschläger.
Vegetarian ethical issues
Concerns have been raised about the ethical acceptability and food safety of vark, as not all of it is pure silver, nor hygienically prepared, and the foil was until fairly recently beaten between layers of ox-gut because it is easier to separate the silver leaf from animal tissue than to separate it from paper. Due to the grinding effect of the hammering, some of the animal intestine becomes part of the silver foil, which is sold in bulk. Since Jains and a considerable percentage of Hindus are vegetarian, this led to the decline in the usage of vark in sweets or suparis. Indian Airlines asked its caterers to not apply vark to the food supplied to ensure no animal intestine is present. In 2016, Government of India banned the usage of animal guts or skins in the making of vark. Consequently, the Indian market for vark has mostly converted to using the machine-based vegetarian process in the making of the silver foils. Food Safety and Standards Authority of India has issued guidelines for the silver leaf manufactures to adhere to regarding thickness, weight, purity, labeling and hygiene of the silver leaf.
Safety
Gold and silver are approved food foils in the European Union, as E175 and E174 additives respectively. The independent European food-safety certification agency, TÜV Rheinland, has deemed gold leaf safe for consumption. Gold and silver leaf are also certified as kosher. These inert precious metal foils are not considered toxic to human beings nor to broader ecosystems. Large quantities of ingested bioactive silver can cause argyria, but the use of edible silver or gold as vark is not considered harmful to the body, since the metal is in an inert form (not ionic bioactive form), and the quantities involved in normal use are minuscule.
One study has found that about 10% of 178 foils studied from the Lucknow (India) market were made of aluminium. Of the tested foils, 46% of the samples were found to have the desired purity requirement of 99.9% silver, whereas the rest had less than 99.9% silver. All the tested Indian foils contained on average trace levels of nickel (487 ppm), lead (301 ppm), copper (324 ppm), chromium (83 ppm), cadmium (97 ppm) and manganese (43 ppm). All of these are lower than natural anthropogenic exposures of these metals; the authors suggest there is a need to address a lack of purity standards in European Union and Indian food additive grade silver. The total silver metal intake per kilogram of sweets eaten, from vark, is less than one milligram.
See also
Metallic dragée
Gold leaf
Gilding
Metal leaf
Rolling paper
References
Indian desserts
Pakistani desserts
Nepalese desserts
Food ingredients
Gold
Silver
Food and drink decorations | Vark | [
"Technology"
] | 1,204 | [
"Food ingredients",
"Components"
] |
1,581,861 | https://en.wikipedia.org/wiki/Ideographic%20Research%20Group | The Ideographic Research Group (IRG), formerly called the Ideographic Rapporteur Group, is a subgroup of Working Group 2 (WG2) of ISO/IEC JTC1 Subcommittee 2 (SC2), which is the committee responsible for developing the Universal Coded Character Set (ISO/IEC 10646). IRG is tasked with preparing and reviewing sets of CJK unified ideographs for eventual inclusion in both ISO/IEC 10646 and The Unicode Standard. The IRG is composed of representatives from national standards bodies from China, Japan, South Korea, Vietnam, and other regions that have historically used Chinese characters, as well as experts from liaison organizations such as the SAT Daizōkyō Text Database Committee (SAT), Taipei Computer Association (TCA), and the Unicode Technical Committee (UTC). The group holds two meetings every year lasting 4-5 days each, subsequently reporting its activities to its parent ISO/IEC JTC 1/SC 2 (SC2/WG2) committee.
History
The precursor to the IRG was the CJK Joint Research Group (CJK-JRG), established in 1990. In May 1993, this group was re-established as the Ideographic Rapporteur Group (IRG) as a subgroup of WG2. In June 2019, the subgroup acquired its current name.
The first IRG rapporteur was Kato Shigenobu (), from 1993 to 1994, followed by Kido Akio () from 1994 to 1995. From 1995 to 2004, the IRG rapporteur was Zhang Zhoucai (), who had been convenor and chief editor of CJK-JRG from 1990 to 1993. From 2004 to 2018 the IRG rapporteur was Hong Kong Polytechnic University professor Lu Qin (), but in June 2018 the title of "rapporteur" was changed to "convenor", and Lu Qin continued as IRG convenor for another six years. Since June 2024, the IRG convenor has been Ken Lunde.
Overview
IRG is responsible for reviewing proposals to add new CJK unified ideographs to the Universal Multiple-Octet Coded Character Set (ISO/IEC 10646), and equivalently the Unicode Standard, and submitting consolidated proposals for sets of unified ideographs to WG2, which are then processed for encoding in the respective standards by SC2 and the Unicode Technical Committee. National and liaison bodies that have been represented in IRG include China, Hong Kong, Macau, Japan (no longer active), North Korea (no longer active), South Korea, Singapore (no longer active), the Taipei Computer Association (TCA), the United Kingdom, Vietnam, and the Unicode Technical Committee (UTC).
As of Unicode version 16.0, the IRG has been responsible for submitting the following blocks of CJK unified and compatibility ideographs for encoding:
CJK Unified Ideographs and CJK Compatibility Ideographs (version 1.0)
CJK Unified Ideographs Extension A (version 3.0)
CJK Unified Ideographs Extension B and CJK Compatibility Ideographs Supplement (version 3.1)
CJK Unified Ideographs Extension C (version 5.2)
CJK Unified Ideographs Extension D (version 6.0)
CJK Unified Ideographs Extension E (version 8.0)
CJK Unified Ideographs Extension F (version 10.0)
CJK Unified Ideographs Extension G (version 13.0)
CJK Unified Ideographs Extension H (version 15.0)
Since 2015, proposed characters submitted by IRG member bodies have been processed in batches called "IRG Working Sets". Each working set undergoes several years of review by IRG experts before official submission of the working set to WG2 as a new block. Once accepted by WG2, the proposed block is processed according to the individual procedures followed by ISO/IEC JTC1 SC2 and the Unicode Technical Committee (UTC). In the case of SC2, this involves balloting of ISO member bodies. The following working sets have been processed by IRG:
WS2015. 5,547 submitted characters which resulted in 4,939 characters encoded in CJK Unified Ideographs Extension G (Unicode version 13.0, March 2020):
China: 2,277 submitted characters (1,268 Zhuang characters, 1,009 characters from the Hanyu Da Zidian (汉语大字典) dictionary)
Republic of Korea: 469 submitted characters
SAT: 350 submitted characters
TCA: 500 submitted characters
United Kingdom: 1,640 submitted characters
UTC: 311 submitted characters
WS2017. 5,027 submitted characters which resulted in 4,192 characters encoded in CJK Unified Ideographs Extension H (Unicode version 15.0, September 2022):
China: 963 submitted characters (143 person name characters, 354 place name characters, 29 characters from the Hanyu Da Cidian (汉语大词典) dictionary, 33 characters from the Dictionary of Chinese Medicine (中医字典), and 404 Zhuang characters)
Republic of Korea: 686 submitted characters
SAT: 305 submitted characters
TCA: 895 submitted characters
United Kingdom: 1,001 submitted characters
UTC: 193 submitted characters
Vietnam: 984 submitted characters
WS2021. 4,951 submitted characters which may result in up to 4,302 characters to be encoded in CJK Unified Ideographs Extension J in a future version of Unicode:
China: 1,223 submitted characters (151 place name characters, 768 science and technology characters, 4 person name characters, and 300 Zhuang characters)
Republic of Korea: 191 submitted characters
SAT: 383 submitted characters
TCA: 1,000 submitted characters
United Kingdom: 1,000 submitted characters
UTC: 153 submitted characters
Vietnam: 1,001 submitted characters
WS2024. A total of 4,674 characters were submitted for Working Set 2024 in July 2024 by China, Republic of Korea, SAT, TCA, United Kingdom, UTC, and Vietnam:
China: 1,000 submitted characters, of which 700 are Chinese characters, and 300 are Zhuang characters
Republic of Korea: 178 submitted characters
SAT: 252 submitted characters
TCA: 1,000 submitted characters
United Kingdom: 1,000 submitted characters
UTC: 244 submitted characters
Vietnam: 1,000 submitted characters
References
External links
Homepage
Old homepage (until July 2024)
IRG Working Sets
Working Set 2015
Working Set 2017
Working Set 2021
Working Set 2024
IRG Working Document Series (IWDS)
Unicode
International Organization for Standardization
Internationalization and localization
Research groups
Chinese character encodings | Ideographic Research Group | [
"Technology"
] | 1,373 | [
"Natural language and computing",
"Internationalization and localization"
] |
1,581,920 | https://en.wikipedia.org/wiki/Kolokol-1 | Kolokol-1 ( for "bell"; ) is a synthetic opioid developed for use as an aerosolizable incapacitating agent. The exact chemical structure has not yet been revealed by the Russian government. It was originally thought by some sources to be a derivative of the potent opioid fentanyl, most probably 3-methylfentanyl dissolved in an inhalational anaesthetic as an organic solvent. However, independent analysis of residues on the Moscow theater hostage crisis hostages' clothing or in one hostage's urine found no fentanyl or 3-methylfentanyl. Two much more potent and shorter-acting agents, carfentanil (a large animal tranquilizer) and remifentanil (a surgical painkiller), were found in the samples. They concluded that the agent used in the Moscow theater hostage crisis contained two fentanyl derivatives much stronger than fentanyl itself, sprayed in an aerosol mist.
Development and early use
According to Lev Fyodorov, a former Soviet chemical weapons scientist who now heads the independent Council for Chemical Security in Moscow, the agent was originally developed at a secret military research facility in Leningrad (now restored to its historic name of Saint Petersburg), during the 1970s. Methods of dispersing the compound were reportedly developed and tested by releasing harmless bacteria through subway system ventilation shafts, first in Moscow and then in Novosibirsk. Fyodorov also claimed that leaders of the failed August 20, 1991, Communist coup considered using the agent in the Russian parliament building.
Use during Moscow theater hostage crisis
Kolokol-1 is thought to be the chemical agent employed by a Russian Spetsnaz team during the Moscow theater hostage crisis in October 2002. At least 129 hostages died during the ensuing raid; nearly all of these fatalities were attributed to the effects of the aerosolised incapacitating agent that was pumped into the theatre to subdue the militants. The gas was later stated by Russian Health Minister Yuri Shevchenko to be based on fentanyl. Minister Shevchenko's statement followed speculation that the gas employed at the theater violated international prohibitions on the manufacture and use of lethal chemical weapons, and came after a request for clarification about the gas from Rogelio Pfirter, director-general of the Organisation for the Prohibition of Chemical Weapons. The minister stressed that the drug fentanyl used in the gas, which is widely used as a pain medication, "cannot in itself be called lethal".
Shevchenko attributed the hostage deaths to the effect of the chemical compound on victims already in poor physical condition after three days of captivity - dehydrated, hungry, lacking oxygen and suffering acute stress - saying "I officially declare that chemical substances of the kind banned under international conventions on chemical weapons were not used," according to the Interfax news agency.
This comment is disputed on two grounds. First, the United States Ambassador to Russia at the time complained that delays on the part of the Russian government in identifying the exact nature of the active agent in the gas led to many hostage deaths which might otherwise have been avoided. Second, a team of researchers based at the United Kingdom's chemical and biological defense laboratories at Porton Down, Wiltshire, England, subjected residues of the gas from clothing worn by two British survivors, as well as urine from a third survivor who recovered after hospital treatment, to liquid chromatography–tandem mass spectrometry analysis. They found no fentanyl, but did find two other, much more potent and potentially toxic drugs, carfentanil and remifentanil.
The specific antidote for carfentanil is naloxone. This report goes on to state
carfentanil is only approved as a veterinary drug for use in sedating large animals such as elephants, not for use in humans because the effective dose is unacceptably close to the dose which can cause illness or death;
that the deaths among hostages after the Moscow theater siege can be explained by the use of carfentanil and remifentanil, two strong drugs for which there is little margin of safety between sedative and lethal doses in humans. Many deaths could have been expected unless the people exposed got quick treatment with the drugs' antidotes.
that it is highly unlikely a chemical agent can be used in a tactical environment to disable opponents reliably without many deaths.
An article in the Annals of Emergency Medicine compared the sedating dose and the toxic or lethal dose of fentanyl and those of its derivatives and found that while carfentanil and remifentanil have dramatically shorter biological half-lives and are more potent than fentanyl, the fentanyl derivatives are lipophilic (readily taken up into body fat) and can re-enter the circulation after an overdose is first treated, causing severe delayed effects and even death if the correct antidote is not administered when the drugs act again. This might account for the large number of deaths following use of large amounts of Kolokol-1 in a closed space like the Dubrovka Theatre, where the gas might have been unexpectedly concentrated in areas of the theater.
Under the heading "Lessons Learned," the authors state "It seems likely that the 800 hostages were about to be killed by Chechen rebels. To rescue them, the Russian military used a calmative agent in an attempt to subdue the rebels. The intent was likely to win control of the theater with as little loss of life as possible. Given the large number of explosives in the hands of the hostage takers, a conventional assault or the use of more toxic chemical agents might have significantly increased the number of casualties. Although it may seem excessive that 16% of the 800 hostages may have died from the gas exposure, 84% survived. We do not know that a different tactic would have provided a better outcome."
The authors cited the high therapeutic index of one of the fentanyl derivatives used, which may have inappropriately reduced the Russian government's concern about the agents' potential lethality, along with the drugs' lipophilicity and the ease with which the hostages could have been overdosed in the enclosed space of the theater, as factors that should have been considered more thoroughly. They concluded by saying that poisoning by opioid agonist drugs such as Kolokol-1 is relatively simple to treat, and that many of the deaths after the Moscow theater hostage crisis could have been avoided if trained rescuers and medical teams with the proper antidotes were made ready in advance. They stated that naloxone, long a critical antidote to treat heroin overdose and unintentional poisoning with opioids during medical treatment, "has now become a crucial chemical warfare antidote."
Carfentanil
Carfentanil, one of the two fentanyl derivatives used in the Moscow theater hostage crisis, was actively marketed by several Chinese chemical companies at the time. Carfentanil was not a controlled substance in China, where it was manufactured legally and sold openly over the Internet up until May 1, 2017, when a ban on fentanyl and all fentanyl analogues went into effect. There has been controversy between the US and China over whether the Chinese ban on sales of fentanyl derivatives to the US has been effective. Fentanyl led to more than 37,000 overdose deaths in the US in 2017.
The toxicity of carfentanil has been compared with nerve gas, according to an Associated Press article. The article quoted Andrew C. Weber, Assistant US Secretary of Defense for Nuclear, Chemical and Biological Defense Programs from 2009 to 2014 as saying "It's a weapon. Companies shouldn't be just sending it to anybody." Weber added, "Countries that we are concerned about were interested in using it for offensive purposes ... We are also concerned that groups like ISIS could order it commercially." Weber described various ways carfentanil could be used as a weapon, such as knocking troops out and taking them hostage, or killing civilians in closed spaces like train stations.
References
Incapacitating agents
Cold War weapons of the Soviet Union
Science and technology in the Soviet Union
Soviet inventions
Opioids
Drugs with undisclosed chemical structures
Drugs in the Soviet Union | Kolokol-1 | [
"Chemistry"
] | 1,684 | [
"Incapacitating agents",
"Chemical weapons"
] |
1,582,090 | https://en.wikipedia.org/wiki/Reticulocytosis | Reticulocytosis is a laboratory finding in which the number of reticulocytes (immature red blood cells) in the bloodstream is elevated. Reticulocytes account for approximately 0.5% to 2.5% of the total red blood cells in healthy adults and 2% to 6% in infants, but in reticulocytosis, this percentage rises. Reticulocytes are produced in the bone marrow and then released into the bloodstream, where they mature into fully developed red blood cells between 1-2 days. Reticulocytosis often reflects the body’s response to conditions rather than an independent disease process and can arise from a variety of causes such as blood loss or anemia.
Mechanism
Reticulocytosis results from the body’s physiological response to an increased need for red blood cells. When red blood cells are destroyed or lost, tissues experience low oxygen levels causing the kidneys to release the hormone erythropoietin. Erythropoietin signals the bone marrow to accelerate the production of red blood cells through a process called erythropoiesis. As a result, more reticulocytes are released into the bloodstream. These immature cells continue to mature into fully developed red blood cells in circulation, restoring the red cell count and supporting oxygen delivery to tissues.
Causes
Hemolytic Anemia
A broad category of anemias in which red blood cells are destroyed faster than they can be replaced, prompting the bone marrow to increase red blood cell production and release immature red blood cells into the bloodstream. Reticulocytosis raises strong suspicion of hemolysis when it is present alongside other markers such as elevated lactate dehydrogenase, elevated unconjugated bilirubin, or decreased haptoglobin.
Sickle cell anemia: a genetic disorder where abnormal hemoglobin (HbS) causes red blood cells to become rigid and sickle-shaped, leading to intermittent blood vessel blockages, hemolysis, and tissue ischemia. Destruction of these defective red blood cells results in anemia, which stimulates the bone marrow to increase red blood cell production. Because of this, reticulocytosis is a possible lab finding in sickle cell disease.
Hereditary spherocytosis: a genetic disorder where defects in red blood cell membrane proteins cause them to lose their normal shape, becoming spherical (spherocytes) which are prone to getting stuck and rupturing in the spleen. This hemolysis creates a chronic shortage of red blood cells, stimulating the bone marrow to increase production and release reticulocytes into circulation.
Glucose-6-phosphate dehydrogenase (G6PD) deficiency: a genetic disorder that makes red blood cells vulnerable to oxidative stress. When individuals with this deficiency consume fava beans, experience stress or are exposed to certain medications, oxidative damage leads to red blood cell destruction (hemolysis). In response to this rapid hemolysis, the bone marrow increases RBC production, resulting in reticulocytosis as it attempts to replace the destroyed cells.
Autoimmune hemolytic anemia: caused by the host immune system attacking and destroying its own red blood cells. In response to this, the bone marrow will begin to produce more red blood cells to compensate for this destruction.
Blood Loss
In response to significant blood loss, either acute (e.g., trauma or surgery) or chronic (e.g., gastrointestinal bleeding), the bone marrow increases production to replace lost red blood cells. This results in an increased reticulocyte count, as new immature cells are released and make up a larger proportion of the blood volume.
Pregnancy
During pregnancy, folate deficiency can cause megaloblastic anemia in the mother and poses a significant neurological developmental risk for the fetus. Upon initiation of maternal folate supplementation to prevent fetal abnormalities, reticulocytosis is expected after 3–4 days of treatment.
Diagnosis
Reticulocytosis is typically diagnosed through a reticulocyte count, which measures the percentage or absolute number of reticulocytes in the blood. Common diagnostic tools for hematological disorders that may cause reticulocytosis include:
Reticulocyte Production Index (RPI): A calculation that corrects reticulocyte counts that may be misleadingly elevated because of the decrease in total red blood cells seen in anemia. It is calculated as (% reticulocyte count × patient Hct) / 45 (a normal Hct). This adjustment provides insight into whether reticulocyte production is adequate for the level of anemia; a worked example is sketched after this list.
Complete Blood Count (CBC): Provides a value for a variety of blood components, including red blood cells, hemoglobin, and hematocrit levels.
Peripheral Blood Smear: Common lab test in the work up of blood disorders that evaluates the size, shape, and maturity of red blood cells and reticulocytes by observing them under a microscope. This can help narrow down the etiology of the reticulocytosis.
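As a worked illustration of the corrected count described above, the following Python sketch applies that formula to a hypothetical patient; the function name and the example values are assumptions for illustration only, and the figure of 45 is simply the "normal Hct" reference used in the formula above.

```python
def corrected_reticulocyte_count(reticulocyte_pct: float, patient_hct: float,
                                 normal_hct: float = 45.0) -> float:
    """Correct the reticulocyte percentage for the degree of anemia.

    Implements the calculation given above:
    (% reticulocyte count x patient Hct) / 45 (normal Hct).
    """
    return reticulocyte_pct * patient_hct / normal_hct


# Hypothetical example: 6% reticulocytes with a hematocrit of 22.5
# gives a corrected count of 3.0%, i.e. production is raised but less
# impressive than the uncorrected 6% figure suggests.
print(round(corrected_reticulocyte_count(6.0, 22.5), 1))  # 3.0
```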
Treatment
The management of reticulocytosis involves treating the underlying cause rather than attempting to treat the high reticulocyte count itself.
References
External links
Histology
Abnormal clinical and laboratory findings for RBCs | Reticulocytosis | [
"Chemistry"
] | 1,093 | [
"Histology",
"Microscopy"
] |
1,582,413 | https://en.wikipedia.org/wiki/Cystovirus | Cystovirus is a genus of double-stranded RNA viruses which infects bacteria. It is the only genus in the family Cystoviridae. The name of the group cysto derives from Greek kystis which means bladder or sack. There are seven species in this genus.
Discovery
Pseudomonas virus phi6 was the first virus in this family to be discovered and was initially characterized in 1973 by Anne Vidaver at the University of Nebraska. She found that when she cultured the bacterial strain Pseudomonas phaseolicola HB1OY with halo blight infected bean straw, cytopathic effects were detected in cultured lawns, indicating that there was a lytic microbe or bacteriophage present.
In 1999, phi7–14 were identified by the laboratory of Leonard Mindich at the Public Health Research Institute associated with New York University. They did this by culturing various leaves in Lysogeny Broth and then plating the broth on lawns of Pseudomonas syringae pv phaseolicola. They were able to identify viral plaques from this and then subsequently sequence their genomes.
Microbiology
Structure
Cystovirus particles are enveloped, with icosahedral and spherical geometries, and T=13, T=2 symmetry. The virion diameter is around 85 nm. Cystoviruses are distinguished by their outer layer protein and lipid envelope. No other bacteriophage has any lipid in its outer coat, though the Tectiviridae and the Corticoviridae have lipids within their capsids.
Genome
Cystoviruses have a tripartite double-stranded RNA genome which is approximately 14 kbp in total length. The genome is linear and segmented, and labeled as large (L) 6.4 kbp, medium (M) 4 kbp, and small (S) 2.9 kbp in length. The genome codes for twelve proteins.
Life cycle
Cystoviruses enter bacteria by adsorption to the pilus, followed by membrane fusion. Viral replication is cytoplasmic. Replication follows the double-stranded RNA virus replication model. Double-stranded RNA virus transcription is the method of transcription. The progeny viruses are released from the cell by lysis.
Most identified cystoviruses infect Pseudomonas species, but this is likely biased due to the method of screening and enrichment. There are many proposed members of this family. Pseudomonas viruses φ7, φ8, φ9, φ10, φ11, φ12, and φ13 have been identified and named, but other cystovirus-like viruses have also been isolated. These seven putative relatives are classified as either close (φ7, φ9, φ10, φ11) or distant (φ8, φ12, φ13) relatives to φ6, with the distant relatives thought to infect via the LPS rather than the pili.
However, cystoviruses do not only infect Pseudomonas; they also infect bacteria of the genera Streptomyces, Microvirgula, Acinetobacter, Lactococcus, Pectobacterium, and possibly other bacterial genera.
Taxonomy
Members of the Cystoviridae appear to be most closely related to the Reoviridae, but also share homology with the Totiviridae. In particular, the structural genes of cystoviruses are highly-similar to those used by a number of dsRNA viruses that infect eukaryotes. The genus Cystovirus has seven species:
Pseudomonas virus phi6
Pseudomonas virus phi8
Pseudomonas virus phi12
Pseudomonas virus phi13
Pseudomonas virus phi2954
Pseudomonas virus phiNN
Pseudomonas virus phiYY
Other unassigned phages:
Microvirgula virus phiNY
Streptomyces virus phi0
Lactococcus virus phi7-4
Pectobacterium virus MA14
Acinetobacter virus CAP3
Acinetobacter virus CAP4
Acinetobacter virus CAP5
Acinetobacter virus CAP6
Acinetobacter virus CAP7
References
External links
ICTV Online Report: Cystoviridae
Viralzone: Cystovirus
Cystoviridae
Riboviria
Virus genera | Cystovirus | [
"Biology"
] | 884 | [
"Viruses",
"Riboviria"
] |
1,582,429 | https://en.wikipedia.org/wiki/Birnaviridae | Birnaviridae is a family of double-stranded RNA viruses. Salmonid fish, birds and insects serve as natural hosts. There are currently 11 species in this family, divided among seven genera. Diseases associated with this family include infectious pancreatic necrosis in salmonid fish, which causes significant losses to the aquaculture industry, with chronic infection in adult salmonid fish and acute viral disease in young salmonid fish.
Structure
Viruses in family Birnaviridae are non-enveloped, with icosahedral single-shelled geometries, and T=13 symmetry. The diameter is around 70 nm.
Genome
The genome is composed of linear, bi-segmented, double-stranded RNA. It is around 5.9–6.9 kbp in length and codes for five to six proteins. Birnaviruses encode the following proteins:
RNA-directed RNA polymerase (VP1), which lacks the highly conserved Gly-Asp-Asp (GDD) sequence, a component of the proposed catalytic site of this enzyme family that exists in the conserved motif VI of the palm domain of other RNA-directed RNA polymerases.
The large RNA segment, segment A, of birnaviruses codes for a polyprotein (N-VP2-VP4-VP3-C) that is processed into the structural proteins of the virion, VP2 and VP3 (a minor structural component of the virus), and into the putative protease VP4. The VP4 protein is involved in generating VP2 and VP3. Recombinant VP3 is more immunogenic than recombinant VP2.
Infectious pancreatic necrosis virus (IPNV), a birnavirus, is an important pathogen in fish farms. Analyses of viral proteins showed that VP2 is the major structural and immunogenic polypeptide of the virus. All neutralizing monoclonal antibodies are specific to VP2 and bind to continuous or discontinuous epitopes. The variable domain of VP2 and the 20 adjacent amino acids of the conserved C-terminal are probably the most important in inducing an immune response for the protection of animals.
Non-structural protein VP5 is encoded by RNA segment A. The function of this small viral protein is unknown. It is believed to be involved in influencing apoptosis, but studies do not fully agree. The protein is not found in the virion.
Viral Replication
Viral replication is cytoplasmic. Entry into the host cell is achieved by cell receptor endocytosis. Replication follows the double-stranded RNA virus replication model in the cytoplasm. Double-stranded RNA virus transcription is the method of transcription in cytoplasm. The virus is released by budding. Salmonid fish (Aquabirnavirus), young sexually immature chickens (Avibirnavirus), insects (Entomobirnavirus), and blotched snakehead fish (Blosnavirus) are the natural hosts. Transmission routes are contact.
Taxonomy
The following genera are recognized:
Aquabirnavirus
Avibirnavirus
Blosnavirus
Dronavirus
Entomobirnavirus
Ronavirus
Telnavirus
References
External links
ICTV Report: Birnaviridae
Viralzone: Birnaviridae
Protein families
Virus families
Riboviria | Birnaviridae | [
"Biology"
] | 687 | [
"Protein families",
"Viruses",
"Riboviria",
"Protein classification"
] |
1,582,494 | https://en.wikipedia.org/wiki/Data%20access | Data access is a generic term referring to a process which has both an IT-specific meaning and other connotations involving access rights in a broader legal and/or political sense. In the former it typically refers to software and activities related to storing, retrieving, or acting on data housed in a database or other repository.
Details
Two fundamental types of data access exist, contrasted in the sketch after this list:
sequential access (as in magnetic tape, for example)
random access (as in indexed media)
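A minimal sketch of the distinction, using Python's standard file interface; the file path, the fixed record size, and the function names are hypothetical and serve only to contrast reading records in order with jumping straight to one by its offset.

```python
RECORD_SIZE = 16  # hypothetical fixed-length records


def read_sequential(path: str) -> list[bytes]:
    """Sequential access: read every record in order, as on magnetic tape."""
    records = []
    with open(path, "rb") as f:
        while (chunk := f.read(RECORD_SIZE)):
            records.append(chunk)
    return records


def read_random(path: str, n: int) -> bytes:
    """Random access: jump directly to the n-th record, as on indexed media."""
    with open(path, "rb") as f:
        f.seek(n * RECORD_SIZE)  # skip straight to the record's byte offset
        return f.read(RECORD_SIZE)
```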
Data access crucially involves authorization to access different data repositories. Data access can help distinguish the abilities of administrators and users. For example, administrators may have the ability to remove, edit and add data, while general users may not even have "read" rights if they lack access to particular information.
Historically, each repository (including each different database, file system, etc.), might require the use of different methods and languages, and many of these repositories stored their content in different and incompatible formats.
Over the years, standardized languages, methods, and formats have developed to serve as interfaces between the often proprietary, and always idiosyncratic, repository-specific languages and methods. Such standards include SQL (1974- ), ODBC (ca 1990- ), JDBC, XQJ, ADO.NET, XML, XQuery, XPath (1999- ), and Web Services.
Some of these standards enable translation of data from unstructured (such as HTML or free-text files) to structured (such as XML or SQL).
Structures such as connection strings and DBURLs can attempt to standardise methods of connecting to databases.
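As an illustration of that kind of standardisation, the sketch below opens a database through Python's built-in DB-API module for SQLite; the file name and table are hypothetical, and the ODBC- and JDBC-style strings shown in the comments are generic examples of those formats rather than references to any particular deployment.

```python
import sqlite3

# Typical connection-string / DB-URL shapes (illustrative only):
#   ODBC: "Driver={PostgreSQL};Server=db.example.org;Database=sales;Uid=app;Pwd=secret"
#   JDBC: "jdbc:postgresql://db.example.org:5432/sales"
# Each encodes, in one standard string, everything a driver needs to reach the repository.

conn = sqlite3.connect("example.db")  # for SQLite the "connection string" is just a file path
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("SELECT name FROM items")
print(cur.fetchall())
conn.close()
```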
See also
Right of access to personal data
Data access object
Data access layer
References
Data management
Data analysis | Data access | [
"Technology"
] | 354 | [
"Data management",
"Data"
] |
1,582,555 | https://en.wikipedia.org/wiki/Metaviridae | Metaviridae is a family of viruses which exist as Ty3-gypsy LTR retrotransposons in a eukaryotic host's genome. They are closely related to retroviruses: members of the family Metaviridae share many genomic elements with retroviruses, including length, organization, and genes themselves. This includes genes that encode reverse transcriptase, integrase, and capsid proteins. The reverse transcriptase and integrase proteins are needed for the retrotransposon activity of the virus. In some cases, virus-like particles can be formed from capsid proteins.
Some assembled virus-like particles of members of the family Metaviridae can penetrate and infect previously uninfected cells. An example is gypsy, a retroelement found in the Drosophila melanogaster genome. The ability to infect other cells is determined by the presence of retroviral env genes, which encode coat proteins. Metaviridae is a family of retrotransposons found in all eukaryotes studied to date. Viruses of this family proliferate through intermediates called virus-like particles, which are known for their ability to induce mutations and are frequently encountered in genome sequencing. Members of the family Metaviridae are often referred to as LTR retrotransposons of the Ty3-gypsy family. All members produce intracellular particles, and the collection of these particles is heterogeneous. Extracellular particles have ovoid cores and are called virions. In many systems, virions have been characterized biochemically. The genomes of retrotransposons in this family are positive-strand RNAs. In addition to the genomic RNA, some cellular RNAs can be randomly associated with particles, including the specific tRNAs that prime reverse transcription during replication. Particle fractions from cells are heterogeneous with respect to maturation and are therefore associated with transcription and reverse-transcription intermediates in addition to genomic RNA. For virion-producing members, the virion membrane appears to be derived from the membrane of the host cell.
Taxonomy
The family Metaviridae is split into the following genera:
Genus Metavirus
Genus Errantivirus
Families Metaviridae, Belpaoviridae, Pseudoviridae, Retroviridae, and Caulimoviridae constitute the order Ortervirales.
References
External links
ICTV Report: Metaviridae
Descriptions of Plant Viruses
Ortervirales
RNA reverse-transcribing viruses
Virus families | Metaviridae | [
"Biology"
] | 523 | [
"Virus stubs",
"Viruses"
] |
1,582,728 | https://en.wikipedia.org/wiki/Barnaviridae | Barnaviridae is a family of non-enveloped, positive-strand RNA viruses. Cultivated mushrooms serve as natural hosts. The family has one genus, Barnavirus, which contains one species: Mushroom bacilliform virus. Diseases associated with this family includes La France disease.
Structure
Viruses in Barnaviridae are non-enveloped, with icosahedral and Bacilliform geometries, and T=1 symmetry. These viruses are about 50 nm long.
Genome
Genomes are linear, around 4kb in length. The genome has 4 open reading frames.
Genomic RNA serves as both the genome and viral messenger RNA. ORF2 is a polyprotein which is possibly auto-cleaved by the ORF2 viral protease. ORF3 encodes the RNA dependent RNA polymerase and may be translated by ribosomal frameshifting as an ORF2-ORF3 polyprotein. The single capsid protein (ORF4) is translated from a subgenomic RNA.
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. The virus spreads horizontally via mycelium and basidiospores. The cultivated mushroom, Agaricus bisporus, serves as the natural host.
References
External links
Viralzone: Barnaviridae
ICTV
Mycology
Virus families
Riboviria | Barnaviridae | [
"Biology"
] | 312 | [
"Viruses",
"Riboviria",
"Mycology"
] |
1,582,732 | https://en.wikipedia.org/wiki/Bromoviridae | Bromoviridae is a family of viruses. Plants serve as natural hosts. There are six genera in the family.
Taxonomy
The following genera are assigned to the family:
Alfamovirus
Anulavirus
Bromovirus
Cucumovirus
Ilarvirus
Oleavirus
Structure
Viruses in the family Bromoviridae are non-enveloped, with icosahedral and bacilliform geometries. The diameter is around 26-35 nm.
Genomes are linear and segmented, tripartite.
Life cycle
Viral replication is cytoplasmic, and is lysogenic. Entry into the host cell is achieved by penetration into the host cell. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription, using the internal initiation model of subgenomic RNA transcription, is the method of transcription. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. Transmission routes are mechanical and contact.
References
External links
ICTV Report: Bromoviridae
Viralzone: Bromoviridae
Virus families
Riboviria | Bromoviridae | [
"Biology"
] | 218 | [
"Viruses",
"Riboviria"
] |
1,582,736 | https://en.wikipedia.org/wiki/Caliciviridae | The Caliciviridae are a family of "small round structured" viruses, members of Class IV of the Baltimore scheme. Caliciviridae bear resemblance to enlarged picornavirus and was formerly a separate genus within the picornaviridae. They are positive-sense, single-stranded RNA which is not segmented. Thirteen species are placed in this family, divided among eleven genera. Diseases associated with this family include feline calicivirus (respiratory disease), rabbit hemorrhagic disease virus (often fatal hepatitis), and Norwalk group of viruses (gastroenteritis). Caliciviruses naturally infect vertebrates, and have been found in a number of organisms such as humans, cattle, pigs, cats, chickens, reptiles, dolphins and amphibians. The caliciviruses have a simple construction and are not enveloped. The capsid appears hexagonal/spherical and has icosahedral symmetry (T=1 or T=3) with a diameter of 35–39 nm.
Caliciviruses are not very well studied because until recently, they could not be grown in culture, and they have a very narrow host range and no suitable animal model. However, the recent application of modern genomic technologies has led to an increased understanding of the virus family. A recent isolate from rhesus monkeys—Tulane virus—can be grown in culture, and this system promises to increase understanding of these viruses.
Etymology
Calici- comes from the Latin word Calyx and the Greek word kalyx. The words mean a cup or chalice, a Calix. This comes from the strains having visible cup-shaped depressions.
Taxonomy
The following genera are recognized:
Bavovirus
Lagovirus
Minovirus
Nacovirus
Nebovirus
Norovirus
Recovirus
Salovirus
Sapovirus
Valovirus
Vesivirus
A number of other caliciviruses remain unclassified, including the chicken calicivirus.
Virology
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by attachment to host receptors, which mediate endocytosis. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. Translation takes place by leaky scanning, and RNA termination-reinitiation. Vertebrates serve as the natural host. Transmission routes are fecal-oral.
Human disease
History
Establishing the viral etiology took many decades due to the difficulty of growing the virus in cell culture. In the 1940s and 1950s in the United States and Japan, caliciviruses could not be grown in culture, so as an experiment bacteria-free filtrates of diarrheal stool were given to volunteers to determine whether a filterable agent was present. These experiments demonstrated that nonbacterial, filterable agents had the capability of causing enteric disease in humans. In 1968, an outbreak at an elementary school in Norwalk, Ohio (the origin of the name Norwalk virus) led to stool samples again being given to volunteers and serially passaged to other people. Finally, in 1972, Kapikian and his colleagues isolated the Norwalk virus from volunteers using immune electron microscopy, a process that involves looking directly at antibody-antigen complexes. This one Norwalk virus strain served as the prototype for other species and small round structured viruses later known as noroviruses.
Animal viruses
Rabbit hemorrhagic disease virus is a pathogen of rabbits that causes major problems throughout the world where rabbits are reared for food and clothing, make a significant contribution to ecosystem ecology, and where they support valued wildlife as a food source.
Uses
Australia and New Zealand, in an effort to control their rabbit populations, have intentionally spread rabbit calicivirus.
References
External links
ICTV Report: Caliciviridae
Caliciviridae description page from the International Committee on Taxonomy of Viruses site
MicrobiologyBytes: Caliciviruses
Human caliciviruses
Stanford University
Virus Pathogen Database and Analysis Resource (ViPR): Caliciviridae
Viralzone: Caliciviridae
3D macromolecular structures of Caliciviridae from the EM Data Bank(EMDB)
ICTV
Riboviria
Virus families | Caliciviridae | [
"Biology"
] | 857 | [
"Viruses",
"Riboviria"
] |
1,582,743 | https://en.wikipedia.org/wiki/Closteroviridae | Closteroviridae is a family of viruses. Plants serve as natural hosts. There are four genera and 59 species in this family, seven of which are unassigned to a genus. Diseases associated with this family include: yellowing and necrosis, particularly affecting the phloem.
Taxonomy
Genome type and transmission vector are two of the most important traits used for classification. Ampeloviruses and closteroviruses have monopartite genomes and are transmitted by pseudococcid mealybugs (and soft scale insects) and by aphids, respectively, while criniviruses have bipartite genomes and are transmitted by whiteflies.
Genera:
Ampelovirus
Closterovirus
Crinivirus
Velarivirus
Unassigned species:
Actinidia virus 1
Alligatorweed stunting virus
Blueberry virus A
Megakespama mosaic virus
Mint vein banding-associated virus
Olive leaf yellowing-associated virus
Persimmon virus B
Structure
Viruses in the family Closteroviridae are non-enveloped, with flexuous and filamentous geometries. The diameter is around 10–13 nm, with a length of 950–2200 nm. Genomes are linear, monopartite or bipartite, and around 20 kb in total length.
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration into the host cell. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. The virus exits the host cell by tubule-guided viral movement.
Plants serve as the natural host. Transmission routes are mechanical.
References
External links
ICTV Report: Closteroviridae
Viralzone: Closteroviridae
Virus families
Riboviria | Closteroviridae | [
"Biology"
] | 357 | [
"Viruses",
"Riboviria"
] |
1,582,785 | https://en.wikipedia.org/wiki/Flexiviridae | Flexiviridae was a family of viruses named after being filamentous and highly flexible. Members of the family infect plants. In 2009, the family was dissolved and replaced with four families, each of which still contain the name flexiviridae:
Alphaflexiviridae
Betaflexiviridae
Gammaflexiviridae
Deltaflexiviridae
Flexiviridae was incertae sedis but the new families are in Tymovirales.
References
Obsolete virus taxa
Unaccepted virus taxa | Flexiviridae | [
"Biology"
] | 106 | [
"Biological hypotheses",
"Unaccepted virus taxa",
"Controversial taxa"
] |
1,582,810 | https://en.wikipedia.org/wiki/Crown%20molding | Crown moulding (interchangeably spelled Crown molding in American English) is a form of cornice created out of decorative moulding installed atop an interior wall. It is also used atop doors, windows, pilasters and cabinets.
Historically made of plaster or wood, modern crown moulding installation may be of a single element, or a build-up of multiple components into a more elaborate whole.
Application
Crown moulding is typically installed at the intersection of walls and ceiling, but may also be used above doors, windows, or cabinets. Crown treatments made out of wood may be a single piece of trim, or a build-up of multiple components to create a more elaborate look. The main element, or the only one in a plain installation, is a piece of trim that is sculpted on one side and flat on the other, with standard angles that together total 90 degrees milled on its top and bottom edges. When placed against a wall and ceiling, a triangular void is created behind it. Cutting inside and outside corners requires complex cuts at standard angles, typically done with powered compound miter saws that feature detents at these angles to aid the user.
An alternative method, coping, is a two step process that begins with cutting a simple miter on both mating trim ends, then uses a coping saw to back-cut at least one of the miters along its profiled edge to provide relief during installation.
Simplified crown installation is possible when using manufactured corner blocks, requiring only simple butt cuts on each end of lengths of trim. Plastic and foam versions of crown are now available, typically with corner blocks, for easy installation by DIY home improvement enthusiasts.
Angle calculations
Fitting crown moulding requires a cut at the correct combination of miter angle and bevel angle. The calculation of these angles is affected by two variables: (1) the spring angle (or crown angle, typically sold in 45° and 38° formats), and (2) the wall angle.
Pre-calculated crown moulding tables or software can be used to facilitate the determination of the correct angles. Given the spring angle and the wall angle, the formulas used to calculate the miter angle and the bevel angle are:
Miter angle = arctan( sin(spring angle) / tan(wall angle / 2) )
Bevel angle = arcsin( cos(spring angle) × cos(wall angle / 2) )
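A minimal numerical sketch of the two formulas above, written in Python; the function name and the 38° spring / 90° wall defaults are assumptions chosen only because they are common stock settings, and results should be checked against a cutting chart before making an actual cut.

```python
import math


def crown_cut_angles(spring_deg: float = 38.0, wall_deg: float = 90.0) -> tuple[float, float]:
    """Return (miter, bevel) in degrees for crown moulding cut lying flat.

    Implements the formulas above:
      miter = arctan( sin(spring) / tan(wall / 2) )
      bevel = arcsin( cos(spring) * cos(wall / 2) )
    """
    s = math.radians(spring_deg)
    half_wall = math.radians(wall_deg) / 2.0
    miter = math.degrees(math.atan(math.sin(s) / math.tan(half_wall)))
    bevel = math.degrees(math.asin(math.cos(s) * math.cos(half_wall)))
    return miter, bevel


# Common stock settings: a 38 degree spring angle on a square (90 degree) corner
# gives roughly a 31.6 degree miter and a 33.9 degree bevel, the figures found
# on most compound miter saw charts; a 45 degree crown gives about 35.3 and 30.0.
print(tuple(round(a, 1) for a in crown_cut_angles(38.0, 90.0)))
```

Whether the miter is swung left or right, and which end of the board is kept, depends on whether the corner is inside or outside; the saw detents mentioned above correspond to these stock results.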
See also
Baseboard
Dado rail
Panelling
Picture rail
External links
Online Angle Generator and Crown Molding Installation Tutorials
Floors
Walls
Woodworking
Architectural elements | Crown molding | [
"Technology",
"Engineering"
] | 481 | [
"Structural engineering",
"Building engineering",
"Floors",
"Architectural elements",
"Components",
"Architecture"
] |
1,582,812 | https://en.wikipedia.org/wiki/Hepeviridae | Hepeviridae is a family of viruses. Human, pig, wild boar, sheep, cow, camel, monkey, some rodents, bats and chickens serve as natural hosts. There are two genera in the family. Diseases associated with this family include: hepatitis; high mortality rate during pregnancy; and avian hepatitis E virus is the cause of hepatitis-splenomegaly (HS) syndrome among chickens.
Taxonomy
The following genera are assigned to the family:
Orthohepevirus
Piscihepevirus
A third genus has been proposed — Insecthepevirus. This proposed genus contains one species — Sogatella furcifera hepe-like virus.
A species — Crustacea hepe-like virus 1, has been isolated from a prawn (Macrobrachium rosenbergii).
Structure
Viruses in the family Hepeviridae are non-enveloped, with icosahedral and spherical geometries, and T=1 symmetry. The diameter is around 32-34 nm. Genomes are linear and non-segmented, around 7.2kb in length. The genome has three open reading frames.
Evolution
This has been studied by examining the ORF1 and capsid proteins. The ORF1 protein appears to be related to those of members of the Alphatetraviridae - a member of the "Alpha-like" supergroup of viruses - while the capsid protein is related to the chicken astrovirus capsid protein - a member of the "Picorna-like" supergroup. This suggests that a recombination event at some point in the past between at least two distinct viruses gave rise to the ancestor of this family. This recombination event occurred at the junction of the structural and non-structural proteins.
Life cycle
Entry into the host cell is achieved by attachment of the virus to host receptors, which mediates clathrin-mediated endocytosis. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. Translation takes place by leaky scanning. Human, pig, wild boar, monkey, cow, sheep, camel, some rodents, bat and chicken serve as the natural hosts. Transmission routes are zoonosis and fomite.
References
External links
ICTV Online (10th) Report Hepeviridae
Viralzone: Hepeviridae
Virus families
Riboviria | Hepeviridae | [
"Biology"
] | 487 | [
"Viruses",
"Riboviria"
] |
1,582,822 | https://en.wikipedia.org/wiki/Fiersviridae | Fiersviridae is a family of positive-strand RNA viruses which infect prokaryotes. Bacteria serve as the natural host. They are small viruses with linear, positive-sense, single-stranded RNA genomes that encode four proteins. All phages of this family require bacterial pili to attach to and infect cells. The family has 185 genera, most discovered by metagenomics. In 2020, the family was renamed from Leviviridae to its current name.
Structure
Viruses in Fiersviridae are non-enveloped, with icosahedral and spherical geometries, and T=3 symmetry. Their virion diameter is around 26 nm.
Genome
Fiersviruses have a positive-sense, single-stranded RNA genome. It is linear and non-segmented and around 4kb in length. The genome encodes four proteins, which are the coat, replicase, maturation, and lysis protein.
Life cycle
Entry into the host cell is achieved by adsorption into the host cell. Replication follows the positive-strand RNA virus replication model. Positive-strand RNA virus transcription is the method of transcription. Translation takes place by suppression of termination. The virus exits the host cell by bacteria lysis. Bacteria serve as the natural host.
Taxonomy
Fiersviridae contains 185 genera. Two notable genera are Emesvirus, which contains bacteriophage MS2, and Qubevirus, which contains bacteriophage Qbeta.
References
External links
Viralzone: Leviviridae
ICTV
Virus families
Riboviria | Fiersviridae | [
"Biology"
] | 319 | [
"Viruses",
"Riboviria"
] |
1,582,833 | https://en.wikipedia.org/wiki/Comparison%20of%20platform%20virtualization%20software | Platform virtualization software, specifically emulators and hypervisors, are software packages that emulate the whole physical computer machine, often providing multiple virtual machines on one physical platform. The table below compares basic information about platform virtualization hypervisors.
General
Features
Providing any virtual environment usually requires some overhead of one type or another. Native usually means that the virtualization technique does not do any CPU-level emulation (unlike Bochs, which emulates the CPU and therefore executes code more slowly than when it is run directly by a CPU). Some other products, such as VMware and Virtual PC, use approaches similar to Bochs and QEMU; however, they use a number of advanced techniques to shortcut most of the calls directly to the CPU (similar to the process a JIT compiler uses) to bring the speed to near native in most cases. However, some products such as coLinux, Xen, and z/VM (in real mode) do not suffer the cost of CPU-level slowdowns, as the CPU-level instructions are not proxied or executed against an emulated architecture, since the guest OS or hardware provides the environment for the applications to run under. However, access to many of the other resources on the system, such as devices and memory, may be proxied or emulated in order to broker those shared services out to all the guests, which may cause some slowdowns compared to running outside of virtualization.
OS-level virtualization is described as "native" speed, however some groups have found overhead as high as 3% for some operations, but generally figures come under 1%, so long as secondary effects do not appear.
See for a paper comparing performance of paravirtualization approaches (e.g. Xen) with OS-level virtualization
Requires patches/recompiling.
Exceptional for lightweight, paravirtualized, single-user VM/CMS interactive shell: largest customers run several thousand users on even single prior models. For multiprogramming OSes like Linux on IBM Z and z/OS that make heavy use of native supervisor state instructions, performance will vary depending on nature of workload but is near native. Hundreds into the low thousands of Linux guests are possible on a single machine for certain workloads.
Image type compatibility
Other features
Windows Server 2008 R2 SP1 and Windows 7 SP1 have limited support for redirecting the USB protocol over RDP using RemoteFX.
Windows Server 2008 R2 SP1 adds accelerated graphics support for certain editions of Windows Server 2008 R2 SP1 and Windows 7 SP1 using RemoteFX.
Restrictions
This table is meant to outline restrictions in the software dictated by licensing or capabilities.
Note: No limit means no enforced limit. For example, a VM with 1 TB of memory cannot fit in a host with only 8 GB memory and no memory swap disk, so it will have a limit of 8 GB physically.
See also
List of computer system emulators
Comparison of application virtualization software
Comparison of OS emulation or virtualization apps on Android
Popek and Goldberg virtualization requirements
Virtual DOS machine
x86 virtualization
Notes
References
Platform virtualization software | Comparison of platform virtualization software | [
"Technology"
] | 637 | [
"Software comparisons",
"Computing comparisons"
] |
1,582,869 | https://en.wikipedia.org/wiki/Nodaviridae | Nodaviridae is a family of nonenveloped positive-strand RNA viruses. Vertebrates and invertebrates serve as natural hosts. Diseases associated with this family include: viral encephalopathy and retinopathy in fish. There are nine species in the family, assigned to two genera.
History
The name of the family is derived from the Japanese village of Nodamura, Iwate Prefecture where Nodamura virus was first isolated from Culex tritaeniorhynchus mosquitoes.
Virology
Structure
The virus is not enveloped and has an icosahedral capsid (triangulation number = 3) ranging from 29 to 35 nm in diameter. The capsid is constructed of 32 capsomers.
Genome
The genome is linear, positive-sense, bipartite (composed of two segments, RNA1 and RNA2), single-stranded RNA totalling about 4,500 nucleotides, with a 5’ terminal methylated cap and a non-polyadenylated 3’ terminus.
RNA1, which is ~3.1 kilobases in length, encodes a protein that has multiple functional domains: a mitochondrial targeting domain, a transmembrane domain, an RNA-dependent RNA polymerase (RdRp) domain, a self-interaction domain and an RNA capping domain. In addition, RNA1 encodes a subgenomic RNA3 that encodes protein B2, an RNA silencing inhibitor.
RNA2 encodes protein α, a viral capsid protein precursor, which is auto-cleaved into two mature proteins, a 38 kDa β protein and a 5 kDa γ protein, at a conserved Asn/Ala site during virus assembly.
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration into the host cell. Replication follows the positive stranded RNA virus replication model. Positive stranded RNA virus transcription, using the internal initiation model of subgenomic RNA transcription is the method of transcription. Vertebrates and invertebrates serve as the natural host. Transmission routes are contact and contamination.
Taxonomy
The members of the genus Alphanodavirus were originally isolated from insects, while those of the genus Betanodavirus were isolated from fish. A small number of nodaviruses seem to lie outside either of these clades. Flock house virus (FHV) is the best studied of the nodaviruses. There are nine species in this family, assigned to two genera:
Alphanodavirus
Black beetle virus
Boolarra virus
Flock House virus
Nodamura virus
Pariacoto virus
Betanodavirus
Barfin flounder nervous necrosis virus
Redspotted grouper nervous necrosis virus
Striped jack nervous necrosis virus
Tiger puffer nervous necrosis virus
References
External links
ICTV Report: Nodaviridae
Viralzone: Nodaviridae
Virus families
Fish viral diseases
Insect viral diseases
Riboviria | Nodaviridae | [
"Biology"
] | 589 | [
"Viruses",
"Riboviria"
] |
1,582,877 | https://en.wikipedia.org/wiki/Polite%20fiction | A polite fiction is a social scenario in which all participants are aware of a truth, but pretend to believe in some alternative version of events to avoid conflict or embarrassment. Polite fictions are closely related to euphemism, in which a word or phrase that might be impolite, disagreeable, or offensive is replaced by another word or phrase that both speaker and listener understand to have the same meaning. In scholarly usage, "polite fiction" can be traced to at least 1953.
Examples
An informal example is someone who goes out drinking after telling their family that they are merely going for an evening walk to enjoy the night air. Even though many of the relatives involved know that the person is likely leaving to drink alcohol, and may come home drunk, they may act as if the person is going out for a walk, and act as if they do not notice signs of alcohol intoxication when the person returns.
Another common example is a couple that has had an argument, after which one of them absents themselves from a subsequent social gathering, with the other claiming that they are ill, especially if this is a regular occurrence.
In these instances, although others in the subject's social circles may have seen this behavior numerous times and are aware that a problem of some sort exists, they may remain silent for fear of causing upset, thereby further troubling their relationship with the subject. Such silence bends social norms of honesty (a human behavior related to ethics codes and ethical clarity), but it can be used to retain politeness and trust, with the effect of maintaining social bonds and providing ideological support.
Denialism
Polite fictions can slip into denial. This is especially the case when the fiction is actually meant to fool some observers, such as outsiders or children judged too young to be told the truth. The truth then becomes "the elephant in the room"; no matter how obvious it is, the people most affected pretend to others and to themselves that it is not so. This can be used to humorous effect in comedy, where a character will seem bent on making it impossible to maintain the polite fiction.
See also
Diplomatic illness
Dogma
The Emperor's New Clothes
Etiquette
Interpersonal communication
Kayfabe
Legal fiction
Minimisation (psychology)
Obfuscation
Open secret
Persuasive definition
Polite lie
Voldemort effect
What happens on tour, stays on tour
White lie
References
External links
Explanations of "polite fictions" in U.S. culture for Japanese visitors
Denialism
Etiquette
Figures of speech | Polite fiction | [
"Biology"
] | 507 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
1,582,891 | https://en.wikipedia.org/wiki/Potyviridae | Potyviridae is a family of positive-strand RNA viruses that encompasses more than 30% of known plant viruses, many of which are of great agricultural significance. The family has 12 genera and 235 species, three of which are unassigned to a genus.
Structure
Potyvirid virions are nonenveloped, flexuous filamentous, rod-shaped particles. The diameter is around 11–20 nm, with a length of 650–950 nm.
Genome
Genomes are linear and usually nonsegmented, around 8–12 kb in length, consisting of positive-sense RNA surrounded by a protein coat (the capsid) made up of a single virus-encoded coat protein. All induce the formation of virus inclusion bodies called cylindrical inclusions (‘pinwheels’) in their hosts. These consist of a single protein (about 70 kDa) made in their hosts from a single viral genome product.
Member viruses encode large polypeptides that are cleaved into mature proteins. In 5'–3' order these proteins are
P1 (a serine protease): 83 kDa
HC (a protease): 51 kDa
P3: 34 kDa
6K1: 5 kDa
Cl (helicase): 71 kDa
6K2: 6 kDa
VPg (the 5' binding protein): 20 kDa
NIa-Pro (a protease): 27 kDa
NIb (RNA dependent RNA polymerase): 57 kDa
Capsid protein: 34 kDa
There may be some variation in the number of the proteins depending on the genera and species. For instance some genera lack P1, some virus of the genus Ipomovirus lack HC and have a P1 tandem. Pretty interesting sweet potato potyviral ORF (PISPO), alkylation B (AlkB), and inosine triphosphate pyrophosphatase (known as ITPase or HAM1) are protein domains identified in atypical members.
Life cycle
Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. Translation takes place by −1 ribosomal frameshifting. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. The virus is transmitted via a vector (often an insect or mite). Transmission routes are vector and mechanical.
Transmission
Potyvirus is the largest genus in the family, with 183 known species. These viruses are 720–850 nm in length and are transmitted by aphids. They can also be easily transmitted by mechanical means. These viruses shared a common ancestor about 6,600 years ago and are transmitted by over 200 species of aphids.
The species in the genus Macluravirus are 650–675 nm in length and are also transmitted by aphids. The plant viruses in the genus Ipomovirus are transmitted by whiteflies and they are 750–950 nm long. Tritimovirus and the Rymovirus viruses are 680–750 nm long and are transmitted by eriophydid mites. (The rymoviruses are closely related to the potyviruses and may eventually be merged with the potyviruses.) The Bymovirus genome consists of two particles instead of one (275 and 550 nm) and these viruses are transmitted by the chytrid fungus, Polymyxa graminis.
Taxonomy
The following genera are recognized:
Arepavirus
Bevemovirus
Brambyvirus
Bymovirus
Celavirus
Ipomovirus
Macluravirus
Poacevirus
Potyvirus
Roymovirus
Rymovirus
Tritimovirus
The following species are unassigned to a genus:
Common reed chlorotic stripe virus
Longan witches broom-associated virus
Spartina mottle virus
References
External links
ICTV Online (10th) Report: Potyviridae
Viralzone: Potyviridae
Viral plant pathogens and diseases
Virus families
Riboviria | Potyviridae | [
"Biology"
] | 844 | [
"Viruses",
"Riboviria"
] |
1,582,906 | https://en.wikipedia.org/wiki/Tetraviridae | Tetraviridae is a family of viruses named due to its members having T=4 symmetry and infecting butterflies and moths. The family was dissolved in 2011 due to genetic differences and replaced with three families, each of which still contain the name tetraviridae:
Alphatetraviridae
Carmotetraviridae
Permutotetraviridae
References
Obsolete virus taxa
Unaccepted virus taxa | Tetraviridae | [
"Biology"
] | 84 | [
"Biological hypotheses",
"Unaccepted virus taxa",
"Controversial taxa"
] |
1,582,920 | https://en.wikipedia.org/wiki/Tombusviridae | Tombusviridae is a family of single-stranded positive sense RNA plant viruses. There are three subfamilies, 17 genera, and 95 species in this family. The name is derived from Tomato bushy stunt virus (TBSV).
Genome
All viruses in the family have a non-segmented (monopartite) linear genome, with the exception of Dianthoviruses, whose genome is bipartite. The genome is approximately 4.6–4.8kb in length, lacks a 5' cap and a poly(A) tail, and it encodes 4–6 ORFs. The polymerase encodes an amber stop codon which is the site of a readthrough event within ORF1, producing two products necessary for replication. There is no helicase encoded by the virus.
Structure
The RNA is encapsulated in an icosahedral (T=3) capsid, composed of 180 units of a single coat protein 27–42K in size; the virion measures 28–35 nm in diameter, and it is not enveloped.
Life cycle
Viral replication is cytoplasmic, and is lysogenic. Entry into the host cell is achieved by penetration into the host cell. Replication follows the positive stranded RNA virus replication model. Positive stranded RNA virus transcription, using the premature termination model of subgenomic RNA transcription is the method of transcription. Translation takes place by leaky scanning, −1 ribosomal frameshifting, viral initiation, and suppression of termination. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. Transmission routes are mechanical, seed borne, and contact.
Viruses in this family are primarily soil-borne, some transmitted by fungal species of the order Chytridiales, others by no known vector. Virions may spread by water, root growth into infected soil, contact between plants, pollen, or seed, depending on the virus species. These viruses may be successfully transmitted by grafting or mechanical inoculation, and both the virion and the genetic material alone are infective.
Replication
Members of Tombusviridae replicate in the cytoplasm, by use of negative strand templates. The replication process leaves a surplus of positive sense (+)RNA strands, and it is thought that not only does the viral RNA act as a template for replication, but is also able to manipulate and regulate RNA synthesis.
The level of RNA synthesis has been shown to be affected by the cis-acting properties of certain elements on the RNA (such as RNA1 and 2), which include core promoter sequences which regulate the site of initiation for the complementary RNA strand synthesis. This mechanism is thought to be recognised by RNA-dependent RNA polymerase, found encoded within the genome.
Viruses in Tombusviridae have been found to co-opt GAPDH, a host metabolic enzyme, for use in the replication center. GAPDH may bind to the (−)RNA strand and keep it in the replicase complex, allowing (+)RNA strands synthesized from it to be exported and accumulate in the host cell. Downregulation of GAPDH reduced viral RNA accumulation, and eliminated the surplus of (+)RNA copies.
Notes
Research has shown that plants infected with tombusviruses contain defective interfering (DI) RNAs that are derived directly from the viral RNA genome rather than from the host genome. With their small size and cis-acting elements, viral DI RNAs are good templates on which to study RNA replication, both in vivo and in vitro.
Sub-genomic RNA is used in the synthesis of some proteins; they are generated by premature termination of (−)strand synthesis. sgRNAs and sgRNA negative-sense templates are found in infected cells.
Taxonomy
The family contains the following subfamilies and genera (-virinae denotes subfamily and -virus denotes genus):
Calvusvirinae
Umbravirus
Procedovirinae
Alphacarmovirus
Alphanecrovirus
Aureusvirus
Avenavirus
Betacarmovirus
Betanecrovirus
Gallantivirus
Gammacarmovirus
Macanavirus
Machlomovirus
Panicovirus
Pelarspovirus
Tombusvirus
Zeavirus
Species unassigned to a genus in Procedovirinae:
Ahlum waterborne virus
Bena mild mosaic virus
Chenopodium necrosis virus
Cucumber soil-borne virus
Trailing lespedeza virus 1
Weddel waterborne virus
Regressovirinae
Dianthovirus
Lastly, one genus is unassigned to a subfamily: Luteovirus.
References
External links
Viralzone: Tombusviridae
ICTV
Viral plant pathogens and diseases
Virus families
Riboviria | Tombusviridae | [
"Biology"
] | 957 | [
"Viruses",
"Riboviria"
] |
1,583,111 | https://en.wikipedia.org/wiki/Squamous%20metaplasia | Squamous metaplasia is a benign non-cancerous change (metaplasia) of surfacing lining cells (epithelium) to a squamous morphology.
Location
Common sites for squamous metaplasia include the bladder and cervix. Smokers often exhibit squamous metaplasia in the linings of their airways. These changes don't signify a specific disease, but rather usually represent the body's response to stress or irritation. Vitamin A deficiency or overdose can also lead to squamous metaplasia.
Uterine cervix
In regard to the cervix, squamous metaplasia can sometimes be found in the endocervix, as it is composed of simple columnar epithelium, whereas the ectocervix is composed of stratified squamous non-keratinized epithelium.
Significance
Squamous metaplasia may be seen in the context of benign lesions (e.g., atypical polypoid adenomyoma), chronic irritation, or cancer (e.g., endometrioid endometrial carcinoma), as well as pleomorphic adenoma.
See also
Metaplasia
Dysplasia
Barrett esophagus - a columnar cell metaplasia of squamous epithelium
Subareolar abscess
References
Histopathology | Squamous metaplasia | [
"Chemistry"
] | 297 | [
"Histopathology",
"Microscopy"
] |
1,583,130 | https://en.wikipedia.org/wiki/Rainbow%20Bridge%20%28pets%29 | The Rainbow Bridge is the theme of several works written first in 1959, then in the 1980s and 1990s, that speak of an other-worldly place where pets go upon death, eventually to be reunited with their owners. One is a short story whose original creator was long uncertain. The other is a six-stanza poem of rhyming pentameter couplets, created by a couple to help ease the pain of friends who lost pets. Each has gained popularity around the world among animal lovers who have lost a pet or wild animals that are cared for. The belief has many antecedents, including similarities to the Bifröst bridge of Norse mythology.
Story
The story tells of a lush green meadow just "this side of Heaven" (i.e., before one enters it). Rainbow Bridge is the name of both the meadow and the adjoining pan-prismatic conveyance connecting it to Heaven.
According to the story, when a pet dies, it goes to the meadow, restored to perfect health and free of any injuries. The pet runs and plays all day with the others; there is always fresh food and water, and the sun is always shining. However, it is said that while the pet is at peace and happy, it also misses its owner whom it left behind on Earth.
When its owner dies, they too arrive at the meadow, and that is when the pet stops playing, turns, sniffs at the air and looks into the distance where it sees its beloved owner. Excited, it runs as fast as it can, until owner and pet are once more united. The pet greets its former owner in great joy while the human looks into the soft, trusting eyes of the pet, who might have been gone and absent on Earth but never absent in the heart. Then side by side, they cross the Rainbow Bridge together into Heaven, to play again and be together in love and happiness, never again to be separated.
Authorship
In February 2023, authorship of the original story was confirmed by National Geographic magazine as Edna Clyne-Rekhy, an 82-year-old artist from Scotland.
Having been circulated widely around the world, the prose poem's original authorship was uncertain. Among those who have claimed authorship are:
Paul C. Dahm, a grief counselor in Oregon, US, claimed to have written the poem in 1981, and published it in a 1998 book of the same name (1981, ).
William N. Britton, author of Legend of Rainbow Bridge (1994, )
Wallace Sife, head of the Association for Pet Loss and Bereavement, whose poem "All Pets Go to Heaven" appears on the association's website as well as in his book The Loss of a Pet
However, American author Paul Koudounaris, a member of The Order of the Good Death, published an article in February 2023, in which he detangled the history of the poem and provided proof, including the original handwritten manuscript of the text, which make it clear that the author is Edna Clyne-Rekhy, who wrote it as a teenager in Scotland in 1959 to mourn the death of her dog Major. The article explained that she had originally considered the Rainbow Bridge to be private and kept it to herself. But she had typed out copies to give to friends, who were moved by the words and passed them on. But since these copies lacked her name, the Rainbow Bridge eventually became disconnected from its author. Eventually it was introduced to U.S. readers in 1994 when "Dear Abby", an advice column with a wide newspaper circulation, printed it in its entirety, but unattributed. It then became a staple in pet mourning circles, and later popular on the internet.
A Washington Post reporter opines that: "It is, in free verse form, 'Chicken Soup for the Soul' for an exploding $69 billion pet care industry."
Background
The concept of a paradise where pets wait for their human owners appeared much earlier, in the little-known sequel to Beautiful Joe, Margaret Marshall Saunders' book Beautiful Joe's Paradise. In this green land, the animals do not simply await their owners, but also help each other learn and grow and recover from mistreatment they may have endured in life. But the animals come to this land, and continue to true heaven, not by a bridge but by balloon.
The first mention of the "Rainbow Bridge" story online is a post on the newsgroup rec.pets.dogs, dated 7 January 1993, quoting the poem from a 1992 (or earlier) issue of Mid-Atlantic Great Dane Rescue League Newsletter, which in turn is stated to have quoted it from the Akita Rescue Society of America. Other posts from 1993 suggest it was already well established and being circulated online at that time, enough for the quotation of even a single line to be expected to be recognized by other newsgroup readers.
See also
Affectional bond
Deathbed phenomena – A range of experiences often involves sightings of deceased pets, and is noted to be effectual to ease grief.
"The Hunt", an episode of The Twilight Zone
Pet humanization
References
1959 short stories
Afterlife places
Animal death
Animal literature
Human–animal interaction
Animals in religion
Pets
Rainbows in culture
Scottish short stories | Rainbow Bridge (pets) | [
"Biology"
] | 1,070 | [] |
1,583,409 | https://en.wikipedia.org/wiki/Extensible%20Authentication%20Protocol | Extensible Authentication Protocol (EAP) is an authentication framework frequently used in network and internet connections. It is defined in , which made obsolete, and is updated by .
EAP is an authentication framework that provides for the transport and usage of keying material and parameters generated by EAP methods. There are many methods defined by RFCs, and a number of vendor-specific methods and new proposals exist. EAP is not a wire protocol; it only defines message formats and the information exchanged at the interface. Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol's messages.
EAP is in wide use. For example, in IEEE 802.11 (Wi-Fi) the WPA and WPA2 standards have adopted IEEE 802.1X (with various EAP types) as the canonical authentication mechanism.
Methods
EAP is an authentication framework, not a specific authentication mechanism. It provides some common functions and negotiation of authentication methods called EAP methods. There are currently about 40 different methods defined. Methods defined in IETF RFCs include EAP-MD5, EAP-POTP, EAP-GTC, EAP-TLS, EAP-IKEv2, EAP-SIM, EAP-AKA, and EAP-AKA'. Additionally, a number of vendor-specific methods and new proposals exist. Commonly used modern methods capable of operating in wireless networks include EAP-TLS, EAP-SIM, EAP-AKA, LEAP and EAP-TTLS. Requirements for EAP methods used in wireless LAN authentication are described in . The list of type and packet codes used in EAP is available from the IANA EAP Registry.
The standard also describes the conditions under which the AAA key management requirements described in can be satisfied.
Lightweight Extensible Authentication Protocol (LEAP)
The Lightweight Extensible Authentication Protocol (LEAP) method was developed by Cisco Systems prior to the IEEE ratification of the 802.11i security standard. Cisco distributed the protocol through the CCX (Cisco Compatible Extensions) as part of getting 802.1X and dynamic WEP adoption into the industry in the absence of a standard. There is no native support for LEAP in any Windows operating system, but it is widely supported by third-party client software most commonly included with WLAN (wireless LAN) devices. LEAP support for Microsoft Windows 7 and Microsoft Windows Vista can be added by downloading a client add-in from Cisco that provides support for both LEAP and EAP-FAST. Due to the wide adoption of LEAP in the networking industry, many other WLAN vendors claim support for LEAP.
LEAP uses a modified version of MS-CHAP, an authentication protocol in which user credentials are not strongly protected and easily compromised; an exploit tool called ASLEAP was released in early 2004 by Joshua Wright. Cisco recommends that customers who absolutely must use LEAP do so only with sufficiently complex passwords, though complex passwords are difficult to administer and enforce. Cisco's current recommendation is to use newer and stronger EAP protocols such as EAP-FAST, PEAP, or EAP-TLS.
EAP Transport Layer Security (EAP-TLS)
EAP Transport Layer Security (EAP-TLS), defined in , is an IETF open standard that uses the Transport Layer Security (TLS) protocol, and is well-supported among wireless vendors. EAP-TLS is the original, standard wireless LAN EAP authentication protocol.
EAP-TLS is still considered one of the most secure EAP standards available, although TLS provides strong security only as long as the user understands potential warnings about false credentials, and is universally supported by all manufacturers of wireless LAN hardware and software. Until April 2005, EAP-TLS was the only EAP type vendors needed to certify for a WPA or WPA2 logo. There are client and server implementations of EAP-TLS in 3Com, Apple, Avaya, Brocade Communications, Cisco, Enterasys Networks, Fortinet, Foundry, Hirschmann, HP, Juniper, Microsoft, and open source operating systems. EAP-TLS is natively supported in Mac OS X 10.3 and above, wpa_supplicant, Windows 2000 SP4, Windows XP and above, Windows Mobile 2003 and above, Windows CE 4.2, and Apple's iOS mobile operating system.
Unlike most TLS implementations of HTTPS, such as on the World Wide Web, the majority of implementations of EAP-TLS require mutual authentication using client-side X.509 certificates without giving the option to disable the requirement, even though the standard does not mandate their use. Some have identified this as having the potential to dramatically reduce adoption of EAP-TLS and prevent "open" but encrypted access points. On 22 August 2012 hostapd (and wpa_supplicant) added support in its Git repository for an UNAUTH-TLS vendor-specific EAP type (using the hostapd/wpa_supplicant project Private Enterprise Number), and on 25 February 2014 added support for the WFA-UNAUTH-TLS vendor-specific EAP type (using the Wi-Fi Alliance Private Enterprise Number), which only do server authentication. This would allow for situations much like HTTPS, where a wireless hotspot allows free access and does not authenticate station clients but station clients wish to use encryption (IEEE 802.11i-2004 i.e. WPA2) and potentially authenticate the wireless hotspot. There have also been proposals to use IEEE 802.11u for access points to signal that they allow EAP-TLS using only server-side authentication, using the standard EAP-TLS IETF type instead of a vendor-specific EAP type.
The requirement for a client-side certificate, however unpopular it may be, is what gives EAP-TLS its authentication strength and illustrates the classic convenience vs. security trade-off. With a client-side certificate, a compromised password is not enough to break into EAP-TLS enabled systems because the intruder still needs to have the client-side certificate; indeed, a password is not even needed, as it is only used to encrypt the client-side certificate for storage. The highest security available is when the "private keys" of client-side certificate are housed in smart cards. This is because there is no way to steal a client-side certificate's corresponding private key from a smart card without stealing the card itself. It is more likely that the physical theft of a smart card would be noticed (and the smart card immediately revoked) than a (typical) password theft would be noticed. In addition, the private key on a smart card is typically encrypted using a PIN that only the owner of the smart card knows, minimizing its utility for a thief even before the card has been reported stolen and revoked.
EAP-MD5
EAP-MD5 was the only IETF Standards Track based EAP method when it was first defined in the original RFC for EAP, . It offers minimal security; the MD5 hash function is vulnerable to dictionary attacks, and does not support key generation, which makes it unsuitable for use with dynamic WEP, or WPA/WPA2 enterprise. EAP-MD5 differs from other EAP methods in that it only provides authentication of the EAP peer to the EAP server but not mutual authentication. By not providing EAP server authentication, this EAP method is vulnerable to man-in-the-middle attacks. EAP-MD5 support was first included in Windows 2000 and deprecated in Windows Vista.
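To make the dictionary-attack risk concrete, here is a minimal sketch of the CHAP-style computation that EAP-MD5 inherits (the response is MD5 over the identifier octet, the shared secret, and the challenge, per RFC 1994/RFC 3748), followed by an offline guess loop against a captured challenge/response pair. All values and the password list are made up for illustration.

```python
import hashlib

def eap_md5_response(identifier: int, password: bytes, challenge: bytes) -> bytes:
    # CHAP-style hash used by the MD5-Challenge method:
    # MD5(identifier octet || shared secret || challenge value).
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

# A challenge/response pair as a passive eavesdropper might capture it.
identifier = 1
challenge = bytes.fromhex("a1b2c3d4e5f60718293a4b5c6d7e8f90")
observed = eap_md5_response(identifier, b"hunter2", challenge)

# Offline dictionary attack: no further interaction with the network is needed.
for guess in (b"password", b"letmein", b"hunter2", b"qwerty"):
    if eap_md5_response(identifier, guess, challenge) == observed:
        print("password recovered:", guess.decode())
        break
```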
EAP Protected One-Time Password (EAP-POTP)
EAP Protected One-Time Password (EAP-POTP), which is described in , is an EAP method developed by RSA Laboratories that uses one-time password (OTP) tokens, such as a handheld hardware device or a hardware or software module running on a personal computer, to generate authentication keys. EAP-POTP can be used to provide unilateral or mutual authentication and key material in protocols that use EAP.
The EAP-POTP method provides two-factor user authentication, meaning that a user needs both physical access to a token and knowledge of a personal identification number (PIN) to perform authentication.
EAP Pre-Shared Key (EAP-PSK)
EAP Pre-shared key (EAP-PSK), defined in , is an EAP method for mutual authentication and session key derivation using a pre-shared key (PSK). It provides a protected communication channel, when mutual authentication is successful, for both parties to communicate and is designed for authentication over insecure networks such as IEEE 802.11.
EAP-PSK is documented in an experimental RFC that provides a lightweight and extensible EAP method that does not require any public-key cryptography. The EAP method protocol exchange is done in a minimum of four messages.
EAP Password (EAP-PWD)
EAP Password (EAP-PWD), defined in , is an EAP method which uses a shared password for authentication. The password may be a low-entropy one and may be drawn from some set of possible passwords, like a dictionary, which is available to an attacker. The underlying key exchange is resistant to active attack, passive attack, and dictionary attack.
EAP-PWD is in the base of Android 4.0 (ICS). It is in FreeRADIUS and Radiator RADIUS servers, and it is in hostapd and wpa_supplicant.
EAP Tunneled Transport Layer Security (EAP-TTLS)
EAP Tunneled Transport Layer Security (EAP-TTLS) is an EAP protocol that extends TLS. It was co-developed by Funk Software and Certicom and is widely supported across platforms. Microsoft did not incorporate native support for the EAP-TTLS protocol in Windows XP, Vista, or 7. Supporting TTLS on these platforms requires third-party Encryption Control Protocol (ECP) certified software. Microsoft Windows added EAP-TTLS support with Windows 8; support for EAP-TTLS appeared in Windows Phone version 8.1.
The client can, but does not have to, be authenticated to the server via a CA-signed PKI certificate. This greatly simplifies the setup procedure, since a certificate is not needed on every client.
After the server is securely authenticated to the client via its CA certificate and optionally the client to the server, the server can then use the established secure connection ("tunnel") to authenticate the client. It can use an existing and widely deployed authentication protocol and infrastructure, incorporating legacy password mechanisms and authentication databases, while the secure tunnel provides protection from eavesdropping and man-in-the-middle attack. Note that the user's name is never transmitted in unencrypted clear text, improving privacy.
Two distinct versions of EAP-TTLS exist: original EAP-TTLS (a.k.a. EAP-TTLSv0) and EAP-TTLSv1. EAP-TTLSv0 is described in , EAP-TTLSv1 is available as an Internet draft.
EAP Internet Key Exchange v. 2 (EAP-IKEv2)
EAP Internet Key Exchange v. 2 (EAP-IKEv2) is an EAP method based on the Internet Key Exchange protocol version 2 (IKEv2). It provides mutual authentication and session key establishment between an EAP peer and an EAP server. It supports authentication techniques that are based on the following types of credentials:
Asymmetric key pairs: public/private key pairs where the public key is embedded into a digital certificate, and the corresponding private key is known only to a single party.
Passwords: low-entropy bit strings that are known to both the server and the peer.
Symmetric keys: high-entropy bit strings that are known to both the server and the peer.
It is possible to use a different authentication credential (and thereby technique) in each direction; for example, the EAP server may authenticate itself using a public/private key pair while the EAP peer uses a symmetric key. However, not all of the nine theoretical combinations are expected in practice. Specifically, the standard lists four use cases: the server authenticating with an asymmetric key pair while the peer uses any of the three credential types, and both sides using a symmetric key.
EAP-IKEv2 is described in , and a prototype implementation exists.
EAP Flexible Authentication via Secure Tunneling (EAP-FAST)
Flexible Authentication via Secure Tunneling (EAP-FAST; ) is a protocol proposal by Cisco Systems as a replacement for LEAP. The protocol was designed to address the weaknesses of LEAP while preserving the "lightweight" implementation. Use of server certificates is optional in EAP-FAST. EAP-FAST uses a Protected Access Credential (PAC) to establish a TLS tunnel in which client credentials are verified.
EAP-FAST has three phases: an optional phase 0, in which the Protected Access Credential is provisioned in-band; phase 1, in which a TLS tunnel is established using the PAC; and phase 2, in which the client credentials are authenticated inside the tunnel.
When automatic PAC provisioning is enabled, EAP-FAST has a vulnerability where an attacker can intercept the PAC and use that to compromise user credentials. This vulnerability is mitigated by manual PAC provisioning or by using server certificates for the PAC provisioning phase.
It is worth noting that the PAC file is issued on a per-user basis. This is a requirement of section 7.4.4 of the specification, so if a new user logs on to the network from a device, a new PAC file must be provisioned first. This is one reason why it is difficult not to run EAP-FAST in insecure anonymous provisioning mode. The alternative is to use device passwords instead, but then the device is validated on the network, not the user.
EAP-FAST can be used without PAC files, falling back to normal TLS.
EAP-FAST is natively supported in Apple OS X 10.4.8 and newer. Cisco supplies an EAP-FAST module for Windows Vista and later operating systems which have an extensible EAPHost architecture for new authentication methods and supplicants.
Tunnel Extensible Authentication Protocol (TEAP)
Tunnel Extensible Authentication Protocol (TEAP; ) is a tunnel-based EAP method that enables secure communication between a peer and a server by using the Transport Layer Security (TLS) protocol to establish a mutually authenticated tunnel. Within the tunnel, TLV (Type-Length-Value) objects are used to convey authentication-related data between the EAP peer and the EAP server.
In addition to peer authentication, TEAP allows the peer to ask the server for a certificate by sending a request in PKCS#10 format. After receiving the certificate request and authenticating the peer, the server can provision a certificate to the peer in PKCS#7 format (). The server can also distribute trusted root certificates to the peer in PKCS#7 format (). Both operations are enclosed into the corresponding TLVs and happen securely within the already established TLS tunnel.
EAP Subscriber Identity Module (EAP-SIM)
EAP Subscriber Identity Module (EAP-SIM) is used for authentication and session key distribution using the subscriber identity module (SIM) from the Global System for Mobile Communications (GSM).
GSM cellular networks use a subscriber identity module card to carry out user authentication. EAP-SIM uses a SIM authentication algorithm between the client and an Authentication, Authorization and Accounting (AAA) server, providing mutual authentication between the client and the network.
In EAP-SIM the communication between the SIM card and the Authentication Centre (AuC) replaces the need for a pre-established password between the client and the AAA server.
The A3/A8 algorithms are run several times with different 128-bit challenges, producing several 64-bit Kc values that are combined and mixed to create stronger keys (the Kc values are not used directly). The lack of mutual authentication in GSM has also been overcome.
EAP-SIM is described in .
EAP Authentication and Key Agreement (EAP-AKA)
Extensible Authentication Protocol Method for Universal Mobile Telecommunications System (UMTS) Authentication and Key Agreement (EAP-AKA), is an EAP mechanism for authentication and session key distribution using the UMTS Subscriber Identity Module (USIM). EAP-AKA is defined in .
EAP Authentication and Key Agreement prime (EAP-AKA')
EAP-AKA', a variant of EAP-AKA defined in , is used for non-3GPP access to a 3GPP core network, for example via EVDO, Wi-Fi, or WiMAX.
EAP Generic Token Card (EAP-GTC)
EAP Generic Token Card, or EAP-GTC, is an EAP method created by Cisco as an alternative to PEAPv0/EAP-MSCHAPv2 and defined in and . EAP-GTC carries a text challenge from the authentication server, and a reply generated by a security token. The PEAP-GTC authentication mechanism allows generic authentication to a number of databases such as Novell Directory Service (NDS) and Lightweight Directory Access Protocol (LDAP), as well as the use of a one-time password.
EAP Encrypted Key Exchange (EAP-EKE)
EAP with the encrypted key exchange, or EAP-EKE, is one of the few EAP methods that provide secure mutual authentication using short passwords and no need for public key certificates. It is a three-round exchange, based on the Diffie-Hellman variant of the well-known EKE protocol.
EAP-EKE is specified in .
Nimble out-of-band authentication for EAP (EAP-NOOB)
Nimble out-of-band authentication for EAP (EAP-NOOB) is a generic bootstrapping solution for devices which have no pre-configured authentication credentials and which are not yet registered on any server. It is especially useful for Internet-of-Things (IoT) gadgets and toys that come with no information about any owner, network or server. Authentication for this EAP method is based on a user-assisted out-of-band (OOB) channel between the server and peer. EAP-NOOB supports many types of OOB channels such as QR codes, NFC tags, audio etc. and unlike other EAP methods, the protocol security has been verified by formal modeling of the specification with ProVerif and MCRL2 tools.
EAP-NOOB performs an Ephemeral Elliptic Curve Diffie-Hellman (ECDHE) over the in-band EAP channel. The user then confirms this exchange by transferring the OOB message. Users can transfer the OOB message from the peer to the server, when for example, the device is a smart TV that can show a QR code. Alternatively, users can transfer the OOB message from the server to the peer, when for example, the device being bootstrapped is a camera that can only read a QR code.
Encapsulation
EAP is not a wire protocol; instead it only defines message formats. Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol's messages.
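As a concrete illustration of the message format that carrier protocols wrap, the sketch below builds and parses the RFC 3748 EAP packet header (Code, Identifier, Length, and, for Requests and Responses, a Type octet followed by type data). It is a toy helper written for this article, not a complete EAP implementation.

```python
import struct

# EAP Codes and one Type value from RFC 3748.
REQUEST, RESPONSE, SUCCESS, FAILURE = 1, 2, 3, 4
TYPE_IDENTITY = 1   # Type 1 = Identity

def build_eap(code: int, identifier: int, type_=None, type_data: bytes = b"") -> bytes:
    body = b"" if type_ is None else bytes([type_]) + type_data
    length = 4 + len(body)                  # Length field covers the whole packet
    return struct.pack("!BBH", code, identifier, length) + body

def parse_eap(packet: bytes):
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return code, identifier, length, packet[4:length]

# An Identity Request, as an authenticator would send to start a conversation.
req = build_eap(REQUEST, identifier=1, type_=TYPE_IDENTITY)
print(req.hex())        # 0101000501
print(parse_eap(req))   # (1, 1, 5, b'\x01')
```

This packet is what EAPOL, RADIUS, Diameter, PPP, and PANA each carry inside their own framing.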
IEEE 802.1X
The encapsulation of EAP over IEEE 802 is defined in IEEE 802.1X and known as "EAP over LANs" or EAPOL. EAPOL was originally designed for IEEE 802.3 Ethernet in 802.1X-2001, but was clarified to suit other IEEE 802 LAN technologies such as IEEE 802.11 wireless and Fiber Distributed Data Interface (ANSI X3T9.5/X3T12, adopted as ISO 9314) in 802.1X-2004. The EAPOL protocol was also modified for use with IEEE 802.1AE (MACsec) and IEEE 802.1AR (Initial Device Identity, IDevID) in 802.1X-2010.
When EAP is invoked by an 802.1X enabled Network Access Server (NAS) device such as an IEEE 802.11i-2004 Wireless Access Point (WAP), modern EAP methods can provide a secure authentication mechanism and negotiate a secure private key (Pair-wise Master Key, PMK) between the client and NAS which can then be used for a wireless encryption session utilizing TKIP or CCMP (based on AES) encryption.
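A companion sketch of the EAPOL framing described above: the EAP packet is prefixed with a protocol version, a packet type, and a body length, and the resulting frame is sent with EtherType 0x888E on Ethernet. The constants follow IEEE 802.1X; the example payload is the Identity Request from the previous sketch.

```python
import struct

EAPOL_ETHERTYPE = 0x888E      # EtherType used for EAPOL frames on Ethernet
EAPOL_VERSION = 2             # 802.1X-2004 framing
EAPOL_TYPE_EAP_PACKET = 0     # packet type 0 carries an EAP packet
EAPOL_TYPE_START = 1          # EAPOL-Start has an empty body

def eapol_encapsulate(eap_packet: bytes, version: int = EAPOL_VERSION) -> bytes:
    # EAPOL header: version (1 octet), packet type (1 octet), body length (2 octets).
    return struct.pack("!BBH", version, EAPOL_TYPE_EAP_PACKET, len(eap_packet)) + eap_packet

identity_request = bytes.fromhex("0101000501")    # EAP Identity Request from the sketch above
print(eapol_encapsulate(identity_request).hex())  # 020000050101000501
```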
PEAP
The Protected Extensible Authentication Protocol, also known as Protected EAP or simply PEAP, is a protocol that encapsulates EAP within a potentially encrypted and authenticated Transport Layer Security (TLS) tunnel. The purpose was to correct deficiencies in EAP; EAP assumed a protected communication channel, such as that provided by physical security, so facilities for protection of the EAP conversation were not provided.
PEAP was jointly developed by Cisco Systems, Microsoft, and RSA Security. PEAPv0 was the version included with Microsoft Windows XP and was nominally defined in draft-kamath-pppext-peapv0-00. PEAPv1 and PEAPv2 were defined in different versions of draft-josefsson-pppext-eap-tls-eap. PEAPv1 was defined in draft-josefsson-pppext-eap-tls-eap-00 through draft-josefsson-pppext-eap-tls-eap-05, and PEAPv2 was defined in versions beginning with draft-josefsson-pppext-eap-tls-eap-06.
The protocol only specifies chaining multiple EAP mechanisms and not any specific method. The EAP-MSCHAPv2 and EAP-GTC methods are the most commonly supported.
RADIUS and Diameter
Both the RADIUS and Diameter AAA protocols can encapsulate EAP messages. They are often used by Network Access Server (NAS) devices to forward EAP packets between IEEE 802.1X endpoints and AAA servers to facilitate IEEE 802.1X.
PANA
The Protocol for Carrying Authentication for Network Access (PANA) is an IP-based protocol that allows a device to authenticate itself with a network to be granted access. PANA will not define any new authentication protocol, key distribution, key agreement or key derivation protocols; for these purposes, EAP will be used, and PANA will carry the EAP payload. PANA allows dynamic service provider selection, supports various authentication methods, is suitable for roaming users, and is independent from the link layer mechanisms.
PPP
EAP was originally an authentication extension for the Point-to-Point Protocol (PPP). PPP has supported EAP since EAP was created as an alternative to the Challenge-Handshake Authentication Protocol (CHAP) and the Password Authentication Protocol (PAP), which were eventually incorporated into EAP. The EAP extension to PPP was first defined in , now obsoleted by .
See also
Authentication protocol
Handover keying
ITU-T X.1035
References
Further reading
"AAA and Network Security for Mobile Access. RADIUS, DIAMETER, EAP, PKI and IP mobility". M Nakhjiri. John Wiley and Sons, Ltd.
External links
: Extensible Authentication Protocol (EAP) (June 2004)
: Extensible Authentication Protocol (EAP) Key Management Framework (August 2008)
Configure RADIUS for secure 802.1x wireless LAN
How to self-sign a RADIUS server for secure PEAP or EAP-TTLS authentication
Extensible Authentication Protocol on Microsoft TechNet
EAPHost in Windows Vista and Windows Server 2008
WIRE1x
"IETF EAP Method Update (emu) Working Group"
Wireless networking
Authentication protocols | Extensible Authentication Protocol | [
"Technology",
"Engineering"
] | 4,982 | [
"Wireless networking",
"Computer networks engineering"
] |
1,583,584 | https://en.wikipedia.org/wiki/NuSTAR | NuSTAR (Nuclear Spectroscopic Telescope Array, also named Explorer 93 and SMEX-11) is a NASA space-based X-ray telescope that uses a conical approximation to a Wolter telescope to focus high energy X-rays from astrophysical sources, especially for nuclear spectroscopy, and operates in the range of 3 to 79 keV.
NuSTAR is the eleventh mission of NASA's Small Explorer (SMEX-11) satellite program and the first space-based direct-imaging X-ray telescope at energies beyond those of the Chandra X-ray Observatory and XMM-Newton. It was successfully launched on 13 June 2012, having previously been delayed from 21 March 2012 due to software issues with the launch vehicle.
The mission's primary scientific goals are to conduct a deep survey for black holes a billion times more massive than the Sun, to investigate how particles are accelerated to very high energy in active galaxies, and to understand how the elements are created in the explosions of massive stars by imaging supernova remnants.
Having completed its two-year primary mission, NuSTAR continues to operate on an extended mission.
History
NuSTAR's predecessor, the High Energy Focusing Telescope (HEFT), was a balloon-borne version that carried telescopes and detectors constructed using similar technologies. In February 2003, NASA issued an Explorer program Announcement of Opportunity (AoO). In response, NuSTAR was submitted to NASA in May 2003, as one of 36 mission proposals vying to be the tenth and eleventh Small Explorer missions. In November 2003, NASA selected NuSTAR and four other proposals for a five-month implementation feasibility study.
In January 2005, NASA selected NuSTAR for flight pending a one-year feasibility study. The program was cancelled in February 2006 as a result of cuts to science in NASA's 2007 budget. On 21 September 2007, it was announced that the program had been restarted, with an expected launch in August 2011, though this was later delayed to June 2012.
The principal investigator is Fiona A. Harrison of the California Institute of Technology (Caltech). Other major partners include the Jet Propulsion Laboratory (JPL), University of California, Berkeley, Technical University of Denmark (DTU), Columbia University, Goddard Space Flight Center (GSFC), Stanford University, University of California, Santa Cruz, Sonoma State University, Lawrence Livermore National Laboratory, and the Italian Space Agency (ASI). NuSTAR's major industrial partners include Orbital Sciences Corporation and ATK Space Components.
Launch
NASA contracted with Orbital Sciences Corporation to launch NuSTAR on a Pegasus XL launch vehicle on 21 March 2012. It had earlier been planned for 15 August 2011, 3 February 2012, 16 March 2012, and 14 March 2012. After a launch meeting on 15 March 2012, the launch was pushed further back to allow time to review flight software used by the launch vehicle's flight computer. The launch was conducted successfully at 16:00:37 UTC on 13 June 2012, south of Kwajalein Atoll. The Pegasus launch vehicle was dropped from the L-1011 'Stargazer' aircraft.
On 22 June 2012, it was confirmed that the mast was fully deployed.
Optics
Unlike visible light telescopes – which employ mirrors or lenses working with normal incidence – NuSTAR has to employ grazing incidence optics to be able to focus X-rays. To achieve the long focal length this requires, its two conical approximation Wolter telescope optics are held at the end of a long deployable mast. A laser metrology system is used to determine the exact relative positions of the optics and the focal plane at all times, so that each detected photon can be mapped back to the correct point on the sky even if the optics and the focal plane move relative to one another during an exposure.
Each focusing optic consists of 133 concentric shells. One particular innovation enabling NuSTAR is that these shells are coated with depth-graded multilayers (alternating atomically thin layers of a high-density and low-density material); with NuSTAR's choice of Pt/SiC and W/Si multilayers, this enables reflectivity up to 79 keV (the platinum K-edge energy).
The optics were produced at Goddard Space Flight Center by heating thin sheets of flexible glass in an oven so that they slumped over precision-polished cylindrical quartz mandrels of the appropriate radius. The coatings were applied by a group at the Technical University of Denmark.
The shells were then assembled, at the Nevis Laboratories of Columbia University, using graphite spacers machined to constrain the glass to the conical shape, and held together by epoxy. There are 4680 mirror segments in total (the 65 inner shells each comprise six segments and the 65 outer shells twelve; there are upper and lower segments to each shell, and there are two telescopes); there are five spacers per segment. Since the epoxy takes 24 hours to cure, one shell is assembled per day – it took four months to build up one optic.
The observatory actually comprises two co-aligned telescopes, whose detectors are housed in two separate Focal Plane Modules (FPMs) labelled FPMA and FPMB. These two FPMs are built to be similar, though they are not identical. Depending on the source and on the observation, one of the modules will usually report higher counts. This is corrected for in the science analysis, usually by applying a constant multiplier during spectral fitting and light curve analysis.
The expected point spread function for the flight mirrors is 43 arcseconds, giving a spot size of about two millimeters at the focal plane; this is unprecedentedly good resolution for focusing hard X-ray optics, though it is about one hundred times worse than the best resolution achieved at longer wavelengths by the Chandra X-ray Observatory.
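The quoted spot size follows from small-angle geometry: the linear spot diameter is roughly the angular resolution (in radians) multiplied by the focal length. The focal length used below is an assumed round value chosen only to illustrate the relation, since the exact figure is not given in this text.

```python
import math

psf_arcsec = 43.0          # angular resolution quoted above
focal_length_m = 10.0      # assumed round value, for illustration only

theta_rad = psf_arcsec * math.pi / (180 * 3600)   # arcseconds -> radians
spot_mm = theta_rad * focal_length_m * 1000
print(f"spot size ~ {spot_mm:.1f} mm")            # ~2.1 mm, matching the ~2 mm figure
```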
Detectors
Each focusing optic has its own focal plane module, consisting of a solid state cadmium zinc telluride (CdZnTe) pixel detector surrounded by a cesium iodide (CsI) anti-coincidence shield. One detector unit — or focal plane — comprises four (two-by-two) detectors, manufactured by eV Products. Each detector is a rectangular crystal that has been gridded into pixels (each pixel subtending 12.3 arcseconds), and together the four detectors provide a total field of view (FoV) of 12 arcminutes for each focal plane module.
The cadmium zinc telluride (CdZnTe) detectors are state of the art room temperature semiconductors that are very efficient at turning high energy photons into electrons. The electrons are digitally recorded using custom application-specific integrated circuits (ASICs) designed by the NuSTAR California Institute of Technology (CalTech) Focal Plane Team. Each pixel has an independent discriminator and individual X-ray interactions trigger the readout process. On-board processors, one for each telescope, identify the row and column with the largest pulse height and read out pulse height information from this pixel as well as its eight neighbors. The event time is recorded to an accuracy of 2 μs relative to the on-board clock. The event location, energy, and depth of interaction in the detector are computed from the nine-pixel signals.
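A minimal sketch of the readout logic described above: locate the pixel with the largest pulse height and extract it together with its eight neighbours (a 3x3 patch), from which the event energy and position can later be reconstructed. The array size and pulse heights are invented for illustration; this is not flight software.

```python
import numpy as np

def read_event(pulse_heights: np.ndarray):
    """Return the (row, col) of the brightest pixel and its 3x3 neighbourhood."""
    row, col = np.unravel_index(np.argmax(pulse_heights), pulse_heights.shape)
    r0, r1 = max(row - 1, 0), min(row + 2, pulse_heights.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, pulse_heights.shape[1])
    return (row, col), pulse_heights[r0:r1, c0:c1]

# A toy detector frame with one event whose charge is shared with a neighbour.
frame = np.zeros((32, 32))
frame[10, 12] = 50.0
frame[10, 13] = 8.0
(row, col), patch = read_event(frame)
print(row, col, patch.sum())   # 10 12 58.0
```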
The focal planes are shielded by cesium iodide (CsI) crystals that surround the detector housings. The crystal shields, grown by Saint-Gobain, register high energy photons and cosmic rays which cross the focal plane from directions other than along the NuSTAR optical axis. Such events are the primary background for NuSTAR and must be properly identified and subtracted in order to identify high energy photons from cosmic sources. The NuSTAR active shielding ensures that any CZT detector event coincident with an active shield event is ignored.
Major scientific results
NuSTAR has demonstrated its versatility, opening the way to many new discoveries in a wide variety of areas of astrophysical research since its launch.
Spin measurement of a supermassive black hole
In February 2013, NASA revealed that NuSTAR, along with the XMM-Newton space observatory, had measured the spin rate of the supermassive black hole at the center of the galaxy NGC 1365. By measuring the frequency change of X-ray light emitted from the black hole corona, NuSTAR was able to observe material from the corona being drawn closer to the event horizon. This caused inner portions of the black hole's accretion disk to be illuminated with X-rays, allowing this elusive region to be studied by astronomers for spin rates.
Tracing radioactivity in a supernova remnant
One of NuSTAR's main goals is to characterize stellar explosions by mapping the radioactive material in supernova remnants. The NuSTAR map of Cassiopeia A shows the titanium-44 isotope concentrated in clumps at the remnant's center and points to a possible solution to the mystery of how the star exploded. When researchers simulate supernova blasts with computers, as a massive star dies and collapses, the main shock wave often stalls and the star fails to shatter. The latest findings strongly suggest the exploding star literally sloshed around, re-energizing the stalled shock wave and allowing the star to finally blast off its outer layers.
Nearby supermassive black holes
In January 2017, researchers from Durham University and the University of Southampton, leading a coalition of agencies using NuSTAR data, announced the discovery of supermassive black holes at the center of nearby galaxies NGC 1448 and IC 3639.
Measurement of temperature variations of AGN wind
On 2 March 2017, researchers using NuSTAR published an article in Nature detailing observations of wind temperature variations around the AGN IRAS 13224−3809. By detecting periodic absences of absorption lines in the X-ray spectrum of the accretion disk winds, NuSTAR and XMM-Newton observed heating and cooling cycles of the relativistic winds leaving the accretion disk.
Detection of light reflecting behind a black hole
NuSTAR and XMM-Newton detected X-rays emitted behind the supermassive black hole within the Seyfert 1 galaxy I Zwicky 1. Upon studying the flashes of light emitted by the corona of the black hole, researchers noticed that some detected light arrived at the detector later than the rest, with a corresponding change in frequency. The Stanford University team of scientists that led the study concluded that this change was directly attributable to radiation from the flash reflecting off of the accretion disk on the opposing side of the black hole. The path of this reflected light was bent by the high spacetime curvature and directed to the detector after the initial flash.
Ultra-luminous neutron star violating the Eddington limit
On 6 April 2023, the NuSTAR team confirmed that the neutron star M82 X-2 was emitting more radiation than was thought physically possible given the Eddington limit, officially labeling it an ultraluminous X-ray source (ULX).
See also
Explorer program
Gravity and Extreme Magnetism, hard X-ray telescope measuring polarization (cancelled 2012)
James Webb Space Telescope, infrared telescope launched on 25 December 2021
XRISM, joint Japanese and American X-ray telescope
List of X-ray space telescopes
References
External links
NuSTAR website at nasa.gov
NuSTAR website at caltech.edu
Building, Launching, and Using the NuSTAR X-ray Observatory, talk by Melania Nynka of the MIT Kavli Institute for Astrophysics and Space Research
Further reading
X-ray telescopes
Explorers Program
Space telescopes
Spacecraft launched in 2012
Spacecraft launched by Pegasus rockets | NuSTAR | [
"Astronomy"
] | 2,337 | [
"Space telescopes"
] |
1,583,648 | https://en.wikipedia.org/wiki/Smart%20Battery%20System | Smart Battery System (SBS) is a specification for managing a smart battery, usually for a portable computer. It allows an operating system to perform power management operations via a smart battery charger, using accurate state-of-charge readings to estimate the remaining run time. Through this communication, the system also controls the battery charge rate. Communication is carried over an SMBus two-wire communication bus. The specification originated with the Duracell and Intel companies in 1994, but was later adopted by several battery and semiconductor makers.
The Smart Battery System defines the SMBus connection, the data that can be sent over the connection (Smart Battery Data or SBD), the Smart Battery Charger, and a computer BIOS interface for control. In principle, any battery operated product can use SBS.
A special integrated circuit in the battery pack (called a fuel gauge or battery management system) monitors the battery and reports information to the SMBus. This information might include battery type, model number, manufacturer, characteristics, charge/discharge rate, predicted remaining capacity, an almost-discharged alarm so that the PC or other device can shut down gracefully, and temperature and voltage to provide safe fast-charging.
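To make the polling concrete, here is a minimal sketch of a host reading a few Smart Battery Data values over SMBus on Linux. It assumes the third-party smbus2 Python package and an accessible SMBus device; the 0x0B battery address and the command codes and scalings shown follow the SBS v1.1 specification as commonly documented, so treat them as illustrative rather than authoritative for any particular pack.

```python
from smbus2 import SMBus   # assumed third-party package

BATTERY_ADDR = 0x0B        # conventional smart-battery SMBus address

# Word-sized Smart Battery Data commands (per SBS v1.1 as commonly documented).
TEMPERATURE = 0x08         # units of 0.1 K
VOLTAGE     = 0x09         # mV
CURRENT     = 0x0A         # mA, signed; negative while discharging
REL_CHARGE  = 0x0D         # RelativeStateOfCharge, percent

def to_signed(word: int) -> int:
    return word - 0x10000 if word & 0x8000 else word

with SMBus(1) as bus:      # bus number depends on the platform
    temp_k = bus.read_word_data(BATTERY_ADDR, TEMPERATURE) / 10.0
    volt_mv = bus.read_word_data(BATTERY_ADDR, VOLTAGE)
    cur_ma = to_signed(bus.read_word_data(BATTERY_ADDR, CURRENT))
    soc_pct = bus.read_word_data(BATTERY_ADDR, REL_CHARGE)
    print(f"{temp_k - 273.15:.1f} C, {volt_mv} mV, {cur_ma} mA, {soc_pct}%")
```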
See also
List of battery types
Power Management Bus (PMBus)
References
External links
SBS-IF Smart Battery System Implementers Forum
Battery Firmware Hacking Inside the innards of a Smart Battery
Rechargeable batteries
Battery charging
Computer hardware standards | Smart Battery System | [
"Technology"
] | 284 | [
"Computer standards",
"Computer hardware standards"
] |
1,583,685 | https://en.wikipedia.org/wiki/Samuel%20S.%20Wagstaff%20Jr. | Samuel Standfield Wagstaff Jr. (born 21 February 1945) is an American mathematician and computer scientist whose research interests are in the areas of cryptography, parallel computation, and analysis of algorithms, especially number theoretic algorithms. He is currently a professor of computer science and mathematics at Purdue University and has coordinated the Cunningham project, a project to factor numbers of the form b^n ± 1, since 1983. He has authored or coauthored over 50 research papers and four books. He has an Erdős number of 1.
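For a sense of the numbers the Cunningham project catalogues, the snippet below factors two small numbers of the form b^n ± 1. It assumes the sympy package is available; the project's own tables, of course, rest on much heavier machinery such as the number field sieve.

```python
from sympy import factorint   # assumed available

examples = {"2^67 - 1": 2**67 - 1, "10^11 + 1": 10**11 + 1}
for label, value in examples.items():
    factors = factorint(value)
    pretty = " * ".join(f"{p}^{e}" if e > 1 else str(p)
                        for p, e in sorted(factors.items()))
    print(f"{label} = {pretty}")

# 2^67 - 1  = 193707721 * 761838257287   (F. N. Cole's celebrated 1903 factorization)
# 10^11 + 1 = 11^2 * 23 * 4093 * 8779
```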
Wagstaff received his Bachelor of Science in 1966 from the Massachusetts Institute of Technology. He earned his PhD from Cornell University in 1970 with a doctoral dissertation titled On Infinite Matroids.
Wagstaff was one of the founding faculty of Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue, and its precursor, the Computer Operations, Audit, and Security Technology (COAST) Laboratory.
Selected publications
with John Brillhart, D. H. Lehmer, John L. Selfridge, Bryant Tuckerman: Factorization of b^n ± 1, b = 2, 3, 5, 6, 7, 10, 11, 12 up to high powers, American Mathematical Society, 1983, 3rd edition 2002 as electronic book, Online text
Wagstaff The Cunningham Project, Fields Institute, pdf file
References
External links
Cunningham project website
CERIAS WWW site
Archival COAST WWW site
Number theorists
Cornell University alumni
20th-century American mathematicians
21st-century American mathematicians
Living people
1945 births
Massachusetts Institute of Technology alumni | Samuel S. Wagstaff Jr. | [
"Mathematics"
] | 309 | [
"Number theorists",
"Number theory"
] |
1,583,875 | https://en.wikipedia.org/wiki/Birut%C4%97%20Galdikas | Birutė Marija Filomena Galdikas or Birutė Mary Galdikas, OC (born 10 May 1946), is a Lithuanian-Canadian anthropologist, primatologist, conservationist, ethologist, and author. She is a professor at Simon Fraser University. In the field of primatology, Galdikas is recognized as a leading authority on orangutans. Prior to her field study of orangutans, scientists knew little about the species.
Early life
Galdikas was born on 10 May 1946 in Wiesbaden, West Germany. Her parents, Antanas and Filomena Galdikas, were Lithuanian refugees fleeing the Soviet occupation of the Baltic states following World War II. When Galdikas was two years old, the family moved to Canada in 1948, when her father signed a contract to work in copper mining in Quebec. The following year, they relocated to Toronto, where Galdikas grew up. Her father worked as a miner and a contractor. As a young child, Birutė's head was filled with visions of far-off forests and exotic creatures. The first book she borrowed from the Toronto Public Library was a tale about a monkey named Curious George. When she grew older, she was inspired by the National Geographic adventures of Jane Goodall and Dian Fossey. She has two younger brothers and a younger sister.
Education
In 1962, the Galdikas family moved to Vancouver, where Galdikas met her future husband, Rod Brindamour. Two years later, after Galdikas had begun studies at the University of British Columbia (UBC), the family moved to the United States, where Galdikas enrolled in the University of California, Los Angeles (UCLA), and studied psychology and zoology. In 1966, she earned her bachelor's degrees in psychology and zoology, jointly awarded by UCLA and UBC. She married Brindamour and earned her master's degree in anthropology from UCLA both in 1969.
During her graduate studies at UCLA, Galdikas met paleoanthropologist Louis Leakey, and proposed a plan aimed at studying orangutans in their natural habitats. Galdikas convinced Leakey to help orchestrate her endeavour, despite his initial reservations. Leakey found funding from the National Geographic Society which agreed to establish a research facility in Borneo. Her research became the basis of her doctoral studies, and she earned her doctorate in anthropology from UCLA in 1978.
Works
Research in Borneo
In 1971, at age 25, Galdikas and her then-husband, photographer Rod Brindamour, arrived in Tanjung Puting Reserve, in Indonesian Borneo. Galdikas was the third of a trio of women appointed by Leakey to study great apes in their natural habitat. Dubbed by Leakey "The Trimates" the trio also included Jane Goodall, who studied chimpanzees, and Dian Fossey, who studied gorillas. Leakey and the National Geographic Society helped Galdikas set up her research camp near the edge of the Java Sea, dubbed "Camp Leakey", to conduct field study on orangutans in Borneo. Before Galdikas's studies, the orangutan was the least understood of the great apes. Galdikas went on to greatly expand scientific knowledge of orangutan behaviour, habitat and diet.
Orangutan Foundation International
In 1986, Galdikas and her colleagues founded Orangutan Foundation International (OFI), based in Los Angeles, USA, to help support orangutans around the world. Her second husband, Pak Bohap, who was a Dayak rice farmer and tribal president, assisted in setting up sister organisations in Australia, Indonesia, and the United Kingdom and is co-director of the orangutan program in Borneo.
Advocacy and rehabilitation work
Galdikas has remained in Borneo for over 40 years while becoming an outspoken advocate for orangutans and the preservation of their rainforest habitat, which is rapidly being destroyed by loggers, palm oil plantations, gold miners, and unnatural conflagrations. While campaigning actively on behalf of primate conservation and preservation of rainforest, Galdikas continues her field research, among the lengthiest continuous studies of a mammal ever conducted.
Galdikas's conservation efforts extend beyond advocacy, largely focusing on rehabilitation of the orphaned orangutans turned over to her for care. Many of these orphans were once illegal pets, before becoming too smart and difficult for their owners to handle.
She has written several books, including a memoir entitled Reflections of Eden. In it, Galdikas describes her experiences at Camp Leakey and efforts to rehabilitate ex-captive orangutans and release them into the Borneo rainforest.
Galdikas is a professor at Simon Fraser University in Burnaby, British Columbia, and Professor Extraordinaire at Universitas Nasional in Jakarta, Indonesia. She is also president of the Orangutan Foundation International in Los Angeles, California.
In 2021, Dr. Birutė Galdikas became a patron of a nature conservation non-profit organisation aiming to protect the remaining old-growth forests in Lithuania, with all the biodiversity there.
Recognition
Galdikas has been featured in Life, The New York Times, The Washington Post, the Los Angeles Times, numerous television documentaries, and twice on the cover of National Geographic. Galdikas's work has been acknowledged in television shows hosted by Steve Irwin as well as Jeff Corwin on Animal Planet.
In 1995, Galdikas was made an Officer of the Order of Canada.
Along with fellow Trimate Jane Goodall and preeminent field biologist George Schaller, Galdikas received the Tyler Prize for Environmental Achievement in 1997 for her groundbreaking field research and lifetime contributions to the advancement of environmental science. Other honours include Indonesia's Hero for the Earth Award (Kalpataru), the Institute of Human Origins Science Award, the United Nations Global 500 Award (1993), the Elizabeth II Commemorative Medal, the Eddie Bauer Hero of the Earth (1991), the PETA Humanitarian Award (1990), and the Sierra Club Chico Mendes Award (1992). She was awarded a key to the city of Las Vegas, Nevada, in 2009 when she gave a presentation for the anthropology department at U.N.L.V.
Media
Books
Reflections of Eden: My Years with the Orangutans of Borneo (1995)
Orangutan Odyssey (1999)
Great Ape Odyssey. (2005). Abrams: New York.
Film and television
Galdikas stars in the feature documentary Born to Be Wild 3D, released in April 2011. She has also appeared in the documentaries Nature (TV series documentary, 2005), Life and Times (TV series documentary, 1996), 30 Years of National Geographic Specials (TV documentary, 1995), Orangutans: Grasping the Last Branch (documentary, 1989), Beauty and the beasts (Channel 4 UK documentary, 1996), The Last Trimate (TV documentary, 2008), and She Walks With Apes (CBC TV documentary, 2019). Terry Pratchett's Jungle Quest (documentary, C4, UK 1995)
Controversy
Galdikas was criticised in the late 1990s regarding her methods of rehabilitation. Primatologists debated the issue on the Internet mailing list Primate-Talk; the issue was further fuelled by the publication of articles in Outside magazine (May 1998) and Newsweek (June 1998). As reported in both articles and summarized in the 1999 book A Dark Place in the Jungle by Canadian novelist Linda Spalding, the Indonesian Ministry of Forestry — with whom Galdikas had clashed over logging policies — claimed that Galdikas held "a very large number of illegal orangutans ... in very poor conditions" at her Indonesian home, prompting the government to consider formal charges. Galdikas denied all such claims in a response to Newsweek in June 1999, remarking that allegations of mistreatment were "simply, wrong" and that the "outlandish" claims formed the basis of "a totally one-sided campaign against me."
See also
Jeffrey H. Schwartz
InfiniteEARTH
List of animal rights advocates
List of apes
Timeline of women in science
References
External links
Galdikas's official blog
Orangutan Foundation Canada
International Birute Galdikas charity fund
Orangutan.org - Orangutan Foundation International
"Does an Orangutan find freedom in the gift of words? Do we?" by Susanne Antonetta (March 2005)
Profile at science.ca (20 April 2004)
Tyler Prize for Environmental Achievement, awarded to Galdikas in 1997
1946 births
Living people
20th-century Canadian women scientists
Canadian environmentalists
Canadian anthropologists
Canadian women anthropologists
Canadian people of Lithuanian descent
Lithuanian anthropologists
Lithuanian women anthropologists
Ethologists
Women primatologists
Primatologists
Orangutan conservation
Officers of the Order of Canada
Academic staff of Simon Fraser University
University of California, Los Angeles alumni
Scientists from Toronto
People from Wiesbaden | Birutė Galdikas | [
"Biology"
] | 1,844 | [
"Ethology",
"Behavior",
"Ethologists"
] |
1,583,887 | https://en.wikipedia.org/wiki/Smart%20Common%20Input%20Method | The Smart Common Input Method (SCIM) is a platform for inputting more than thirty languages on computers, including Chinese-Japanese-Korean style character languages (CJK), and many European languages. It is used for POSIX-style operating systems including Linux and BSD. Its purposes are to provide a simple and powerful common interface for users from any country, and to provide a clear architecture for programming, so as to reduce time required to develop individual input methods.
Goals
The main goals of the SCIM project include:
To act as a unified frontend for current available input method libraries. Bindings to uim and m17n library are available (as of August 2007).
To act as a language engine of IIIMF (an input method framework).
To support as many input method protocols/interfaces as existing and in common use.
To support multiple operating systems. (Currently, only POSIX-style operating systems are supported.)
Architecture
SCIM was originally written in the C++ language but has moved to pure C since 1.4.14. It abstracts the input method interface to several classes and attempts to simplify the classes and make them more independent from each other. With the simpler and more independent interfaces, developers can write their own input methods in fewer lines of code.
SCIM is a modularized IM platform: components can be implemented as dynamically loadable modules and loaded at runtime on demand. For example, input methods written for SCIM can be built as IMEngine modules, and users can combine such IMEngine modules with different interface (FrontEnd) modules in different environments without rewriting or recompiling the IMEngine modules, reducing compile and development time for the project.
SCIM is a high-level library, similar to XIM or IIIMF; however, SCIM claims to be simpler than either of those IM platforms. SCIM also claims that it can be used alongside XIM or IIIMF. SCIM can also be used to extend the input method interface of existing application toolkits, such as GTK+, Qt and Clutter via IMmodules.
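To illustrate the modular engine/front-end idea described above, the following sketch shows a toy plugin registry in Python. It is purely conceptual: the class and method names are invented for this example and do not correspond to SCIM's actual C++ API; it only demonstrates how engines implemented behind a common interface can be swapped without changing front-end code.

```python
# Conceptual sketch (not SCIM's real API): input-method engines as
# interchangeable modules behind a common interface.

class IMEngine:
    """Common interface every engine module implements."""
    name = "base"

    def process_key(self, key: str) -> str:
        raise NotImplementedError


class ReverseEngine(IMEngine):
    """Toy engine: buffers keys and commits them reversed."""
    name = "reverse"

    def __init__(self):
        self.buffer = []

    def process_key(self, key: str) -> str:
        if key == "commit":
            text, self.buffer = "".join(reversed(self.buffer)), []
            return text
        self.buffer.append(key)
        return ""


# A front end only knows the common interface, so engines can be
# registered or swapped at runtime without changing front-end code.
ENGINES = {cls.name: cls for cls in (ReverseEngine,)}

def run_frontend(engine_name: str, keys: list) -> str:
    engine = ENGINES[engine_name]()
    committed = ""
    for key in keys:
        committed += engine.process_key(key)
    return committed


if __name__ == "__main__":
    print(run_frontend("reverse", ["a", "b", "c", "commit"]))  # prints "cba"
```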
Related projects
SKIM is a separate project aimed at integrating SCIM more tightly into the K Desktop Environment, by providing a GUI panel (named scim-panel-kde as an alternative to scim-panel-gtk), a KConfig config module and setup dialogs for itself and the SCIM module libscim. It also has its own plugin system which supports on-demand loadable actions.
t-latn-pre and t-latn-post are two input methods that provide an easy way for composing accented characters, either by preceding regular characters with diacritic marks (in the case of t-latn-pre), or by adding the marks subsequently (in the case of t-latn-post). Their main advantage is the large number of composed characters from different languages that can be entered this way, rendering it unnecessary to install, for example, separate keyboard layouts. These input methods are available for SCIM through the M17n library.
See also
Input method
IBus
List of input methods for UNIX platforms
uim
References
External links
m17n Multilingualization
Freedesktop.org
Input methods
Free software programmed in C++ | Smart Common Input Method | [
"Technology"
] | 704 | [
"Input methods",
"Natural language and computing"
] |
1,583,938 | https://en.wikipedia.org/wiki/International%20Plant%20Protection%20Convention | The International Plant Protection Convention (IPPC) is a 1951 multilateral treaty overseen by the United Nations Food and Agriculture Organization that aims to secure coordinated, effective action to prevent and to control the introduction and spread of pests of plants and plant products. The Convention extends beyond the protection of cultivated plants to the protection of natural flora and plant products. It also takes into consideration both direct and indirect damage by pests, so it includes weeds. IPPC promulgates International Standards for Phytosanitary Measures (ISPMs).
The Convention created a governing body consisting of each party, known as the Commission on Phytosanitary Measures, which oversees the implementation of the convention. As of August 2017, the convention has 183 parties, being 180 United Nations member states and the Cook Islands, Niue, and the European Union. The convention is recognized by the World Trade Organization's (WTO) Agreement on the Application of Sanitary and Phytosanitary Measures (the SPS Agreement) as the only international standard setting body for plant health.
Goals
While the IPPC's primary focus is on plants and plant products moving in international trade, the convention also covers research materials, biological control organisms, germplasm banks, containment facilities, food aid, emergency aid and anything else that can act as a vector for the spread of plant pests – for example, containers, packaging materials, soil, vehicles, vessels and machinery.
The IPPC was created by member countries of the Food and Agriculture Organization (UN FAO). The IPPC places emphasis on three core areas: international standard setting, information exchange and capacity development for the implementation of the IPPC and associated international phytosanitary standards. The Secretariat of the IPPC is housed at FAO headquarters in Rome, Italy, and is responsible for the coordination of core activities under the IPPC work program.
In recent years the Commission on Phytosanitary Measures of the IPPC has developed a strategic framework with the objectives of:
protecting sustainable agriculture and enhancing global food security through the prevention of pest spread;
protecting the environment, forests and biodiversity from plant pests;
facilitating economic and trade development through the promotion of harmonized, scientifically based phytosanitary measures; and
developing phytosanitary capacity for members to accomplish the preceding three objectives.
By focusing the convention's efforts on these objectives, the Commission on Phytosanitary Measures of the IPPC intends to:
protect farmers from economically devastating pest and disease outbreaks.
protect the environment from the loss of species diversity.
protect ecosystems from the loss of viability and function as a result of pest invasions.
protect industries and consumers from the costs of pest control or eradication.
facilitate trade through International Standards that regulate the safe movements of plants and plant products.
protect livelihoods and food security by preventing the entry and spread of new pests of plants into a country.
Regional Plant Protection Organizations
Under the IPPC are Regional Plant Protection Organizations (RPPO). These are intergovernmental organizations responsible for cooperation in plant protection. There are the following organizations recognized by and working under the IPPC:
Asia and Pacific Plant Protection Commission (APPPC)
Caribbean Agricultural Health and Food Safety Agency (CAHFSA)
Andean Community (Comunidad Andina, CAN)
Plant Health Committee of the Southern Cone (COSAVE)
European and Mediterranean Plant Protection Organization (EPPO)
Inter-African Phytosanitary Council (IAPSC)
Near East Plant Protection Organization (NEPPO)
North American Plant Protection Organization (NAPPO)
International Regional Organization for Agricultural Health (OIRSA)
Pacific Plant Protection Organization (PPPO)
Under the IPPC, the role of an RPPO is to:
function as the coordinating bodies in the areas covered, shall participate in various activities to achieve the objectives of this Convention and, where appropriate, shall gather and disseminate information.
cooperate with the Secretary in achieving the objectives of the Convention and, where appropriate, cooperate with the Secretary and the Commission in developing international standards.
hold regular Technical Consultations of representatives of regional plant protection organizations to:
promote the development and use of relevant international standards for phytosanitary measures; and
encourage inter-regional cooperation in promoting harmonized phytosanitary measures for controlling pests and in preventing their spread and/or introduction.
International Plant Health Conference
The first annual International Plant Health Conference was organized by the FAO and was to be hosted by the Finnish Government in Helsinki from 28 June to 1 July 2021. However, on 9 February 2021 it was cancelled due to the ongoing COVID-19 pandemic.
Commission on Phytosanitary Measures
The fifteenth session of the Commission on Phytosanitary Measures (CPM) was held 16 March, 18 March and 1 April 2021 virtually over Zoom.
ePhyto
The IPPC created and administers the ePhyto system, the international electronic phytosanitary certificate standard. ePhyto has been widely adopted; three million ePhyto certificates have been exchanged between exporting and importing partner states.
Activities
IPPC convenes consultative committees and forms international standards. This includes standards on food irradiation.
Haack et al. (2014) found that the IPPC has been successful in reducing wood-boring beetle infestation of wood packaging material in shipments entering the United States.
See also
Phytosanitary certification
Phytosanitary Certificate Issuance and Tracking System (PCIT)
International Year of Plant Health (IYPH)
References
External links
International Plant Protection Convention
Food and Agriculture Organization of the United Nations
Ratifications
1951 in the environment
Crop protection organizations
Treaties concluded in 1951
Treaties entered into force in 1952
International Plant Protection Convention
Treaties of Afghanistan
Treaties of Albania
Treaties of Algeria
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of the Second Brazilian Republic
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Burundi
Treaties of the French protectorate of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Cook Islands
Treaties of the Comoros
Treaties of the Democratic Republic of the Congo
Treaties of the Republic of the Congo
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of North Korea
Treaties of Denmark
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of the Republic of Egypt (1953–1958)
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of the Derg
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Haiti
Treaties of Honduras
Treaties of Hungary
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of the Kingdom of Afghanistan
Treaties of the Kingdom of Iraq
Treaties of the Kingdom of Laos
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Republic
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of the Federated States of Micronesia
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Myanmar
Treaties of Namibia
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niue
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of the Dominion of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of South Korea
Treaties of Moldova
Treaties of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Samoa
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of South Africa
Treaties of South Sudan
Treaties of Spain
Treaties of the Dominion of Ceylon
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of the Democratic Republic of the Sudan
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Thailand
Treaties of North Macedonia
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Tuvalu
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of Tanzania
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties entered into by the European Union
International Plant Protection Convention
1951 in Italy
Treaties establishing intergovernmental organizations
Agricultural treaties
Treaties extended to Norfolk Island
Treaties extended to the Isle of Man
Treaties extended to Jersey
Treaties extended to Guernsey
Treaties extended to American Samoa
Treaties extended to Baker Island
Treaties extended to Guam
Treaties extended to Howland Island
Treaties extended to Jarvis Island
Treaties extended to Johnston Atoll
Treaties extended to Midway Atoll
Treaties extended to Navassa Island
Treaties extended to the Trust Territory of the Pacific Islands
Treaties extended to Palmyra Atoll
Treaties extended to Puerto Rico
Treaties extended to the United States Virgin Islands
Treaties extended to Wake Island
Treaties extended to the Nauru Trust Territory
Treaties extended to Dutch New Guinea
Treaties extended to Surinam (Dutch colony)
Treaties extended to Macau
Treaties extended to the Panama Canal Zone | International Plant Protection Convention | [
"Biology"
] | 2,023 | [
"Pests (organism)",
"Pest control"
] |
1,583,940 | https://en.wikipedia.org/wiki/PNY%20Technologies | PNY Technologies, Inc., doing business as PNY, is an American manufacturer of flash memory cards, USB flash drives, solid state drives, memory upgrade modules, portable battery chargers, computer locks, cables, chargers, adapters, and consumer and professional graphics cards. The company is headquartered in Parsippany-Troy Hills, New Jersey.
PNY stands for "Paris, New York", as the company originally traded memory modules between Paris and New York.
History
PNY Electronics, Inc. originated out of Brooklyn, New York in 1985 as a company that bought and sold memory chips.
In 1996, the company was headquartered in Moonachie, New Jersey, and had a manufacturing production plant there, an additional plant in Santa Clara, California, and served Europe from a third facility in Bordeaux, France.
To emphasize its expansion into manufacturing new forms of memory and complementary products, the company changed its name in 1997 to PNY Technologies, Inc. The company now has main offices in Parsippany, New Jersey; Santa Clara, California; Miami, Florida; Bordeaux, France, and Taiwan.
In 2009, the New Jersey Nets sold the naming rights of their practice jerseys to PNY. In 2010, New Jersey Governor Chris Christie spoke to PNY CEO Gadi Cohen about staying in New Jersey after Cohen was reportedly considering a move to Pennsylvania. In 2011, PNY moved their global headquarters and main manufacturing facility to a 40+ acre location on Jefferson Road in Parsippany, NJ. Lt. Governor Kim Guadagno toured the company and called it "a real good business news story for New Jersey."
Products
PNY is a memory and graphics technology company and manufacturer of computer peripherals, including the following products:
Flash memory cards
USB flash drives
Solid state drives
Memory upgrades
NVIDIA graphics cards
HDMI cables
DRAM modules
Portable battery chargers
HP Pendrive & MicroSD Cards
Legacy products:
CD-R discs
PNY has introduced water-cooled video cards and themed USB flash drives that include full films.
References
External links
Companies based in Morris County, New Jersey
American companies established in 1985
Computer companies established in 1985
1985 establishments in New York City
Computer companies of the United States
Computer memory companies
Computer hardware companies
Graphics hardware companies
Manufacturing companies based in New Jersey
Parsippany-Troy Hills, New Jersey
Privately held companies based in New Jersey | PNY Technologies | [
"Technology"
] | 476 | [
"Computer hardware companies",
"Computers"
] |
1,584,112 | https://en.wikipedia.org/wiki/Sawdust | Sawdust (or wood dust) is a by-product or waste product of woodworking operations such as sawing, sanding, milling and routing. It is composed of very small chips of wood. These operations can be performed by woodworking machinery, portable power tools or by use of hand tools. In some manufacturing industries it can be a significant fire hazard and source of occupational dust exposure.
Sawdust, in particulate form, is the main component of particleboard. Research on its health hazards is conducted within the field of occupational safety and health, and the study of ventilation falls under indoor air quality engineering. Wood dust is classified as an IARC Group 1 carcinogen.
Formation
Two waste products, dust and chips, form at the working surface during woodworking operations such as sawing, milling and sanding. These operations both shatter lignified wood cells and break out whole cells and groups of cells. Shattering of wood cells creates dust, while breaking out of whole groups of wood cells creates chips. The more cell-shattering that occurs, the finer the dust particles that are produced. For example, sawing and milling are mixed cell shattering and chip forming processes, whereas sanding is almost exclusively cell shattering.
Uses
A major use of sawdust is for particleboard; coarse sawdust may be used for wood pulp. Sawdust has a variety of other practical uses, including serving as a mulch, as an alternative to clay cat litter, or as a fuel. Until the advent of refrigeration, it was often used in icehouses to keep ice frozen during the summer. It has been used in artistic displays, and as scatter in miniature railroad and other models. It is also sometimes used to soak up liquid spills, allowing the spill to be easily collected or swept aside. As such, it was formerly common on barroom floors. It is used to make Cutler's resin. Mixed with water and frozen, it forms pykrete, a slow-melting, much stronger form of ice.
Sawdust is used in the manufacture of charcoal briquettes. The claim for invention of the first commercial charcoal briquettes goes to Henry Ford who created them from the wood scraps and sawdust produced by his automobile factory.
Food
Cellulose, an indigestible plant fibre used as a filler in some low-calorie foods, can be and is made from sawdust, as well as from other plant sources. While there is no documentation for the persistent rumor, based upon Upton Sinclair's novel The Jungle, that sawdust was used as a filler in sausage, cellulose derived from sawdust was and is used for sausage casings. Sawdust-derived cellulose has also been used as a filler in bread.
When cereals were scarce, sawdust was sometimes an ingredient in kommissbrot. Auschwitz concentration camp survivor, Dr. Miklós Nyiszli, reports in Auschwitz: A Doctor's Eyewitness Account that the subaltern medical staff, who served Dr. Josef Mengele, subsisted on "bread made from wild chestnuts sprinkled with sawdust".
Health hazards
Airborne sawdust and sawdust accumulations present a number of health and safety hazards. Wood dust becomes a potential health problem when, for example, the wood particles, from processes such as sanding, become airborne and are inhaled. Wood dust is a known human carcinogen. Certain woods and their dust contain toxins that can produce severe allergic reactions. The composition of sawdust depends on the material it comes from; sawdust produced from natural wood is different from that of sawdust produced from processed wood or wood veneer.
Breathing airborne wood dust may cause allergic respiratory symptoms, mucosal and non-allergic respiratory symptoms, and cancer. In the US, lists of carcinogenic factors are published by the American Conference of Governmental Industrial Hygienists (ACGIH), the Occupational Safety and Health Administration (OSHA), and the National Institute for Occupational Safety and Health (NIOSH). All these organisations recognize wood dust as carcinogenic in relation to the nasal cavities and paranasal sinuses.
People can be exposed to wood dust in the workplace by breathing it in, skin contact, or eye contact. The OSHA has set the legal limit (permissible exposure limit) for wood dust exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The NIOSH has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday.
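The limits above are expressed as 8-hour time-weighted averages (TWA). The following sketch shows the standard TWA arithmetic (sum of concentration multiplied by duration over the shift, divided by 8 hours) and compares the result with the quoted OSHA and NIOSH limits; the sample measurements are invented for illustration.

```python
# Minimal sketch: compare an 8-hour time-weighted average (TWA) wood-dust
# exposure with the limits quoted above. Sample measurements are invented.

OSHA_PEL_TOTAL = 15.0   # mg/m3, 8-hour TWA, total dust
NIOSH_REL = 1.0         # mg/m3, 8-hour TWA

def twa(samples, shift_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours)."""
    return sum(c * t for c, t in samples) / shift_hours

if __name__ == "__main__":
    # e.g. 3 h sanding at 4.0 mg/m3, 2 h sawing at 1.5 mg/m3, 3 h at 0.2 mg/m3
    exposure = twa([(4.0, 3), (1.5, 2), (0.2, 3)])
    print(f"8-hour TWA = {exposure:.2f} mg/m3")          # 1.95 mg/m3
    print("Exceeds OSHA PEL:", exposure > OSHA_PEL_TOTAL)  # False
    print("Exceeds NIOSH REL:", exposure > NIOSH_REL)      # True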
Water-borne bacteria digest organic material in leachate, but use up much of the available oxygen. This high biochemical oxygen demand can suffocate fish and other organisms. There is an equally detrimental effect on beneficial bacteria, so it is not at all advisable to use sawdust within home aquariums, as was once done by hobbyists seeking to save some expense on activated carbon.
Explosions and fire
Sawdust is flammable and accumulations provide a ready source of fuel. Airborne sawdust can be ignited by sparks or even heat accumulation and result in dust fire or explosions.
Environmental effects
At sawmills, unless reprocessed into particleboard, burned in a sawdust burner, or used to make heat for other milling operations, sawdust may collect in piles and add harmful leachates into local water systems, creating an environmental hazard. This has placed small sawyers and environmental agencies in a deadlock.
Sawmill operators question the science behind classifying sawdust (mainly the finer particles) as an environmental hazard, comparing wood residuals to dead trees in a forest. Technical advisors have reviewed some of the environmental studies, but say most lack standardized methodology or evidence of a direct impact on wildlife. The studies do not take into account large drainage areas, so the amount of material entering the water from a site is minuscule in relation to the total drainage area.
Other scientists have a different view, saying the "dilution is the solution to pollution" argument is no longer accepted in environmental science. The decomposition of a tree in a forest is similar to the impact of sawdust, but the difference is of scale. Sawmills may be storing thousands of cubic metres of wood residues in one place, so the issue becomes one of concentration.
Of larger concern are substances such as lignins and fatty acids that protect trees from predators while they are alive, but can leach into water and poison wildlife. These compounds remain in the tree and are broken down slowly as it decays. But when sawyers process a large volume of wood and large concentrations of these materials permeate into the runoff, the resulting toxicity is harmful to a broad range of organisms.
Wood flour
Wood flour is finely pulverized wood that has a consistency fairly equal to sand or sawdust, but can vary considerably, with particles ranging in dimensions from a fine powder to roughly that of a grain of rice. Most wood flour manufacturers are able to create batches of wood flour that have the same consistency throughout. All high quality wood flour is made from hardwoods because of its durability and strength. Very low grade wood flour is occasionally made from sapless softwoods such as pine or fir.
Applications
Wood flour is commonly used as a filler in thermosetting resins such as bakelite, and in linoleum floor coverings. Wood flour is also the main ingredient in wood/plastic composite building products such as decks and roofs. Prior to 1920, wood flour was used as the filler in ¼-inch thick Edison Diamond Discs.
Wood flour has found a use in plugging small through-wall holes in leaking main condenser (heat exchanger) tubes at electrical power generating stations via injecting small quantities of the wood flour into the cooling water supply lines. Some of the injected wood flour clogs the small holes while the remainder exits the station in a relatively environmentally benign fashion.
Because of its adsorbent properties it has been used as a cleaning agent for removing grease or oil in various occupations. It has also been noted for its ability to remove lead contamination from water.
Wood flour can be used as a binder in grain filler compounds.
Sources
Large quantities of wood flour are frequently to be found in the waste from woodworking and furniture companies. An adaptive reuse to which this material can be directed is composting.
Wood flour can be subject to dust explosions if not cared for and disposed of properly.
Respirable particulates
As with all airborne particulates, wood dust particle sizes are classified with regard to effect on the human respiratory system. For this classification, the unit for measurement of particle sizes is the micrometre or micron (μm), where 1 micrometre = 1 micron. Particles below 50 μm are not normally visible to the naked human eye. Particles of concern for human respiratory health are those <100 μm (where the symbol < means ‘less than’).
Zhang (2004) has defined the size of indoor particulates according to respiratory fraction:
Particles which precipitate in the vicinity of the mouth and eyes, and get into the organism, are defined as the inhalable fraction, that is total dust. Smaller fractions, penetrating into the non-cartilage respiratory tract, are defined as respirable dust. Dust emitted in the wood industry is characterized by the dimensional disintegration of particles up to 5 μm, and that is why they precipitate mostly in the nasal cavity, increasing the risk of cancer of the upper respiratory tract.
Exposure
The parameter most commonly used to characterize exposures to wood dust in air is total wood dust concentration, in mass per unit volume. In countries that use the metric system, this is usually measured in mg/m3 (milligrams per cubic metre).
A study to estimate occupational exposure to inhalable wood dust by country, industry, the level of exposure and type of wood dust in 25 member states of the European Union (EU-25) found that in 2000–2003, about 3.6 million workers (2.0% of the employed EU-25 population) were occupationally exposed to inhalable wood dust. The highest exposure levels were estimated to occur in the construction sector and furniture industry.
Cancer
Wood dust is known to be a human carcinogen, based on sufficient evidence of carcinogenicity from studies in humans. It has been demonstrated through human epidemiologic studies that exposure to wood dust increases the occurrence of cancer of the nose (nasal cavities and paranasal sinuses). An association of wood dust exposure and cancers of the nose has been observed in numerous case reports, cohort studies, and case control studies specifically addressing nasal cancer.
Ventilation
To lower the concentration of airborne dust concentrations during woodworking, dust extraction systems are used. These can be divided into two types. The first are local exhaust ventilation systems, the second are room ventilation systems. Use of personal respirators, a form of personal protective equipment, can also isolate workers from dust.
Local exhaust
Local exhaust ventilation (LEV) systems rely on air pulled with a suction force through piping systems from the point of dust formation to a waste disposal unit. They consist of four elements: dust hoods at the point of dust formation, ventilation ducts, an air cleaning device (waste separator or dust collector) and an air moving device (a fan, otherwise known as an impeller). The air, containing dust and chips from the woodworking operation, is sucked by an impeller. The impeller is usually built into, or placed close to, the waste disposal unit, or dust collector.
Performance guidelines for woodworking LEV systems exist, and these tie into occupational air quality regulations in many countries. The LEV guidelines most often referred to are those set by the ACGIH.
Low volume/high velocity
Low-volume/high-velocity (LVHV) capture systems are specialised types of LEV that use an extractor hood designed as an integral part of the tool or positioned very close to the operating point of the cutting tool. The hood is designed to provide high capture velocities, often greater than 50 m/s (10,000 fpm) at the contaminant release point. This high velocity is accompanied by airflows often less than 0.02m3/s (50 cfm) resulting from the small face area of the hood that is used. These systems have come into favour for portable power tools, although adoption of the technology is not widespread. Festool is one manufacturer of portable power tools using LVHV ventilation integrated into the tool design.
Room
If suitably designed, general ventilation can also be used as a control of airborne dust. General ventilation can often help reduce skin and clothing contamination, and dust deposition on surfaces.
History
"There was once a time when sawmill operators could barely give away their sawdust. They dumped it in the woods or incinerated it just to get rid of the stuff. These days, they have ready markets for sawdust…", according to a report in 2008. For example, sawdust is used by biomass power plants as fuel or is sold to dairy farmers as animal bedding.
See also
Arsenic - was used as wood preservative
Dust collection system
Formaldehyde - used as adhesive
Pesticides - used as preservative to replace arsenic and chromium
Swarf
Wood glue
Wood preservation
References
Further reading
External links
BillPentz.com: Dust Collection Research.
WHO 2005. Air Quality Guidelines for Europe, 2nd ed. WHO regional publications. European series, No. 91. Copenhagen: WHO Regional Office for Europe.
Environmental chemistry
Saws
Waste
Woodworking
IARC Group 1 carcinogens
Wood fuel
Wood products
Dust
By-products | Sawdust | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,852 | [
"Environmental chemistry",
"Materials",
"nan",
"Waste",
"Matter"
] |
1,584,125 | https://en.wikipedia.org/wiki/Test%20plan | A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow.
Test plans
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following:
Design verification or compliance test – to be performed during the development or approval stages of the product, typically on a small sample of units.
Manufacturing test or production test – to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.
Acceptance test or commissioning test – to be performed at the time of delivery or installation of the product.
Service and repair test – to be performed as required over the service life of the product.
Regression test – to be performed on an existing operational product, to verify that existing functionality was not negatively affected when other aspects of the environment were changed (e.g., upgrading the platform on which an existing application runs).
A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy.
Test coverage
Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally will have one or more corresponding means of verification. Test coverage for different product life stages may overlap but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during design verification test, but not repeated during acceptance test. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access.
Test methods
Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or contractual agreement, or may have to be created new. Test methods also specify test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test procedures that are documented separately.
Test responsibilities
Test responsibilities include what organizations will perform the test methods and at each stage of the product life. This allows test organizations to plan, acquire or develop test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.
IEEE 829 test plan structure
IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document. These stages are:
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension criteria and resumption requirements
Test deliverables
Testing tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
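As a rough illustration of how the outline above might be instantiated, the following sketch represents a minimal test plan record in Python. The field names mirror the IEEE 829-2008 sections listed above; all values are invented placeholders rather than content prescribed by the standard.

```python
# A minimal, hypothetical test plan skeleton following the IEEE 829-2008
# outline listed above. All example values are invented placeholders.

ieee829_test_plan = {
    "test_plan_identifier": "TP-2024-001",
    "introduction": "Regression test of the payment module, release 2.4.",
    "test_items": ["payment-service 2.4.0"],
    "features_to_be_tested": ["card payment", "refund"],
    "features_not_to_be_tested": ["legacy invoice export"],
    "approach": "Automated API tests plus exploratory UI testing.",
    "item_pass_fail_criteria": "All priority-1 test cases pass.",
    "suspension_and_resumption": "Suspend on build failure; resume on green build.",
    "test_deliverables": ["test cases", "test logs", "summary report"],
    "testing_tasks": ["prepare test data", "execute suite", "report defects"],
    "environmental_needs": ["staging cluster", "test card numbers"],
    "responsibilities": {"test lead": "plans and reports", "QA": "executes"},
    "staffing_and_training_needs": "One tester trained on the payment API.",
    "schedule": "Test execution: 1-5 July.",
    "risks_and_contingencies": "Test environment outage; fall back to mocks.",
    "approvals": ["QA manager", "product owner"],
}

# e.g. list any sections that are still empty before sign-off
missing = [key for key, value in ieee829_test_plan.items() if not value]
print("Sections still to fill in:", missing or "none")
```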
The IEEE documents that suggest what should be contained in a test plan are:
829-2008 IEEE Standard for Software and System Test Documentation
829-1998 IEEE Standard for Software Test Documentation (superseded by 829-2008)
829-1983 IEEE Standard for Software Test Documentation (superseded by 829-1998)
1008-1987 IEEE Standard for Software Unit Testing
1012-2004 IEEE Standard for Software Verification and Validation
1012-1998 IEEE Standard for Software Verification and Validation (superseded by 1012-2004)
1012-1986 IEEE Standard for Software Verification and Validation Plans (superseded by 1012-1998)
1059-1993 IEEE Guide for Software Verification & Validation Plans (withdrawn)
See also
Software testing
Test suite
Test case
Test script
Scenario testing
Session-based testing
IEEE 829
Ad hoc testing
References
External links
Public domain RUP test plan template at Sourceforge (templates are currently inaccessible but sample documents can be seen here: DBV Samples)
Plan | Test plan | [
"Engineering"
] | 967 | [
"Software engineering",
"Software testing"
] |
1,584,230 | https://en.wikipedia.org/wiki/NASU%20Institute%20of%20Electrodynamics | NASU Institute of Electrodynamics (IED) () is a Ukraine leading science institution in field of electrical engineering, thermal power (heat energy), and research of electrodynamics located in Kyiv, Ukraine as a part of the Ukrainian Academy of Sciences. It is well known for the prominent achievements in the field of computer science and electronics, made in early 1950s by Sergei Alekseyevich Lebedev.
The institute was established in 1947 on the basis of electrical engineering department of the NASU Energy Institute as the NASU Institute of Electrical Engineering. In 1963 it was renamed as the NASU Institute of Electrodynamics.
Notable achievements
MESM, an abbreviation for Small Electronic Calculating Machine, developed under Sergei Lebedev in the early 1950s
Directors
1947 — 1952 Sergei Lebedev
1952 — 1959 Anatoliy Nesterenko
1959 — 1973 Oleksandr Milyakh
1973 — 2007 Anatoliy Shydlovskyi
2007 — Oleksandr Kyrylenko
External links
Official website
IED NASU. National Academy of Sciences of Ukraine
Research institutes in Kyiv
Computing in the Soviet Union
Research institutes in the Soviet Union
Institutes of the National Academy of Sciences of Ukraine
Computer science institutes in Ukraine
Energy research institutes | NASU Institute of Electrodynamics | [
"Technology",
"Engineering"
] | 239 | [
"Energy research institutes",
"Energy organizations",
"Computing in the Soviet Union",
"History of computing"
] |
1,584,239 | https://en.wikipedia.org/wiki/SCK%20CEN | SCK CEN (the Belgian Nuclear Research Centre), until 2020 shortened as SCK•CEN, is the Belgian nuclear research centre located in Mol, Belgium. SCK CEN is a global leader in the field of nuclear research, services, and education.
History
SCK CEN was founded in 1952 and originally named Studiecentrum voor de Toepassingen van de Kernenergie (Research Centre for the Applications of Nuclear Energy), abbreviated to STK. Land was bought in the municipality of Mol, and over the following years many technical, administrative, medical, and residential buildings were constructed on the site. From 1956 to 1964 four nuclear research reactors became operational: BR1, BR2, BR3 (the first pressurized water reactor in Europe), and VENUS.
In 1963 SCK CEN already employed 1600 people, a number that would remain about the same over the next decades. In 1970 SCK CEN widened its field of activities outside the nuclear sector, but the emphasis remained on nuclear research. In 1991 SCK CEN was split and a new institute, VITO (Vlaamse Instelling voor Technologisch Onderzoek; Flemish institute for technological research), took over the non-nuclear activities. SCK CEN currently has about 850 employees.
In the 1980s, SCK CEN employees were bribed to receive and store high-level nuclear waste from the West German firm Transnuklear.
In 2017, the International Atomic Energy Agency designated SCK CEN as one of the four International Centres based on Research Reactor (ICERR).
Organisation profile
SCK CEN is a foundation of public utility with a legal status according to private law, under the guidance of the Belgian Federal Ministry in charge of energy. SCK CEN has more than 800 employees and an annual budget of €180 million. The organization receives 25% of its funding directly from government grants, 5% indirectly via activities for the dismantling of declassified installations and 70% from contract work and services.
Since 1991, the organization's statutory mission gives priority to research on problems of societal concern:
Safety of nuclear installations
Radiation protection
Medical and industrial applications of radiation
The back end of the nuclear fuel cycle (nuclear reprocessing and management of radioactive waste)
Nuclear decommissioning and decontamination of nuclear sites
The fight against nuclear proliferation
To these domains, SCK CEN contributes with research and development, training, communication, and services. This is done with a view to sustainable development, and hence taking into account environmental, economical and social factors.
Chairmen of the Board of Governors (since 1952)
Count Pierre Ryckmans (1952-1959)
Count Marc de Hemptinne (1959-1963)
Professor Julien Hoste (1963-1963)
General Letor (1963-1971)
Mr. André Baeyens (1971-1975)
Baron Frans Van den Bergh (1975-1986)
Mr. Ivo Van Vaerenbergh (1986-1989)
Professor Roger Van Geen (1991-1995)
Professor Frank Deconinck (1996-2013)
Baron Derrick Gosselin (since 2013)
Reactors
BR1
The Belgian Reactor 1 (BR1) is the first research reactor to have been built and commissioned in Belgium. This natural uranium, air-cooled, graphite-moderated reactor was commissioned in 1956. Its maximal thermal power is 4 MW, but it is presently operated at only 700 kW. Its natural uranium inventory could allow the reactor to run without refueling for several centuries (~300 years). At first, this research reactor was used primarily for research into reactor and neutron physics, for neutron activation analysis, and for a minor production of radionuclides. Now, it is used for the irradiation of components, the calibration of measuring instruments, for performing analyses, and for training nuclear students. BR1 operates on behalf of other research centres, universities, and industry.
BR2
Commissioned in 1962, the Belgian Reactor 2 (BR2) is a materials testing reactor. It is a high-flux reactor (of the order of 10¹⁵ neutrons·cm⁻²·s⁻¹) in which neutrons are moderated by a beryllium matrix and cooled by light water pumped at low pressure (12-15 bar). Its core is very compact due to the particular shape of its beryllium matrix (a paraboloid of revolution), which allows the fuel rods, the control rods, and the experiments to be installed in a very small volume (~1 m³). It is reported that this very compact core architecture was quickly sketched on a beer mat during a discussion between nuclear physicists in a bar in New York, during a very creative night at the end of the 1950s or beginning of the 1960s. At the request of the US authorities, its nuclear fuel is presently based on low-enriched uranium (LEU) to minimize the risk of nuclear proliferation. Its thermal power (100 MW) is dissipated into the environment by water heated to a modest temperature (40-48 °C). This research reactor is also used for the production of medical radioisotopes. The BR2 research reactor produces on an annual basis more than 25% of the worldwide demand for molybdenum-99, and in peak periods even up to 65%.
BR3
The Belgian Reactor 3 was the first pressurised water reactor (PWR) in Europe. The reactor served as a prototype for the reactors in Doel and Tihange. It was taken into service in 1962 and permanently shut down in 1987.
Decommissioning
Decommissioning started in 2002. The European Commission selected BR3 as a pilot project to show the technical and economic feasibility of the dismantling of a reactor under real conditions.
VENUS
The research reactor VENUS, which stands for Vulcan Experimental Nuclear Study, was commissioned in 1964. VENUS is used as an experimental installation for nuclear reactor physics studies of new reactor systems and for testing reactor calculations. The installation was rebuilt and modernised several times. As part of the GUINEVERE project, SCK CEN decided to rebuild the VENUS reactor into a scale model of an accelerator-driven system (ADS). The particle accelerator was first connected in 2011. VENUS is a "zero-power reactor": it operates at a power of only 500 watts.
MYRRHA
MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) is a design for a research reactor driven by a particle accelerator; it is intended to be the world's first such reactor.
INES incidents
After a leak in the hot cell of the BR2 reactor, selenium-75 was released into the atmosphere on 15 May 2019. The event was classified by FANC at level 1 of the International Nuclear and Radiological Event Scale (INES). Selenium-75 (half-life = 119.8 days) was detected at low concentrations on aerosol filters from several air monitoring stations belonging to IRSN (Institut de Radioprotection et de Sûreté Nucléaire, France), in the Lille area and in the northwestern part of France. IRSN also performed an atmospheric dispersion modelling analysis. The dose assessment showed very low exposure levels (< 1 microsievert), without concern for public health in France.
The power of the BR2 reactor was insufficiently measured on January 27, 2021, because two of the three measuring chains were not functioning in accordance with the regulations and the third was defective. Since the installation had two independent sets of three measuring chains, any power variations could still be detected.
FANC has classified this incident at level 2 on the INES scale, not only because the operating conditions were not respected, but also because a similar incident had already occurred at SCK CEN in 2019. These two incidents were related to a lack of safety culture from the licensee leading to inappropriate operations.
Research activities
The Centres research activities are concentrated into the following main tracks.
HADES
In 1980, SCK CEN started the construction of an Underground Research Laboratory (URL) at 223 m below the ground level to study the feasibility of geological disposal in deep clay layers in the Boom Clay Formation at the Mol site. The underground laboratory was given the name HADES, god of the underworld in the Greek mythology. HADES is an acronym meaning: High Activity Disposal Experimental Site. Here, for more than 45 years, scientists perform research on the geomechanical, geochemical, mineralogical and microbiological characteristics of Boom Clay and on the interactions between the clay and the candidate materials for the waste packages. The underground laboratory HADES is now operated by the ESV EURIDICE, an economic partnership between SCK CEN and NIRAS.
Snow White
Since 2018, SCK CEN has commissioned a Snow White (JL-900) Early Warning System. This installation aspirates 900 m3 of air per hour across filters. These filters are replaced and analysed on a weekly basis. Because the system sucks up large quantities of air, SCK CEN can detect very low concentrations of radioactivity in the airborne dust. In this way, radioactive emissions, even when originating from abroad, do not remain unnoticed. Detection of low concentrations may indicate an abnormal emission, such as a hidden leak, or signal a nuclear incident. Snow White successfully detected airborne Cs-137 released during forest fires in the Chornobyl Exclusion Zone in Ukraine in 2020.
Nuclear Materials Science
Research is performed to improve the knowledge, understanding, and numerical simulation of the behaviour of materials under irradiation, and from there on predicting their performance. The aim is to develop, assess and validate new materials such as nuclear fuel, construction materials, and radioisotopes to be used in nuclear applications.
Advanced Nuclear Systems
Extensive contributions are made to extend the present Belgian expertise in developments related to Generation IV reactor systems and ITER. In co-operation with industry and international research teams, R&D efforts are made to develop and test innovative reactor technologies and instrumentation. This will contribute to the construction of an experimental fast-spectrum installation (MYRRHA), allowing, among other things, transmutation processes to be performed.
Environment, Health and Safety
Next to specialised R&D in fields such as radiobiology and radioecology, environmental chemistry, decommissioning, and radioactive waste management and disposal, SCK CEN also delivers high-quality measurement services such as radiation dosimetry, calibration, and spectrometry. Policy support, decision making, and research on the integration of social aspects into nuclear research contribute to meeting complex problems related to radiation protection and energy policy.
For meteorological measurements, the facility has a 121.1-metre-tall guyed mast.
Education and Training – Academy (ACA)
Throughout its more than 60 years of research experience in the field of peaceful applications of nuclear science and technology, SCK CEN has also conducted education and training. The ACA activities at SCK CEN cover, among other subjects, reactor physics, reactor operation, reactor engineering, radiation protection, decommissioning, and waste management. Next to courses, SCK CEN also offers students the possibility to perform their research work at its laboratories and research reactors. Final-year students and Ph.D. candidates can enter a programme outlined together with an SCK CEN mentor and in close collaboration with a university promotor. Post-docs are mainly recruited in specialised research domains that reflect the priority programmes and R&D topics of the institute.
The Atoomwijk
The Atoomwijk ("atom district") was built to accommodate the centre's employees. When the Flemish Institute for Technological Research was set up, a number of apartments were transferred to it, but the majority of the district is still owned by the research centre. In addition to housing, the district also includes sports infrastructure.
Increased risk of cancer?
On behalf of the Belgian Ministry of Social Affairs and Public Health, Sciensano conducted the Nucabel 2 study from 9 January 2017 to 30 June 2020. This national epidemiological study focuses on the possible health risks, mainly cancer, for people living in the vicinity of Belgian nuclear sites. The results of Nucabel 2 indicate that the incidence in the close vicinity (< 5 km) of the Mol-Dessel nuclear site is three times higher than in the rest of Belgium. The results are statistically significant. Nevertheless, the number of observed cases remains low.
However, the results of this study - as the Sciensano researchers also indicate - cannot establish a causal link between the occurrence of cancer cases and the proximity of the Mol-Dessel site.
Additional information on the Nucabel 2 study:
The Sciensano study was a descriptive epidemiological study in which no attention was paid to:
other sources to which Belgians may be exposed, such as medical applications or background radiation;
the effective dose that would be emitted in Mol/Dessel;
individual factors, such as infections, genetics, and other risk factors.
After further questioning SCK CEN on points 1 and 2, the following emerged:
Every year, a Belgian is on average exposed to a dose of 4 millisieverts, almost half of which comes from medical applications. This exposure, like that from natural background radiation, was not taken into account in the study, even though it represents a much larger dose burden, even for the most exposed ("critical") members of the surrounding population. The doses from discharges from nuclear installations are so small that their dose burden, compared to natural and medical exposure, is almost negligible.
The effective dose of all atmospheric discharges and all exposure routes of the SCK CEN installations amounts to a maximum of 2 micro Sv (μSv) per year. This is therefore 1/50 of the limit of 100 micro Sv per year for the whole nuclear site and 500 times less than the effective dose of natural exposure in the Kempen.
See also
Edgar Sengier
European Atomic Energy Community (EURATOM)
Flemish Institute for Technological Research (VITO)
List of Cancer Clusters
Nuclear Energy in Belgium
References
External links
Official history brochure
SCK CEN’s public Institutional Repository
Nuclear research institutes
Radiation protection organizations
Research institutes in Belgium
Nuclear technology in Belgium
Buildings and structures in Antwerp Province
Mol, Belgium | SCK CEN | [
"Engineering"
] | 2,882 | [
"Nuclear research institutes",
"Nuclear organizations",
"Radiation protection organizations"
] |
1,584,277 | https://en.wikipedia.org/wiki/Earnings%20per%20share | Earnings per share (EPS) is the monetary value of earnings per outstanding share of common stock for a company during a defined period of time. It is a key measure of corporate profitability, focusing on the interests of the company's owners (shareholders), and is commonly used to price stocks.
In the United States, the Financial Accounting Standards Board (FASB) requires EPS information for the four major categories of the income statement: continuing operations, discontinued operations, extraordinary items, and net income.
Calculation
Preferred stock rights have precedence over common stock. Therefore, dividends on preferred shares are subtracted before calculating the EPS. When preferred shares are cumulative (i.e. dividends accumulate as payable if unpaid in the given accounting year), annual dividends are deducted whether or not they have been declared. Dividends in arrears are not relevant when calculating EPS.
Basic formula
Earnings per share = (profit - preferred dividends) / weighted average common shares
Net income formula
Earnings per share = (net income - preferred dividends) / average common shares
Continuing operations formula
Earnings per share = (income from continuing operations - preferred dividends) / weighted average common shares
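As a quick illustration, the basic formula above can be expressed as a short calculation; the input figures below are invented.

```python
# Minimal sketch of the basic EPS formula above; the figures are invented.

def basic_eps(net_income, preferred_dividends, weighted_avg_common_shares):
    """Basic EPS = (net income - preferred dividends) / weighted average
    number of common shares outstanding."""
    return (net_income - preferred_dividends) / weighted_avg_common_shares

print(basic_eps(net_income=1_000_000,
                preferred_dividends=50_000,
                weighted_avg_common_shares=500_000))  # 1.9 per share
```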
Diluted earnings per share
Diluted earnings per share (diluted EPS) is a company's earnings per share calculated using fully diluted shares outstanding (i.e. including the impact of stock option grants and convertible bonds). Diluted EPS indicates a "worst case" scenario, one that reflects the issuance of stock for all outstanding options, warrants and convertible securities that would reduce earnings per share.
Calculations
Calculations of diluted EPS vary. Morningstar reports diluted EPS "Earnings/Share $", which is net income minus preferred stock dividends divided by the weighted average of common stock shares outstanding over the past year; this is adjusted for dilutive shares. Some data sources may simplify this calculation by using the number of shares outstanding at the end of a reporting period. The methods of simplifying EPS calculations and eliminating inappropriate assumptions include replacing primary EPS with basic EPS, eliminating the treasury stock method of accounting from fully diluted EPS, eliminating the three-percent test for dual presentation, and providing information on individual dilative securities.
U.S. GAAP
Calculations of diluted EPS under U.S. GAAP are described under Statement No. 128 of the Financial Accounting Standards Board (FAS No. 128). The objective of diluted EPS is to measure the performance of a company over the reporting period taking into account the dilutive effect of potential common stock that could be issued by the company. To compute diluted EPS, both the denominator (outstanding shares) and the numerator (earnings) may need to be adjusted.
Diluted shares:
To calculate the total number of shares used in the calculation, FASB prescribes using the treasury stock method to calculate the dilutive effect of any instruments that could result in the issuance of shares (a minimal sketch of this method is given after the list below), including:
Stock options
Stock Warrants
Convertible preferred stock
Convertible bonds
Share-based payment arrangements
Written put options
Contingently issuable shares
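As a rough sketch of the treasury stock method for options and warrants mentioned above: assumed exercise proceeds are used to repurchase shares at the average market price, and only the net new shares are added to the diluted share count. The figures below are invented, and out-of-the-money (anti-dilutive) options are simply excluded.

```python
# Minimal sketch of the treasury stock method (figures invented).

def treasury_stock_incremental_shares(options, exercise_price, avg_market_price):
    if avg_market_price <= exercise_price:
        return 0  # out-of-the-money options are anti-dilutive and excluded
    proceeds = options * exercise_price          # cash assumed received on exercise
    repurchased = proceeds / avg_market_price    # shares assumed bought back
    return options - repurchased                 # net new shares added to the count

# 10,000 options with a $20 strike, average market price $25:
print(treasury_stock_incremental_shares(10_000, 20, 25))  # 2,000 incremental shares
```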
Earnings:
The numerator used in calculating diluted EPS is adjusted to take into account the impact that the conversion of any securities would have on earnings. For example, interest would be added back to earnings to reflect the conversion of any outstanding convertible bonds, preferred dividends would be added back to reflect the conversion of convertible preferred stock, and any impact of these changes on other financial items, such as royalties and taxes, would also be adjusted.
As mentioned above, a helpful way to consider the effect of dilutive instruments on EPS is to think about the "as if" method, in the sense of asking "if the instrument is converted, how does it affect EPS?" For example, let Company XYZ have Net Income = $2,000,000, 50,000 shares of common stock outstanding, and $1,000,000 of 10% bonds convertible into 50,000 shares of common stock. Company XYZ's tax rate is 25%.
Basic EPS = $2,000,000 / 50,000 = $40
Diluted EPS = ($2,000,000 + ($1,000,000 * 10%) * (1 - 25%)) / (50,000 + 50,000) = $20.75
Note that besides adding the conversion shares to the denominator of the EPS equation, we also add back to the numerator the after-tax interest expense on the bonds ($100,000 * (1 - 25%) = $75,000), because net income already reflects that interest and it would not have been incurred had the bonds been converted. In conclusion, the "as if" method is helpful in considering the effect of dilutive instruments on EPS because it captures the overall effect rather than treating the numerator and denominator of the equation separately.
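The Company XYZ example above can be restated as a short calculation. It follows the "if-converted" treatment just described: add back the after-tax interest on the convertible bonds and add the conversion shares to the share count.

```python
# The Company XYZ example above as a short "if-converted" calculation.

net_income = 2_000_000
common_shares = 50_000
bond_face, bond_rate = 1_000_000, 0.10
conversion_shares = 50_000
tax_rate = 0.25

basic_eps = net_income / common_shares                               # 40.00

after_tax_interest_saved = bond_face * bond_rate * (1 - tax_rate)    # 75,000
diluted_eps = (net_income + after_tax_interest_saved) / (common_shares + conversion_shares)

print(f"Basic EPS   = {basic_eps:.2f}")    # 40.00
print(f"Diluted EPS = {diluted_eps:.2f}")  # 20.75
```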
International financial reporting standards
Under International Financial Reporting Standards, diluted earnings per share is calculated by adjusting the earnings and number of shares for the effects of dilutive options and other dilutive potential common stock. Dilutive potential common stock includes:
convertible debt
convertible preferred stock
share warrants
share options
share rights
employee stock purchase plans
contractual rights to purchase shares
contingent issuance contracts or agreements
The earnings per share requirements of U.S. GAAP, FASB ASC 260: EPS, are a result of the FASB's cooperation with the IASB to narrow the difference between IFRS and US GAAP. A few differences remain.
The differences that remain are the result of differences in the application of the treasury stock method, the treatment of contracts that may be settled in shares or cash, and contingently issuable shares.
See also
Accretion/dilution analysis
Dilutive security
P/E ratio
Whisper number
References
External links
European banks’ earnings announcements, video
Earnings Per Share Screener- figures from official financial statements
Price-To-Earning Ratio calculator
Corporate finance
Fundamental analysis
Financial ratios | Earnings per share | [
"Mathematics"
] | 1,165 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
1,584,291 | https://en.wikipedia.org/wiki/Guastavino%20tile | The Guastavino tile arch system is a version of Catalan vault introduced to the United States in 1885 by Spanish architect and builder Rafael Guastavino (1842–1908). It was patented in the United States by Guastavino in 1892.
Description
Guastavino vaulting is a technique for constructing robust, self-supporting arches and architectural vaults using interlocking terracotta tiles and layers of mortar to form a thin skin, with the tiles following the curve of the roof as opposed to horizontally (corbelling), or perpendicular to the curve (as in Roman vaulting). This is known as timbrel vaulting, because of supposed likeness to the skin of a timbrel or tambourine. It is also called Catalan vaulting (though Guastavino did not use this term) and "compression-only thin-tile vaulting".
Guastavino tile is found in some of the most prominent Beaux-Arts structures in New York and Massachusetts, as well as in major buildings across the United States. In New York City, these include the Grand Central Oyster Bar & Restaurant and the remnants of the Della Robbia Bar at the former Vanderbilt Hotel at 4 Park Avenue. It is also found in some non-Beaux-Arts structures such as the crossing of the Cathedral of St. John the Divine.
Construction
The Guastavino terracotta tiles are thin and of standardized size. They are usually set in three herringbone-pattern courses with a sandwich of thin layers of Portland cement. Unlike heavier stone construction, these tile domes could be built without centering. Supporting formwork was still required for structural arches, which established a framework for the ceiling. The large openings framed by the support arches were then filled in with thin Guastavino tiles fabricated into domed surfaces. Each ceiling tile was cantilevered out over the open space, relying only on the quick-drying cements developed by the company. Akoustolith, a special sound-absorbing tile, was one of several trade names used by Guastavino.
Significance
Guastavino tile has both structural and aesthetic significance.
Structurally, the timbrel vault was based on traditional vernacular vaulting techniques already very familiar to Mediterranean architects, but not well known in America. Terracotta free-span timbrel vaults were far more economical and structurally resilient than the ancient Roman vaulting alternatives.
Guastavino wrote extensively about his system of "Cohesive Construction". As the name suggests, he believed that these timbrel vaults represented an innovation in structural engineering. The tile system provided solutions that were impossible with traditional masonry arches and vaults. Subsequent research has shown the timbrel vault is simply a masonry vault, much less thick than traditional arches, that produces less horizontal thrust due to its lighter weight. This permits flatter arch profiles, which would produce unacceptable horizontal thrust if constructed in thicker, heavier masonry.
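The point about thrust can be made concrete with the textbook formula for a parabolic arch of span L and rise f carrying a uniformly distributed load w per unit length; this is an idealization, not an analysis of any particular Guastavino vault.

```latex
% Horizontal thrust at the springings of a parabolic arch under uniform load:
H = \frac{w L^{2}}{8 f}
% Halving the self-weight w halves H, so a thin tile vault can be built with a
% smaller rise f (a flatter profile) while keeping the horizontal thrust acceptable.
```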
Exhibitions
In 2012, a group of students under supervision of MIT professor John Ochsendorf built a full-scale reproduction of a small Guastavino vault. The resulting structure was exhibited, as well as a time lapse video documenting the construction process.
Ochsendorf also curated Palaces for the People, an exhibition featuring the history and legacy of Guastavino which was premiered in September 2012 at the Boston Public Library, Rafael Guastavino's first major architectural work in America. The exhibition then traveled to the National Building Museum in Washington DC, and an expanded version later appeared at the Museum of the City of New York. Ochsendorf, a winner of the MacArthur Foundation "genius grant", also wrote the book-length color-illustrated monograph Guastavino Vaulting: The Art of Structural Tile, and an online exhibition coordinated with the traveling exhibits.
In addition, Ochsendorf directs the Guastavino Project at MIT, which researches and maintains the Guastavino.net online archive of related materials.
Archival sources
The Guastavino company was headquartered in Woburn, Massachusetts, in a building of their own design which still stands. The records and drawings of the Guastavino Fireproof Construction Company are preserved by the Department of Drawings & Archives in the Avery Architectural and Fine Arts Library at Columbia University in New York City.
See also
Glazed architectural terra-cotta
List of architectural vaults
First Church of Christ, Scientist (Cambridge, Massachusetts)
Ed Koch Queensboro Bridge
Basilica of St. Lawrence, Asheville
Biltmore Estate
Grant's Tomb
Grand Central Oyster Bar & Restaurant
Notes
Further reading
External links
global database of Guastavino sites with photos. Created as a companion to a museum exhibition that traveled to three American museums, 2012–2014.
Guastavino.net: documenting Guastavino's work in the Boston area. This page provides copies of writings and patents by the Guastavinos as well.
Rafaelguastavino.com: documenting Guastavino's work in New York City
"CONSTRUCTION OF A VAULT", Massachusetts Institute of Technology (shows method of construction)
Tiling
Building materials
Masonry
Structural system
Architecture in Spain | Guastavino tile | [
"Physics",
"Technology",
"Engineering"
] | 1,044 | [
"Structural engineering",
"Masonry",
"Building engineering",
"Structural system",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
1,584,613 | https://en.wikipedia.org/wiki/Air%20cycle%20machine | An air cycle machine (ACM) is the refrigeration unit of the environmental control system (ECS) used in pressurized gas turbine-powered aircraft. Normally an aircraft has two or three of these ACMs. Each ACM and its components are often referred to as an air conditioning pack. The air cycle cooling process uses air instead of a phase-changing material such as Freon in the gas cycle. No condensation or evaporation of a refrigerant is involved, and the cooled air output from the process is used directly for cabin ventilation or for cooling electronic equipment.
History
Air cycle machines were first developed in the 19th century for providing chilling on ships. The technique is a reverse Brayton cycle (the thermodynamic cycle of a gas turbine engine) and is also known as a Bell Coleman cycle or "Air-Standard Refrigeration Cycle".
Technical details
The usual compression, cooling and expansion seen in any refrigeration cycle is accomplished in the ACM by a centrifugal compressor, two air-to-air heat exchangers and an expansion turbine.
Bleed air from the engines, an auxiliary power unit, or a ground source, which can be in excess of 150 °C and at high pressure, is directed into a primary heat exchanger. Outside air at ambient temperature and pressure is used as the coolant in this air-to-air heat exchanger. Once the hot air has been cooled, it is then compressed by the centrifugal compressor. This compression heats the air (the maximum air temperature at this point is about 250 °C), and it is sent to the secondary heat exchanger, which again uses outside air as the coolant. The pre-cooling through the first heat exchanger increases the efficiency of the ACM because it lowers the temperature of the air entering the compressor, so that less work is required to compress a given air mass (the energy required to compress a gas by a given ratio rises as the temperature of the incoming gas rises).
At this point, the temperature of the compressed, cooled air is somewhat greater than the ambient temperature of the outside air. The compressed, cooled air then travels through the expansion turbine, which extracts energy from the air as it expands, cooling it to below ambient temperature (down to −20 °C or −30 °C). It is possible for the ACM to produce air cooled to less than 0 °C even when the outside air temperature is high (as might be experienced with the aircraft stationary on the ground in a hot climate). The work extracted by the expansion turbine is transmitted by a shaft to spin the pack's centrifugal compressor and an inlet fan which draws in the external air for the heat exchangers during ground running; ram air is used in flight. The power for the air conditioning pack comes from the reduction of the pressure of the incoming bleed air relative to that of the cooled air exiting the system.
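The temperature swings across the compressor and the expansion turbine follow the ideal-gas isentropic relation T2/T1 = (p2/p1)^((γ−1)/γ). The snippet below is a loss-free illustration with invented pressure ratios and inlet temperatures, not data for any real air-conditioning pack.

```python
GAMMA = 1.4  # ratio of specific heats for air

def isentropic_outlet_temperature(t_in_kelvin, pressure_ratio):
    """Ideal (loss-free) outlet temperature for a compression or expansion step."""
    return t_in_kelvin * pressure_ratio ** ((GAMMA - 1) / GAMMA)

# Illustrative numbers only: air leaves the primary heat exchanger at 100 °C,
# is compressed 1.8:1, cooled to 60 °C in the secondary exchanger,
# then expanded 3.5:1 in the turbine.
after_compressor = isentropic_outlet_temperature(373.15, 1.8)      # ~441 K (~168 °C)
after_turbine    = isentropic_outlet_temperature(333.15, 1 / 3.5)  # ~233 K (~-40 °C)
print(round(after_compressor - 273.15), round(after_turbine - 273.15))
```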
The next step is to dehumidify the air. Cooling the air has caused any water vapor it contains to condense into fog, which can be removed using a cyclonic separator. Historically, the water extracted by the separator was simply dumped overboard, but newer ACMs spray the water into the outside-air intakes for each heat exchanger, which gives the coolant a greater heat capacity and improves efficiency. (It also means that running the ACM on an airplane parked on the tarmac does not leave a puddle.)
The air can now be combined in a mixing chamber with a small amount of non-conditioned engine bleed air. This warms the air to the desired temperature, and then the air is vented into the cabin or to electronic equipment.
Manufacturers
Major manufacturers of ACM are Honeywell Aerospace, Liebherr Aerospace, Collins Aerospace, and PBS Velka Bites.
Nomenclature
Types
The types of air cycle machines may be identified as:
Simple cycle consisting of a turbine and fan on a common shaft
Two-wheel bootstrap consisting of a turbine and compressor on a common shaft
Three-wheel consisting of a turbine, compressor, and fan on a common shaft
Four-wheel/dual-spool consisting of two turbines, a compressor, and a fan on a common shaft
Abbreviations
The equipment is referred to variously as PAC, air conditioning pack, or A/C pack, but there is a lack of consistency and agreement as to the derivations and meanings:
Pack. as an abbreviation of package, applied to both pneumatic and non-pneumatic systems (Boeing, Airbus, Embraer, Bombardier and Lockheed)
PAC as an acronym meaning either Passenger Air Conditioning or pneumatic air conditioning (the latter being found on systems control panels of at least one business jet supplier)
PACK as an acronym for pneumatic air cycle kit or pressurization & air conditioning kit
See also
Brayton cycle
Refrigeration
Aerotoxic syndrome
References
External links
What is air cycle?
Legislation and guidance from the UK Government and the National Health Service (scroll to page 5 for schematic of ACM system)
Gas compressors
Aircraft components | Air cycle machine | [
"Chemistry"
] | 1,049 | [
"Gas compressors",
"Turbomachinery"
] |
1,584,707 | https://en.wikipedia.org/wiki/1%2C10-Phenanthroline | 1,10-Phenanthroline (phen) is a heterocyclic organic compound. It is a white solid that is soluble in organic solvents. The locants 1 and 10 refer to the positions of the nitrogen atoms, which replace CH groups of the parent hydrocarbon, phenanthrene.
Abbreviated "phen", it is used as a ligand in coordination chemistry, forming strong complexes with most metal ions. It is often sold as the monohydrate.
Synthesis
Phenanthroline may be prepared by two successive Skraup reactions of glycerol with o-phenylenediamine, catalyzed by sulfuric acid, and an oxidizing agent, traditionally aqueous arsenic acid or nitrobenzene. Dehydration of glycerol gives acrolein which condenses with the amine followed by a cyclization.
Reactions
Oxidation of 1,10-phenanthroline with a mixture of nitric and sulfuric acids gives 1,10-phenanthroline-5,6-dione.
1,10-Phenanthroline forms many coordination complexes. One example is the iron complex called ferroin.
Alkyllithium reagents form deeply colored derivatives with phenanthroline. The alkyllithium content of solutions can be determined by treatment of such reagents with small amounts of phenanthroline (ca. 1 mg) followed by titration with alcohols to a colourless endpoint. Grignard reagents may be similarly titrated.
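Because the alkyllithium and the alcohol react 1:1, the concentration follows directly from the endpoint volume; the numbers below are purely illustrative.

```latex
% RLi + R'OH -> RH + R'OLi  (1:1 stoichiometry), so at the colourless endpoint:
c(\mathrm{RLi}) = \frac{c(\mathrm{R'OH}) \cdot V(\mathrm{R'OH})}{V(\mathrm{RLi})}
% e.g. a 1.00 mL aliquot of n-BuLi solution consuming 1.55 mL of 1.00 M
% sec-butanol corresponds to a 1.55 M n-BuLi solution.
```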
Pharmacology
Phenanthroline is used in scientific research as an inhibitor of the deubiquitination enzyme Rpn11.
References
Redox indicators
Chelating agents | 1,10-Phenanthroline | [
"Chemistry"
] | 379 | [
"Redox indicators",
"Chelating agents",
"Electrochemistry",
"Process chemicals"
] |
1,584,732 | https://en.wikipedia.org/wiki/Thermochromism | Thermochromism is the property of substances to change color due to a change in temperature. A mood ring is an example of this property used in a consumer product although thermochromism also has more practical uses, such as baby bottles, which change to a different color when cool enough to drink, or kettles which change color when water is at or near boiling point. Thermochromism is one of several types of chromism.
Organic materials
Thermochromic liquid crystals
The two common approaches are based on liquid crystals and leuco dyes. Liquid crystals are used in precision applications, as their responses can be engineered to accurate temperatures, but their color range is limited by their principle of operation. Leuco dyes allow wider range of colors to be used, but their response temperatures are more difficult to set with accuracy.
Some liquid crystals are capable of displaying different colors at different temperatures. This change is dependent on selective reflection of certain wavelengths by the crystalline structure of the material, as it changes between the low-temperature crystalline phase, through an anisotropic chiral or twisted nematic phase, to the high-temperature isotropic liquid phase. Only the nematic mesophase has thermochromic properties; this restricts the effective temperature range of the material.
The twisted nematic phase has the molecules oriented in layers with regularly changing orientation, which gives them periodic spacing. The light passing through the crystal undergoes Bragg diffraction on these layers, and the wavelength with the greatest constructive interference is reflected back, which is perceived as a spectral color. A change in the crystal temperature can result in a change of spacing between the layers and therefore in the reflected wavelength. The color of the thermochromic liquid crystal can therefore continuously range from non-reflective (black) through the spectral colors to black again, depending on the temperature. Typically, the high temperature state will reflect blue-violet, while the low-temperature state will reflect red-orange. Since blue is a shorter wavelength than red, this indicates that the distance of layer spacing is reduced by heating through the liquid-crystal state.
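The selective reflection can be summarized by the standard textbook relation for a cholesteric (twisted nematic) liquid crystal at normal incidence, where p is the helical pitch, n̄ the mean refractive index and Δn the birefringence; exact values depend on the particular material.

```latex
\lambda_{0} = \bar{n}\,p, \qquad \Delta\lambda = \Delta n\, p
% Heating shortens the pitch p, shifting the reflected band \lambda_0 from the
% red-orange end of the spectrum toward blue-violet, as described above.
```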
Some such materials are cholesteryl nonanoate or cyanobiphenyls.
Mixtures with 3–5 °C span of temperatures and ranges from about 17–23 °C to about 37–40 °C can be composed from varying proportions of cholesteryl oleyl carbonate, cholesteryl nonanoate, and cholesteryl benzoate. For example, the mass ratio of 65:25:10 yields range of 17–23 °C, and 30:60:10 yields range of 37–40 °C.
Liquid crystals used in dyes and inks often come microencapsulated, in the form of suspension.
Liquid crystals are used in applications where the color change has to be accurately defined. They find applications in thermometers for room, refrigerator, aquarium, and medical use, and in indicators of level of propane in tanks. A popular application for thermochromic liquid crystals are the mood rings.
Liquid crystals are difficult to work with and require specialized printing equipment. The material itself is also typically more expensive than alternative technologies. High temperatures, ultraviolet radiation, some chemicals and/or solvents have a negative impact on their lifespan.
Leuco dyes
Thermochromic dyes are based on mixtures of leuco dyes with other suitable chemicals, displaying a color change (usually between the colorless leuco form and the colored form) that depends upon temperature. The dyes are rarely applied on materials directly; they are usually in the form of microcapsules with the mixture sealed inside. An illustrative example is the Hypercolor fashion, where microcapsules with crystal violet lactone, weak acid, and a dissociable salt dissolved in dodecanol are applied to the fabric. When the solvent is solid, the dye exists in its lactone leuco form, while when the solvent melts, the salt dissociates, the pH inside the microcapsule lowers, the dye becomes protonated, its lactone ring opens, and its absorption spectrum shifts drastically, therefore it becomes deeply violet. In this case the apparent thermochromism is in fact halochromism.
The dyes most commonly used are spirolactones, fluorans, spiropyrans, and fulgides. The acids include bisphenol A, parabens, 1,2,3-triazole derivatives, and 4-hydroxycoumarin; they act as proton donors, changing the dye molecule between its leuco form and its protonated colored form. Stronger acids would make the change irreversible.
Leuco dyes have less accurate temperature response than liquid crystals. They are suitable for general indicators of approximate temperature ("too cool", "too hot", "about OK"), or for various novelty items. They are usually used in combination with some other pigment, producing a color change between the color of the base pigment and the color of the pigment combined with the color of the non-leuco form of the leuco dye. Organic leuco dyes are available for temperature ranges between about and , in wide range of colors. The color change usually happens in a 3 °C (5.4 °F) interval.
Leuco dyes are used in applications where temperature response accuracy is not critical: e.g. novelties, bath toys, flying discs, and approximate temperature indicators for microwave-heated foods. Microencapsulation allows their use in wide range of materials and products. The size of the microcapsules typically ranges between 3–5 μm (over 10 times larger than regular pigment particles), which requires some adjustments to printing and manufacturing processes.
An application of leuco dyes is in the Duracell battery state indicators. A layer of a leuco dye is applied on a resistive strip to indicate its heating, thus gauging the amount of current the battery is able to supply. The strip is triangular-shaped, changing its resistance along its length, therefore heating up a proportionally long segment with the amount of current flowing through it. The length of the segment above the threshold temperature for the leuco dye then becomes colored.
Exposure to ultraviolet radiation, solvents and high temperatures reduce the lifespan of leuco dyes. Temperatures above about typically cause irreversible damage to leuco dyes; a time-limited exposure of some types to about is allowed during manufacturing.
Thermochromic paints use liquid crystals or leuco dye technology. After absorbing a certain amount of light or heat, the crystalline or molecular structure of the pigment reversibly changes in such a way that it absorbs and emits light at a different wavelength than at lower temperatures. Thermochromic paints are often seen as coatings on coffee mugs: once hot coffee is poured in, the paint absorbs the heat and becomes colored or transparent, changing the appearance of the mug. These are known as magic mugs or heat changing mugs. Another common example is the use of leuco dye in spoons used in ice cream parlors and frozen yogurt shops. Once dipped into the cold desserts, part of the spoon appears to change color.
Papers
Thermochromic papers are used for thermal printers. One example is the paper impregnated with the solid mixture of a fluoran dye with octadecylphosphonic acid. This mixture is stable in solid phase; however, when the octadecylphosphonic acid is melted, the dye undergoes a chemical reaction in the liquid phase, and assumes the protonated colored form. This state is then conserved when the matrix solidifies again, if the cooling process is fast enough. As the leuco form is more stable in lower temperatures and solid phase, the records on thermochromic papers slowly fade out over years.
Polymers
Thermochromism can appear in thermoplastics, duroplastics (thermosets), gels, or any kind of coating. The origin of the effect can be the polymer itself, an embedded thermochromic additive, or a highly ordered structure formed by the interaction of the polymer with an incorporated non-thermochromic additive. From a physical point of view, the effect can likewise have several origins: changes with temperature in the light reflection, absorption, and/or scattering properties of the material. The application of thermochromic polymers for adaptive solar protection is of great interest. For instance, polymer films with tunable thermochromic nanoparticles, reflective or transparent to sunlight depending on the temperature, have been used to create windows that adapt to the weather. A function-by-design strategy, applied for example to the development of non-toxic thermochromic polymers, has come into focus in the last decade.
Inks
Thermochromic inks or dyes are temperature sensitive compounds, developed in the 1970s, that temporarily change color with exposure to heat. They come in two forms, liquid crystals and leuco dyes. Leuco dyes are easier to work with and allow for a greater range of applications. These applications include: flat thermometers, battery testers, clothing, and the indicator on bottles of maple syrup that change color when the syrup is warm. The thermometers are often used on the exterior of aquariums, or to obtain a body temperature via the forehead. Coors Light uses thermochromic ink on its cans, changing from white to blue to indicate the can is cold.
Inorganic materials
Virtually all inorganic compounds are thermochromic to some extent. Most examples however involve only subtle changes in color. For example, titanium dioxide, zinc sulfide and zinc oxide are white at room temperature but when heated change to yellow. Similarly indium(III) oxide is yellow and darkens to yellow-brown when heated. Lead(II) oxide exhibits a similar color change on heating. The color change is linked to changes in the electronic properties (energy levels, populations) of these materials.
More dramatic examples of thermochromism are found in materials that undergo phase transition or exhibit charge-transfer bands near the visible region. Examples include
Cuprous mercury iodide (Cu2[HgI4]) undergoes a phase transition at 67 °C, reversibly changing from a bright red solid material at low temperature to a dark brown solid at high temperature, with intermediate red-purple states. The colors are intense and seem to be caused by Cu(I)–Hg(II) charge-transfer complexes.
Silver mercury iodide (Ag2[HgI4]) is yellow at low temperatures and orange above 47–51 °C, with intermediate yellow-orange states. The colors are intense and seem to be caused by Ag(I)–Hg(II) charge-transfer complexes.
Mercury(II) iodide is a crystalline material which at 126 °C undergoes reversible phase transition from red alpha phase to pale yellow beta phase.
Bis(dimethylammonium) tetrachloronickelate(II) ([(CH3)2NH2]2NiCl4) is a raspberry-red compound, which becomes blue at about 110 °C. On cooling, the compound becomes a light yellow metastable phase, which over 2–3 weeks turns back into original red. Many other tetrachloronickelates are also thermochromic.
Bis(diethylammonium) tetrachlorocuprate(II) ([(CH3CH2)2NH2]2CuCl4) is a bright green solid material, which at 52–53 °C reversibly changes color to yellow. The color change is caused by relaxation of the hydrogen bonds and subsequent change of geometry of the copper-chlorine complex from planar to deformed tetrahedral, with appropriate change of arrangement of the copper atom's d-orbitals. There is no stable intermediate, the crystals are either green or yellow.
Chromium(III) oxide and aluminium(III) oxide in a 1:9 ratio is red at room temperature and grey at 400 °C, due to changes in its crystal field.
Vanadium dioxide has been investigated for use as a "spectrally-selective" window coating to block infrared transmission and reduce the loss of building interior heat through windows. This material behaves like a semiconductor at lower temperatures, allowing more transmission, and like a conductor at higher temperatures, providing much greater reflectivity. The phase change between transparent semiconductive and reflective conductive phase occurs at 68 °C; doping the material with 1.9% of tungsten lowers the transition temperature to 29 °C.
Other thermochromic solid semiconductor materials include
CdxZn1−xSySe1−y (x = 0.5–1, y = 0.5–1),
ZnxCdyHg1−x−yOaSbSecTe1−a−b−c (x = 0–0.5, y = 0.5–1, a = 0–0.5, b = 0.5–1, c = 0–0.5),
HgxCdyZn1−x−ySbSe1−b (x = 0–1, y = 0–1, b = 0.5–1).
Many tetraorganodiarsine, -distibine, and -dibismuthine compounds are strongly thermochromic. The color changes arise because they form van der Waals chains when cold, and the intermolecular spacing is sufficiently short for orbital overlap. The energy levels of the resulting bands then depend on the intermolecular distance, which varies with temperature.
Some minerals are thermochromic as well; for example some chromium-rich pyropes, normally reddish-purplish, become green when heated to about 80 °C.
Irreversible inorganic thermochromes
Some materials change color irreversibly. These can be used for e.g. laser marking of materials.
Copper(I) iodide is a solid pale tan material transforming at 60–62 °C to orange color.
Ammonium metavanadate is a white material, turning to brown at 150 °C and then to black at 170 °C.
Manganese violet (Mn(NH4)2P2O7) is a violet material, a popular pigment, turning to white at 400 °C.
Applications in buildings
Thermochromic materials, in the form of coatings, can be applied in buildings as a passive energy retrofit technique. Thermochromic coatings are characterized as active, dynamic and adaptive materials that can adjust their optical properties according to external stimuli, usually temperature. Thermochromic coatings modulate their reflectance as a function of their temperature, making them an appropriate solution for combating cooling loads without diminishing the building's thermal performance during the winter period.
Thermochromic materials are categorized into two subgroups, dye-based and non-dye-based thermochromic materials. However, the only class of dye-based thermochromic materials that is widely commercially available and has been applied and tested in buildings is the leuco dyes.
References
Inks
Chromism
Heat transfer | Thermochromism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,239 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Spectrum (physical sciences)",
"Chromism",
"Materials science",
"Thermodynamics",
"Smart materials",
"Spectroscopy",
"Thermochromism"
] |
1,584,795 | https://en.wikipedia.org/wiki/Opsonin | Opsonins are extracellular proteins that, when bound to substances or cells, induce phagocytes to phagocytose the substances or cells with the opsonins bound. Thus, opsonins act as tags to label things in the body that should be phagocytosed (i.e. eaten) by phagocytes (cells that specialise in phagocytosis, i.e. cellular eating). Different types of things ("targets") can be tagged by opsonins for phagocytosis, including: pathogens (such as bacteria), cancer cells, aged cells, dead or dying cells (such as apoptotic cells), excess synapses, or protein aggregates (such as amyloid plaques). Opsonins help clear pathogens, as well as dead, dying and diseased cells.
Opsonins were discovered and named "opsonins" in 1904 by Wright and Douglas, who found that incubating bacteria with blood plasma enabled phagocytes to phagocytose (and thereby destroy) the bacteria. They concluded that: “We have here conclusive proof that the blood fluids modify the bacteria in a manner which renders them a ready prey to the phagocytes. We may speak of this as an “opsonic” effect (opsono - I cater for; I prepare victuals for), and we may employ the term “opsonins” to designate the elements in the blood fluids which produce this effect.”
Subsequent research found two main types of opsonin in blood that opsonised bacteria: complement proteins and antibodies. However, there are now known to be at least 50 proteins that act as opsonins for pathogens or other targets.
Mechanisms
Opsonins induce phagocytosis of targets by binding the targets (e.g. bacteria) and then also binding phagocytic receptors on phagocytes. Thus, opsonins act as bridging molecules between the target and the phagocyte, bringing them into contact, and then usually activating the phagocytic receptor to induce engulfment of the target by the phagocyte.
All cell membranes have negative charges (zeta potential) which makes it difficult for two cells to come close together. When opsonins bind to their targets they boost the kinetics of phagocytosis by favoring interaction between the opsonin and cell surface receptors on immune cells. This overrides the negative charges from cell membranes.
It is important that opsonins do not tag healthy, non-pathogenic cells for phagocytosis, as phagocytosis results in digestion and thus destruction of targets. Therefore, some opsonins (including some complement proteins) have evolved to bind pathogen-associated molecular patterns, molecules found only on the surface of pathogens, enabling phagocytosis of these pathogens and thus innate immunity. Antibodies bind to antigens on the pathogen surface, enabling adaptive immunity. Opsonins that opsonise host body cells (e.g. GAS6, which opsonises apoptotic cells) bind to "eat-me" signals (such as phosphatidylserine) exposed by dead, dying or stressed cells.
Types
Opsonins are related to the two types of immune systems: the adaptive immune system and the innate immune system.
Adaptive
Antibodies are synthesized by B cells and are secreted in response to recognition of specific antigenic epitopes, and bind only to specific epitopes (regions) on an antigen. They comprise the adaptive opsonization pathway, and are composed of two fragments: antigen binding region (Fab region) and the fragment crystallizable region (Fc region). The Fab region is able to bind to a specific epitope on an antigen, such as a specific region of a bacterial surface protein. The Fc region of IgG is recognized by the Fc Receptor (FcR) on natural killer cells and other effector cells; the binding of IgG to antigen causes a conformational change that allows FcR to bind the Fc region and initiate attack on the pathogen through the release of lytic products. Antibodies may also tag tumor cells or virally infected cells, with NK cells responding via the FcR; this process is known as antibody-dependent cellular cytotoxicity (ADCC).
Both IgM and IgG undergo conformational change upon binding antigen that allows complement protein C1q to associate with the Fc region of the antibody. C1q association eventually leads to the recruitment of complement C4b and C3b, both of which are recognized by complement receptor 1, 3, and 4 (CR1, CR3, CR4), which are present on most phagocytes. In this way, the complement system participates in the adaptive immune response.
C3d, a cleavage product of C3, recognizes pathogen-associated molecular patterns (PAMPs) and can opsonize molecules to the CR2 receptor on B cells. This lowers the threshold of interaction required for B cell activation via the B cell receptor, and aids in the activation of the adaptive response.
Innate
The complement system, independently of the adaptive immune response, is able to opsonize pathogen before adaptive immunity may even be required. Complement proteins involved in innate opsonization include C4b, C3b and iC3b. In the alternative pathway of complement activation, circulating C3b is deposited directly onto antigens with particular PAMPs, such as lipopolysaccharides on gram-negative bacteria. C3b is recognized by CR1 on phagocytes. iC3b attaches to apoptotic cells and bodies and facilitates clearance of dead cells and remnants without initiating inflammatory pathways, through interaction with CR3 and CR4 on phagocytes.
Mannose-binding lectins, or ficolins, along with pentraxins and collectins are able to recognize certain types of carbohydrates that are expressed on the cell membranes of bacteria, fungi, viruses, and parasites, and can act as opsonin by activating the complement system and phagocytic cells.
Targets
Apoptotic cells
A number of opsonins play a role in marking apoptotic cells for phagocytosis without a pro-inflammatory response.
Members of the pentraxin family can bind to apoptotic cell membrane components like phosphatidylcholine (PC) and phosphatidylethanolamine (PE). IgM antibodies also bind to PC. Collectin molecules such as mannose-binding lectin (MBL), surfactant protein A (SP-A), and SP-D interact with unknown ligands on apoptotic cell membranes. When bound to the appropriate ligand these molecules interact with phagocyte receptors, enhancing phagocytosis of the marked cell.
C1q is capable of binding directly to apoptotic cells. It can also indirectly bind to apoptotic cells via intermediates like IgM autoantibodies, MBL, and pentraxins. In both cases C1q activates complement, resulting in the cells being marked for phagocytosis by C3b and C4b. C1q is an important contributor to the clearance of apoptotic cells and debris. This process usually occurs in late apoptotic cells.
Opsonization of apoptotic cells occurs by different mechanisms in a tissue-dependent pattern. For example, while C1q is necessary for proper apoptotic cell clearance in the peritoneal cavity, it is not important in the lungs where SP-D plays an important role.
Pathogens
As part of the late stage adaptive immune response, pathogens and other particles are marked by IgG antibodies. These antibodies interact with Fc receptors on macrophages and neutrophils resulting in phagocytosis. The C1 complement complex can also interact with the Fc region of IgG and IgM immune complexes activating the classical complement pathway and marking the antigen with C3b. C3b can spontaneously bind to pathogen surfaces through the alternative complement pathway. Furthermore, pentraxins can directly bind to C1q from the C1 complex.
SP-A opsonizes a number of bacterial and viral pathogens for clearance by lung alveolar macrophages.
See also
Antibody opsonization
References
External links
Immune system | Opsonin | [
"Biology"
] | 1,736 | [
"Immune system",
"Organ systems"
] |
1,584,829 | https://en.wikipedia.org/wiki/Anethole | Anethole (also known as anise camphor) is an organic compound that is widely used as a flavoring substance. It is a derivative of the aromatic compound allylbenzene and occurs widely in the essential oils of plants. It is in the class of phenylpropanoid organic compounds. It contributes a large component of the odor and flavor of anise and fennel (both in the botanical family Apiaceae), anise myrtle (Myrtaceae), liquorice (Fabaceae), magnolia blossoms, and star anise (Schisandraceae). Closely related to anethole is its isomer estragole, which is abundant in tarragon (Asteraceae) and basil (Lamiaceae), and has a flavor reminiscent of anise. It is a colorless, fragrant, mildly volatile liquid. Anethole is only slightly soluble in water but exhibits high solubility in ethanol. This trait causes certain anise-flavored liqueurs to become opaque when diluted with water; this is called the ouzo effect.
Structure and production
Anethole is an aromatic, unsaturated ether related to lignols. It exists as both cis–trans isomers (see also E–Z notation), involving the double bond outside the ring. The more abundant isomer, and the one preferred for use, is the trans or E isomer.
Like related compounds, anethole is poorly soluble in water. Historically, this property was used to detect adulteration in samples.
Most anethole is obtained from turpentine-like extracts from trees. Of only minor commercial significance, anethole can also be isolated from essential oils.
Currently Banwari Chemicals Pvt Ltd situated in Bhiwadi, Rajasthan, India is the leading manufacturer of anethole. It is prepared commercially from 4-methoxypropiophenone, which is prepared from anisole.
Uses
Flavoring
Anethole is distinctly sweet, measuring 13 times sweeter than sugar. It is perceived as being pleasant to the taste even at higher concentrations. It is used in alcoholic drinks ouzo, rakı, anisette and absinthe, among others. It is also used in seasoning and confectionery applications, such as German Lebkuchen, oral hygiene products, and in small quantities in natural berry flavors.
Precursor to other compounds
Because they metabolize anethole into several aromatic chemical compounds, some bacteria are candidates for use in commercial bioconversion of anethole to more valuable materials. Bacterial strains capable of using trans-anethole as the sole carbon source include JYR-1 (Pseudomonas putida) and TA13 (Arthrobacter aurescens).
Research
Antimicrobial and antifungal activity
Anethole has potent antimicrobial properties against bacteria, yeasts, and fungi. Reported antibacterial properties include both bacteriostatic and bactericidal action against Salmonella enterica, but not when used against Salmonella via a fumigation method. Antifungal activity includes increasing the effectiveness of some other phytochemicals (such as polygodial) against Saccharomyces cerevisiae and Candida albicans.
In vitro, anethole has antihelmintic action on eggs and larvae of the sheep gastrointestinal nematode Haemonchus contortus. Anethole also has nematicidal activity against the plant nematode Meloidogyne javanica in vitro and in pots of cucumber seedlings.
Insecticidal activity
Anethole also is a promising insecticide. Several essential oils consisting mostly of anethole have insecticidal action against larvae of the mosquito Ochlerotatus caspius and Aedes aegypti. In a similar manner, anethole itself is effective against the fungus gnat Lycoriella ingenua (Sciaridae) and the mold mite Tyrophagus putrescentiae. Against the mite, anethole is a slightly more effective pesticide than DEET, but anisaldehyde, a related natural compound that occurs with anethole in many essential oils, is 14 times more effective. The insecticidal action of anethole is greater as a fumigant than as a contact agent. trans-Anethole is highly effective as a fumigant against the cockroach Blattella germanica and against adults of the weevils Sitophilus oryzae, Callosobruchus chinensis and beetle Lasioderma serricorne.
As well as an insect pesticide, anethole is an effective insect repellent against mosquitos.
Ouzo effect
Anethole is responsible for the "ouzo effect" (also "louche effect"), the spontaneous formation of a microemulsion that gives many alcoholic beverages containing anethole and water their cloudy appearance. Such a spontaneous microemulsion has many potential commercial applications in the food and pharmaceutical industries.
Precursor to illicit drugs
Anethole is an inexpensive chemical precursor for paramethoxyamphetamine (PMA), and is used in its clandestine manufacture. Anethole is present in the essential oil from guarana, which has psychoactive effects typically attributed to its caffeine content. The absence of PMA or any other known psychoactive derivative of anethole in human urine after ingestion of guarana leads to the conclusion that any psychoactive effect of guarana is not due to aminated anethole metabolites.
Anethole is also present in absinthe, a liquor with a reputation for psychoactive effects; these effects, however, are attributed to ethanol. (See also thujone, anethole dithione (ADT), and anethole trithione (ATT).)
Estrogen and prolactin
Anethole has estrogenic activity. It has been found to significantly increase uterine weight in immature female rats.
Fennel, which contains anethole, has been found to have a galactagogue effect in animals. Anethole bears a structural resemblance to catecholamines like dopamine and may displace dopamine from its receptors and thereby disinhibit prolactin secretion, which in turn may be responsible for the galactagogue effects.
Safety
In the USA, anethole is generally recognized as safe (GRAS). After a hiatus due to safety concerns, anethole was reaffirmed by Flavor and Extract Manufacturers Association (FEMA) as GRAS. The concerns related to liver toxicity and possible carcinogenic activity reported in rats. Anethole is associated with a slight increase in liver cancer in rats, although the evidence is scant and generally regarded as evidence that anethole is not a carcinogen. An evaluation of anethole by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) found its notable pharmacologic properties to be reduction in motor activity, lowering of body temperature, and hypnotic, analgesic, and anticonvulsant effects. A subsequent evaluation by JECFA found some reason for concern regarding carcinogenicity, but there is currently insufficient data to support this. At this time, the JECFA summary of these evaluations is that anethole has "no safety concern at current levels of intake when used as a flavoring agent".
In large quantities, anethole is slightly toxic and may act as an irritant.
History
That an oil could be extracted from anise and fennel had been known since the Renaissance by the German alchemist Hieronymus Brunschwig (), the German botanist Adam Lonicer (1528–1586), and the German physician Valerius Cordus (1515–1544), among others. Anethole was first investigated chemically by the Swiss chemist Nicolas-Théodore de Saussure in 1820. In 1832, the French chemist Jean Baptiste Dumas determined that the crystallizable components of anise oil and fennel oil were identical, and he determined anethole's empirical formula. In 1845, the French chemist Charles Gerhardt coined the term anethol – from the Latin anethum (anise) + oleum (oil) – for the fundamental compound from which a family of related compounds was derived. Although the German chemist Emil Erlenmeyer proposed the correct molecular structure for anethole in 1866, it was not until 1872, that the structure was accepted as correct.
See also
:Category:Anise liqueurs and spirits
List of liqueurs § Anise-flavored liqueurs
Anol
Chavicol
Dianethole
Fenchone
Pseudoisoeugenol
Safrole
References
External links
Flavors
Sugar substitutes
Essential oils
Phenylpropenes
O-methylated natural phenols
Estrogens | Anethole | [
"Chemistry"
] | 1,866 | [
"Essential oils",
"Natural products"
] |
1,584,835 | https://en.wikipedia.org/wiki/Smart%20key | A smart key is a vehicular passive entry system developed by Siemens in 1995 and introduced by Mercedes-Benz under the name "Keyless-Go" in 1998 on the W220 S-Class, after the design patent was filed by Daimler-Benz on May 17, 1997.
Numerous manufacturers subsequently developed similar passive systems that unlock a vehicle on approach — while the key remains pocketed by the user.
Operation
The smart key allows the driver to keep the key fob pocketed when unlocking, locking and starting the vehicle. The key is identified via one of several antennas in the car's bodywork and an ISM band radio pulse generator in the key housing. Depending on the system, the vehicle is automatically unlocked when a button or sensor on the door handle or trunk release is pressed. Vehicles with a smart-key system have a mechanical backup, usually in the form of a spare key blade supplied with the vehicle. Some manufacturers hide the backup lock behind a cover for styling.
Vehicles with a smart-key system can disengage the immobilizer and activate the ignition without inserting a key in the ignition, provided the driver has the key inside the car. On most vehicles, this is done by pressing a starter button or twisting an ignition switch.
When leaving a vehicle that is equipped with a smart-key system, the vehicle is locked by either pressing a button on a door handle, touching a capacitive area on a door handle, or simply walking away from the vehicle. The method of locking varies across models.
Some vehicles automatically adjust settings based on the smart key used to unlock the car. User preferences such as seat positions, steering wheel position, exterior mirror settings, climate control (e.g. temperature) settings, and stereo presets are popular adjustments. Some models, such as the Ford Escape, even have settings to prevent the vehicle from exceeding a maximum speed if it has been started with a certain key.
Insurance standard
In 2005, the UK motor insurance research expert Thatcham introduced a standard for keyless entry, requiring the device to be inoperable at a distance of more than 10 cm from the vehicle. In an independent test, the Nissan Micra's system was found to be the most secure, while certain BMW and Mercedes keys failed, being theoretically capable of allowing cars to be driven away while their owners were refueling. Despite these security vulnerabilities, auto theft rates have decreased 7 percent between 2009 and 2010, and the National Insurance Crime Bureau credits smart keys for this decrease.
SmartKeys
SmartKeys were developed by Siemens in the mid-1990s and introduced by Mercedes-Benz in 1997 to replace the infrared security system introduced in 1989. Daimler-Benz filed the first patents for SmartKey on February 28, 1997, in German patent offices, with multifunction switchblade key variants following on May 17, 1997. The device is a plastic key used in place of the traditional metal key. Electronics that control the locking system and the ignition made it possible to replace the traditional key with a sophisticated computerized "key". It is considered a step up from remote keyless entry. The SmartKey adopts the remote control buttons from keyless entry and incorporates them into the SmartKey fob.
Once inside a Mercedes-Benz vehicle, the SmartKey fob, unlike keyless entry fobs, is placed in the ignition slot where a starter computer verifies the rolling code. Verified in milliseconds, it can then be turned as a traditional key to start the engine. The device was designed with the cooperation of Siemens Automotive and Huf exclusively for Mercedes-Benz, but many luxury manufacturers have implemented similar technology based on the same idea. In addition to the SmartKey, Mercedes-Benz now integrates as an option Keyless Go; this feature allows the driver to keep the SmartKey in their pocket, yet giving them the ability to open the doors, trunk as well as starting the car without ever removing it from their pocket.
The SmartKey's electronics are embedded in a hollow, triangular piece of plastic, wide at the top, narrow at the bottom, squared-off at the tip with a half-inch-long insert piece. The side of the SmartKey also hides a traditional Mercedes-Benz key that can be pulled out from a release at the top. The metal key is used for valet purposes such as locking the glove compartment and/or trunk before the SmartKey is turned over to a parking attendant. Once locked manually, the trunk cannot be opened with the SmartKey or interior buttons. The key fob utilizes a radio-frequency transponder to communicate with the door locks, but it uses infrared to communicate with the engine immobilizer system. The original SmartKeys had a limited frequency and could have only been used in line-of-sight for safety purposes. The driver can also point the smart key at the front driver side door while pushing and holding the unlock button on the SmartKey and the windows and the sunroof will open in order to ventilate the cabin. Similarly, if the same procedure is completed while holding the lock button, the windows and sunroof will close. In cars equipped with the Active Ventilated Seats, the summer opening feature will activate seat ventilation in addition to opening the windows and sunroof.
Display Key
Display Key is a type of smart key developed by BMW that includes a small LCD color touchscreen on it. It performs the standard functions that a key fob would normally do such as locking, unlocking & keyless start, but because of the screen the user can also perform a number of the features from BMW's app. One of which includes commanding the car to self park from the key if your car has self parking capability. The key is currently available for the 3 Series, 4 Series, 5 Series, 6 Series, 7 Series, 8 Series, X3, X4, X5, X6, and X7. The key is rechargeable and will last about 3 weeks. It can be charged via a micro USB port on the side or wirelessly on the center console.
Keyless Go
Keyless Go (also: Keyless Entry / Go; Passive Entry / Go) is Mercedes' term for an automotive technology which allows a driver to lock and unlock a vehicle without using the corresponding SmartKey buttons. Once a driver enters a vehicle with an equipped Keyless Go SmartKey or Keyless Go wallet-size card, they have the ability to start and stop the engine, without inserting the SmartKey. A transponder built within the SmartKey allows the vehicle to identify a driver. An additional safety feature is integrated into the vehicle, making it impossible to lock a SmartKey with Keyless Go inside a vehicle.
The system works by having a series of LF (low frequency 125 kHz) transmitting antennas both inside and outside the vehicle. The external antennas are located in the door handles. When the vehicle is triggered, either by pulling the handle or touching the handle, an LF signal is transmitted from the antennas to the key. The key becomes activated if it is sufficiently close and it transmits its ID back to the vehicle via RF (Radio frequency >300 MHz) to a receiver located in the vehicle. If the key has the correct ID, the PASE module unlocks the vehicle.
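The exchange can be pictured as a short challenge–response protocol: the car wakes the key over LF, the key proves possession of a shared secret over RF, and the PASE module unlocks if the proof verifies. The sketch below is a generic illustration with invented names and a plain HMAC; it is not Mercedes' or any supplier's actual protocol.

```python
import hmac, hashlib, os

SHARED_SECRET = os.urandom(16)   # programmed into both car and key at pairing

def key_fob_respond(lf_challenge: bytes) -> bytes:
    """Key fob side: woken by the 125 kHz LF challenge, answers over RF."""
    return hmac.new(SHARED_SECRET, lf_challenge, hashlib.sha256).digest()

def car_try_unlock(handle_pulled: bool) -> bool:
    """Vehicle side: send a fresh challenge, verify the RF response."""
    if not handle_pulled:
        return False
    challenge = os.urandom(8)                # fresh nonce per attempt (no replay)
    response = key_fob_respond(challenge)    # travels back over the >300 MHz RF link
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(car_try_unlock(handle_pulled=True))    # True when the paired key is in range
```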
The hardware blocks of a Keyless Entry / Go electronic control unit (ECU) correspond to its functions:
transmitting low-frequency LF signals via the 125 kHz power amplifier block
receiving radio frequency RF signals (> 300 MHz) from the built-in ISM receiver block
encrypting and decrypting all relevant data signals (security)
communicating relevant interface signals with other electronic control units
microcontroller
Inside Outside detection
The smart key determines if it is inside or outside the vehicle by measuring the strength of the LF fields. In order to start the vehicle, the smart key must be inside the vehicle.
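A crude way to picture the decision: the key reports the LF field strength it measures from each antenna, and the car compares the pattern against calibrated thresholds. The snippet below is a toy threshold check with invented antenna names and levels, not a production algorithm.

```python
# Invented calibration values (arbitrary units), fixed at end-of-line test.
CALIBRATED_INSIDE_MIN = {"console": 70, "rear_shelf": 40}

def key_is_inside(rssi_by_antenna):
    """Treat the key as inside only if every interior antenna is heard strongly."""
    return all(rssi_by_antenna.get(name, 0) >= level
               for name, level in CALIBRATED_INSIDE_MIN.items())

print(key_is_inside({"console": 82, "rear_shelf": 55}))  # True  -> allow engine start
print(key_is_inside({"console": 30, "rear_shelf": 5}))   # False -> key is outside
```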
Security requirements
It is important that the vehicle can't be started when the user and therefore the smart key is outside the vehicle. This is especially important at fueling stations where the user is very close to the vehicle. The internal LF field is allowed to overshoot by a maximum of 10 cm to help minimise this risk. Maximum overshoot is usually found on the side windows where there is very little attenuation of the signal.
A second scenario is known as the "relay station attack" (RSA). An RSA bridges the long physical distance between the car and the regular owner's SmartKey using two relay stations: the first is placed near the car and the second close to the SmartKey. To the vehicle it then appears as if the Keyless Entry / Go ECU and the SmartKey were communicating directly, so a third person at the car could pull the door handle and the door would open. However, every Keyless Entry / Go system includes provisions to prevent a successful two-way communication via RSA (a timing-based sketch follows the list below). Some of the best known are:
measuring group delay time to detect illegal high values
measuring third-order intercept point to detect illegal intermodulation products
measuring field strength of the electric field
measuring the response time of 125 kHz LC circuit
using a more complex modulation (i.e. quadrature amplitude modulation) which can't be demodulated and modulated by a simple relay station
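The first countermeasure listed, measuring group delay, amounts to a crude distance bound: a relayed response arrives measurably later than a genuine one. The sketch below shows the idea with an invented time budget and stand-in callbacks; real systems work at much finer time scales and in hardware.

```python
import time

MAX_ROUND_TRIP_S = 2e-3   # invented budget; relay stations add extra delay

def challenge_with_timing(send_challenge, receive_and_verify_response):
    """Accept the key's response only if it is valid and arrives within budget."""
    t0 = time.perf_counter()
    send_challenge()
    response_ok = receive_and_verify_response()   # cryptographic check as usual
    round_trip = time.perf_counter() - t0
    return response_ok and round_trip <= MAX_ROUND_TRIP_S

# Demo with stand-in callbacks: a genuine key answers almost instantly.
print(challenge_with_timing(lambda: None, lambda: True))   # True
```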
Furthermore, Keyless Entry / Go communicates with other control units within the same vehicle. Depending on the car's electrical architecture, the following are some control systems that can be enabled or disabled:
ESCL Electric Steering Column Lock
EIS Electronic Ignition Switch
Central door locking system
Immobiliser
Engine Control Unit (Motor management system)
BCU Body control unit
Another possibility is using a motion sensor within the key fob.
Internal LF field dead spots
Dead spots are a result of the maximum overshoot requirement from above. The power delivered to the internal LF antennas has to be tuned to provide the best performance i.e. minimum dead spots and maximum average overshoot. Dead spots are usually near the extremities of the vehicle e.g. the rear parcel shelf.
Battery backup
If the battery in the smart key becomes depleted, it is necessary for there to be a backup method of opening and starting the vehicle. Opening is achieved by an emergency (fully mechanical) key blade usually hidden in the smart key. On many cars emergency starting is achieved by use of an inductive coupling. The user either has to put the key in a slot or hold it near a special area on the cockpit, where there is an inductive coil hidden behind which transfers energy to a matching coil in the dead key fob using inductive charging.
Slots have proven to be problematic, as they can go wrong and the key becomes locked in and cannot be removed. Another problem with the slot is that it cannot compensate for a fob battery below a certain operating threshold. Most smart key batteries are temperature sensitive, causing the fob to become intermittent, fully functional, or inoperative all in the same day.
Special cases
A Keyless Entry / Go system should be able to detect and handle most of the following cases:
SmartKey Transponder was forgotten in the rear trunk
More than one SmartKey is present inside the car
SmartKey getting lost during the drive
Smartkey battery low (Limp-Home)
Effectiveness
A test by ADAC revealed that 20 car models with Keyless Go could be entered and driven away without the key. In 2014, 6,000 cars (about 17 per day) were stolen using keyless entry in London.
See also
Remote keyless system
Transponder car key
References
External links
Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars
Security Flaws of Remote
Smart devices
Automotive technology tradenames
Radio electronics
Vehicle security systems | Smart key | [
"Technology",
"Engineering"
] | 2,421 | [
"Home automation",
"Smart devices",
"Radio electronics"
] |
1,584,889 | https://en.wikipedia.org/wiki/Hofmann%20rearrangement | The Hofmann rearrangement (Hofmann degradation) is the organic reaction of a primary amide to a primary amine with one less carbon atom. The reaction involves oxidation of the nitrogen followed by rearrangement of the carbonyl and nitrogen to give an isocyanate intermediate. The reaction can form a wide range of products, including alkyl and aryl amines.
The reaction is named after its discoverer, August Wilhelm von Hofmann, and should not be confused with the Hofmann elimination, another name reaction for which he is eponymous.
Mechanism
The reaction of bromine with sodium hydroxide forms sodium hypobromite in situ, which transforms the primary amide into an intermediate isocyanate. The formation of an intermediate nitrene is not possible, because it would also imply the formation of a hydroxamic acid as a byproduct, which has never been observed. The intermediate isocyanate is hydrolyzed to a primary amine, giving off carbon dioxide.
Base abstracts an acidic N-H proton, yielding an anion.
The anion reacts with bromine in an α-substitution reaction to give an N-bromoamide.
Base abstraction of the remaining amide proton gives a bromoamide anion.
The bromoamide anion rearranges as the R group attached to the carbonyl carbon migrates to nitrogen at the same time the bromide ion leaves, giving an isocyanate.
The isocyanate adds water in a nucleophilic addition step to yield a carbamic acid.
The carbamic acid spontaneously loses CO2, yielding the amine product.
Variations
Several reagents can be substituted for bromine. Sodium hypochlorite, lead tetraacetate, N-bromosuccinimide, and (bis(trifluoroacetoxy)iodo)benzene can effect a Hofmann rearrangement.
The intermediate isocyanate can be trapped with various nucleophiles to form stable carbamates or other products rather than undergoing decarboxylation. In the following example, the intermediate isocyanate is trapped by methanol.
In a similar fashion, the intermediate isocyanate can be trapped by tert-butyl alcohol, yielding the tert-butoxycarbonyl (Boc)-protected amine.
The Hofmann Rearrangement also can be used to yield carbamates from α,β-unsaturated or α-hydroxy amides or nitriles from α,β-acetylenic amides in good yields (≈70%).
Applications
In the preparation of anthranilic acid from phthalimide
Nicotinamide is converted into 3-Aminopyridine
The stereochemical configuration of α-phenylpropanamide does not change after the Hofmann reaction; the migrating group moves with retention of configuration.
In the synthesis of gabapentin, beginning with the mono-amidation of 1,1-cyclohexane diacetic acid anhydride with ammonia to 1,1-cyclohexane diacetic acid mono-amide, followed by a Hofmann rearrangement
See also
Beckmann rearrangement
Curtius rearrangement
Iodoform reaction
Lossen rearrangement
Schmidt reaction
Weerman degradation
References
Bibliography
Rearrangement reactions
Degradation reactions
Name reactions | Hofmann rearrangement | [
"Chemistry"
] | 707 | [
"Name reactions",
"Degradation reactions",
"Rearrangement reactions",
"Organic reactions"
] |
1,585,092 | https://en.wikipedia.org/wiki/Institute%20of%20Physical%20Chemistry%20of%20the%20Polish%20Academy%20of%20Sciences | The Institute of Physical Chemistry of the Polish Academy of Sciences (Polish Instytut Chemii Fizycznej Polskiej Akademii Nauk, IChF PAN) is one of numerous institutes belonging to the Polish Academy of Sciences. As its name suggests, the institute's primary research interests are in the field of physical chemistry.
History
The Institute was established by a resolution of the Presidium of the Government of the Polish People's Republic on 19 March 1955. It was the first chemical institute of the Polish Academy of Sciences. Its tasks were defined at that time: "The Institute of Physical Chemistry covers research on current issues of physical chemistry important from the point of view of the development of chemical sciences and the needs of the national economy".
At the beginning of its activity, the main task was to prepare scientific staff who would be able to conduct scientific research in the field of physical chemistry. The development of scientific staff was facilitated because the employed scientific workers did not have the teaching burdens required in higher education institutions.
The first Director of the Institute and, at the same time, the Chairman of the Scientific Council of the Institute was prof. Wojciech Świętosławski. The subsequent directors of the Institute were prof. Michał Śmiałowski (1960–1973), prof. Wojciech Zielenkiewicz (1973–1990), prof. Jan Popielawski (1990–1992), prof. Janusz Lipkowski (1992–2003), prof. Aleksander Jabłoński (2003–2011), prof. Robert Hołyst (2011–2015), prof. Marcin Opałło (2015-2023), dr hab. Adam Kubas (since 2023).
Over the following years, the structure of the IChF changed, the number of employees increased, and new research topics emerged, which is reflected in the current structure of the Institute.
Structure
The Institute is divided into research departments, within which research teams operate:
Department of Physical Chemistry of Biological Systems (Head: prof. Maciej Wojtkowski).
Team leaders: prof. M. Wojtkowski, dr. Jan Guzowski and dr. hab. Jan Paczesny
Department of Physical Chemistry of Soft Matter (Head: prof. Robert Hołyst)
Team leaders: dr. hab. Jacek Gregorowicz, dr. hab. Volodymyr Sashuk, prof. Robert Hołyst, prof. Piotr Garstecki, dr. hab. Marco Costantini
Department of Catalysis on Metals (Head: dr. hab. Zbigniew Kaszkur)
Team leaders: dr. hab. Zbigniew Kaszkur, prof. Rafał Szmigielski and dr. hab. Juan Carlos Colmenares Quintero
Department of Electrode Processes (Head: prof. Marcin Opałło)
Team leaders: prof. Joanna Niedziółka-Jönsson, dr hab. Martin Jönsson-Niedziółka, dr Wojciech Nogala, prof. Marcin Opałło and dr inż Emilia Witkowska-Nery
Department of Complex Systems and Chemical Information Processing (Head: prof. Jerzy Górecki)
Team leaders: dr hab. Wojciech Góźdź and prof. Jerzy Górecki
Department of Photochemistry and Spectroscopy (Head: prof. Jacek Waluk)
Team leaders: dr hab. Agnieszka Michota-Kamińska, dr hab. Gonzalo Angulo Nunez, prof. Robert Kołos, dr hab. Yuriy Stepanenko and prof. Jacek Waluk
Independent teams
Leaders: prof. Janusz Lewiński, dr Bartłomiej Wacław, dr Piyush Sindhu Sharma, dr hab. Adam Kubas, prof. Robert Nowakowski, dr hab. Daniel Prochowicz and dr Tomasz Ratajczyk
International Centre for Translational Eye Research - ICTER (International Centre for Translational Eye Research), headed by Professor Maciej Wojtkowski. ICTER's strategic foreign partner is the Institute of Ophthalmology, University College London, in the United Kingdom. The Centre's international scientific partner is the University of California, Irvine, United States. The primary scientific priority of ICTER is to thoroughly investigate the dynamics and plasticity of the human eye to develop new therapies and diagnostic tools. Cutting-edge ICTER research is conducted at various levels of resolution, from single molecules to the entire architecture and function of the eye. ICTER has five research groups:
Physical Optics and Biophotonics Group (POB)
Integrated Structural Biology Group (ISB)
Ophthalmic Imaging and Technologies Group (IDoc)
Ocular Biology Group (OBi)
Computational Genomics Group (CGG)
Commercialization
The work conducted by the Institute has given rise to five companies, operating mainly in the field of medical diagnostics:
Scope Fluidics was founded in 2010 as the first spin-off of the Institute of Physical Chemistry of the Polish Academy of Sciences. The company aimed to commercialize microfluidic technologies developed at the Institute. Since its inception, the company has specialized in creating innovative solutions for medical diagnostics.
SERSitive - manufacturer of SERS substrates for a wide range of analytical sciences, such as pharmacy, forensic laboratories, border guard laboratories and medicine.
Siliquan - manufacturer of fluorescent silica nanomaterials.
Cell-IN offers a reagent enabling the introduction of various types of macromolecules (from polymers and proteins to DNA molecules) into cells.
InCellVu is developing a clinical form of a prototype device for in vivo imaging of the human retina using the new STOC-T method developed by scientists from the International Center for Eye Research.
External links
Institute of Physical Chemistry website
Institutes of the Polish Academy of Sciences
Chemistry organizations | Institute of Physical Chemistry of the Polish Academy of Sciences | [
"Chemistry"
] | 1,249 | [
"nan",
"Physical chemistry stubs",
"Chemistry organization stubs"
] |
1,585,142 | https://en.wikipedia.org/wiki/Saltillo%20tile | Saltillo tile is a type of terracotta tile that originates in Saltillo, Coahuila, Mexico. It is one of the two most famous products of the city, the other being multicolored woven sarapes typical of the region. Saltillo-type tiles are now manufactured at many places in Mexico, and high-fire "Saltillo look" tiles, many from Italy, compete with the terracotta originals.
Description
Saltillo tiles vary in color and shape, but the majority of Traditional Saltillo tiles come in varying hues of red, orange, and yellow. Manganese Saltillo tile has light and dark brown colors with some terracotta tones. Antique Saltillo tile has a hand-textured finish with deep terracotta tones of color. With its textured surface, Antique Saltillo tile is ideal for areas that need a non-slip surface. Spanish Mission Red Saltillo tile is similar to Traditional Saltillo tile, except it doesn't have as many of the light cream and golden colors.
Saltillo tile flooring can be found in many shapes and sizes. Tiles are shaped either by pressing quarried clay with a wooden frame (super), or carving out the desired shape (regular). Depending on the raw tile's placement among other tiles at the time of firing, its color ranges from yellow to a rich orange.
Rustic characteristics found in handmade Saltillo tile include bumps, chips, lime pops, color striping, color variations, size variations, thickness variations, concave back, fingerprints, irregular markings and smudges. These rustic characteristics lend to the appeal of rustic style flooring.
Installation and sealing
Saltillo tile is highly porous, and soaks in liquid easily. Unlike most ceramic tile, there is no glaze on the top surface of the tile. It is difficult to install as it absorbs water from the thin-set mortar, grout, grease pencils, etc. Once placed, it stains and scuffs easily if not properly sealed and maintained with a quality sealant. Saltillo is a poor choice for outdoor installation in freeze-thaw climates, although a popular choice in warmer climates. During installation, the tiles should be handled carefully to avoid stains that can even occur from body oils on the installer's hands. To avoid stain issues, consider sourcing presealed Saltillo tile vs. raw clay tile. The process of installing presealed tiles is simpler and less costly for the overall project.
Preferred methods for installation invariably relate to its propensity for soaking in liquid. One method involves soaking the tile in water, setting the tile with thin-set mortar, grouting, then sealing both the Saltillo and the grout with a quality surface sealant. However, using this method may cause the grout to stick to the surface of the Saltillo tile, making it impossible to remove. This method is not recommended for do-it-yourselfers.
A penetrating sealant will maintain the natural look of the tile. You should periodically test the seal by putting a few drops of water on the tile in various places. If the water is absorbed, then another coat of sealant should be applied.
Other surface sealants may give the tile a shiny appearance. As the tile loses its shine, another coat is applied on top of the old sealant. If the finish becomes too worn or uneven, it can be stripped and a new coat applied. However, this option is very labor-intensive.
Additionally, another coat of sealant can be used on both the Saltillo tile and grout. A professional with experience in Saltillo will charge $2.50 to $6.00 per square foot for installation, depending on your locale.
Treatments for Saltillo include coating the tiles with a surface sealant prior to grouting (as mentioned earlier), applying an admixture of linseed oil and paint thinner, applying natural stone color enhancers, applying floor hardeners, applying shine, painting them with a water-based paint, coating them with wood stain, etc. As the tile is incredibly porous, it will readily absorb just about any liquid. Note that while any of these treatments may be used on the tile, some of them, such as penetrating sealant, enhancers or linseed oil treatments, penetrate into the tile and may affect the ability of later coatings to adhere to it. Ultra-durable, water-based polyurethane makes an excellent coating for adding slip resistance, an attractive appearance, and protection from penetrating stains. Look for a polyurethane coating that has no VOCs for maximum environmental friendliness.
Saltillo tile may be sealed with a penetrating sealant or a film forming sealant (coating). A film forming sealant will leave a film on the surface of the tile. With multiple coats, the film will build an even protective film and gloss that may repel water, oil, grease, and efflorescence. A quality acrylic sealant should be used as it will be easy to apply, non-yellowing and long-lasting. A quality acrylic floor polish can be applied over the sealed surface for added abrasion and wear protection.
The finished sealed floor should be maintained for best results. For routine cleaning use a neutral cleaner to damp mop the floor (never flood the sealed floor with water). Reapply the polish if areas begin to show wear over time. Maintaining the sealant/polish will greatly extend the life of the sealant and minimize repair needs.
References
Pavements
Floors
Masonry
Building materials
Terracotta
Saltillo | Saltillo tile | [
"Physics",
"Engineering"
] | 1,148 | [
"Structural engineering",
"Masonry",
"Building engineering",
"Floors",
"Architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
1,585,155 | https://en.wikipedia.org/wiki/Weierstrass%20factorization%20theorem | In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root.
The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
Motivation
It is clear that any finite set of points in the complex plane has an associated polynomial whose zeroes are precisely at the points of that set. The converse is a consequence of the fundamental theorem of algebra: any polynomial function in the complex plane has a factorization
where is a non-zero constant and is the set of zeroes of .
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of additional terms in the product is demonstrated when one considers where the sequence is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that for each z, the factors must approach 1 as . So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point and furthermore introduce no more zeroes than those prescribed.
Weierstrass' elementary factors have these properties and serve the same purpose as the factors above.
The elementary factors
Consider the functions of the form for . At , they evaluate to and have a flat slope at order up to . Right after , they sharply fall to some small positive value. In contrast, consider the function which has no flat slope but, at , evaluates to exactly zero. Also note that for ,
Figure: plot of the first five Weierstrass elementary factors E_n(x), for n = 0, …, 4, on the interval [−1, 1].
The elementary factors, also referred to as primary factors, are functions that combine the properties of zero slope and zero value (see graphic):
E_0(z) = 1 - z, \qquad E_n(z) = (1 - z)\exp\left(\frac{z}{1} + \frac{z^2}{2} + \cdots + \frac{z^n}{n}\right) \quad (n \ge 1).
For and , one may express it as
and one can read off how those properties are enforced.
The utility of the elementary factors lies in the following lemma:
Lemma (15.8, Rudin): for |z| ≤ 1 and n = 0, 1, 2, …,
|1 - E_n(z)| \le |z|^{n+1}.
The two forms of the theorem
Existence of entire function with specified zeroes
Let {a_n} be a sequence of non-zero complex numbers such that |a_n| → ∞.
If {p_n} is any sequence of nonnegative integers such that for all r > 0,
\sum_{n=1}^{\infty} \left( \frac{r}{|a_n|} \right)^{1+p_n} < \infty,
then the function
f(z) = \prod_{n=1}^{\infty} E_{p_n}\!\left( \frac{z}{a_n} \right)
is entire with zeros only at points a_n. If a number z_0 occurs in the sequence {a_n} exactly m times, then the function f has a zero at z = z_0 of multiplicity m.
The sequence in the statement of the theorem always exists. For example, we could always take and have the convergence. Such a sequence is not unique: changing it at finite number of positions, or taking another sequence , will not break the convergence.
The theorem generalizes to the following: sequences in open subsets (and hence regions) of the Riemann sphere have associated functions that are holomorphic in those subsets and have zeroes at the points of the sequence.
Also the case given by the fundamental theorem of algebra is incorporated here. If the sequence is finite then we can take and obtain: .
The Weierstrass factorization theorem
Let f be an entire function, and let {a_n} be the non-zero zeros of f repeated according to multiplicity; suppose also that f has a zero at z = 0 of order m ≥ 0 (a zero of order m = 0 at z = 0 is taken to mean f(0) ≠ 0).
Then there exists an entire function g and a sequence of integers {p_n} such that
f(z) = z^m e^{g(z)} \prod_{n=1}^{\infty} E_{p_n}\!\left( \frac{z}{a_n} \right).
Examples of factorization
The trigonometric functions sine and cosine have the factorizations
\sin(\pi z) = \pi z \prod_{n=1}^{\infty} \left( 1 - \frac{z^2}{n^2} \right), \qquad \cos(\pi z) = \prod_{n=1}^{\infty} \left( 1 - \frac{4z^2}{(2n-1)^2} \right),
while the gamma function has factorization
\Gamma(z) = \frac{e^{-\gamma z}}{z} \prod_{n=1}^{\infty} \left( 1 + \frac{z}{n} \right)^{-1} e^{z/n},
where γ is the Euler–Mascheroni constant. The cosine identity can be seen as a special case of
\frac{1}{\Gamma(s-z)\,\Gamma(s+z)} = \frac{1}{\Gamma(s)^2} \prod_{n=0}^{\infty} \left( 1 - \frac{z^2}{(n+s)^2} \right)
for s = 1/2.
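For illustration, the sine product can be checked numerically with a short Python sketch (a truncated product; the function name and number of terms are illustrative, not from the sources cited):

```python
import math

def sine_product(z: float, terms: int = 2000) -> float:
    """Truncated Weierstrass product: pi*z * prod_{n=1..terms} (1 - z^2/n^2)."""
    result = math.pi * z
    for n in range(1, terms + 1):
        result *= 1.0 - (z * z) / (n * n)
    return result

for z in (0.25, 0.5, 1.5):
    # the truncated product approaches sin(pi*z) as the number of terms grows
    print(z, sine_product(z), math.sin(math.pi * z))
```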
Hadamard factorization theorem
A special case of the Weierstraß factorization theorem occurs for entire functions of finite order. In this case the can be taken independent of and the function is a polynomial. Thus where are those roots of that are not zero (), is the order of the zero of at (the case being taken to mean ), a polynomial (whose degree we shall call ), and is the smallest non-negative integer such that the seriesconverges. This is called Hadamard's canonical representation. The non-negative integer is called the genus of the entire function . The order of satisfies
In other words: If the order is not an integer, then is the integer part of . If the order is a positive integer, then there are two possibilities: or .
For example, , and are entire functions of genus .
See also
Mittag-Leffler's theorem
Wallis product, which can be derived from this theorem applied to the sine function
Blaschke product
Notes
External links
Theorems in complex analysis | Weierstrass factorization theorem | [
"Mathematics"
] | 1,146 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
1,585,223 | https://en.wikipedia.org/wiki/Beer%20distribution%20game | The beer distribution game (also known as the beer game) is an educational game that is used to experience typical coordination problems of a supply chain process. It reflects a role-play simulation where several participants play with each other. The game represents a supply chain with a non-coordinated process where problems arise due to lack of information sharing.
This game outlines the importance of information sharing, supply chain management and collaboration throughout a supply chain process. Due to lack of information, suppliers, manufacturers, sales people and customers often have an incomplete understanding of what the real demand of an order is. The most interesting part of the game is that each group has no control over another part of the supply chain; each group has significant control only over its own part. Each group can strongly influence the entire supply chain by ordering too much or too little, which can lead to a bullwhip effect. The ordering decisions of a group therefore also depend heavily on the decisions of the other groups.
History
The Beer Game was invented by Jay Wright Forrester at the MIT Sloan School of Management in 1960. The beer game was a result of his work on system dynamics.
Rules
In the beer game participants enact a four-stage supply chain. The task is to produce and deliver units of beer: the factory produces, and the other three stages deliver the beer units until it reaches the customer at the downstream end of the chain. The goal of the game is to meet customer demand with minimal expenditure on back orders and inventory.
The game is played in 24 rounds and in each round of the game the following four steps have to be performed:
Check deliveries: How many units of beer are being delivered to the player from the wholesaler.
Check orders: How many units the customer has ordered.
Deliver beer: Deliver as much beer as a player can to satisfy the demand (in this game the step is performed automatically).
Make order decision: Decide how many units are needed to order to maintain stock.
As previously said, there are four stages, manufacturer, distributor, supplier, retailer, with a two-week communication gap of orders toward the upstream and a two-week supply chain delay of product towards the downstream. There is a one-point cost for holding excess inventory and a one-point cost for any backlog (old backlog + orders - current inventory). In the board game version, players cannot see anything other than what is communicated to them through pieces of paper with numbers written on them, signifying orders or product. The retailer draws from a deck of cards for what the customer demands, and the manufacturer places an order which, in turn, becomes product in four weeks.
Players look to one another within their supply chain frantically trying to figure out where things are going wrong. The team or supply chain that achieves the lowest total costs wins. At the end during the debriefing, it is explained that these feelings are common and that reactions based on these feelings within supply chains create the bullwhip effect. The game illustrates in a compelling way the effects of poor system understanding and poor communication for even a relatively simple and idealized supply chain. Although players often raise the lack of perfect information about the customer orders as a primary reason for their poor team performance in the game, analysis of the minimum possible score using the optimal strategy under different conditions shows an expected value of perfect information of 0 for the standard game, and simulations that included giving players perfect information still showed poor team performance.
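For illustration, a much-simplified simulation of the four-stage chain can reproduce the flavour of the game. The Python sketch below uses a naive base-stock ordering policy; the parameter values, names, and policy are illustrative only and do not follow the official game rules or scoring:

```python
def simulate(weeks=24, delay=2, base_demand=4, new_demand=8, step_week=5, safety=4):
    n = 4                                   # 0 retailer, 1 wholesaler, 2 distributor, 3 factory
    inventory = [base_demand * delay] * n   # arbitrary starting stock
    backlog = [0] * n
    on_order = [base_demand * delay] * n    # units ordered upstream but not yet received
    pipeline = [[base_demand] * delay for _ in range(n)]   # shipments on their way to each stage
    orders = [[] for _ in range(n)]

    for week in range(weeks):
        demand = base_demand if week < step_week else new_demand   # customer demand at the retailer
        for stage in range(n):
            arrived = pipeline[stage].pop(0)            # receive this week's delivery
            inventory[stage] += arrived
            on_order[stage] -= arrived
            owed = demand + backlog[stage]
            shipped = min(owed, inventory[stage])
            inventory[stage] -= shipped
            backlog[stage] = owed - shipped
            if stage > 0:
                pipeline[stage - 1].append(shipped)     # goods travel downstream
            # naive base-stock policy: keep (inventory - backlog + on_order) at a level
            # sized from the most recently observed demand
            level = demand * (delay + 1) + safety
            order = max(0, level - (inventory[stage] - backlog[stage] + on_order[stage]))
            orders[stage].append(order)
            on_order[stage] += order
            demand = order                              # this order becomes the demand seen upstream
        pipeline[n - 1].append(orders[n - 1][-1])       # factory production arrives after the delay
    return orders

for name, series in zip(("retailer", "wholesaler", "distributor", "factory"), simulate()):
    print(f"{name:12s} peak weekly order: {max(series):4d}")
```

Comparing the peak weekly order at each stage gives a feel for how a single change in customer demand can propagate and grow toward the factory end under such a policy.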
Supply chain
A supply chain is a network between a company and its suppliers to produce and distribute a specific product to the final buyer. This network includes different activities, people, entities, information, and resources. The supply chain also represents the steps it takes to get the product or service from its original state to the customer.
Supply chains are developed by companies so they can reduce their costs and remain competitive in the business landscape. It is important to understand how to manage the supply chain in the right way.
Supply chain management (SCM) is the management of the flow of goods and services and includes all processes that transform raw materials into final products. It involves the active streamlining of a business's supply-side activities to maximize customer value and gain a competitive advantage in the marketplace. SCM represents an effort by suppliers to develop and implement supply chains that are as efficient and economical as possible. Supply chains cover everything from production to product development to the information systems needed to direct these undertakings.
Typically, SCM attempts to centrally control or link the production, shipment, and distribution of a product. By managing the supply chain, companies are able to cut excess costs and deliver products to the consumer faster. This is done by keeping tighter control of internal inventories, internal production, distribution, sales, and the inventories of company vendors. SCM is based on the idea that nearly every product that comes to market results from the efforts of various organizations that make up a supply chain. Although supply chains have existed for ages, most companies have only recently paid attention to them as a value-add to their operation.
Bullwhip effect
The bullwhip effect (or whiplash or whipsaw effect) is a well-known symptom of coordination problems in traditional supply chains. It refers to the way fluctuations in periodical order amounts grow as one moves upstream in the supply chain toward the production end.
Even when demand is stable, small variations in that demand, at the retail-end, tend to dramatically amplify themselves upstream through the supply chain. The resulting effect is that order amounts become very erratic. Very high one week, and then zero the next. The term was first coined around 1990 when Procter & Gamble perceived erratic and amplified order patterns in its supply chain for babies' diapers. As a consequence of the bullwhip effect, a range of inefficiencies occur throughout the supply chain:
high (safety) stock levels
poor customer service levels
poor capacity utilization
aggravated problems with demand forecasting
ultimately high cost and low levels of inter-firm trust
While the effect is not new, it is still a timely and pressing problem in contemporary supply chains.
Generally, the reasons for the bullwhip effect are:
Order batching: Happens when each member in the chain orders more quantities than it needs, warping the original quantities demanded.
Price fluctuation: Special discounts and cost changes can cause buyers to take advantage, resulting in irregular production and distorted demand.
Demand information misuse: When past demand information for new estimates do not take into account fluctuations.
Lack of communication: This can lead to constraints when processes are not run efficiently, this usually happens when organizations identify the product demand differently within different links of the supply chain.
Free return policies: Customers may overstate their demand during shortages and later cancel orders or return items without penalty; retailers then keep exaggerating their needs, and the cancelled orders leave excess product or materials.
Types
There are several options how to play the beer game. The various approaches are explained in more detail below.
Traditional board game
The traditional version of the beer game is a physical board game where people have to move actual objects. The tokens on the board game represent orders and stocks of a supply chain process. The main disadvantage is that this type of beer game takes much more time than the software version. Moreover, it is quite complex to play it since people need physical objects that represent the inventory on the board. Additionally, inventory levels of other supply chain stages are transparent and are therefore quite hard to estimate.
Table version
This version of the beer game was introduced by the University of Klagenfurt. The game can be played with the usage of paper slips where the players have to write numbers on top. This type of game is a more pragmatic approach to moving orders and stock in the supply chain. Additionally, there is one person with the role of a bookkeeping person that keeps track of everything happening.
Adapted
The adapted table version is an expanded version of the table version where the bookkeeper is eliminated to achieve a more straightforward game.
In order to play this game a spreadsheet and a laptop on each table are needed. The laptops are used for people's play sheets, which eliminates risks of miscalculating inventory levels.
Software
The software version of the beer game is an online approach. This approach can be either used as a one player simulation demonstration or as a multiplayer simulation demonstration.
References
Further reading
External links
ASU materials (score sheets, hints, etc.)
Experiential learning
Non-cooperative games
Supply chain management | Beer distribution game | [
"Mathematics"
] | 1,733 | [
"Game theory",
"Non-cooperative games"
] |
1,585,226 | https://en.wikipedia.org/wiki/Young%20symmetrizer | In mathematics, a Young symmetrizer is an element of the group algebra of the symmetric group whose natural action on tensor products of a complex vector space has as image an irreducible representation of the group of invertible linear transformations . All irreducible representations of are thus obtained. It is constructed from the action of on the vector space by permutation of the different factors (or equivalently, from the permutation of the indices of the tensor components). A similar construction works over any field but in characteristic p (in particular over finite fields) the image need not be an irreducible representation. The Young symmetrizers also act on the vector space of functions on Young tableau and the resulting representations are called Specht modules which again construct all complex irreducible representations of the symmetric group while the analogous construction in prime characteristic need not be irreducible. The Young symmetrizer is named after British mathematician Alfred Young.
Definition
Given the symmetric group S_n and a specific Young tableau λ corresponding to a numbered partition of n, consider the action of S_n given by permuting the boxes of λ. Define two permutation subgroups P_λ and Q_λ of S_n as follows:
P_λ = { g ∈ S_n : g preserves each row of λ }
and
Q_λ = { g ∈ S_n : g preserves each column of λ }.
Corresponding to these two subgroups, define two vectors in the group algebra C[S_n] as
a_λ = Σ_{g ∈ P_λ} e_g
and
b_λ = Σ_{g ∈ Q_λ} sgn(g) e_g,
where e_g is the unit vector corresponding to g, and sgn(g) is the sign of the permutation. The product
c_λ := a_λ b_λ = Σ_{g ∈ P_λ, h ∈ Q_λ} sgn(h) e_{gh}
is the Young symmetrizer corresponding to the Young tableau λ. Each Young symmetrizer corresponds to an irreducible representation of the symmetric group, and every irreducible representation can be obtained from a corresponding Young symmetrizer. (If we replace the complex numbers by more general fields the corresponding representations will not be irreducible in general.)
Construction
Let V be any vector space over the complex numbers. Consider then the tensor product vector space (n times). Let Sn act on this tensor product space by permuting the indices. One then has a natural group algebra representation on (i.e. is a right module).
Given a partition λ of n, so that , then the image of is
For instance, if , and , with the canonical Young tableau . Then the corresponding is given by
For any product vector of we then have
Thus the set of all clearly spans and since the span we obtain , where we wrote informally .
Notice also how this construction can be reduced to the construction for .
Let be the identity operator and the swap operator defined by , thus and . We have that
maps into , more precisely
is the projector onto .
Then
which is the projector onto .
The image of is
where μ is the conjugate partition to λ. Here, and are the symmetric and alternating tensor product spaces.
The image of in is an irreducible representation of Sn, called a Specht module. We write
for the irreducible representation.
Some scalar multiple of c_λ is idempotent, that is c_λ c_λ = α_λ c_λ for some rational number α_λ. Specifically, one finds α_λ = n! / dim V_λ, where V_λ is the irreducible representation corresponding to λ. In particular, this implies that representations of the symmetric group can be defined over the rational numbers; that is, over the rational group algebra Q[S_n].
Consider, for example, S3 and the partition (2,1). Then one has
If V is a complex vector space, then the images of the operators c_λ on the spaces V^{⊗n} provide essentially all the finite-dimensional irreducible representations of GL(V).
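For a small case this construction can be checked numerically. The Python sketch below (helper names are ours) builds the Young symmetrizer of the canonical tableau of λ = (2,1) as a matrix on (C^d)^{⊗3} and computes the dimension of its image, which the hook content formula predicts to be d(d² − 1)/3, i.e. 8 for d = 3:

```python
import numpy as np
from itertools import product

def perm_matrix(sigma, d, n):
    """Matrix of the permutation sigma (a tuple of images of 0..n-1) acting on (C^d)^{tensor n}."""
    size = d ** n
    M = np.zeros((size, size))
    for idx in product(range(d), repeat=n):
        src = np.ravel_multi_index(idx, (d,) * n)
        out = tuple(idx[sigma[k]] for k in range(n))   # permute the tensor factors
        dst = np.ravel_multi_index(out, (d,) * n)
        M[dst, src] = 1.0
    return M

d, n = 3, 3
identity = (0, 1, 2)
swap12 = (1, 0, 2)   # transposition of factors 1 and 2 (row group of the canonical tableau)
swap13 = (2, 1, 0)   # transposition of factors 1 and 3 (column group of the canonical tableau)

a = perm_matrix(identity, d, n) + perm_matrix(swap12, d, n)   # row symmetrizer a_(2,1)
b = perm_matrix(identity, d, n) - perm_matrix(swap13, d, n)   # column antisymmetrizer b_(2,1)
c = a @ b                                                     # Young symmetrizer c_(2,1)

print(np.linalg.matrix_rank(c))   # expected: d*(d**2 - 1)//3 = 8 for d = 3
```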
See also
Representation theory of the symmetric group
Notes
References
William Fulton. Young Tableaux, with Applications to Representation Theory and Geometry. Cambridge University Press, 1997.
Lecture 4 of
Bruce E. Sagan. The Symmetric Group. Springer, 2001.
Representation theory of finite groups
Symmetric functions
Permutations | Young symmetrizer | [
"Physics",
"Mathematics"
] | 746 | [
"Functions and mappings",
"Permutations",
"Algebra",
"Mathematical objects",
"Combinatorics",
"Symmetric functions",
"Mathematical relations",
"Symmetry"
] |
1,585,274 | https://en.wikipedia.org/wiki/Heawood%20conjecture | In graph theory, the Heawood conjecture or Ringel–Youngs theorem gives a lower bound for the number of colors that are necessary for graph coloring on a surface of a given genus. For surfaces of genus 0, 1, 2, 3, 4, 5, 6, 7, ..., the required number of colors is 4, 7, 8, 9, 10, 11, 12, 12, .... , the chromatic number or Heawood number.
The conjecture was formulated in 1890 by P.J. Heawood and proven in 1968 by Gerhard Ringel and J.W.T. Youngs. One case, the non-orientable Klein bottle, proved an exception to the general formula. An entirely different approach was needed for the much older problem of finding the number of colors needed for the plane or sphere, solved in 1976 as the four color theorem by Haken and Appel. On the sphere the lower bound is easy, whereas for higher genera the upper bound is easy and was proved in Heawood's original short paper that contained the conjecture. In other words, Ringel, Youngs, and others had to construct extreme examples for every genus g = 1, 2, 3, …. If g = 12s + k, then the genera fall into the 12 cases as k = 0, 1, 2, 3, …., 11. To simplify, suppose that case k has been established if only a finite number of gs of the form 12s + k are in doubt. Then the years in which the twelve cases were settled, and by whom, are the following:
1954, Ringel: case 5
1961, Ringel: cases 3, 7, 10
1963, Terry, Welch, Youngs: cases 0, 4
1964, Gustin, Youngs: case 1
1965, Gustin: case 9
1966, Youngs: case 6
1967, Ringel, Youngs: cases 2, 8, 11
The last seven sporadic exceptions were settled as follows:
1967, Mayer: cases 18, 20, 23
1968, Ringel, Youngs: cases 30, 35, 47, 59, and the conjecture was proved.
Formal statement
Percy John Heawood conjectured in 1890 that for a given genus g > 0, the minimum number of colors necessary to color all graphs drawn on an orientable surface of that genus (or equivalently, to color the regions of any partition of the surface into simply connected regions) is given by
\gamma(g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor,
where \lfloor \cdot \rfloor is the floor function.
Replacing the genus by the Euler characteristic χ, we obtain a formula that covers both the orientable and non-orientable cases,
\gamma(\chi) = \left\lfloor \frac{7 + \sqrt{49 - 24\chi}}{2} \right\rfloor.
This relation holds, as Ringel and Youngs showed, for all surfaces except for the Klein bottle. Philip Franklin (1930) proved that the Klein bottle requires at most 6 colors, rather than 7 as predicted by the formula. The Franklin graph can be drawn on the Klein bottle in a way that forms six mutually-adjacent regions, showing that this bound is tight.
The upper bound, proved in Heawood's original short paper, is based on a greedy coloring algorithm. By manipulating the Euler characteristic, one can show that every graph embedded in the given surface must have at least one vertex of degree less than the given bound. If one removes this vertex, and colors the rest of the graph, the small number of edges incident to the removed vertex ensures that it can be added back to the graph and colored without increasing the needed number of colors beyond the bound. In the other direction, the proof is more difficult, and involves showing that in each case (except the Klein bottle) a complete graph with a number of vertices equal to the given number of colors can be embedded on the surface.
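The degree argument can be phrased as a small algorithm: repeatedly set aside a minimum-degree vertex, then color the vertices back in reverse order with the smallest available color. The Python sketch below illustrates this generic greedy procedure on an adjacency dictionary (names are ours; it does not compute embeddings or the Heawood bound itself):

```python
def greedy_color(adj):
    """Color a graph by smallest-last ordering followed by greedy assignment."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))   # pick a minimum-degree vertex
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    coloring = {}
    for v in reversed(order):                                 # add vertices back one at a time
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in range(len(adj)) if c not in used)
    return coloring

# The complete graph K4 needs 4 colors:
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(max(greedy_color(k4).values()) + 1)   # 4
```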
Example
The torus has g = 1, so χ = 0. Therefore, as the formula states, any subdivision of the torus into regions can be colored using at most seven colors. The illustration shows a subdivision of the torus in which each of seven regions are adjacent to each other region; this subdivision shows that the bound of seven on the number of colors is tight for this case. The boundary of this subdivision forms an embedding of the Heawood graph onto the torus.
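The formula itself is easy to evaluate; a minimal Python sketch (the function name is ours) reproduces the sequence of Heawood numbers quoted above:

```python
from math import floor, sqrt

def heawood_number(genus: int) -> int:
    """Heawood's bound on the chromatic number of an orientable surface of the given genus."""
    return floor((7 + sqrt(1 + 48 * genus)) / 2)

print([heawood_number(g) for g in range(8)])   # [4, 7, 8, 9, 10, 11, 12, 12]
```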
References
External links
Conjectures that have been proved
Graph coloring
Topological graph theory
Theorems in graph theory | Heawood conjecture | [
"Mathematics"
] | 884 | [
"Mathematical theorems",
"Graph coloring",
"Theorems in graph theory",
"Graph theory",
"Theorems in discrete mathematics",
"Topology",
"Mathematical relations",
"Conjectures that have been proved",
"Mathematical problems",
"Topological graph theory"
] |
1,585,314 | https://en.wikipedia.org/wiki/Matt%20Mullenweg | Matthew Charles Mullenweg (born January 11, 1984) is an American web developer and entrepreneur. He is known as a co-founder of the free and open-source web publishing software WordPress, and the founder of Automattic.
Early life and education
Mullenweg was born in Houston, Texas, and grew up in the Willowbend neighborhood. His father, Chuck, was a computer programmer. Mullenweg was raised Catholic. He attended the High School for the Performing and Visual Arts to play the saxophone, although he was frequently absent due to chronic migraines. After graduating from high school, he studied economics, philosophy and political science at the University of Houston, eventually dropping out after his sophomore year in 2004.
WordPress
Mullenweg became enamored with blogging and started contributing updates to b2—a popular open-source blogging software—in 2002. However, Michel Valdrighi—the sole maintainer—soon ceased activity, and Mullenweg discussed prospects of creating a fork with other contributors; thus, in January 2003, Mullenweg created WordPress with Mike Little under the GPL v2-or-later open-source license at the age of 19, and Valdrighi endorsed the project a few months later.
In March 2003, he co-founded the Global Multimedia Protocols Group (GMPG) with Eric A. Meyer and Tantek Çelik. In April 2004, he helped launch Ping-O-Matic, a mechanism for notifying search engines about blog updates.
In October 2004, he was hired by CNET who would allow him to develop WordPress part-time as part of his job. He dropped out of college and moved to San Francisco for the position.
Automattic
Mullenweg left CNET in October 2005 to focus on WordPress full-time. Soon after he announced Akismet, an initiative to reduce comment and trackback spam. In December, he founded Automattic, with Akismet and managed web hosting service WordPress.com as its flagship products. In January 2006, Mullenweg recruited former Yahoo! executive Toni Schneider to join Automattic as CEO.
Since 2006, he has delivered an annual "State of the Word" speech on the progress and future of the WordPress software, named after the State of the Union address.
In 2011, Mullenweg purchased the WordPress news website WP Tavern.
In January 2014, Mullenweg became CEO of Automattic. Schneider moved to work on new projects at Automattic. Mullenweg received the Heinz Award for Technology, the Economy and Employment in 2016, for "helping to democratize online publishing".
Mullenweg began a three-month sabbatical from his role as CEO at the beginning of February 2024. Later that month, Mullenweg engaged in a public feud with a transgender Tumblr user who, frustrated with the site's failure to address transphobic harassment, posted that she wished Mullenweg would die in a comedic way. The user was subsequently banned. Responding to user uproar, Mullenweg addressed the ban in posts on his personal Tumblr blog, in which he characterized the post as a death threat, and shared private account information about the user. Mullenweg also responded to individual commenters on Tumblr in posts and direct messages, and went to Twitter to respond to the banned user's tweets about the situation. A few days later, transgender employees of Tumblr and Automattic made a post on the official Tumblr staff blog characterizing his response as "unwarranted and harmful" and stating that he did not speak on their behalf. They also said that the user's post was not a realistic threat of violence and not the reason for her ban.
Public disputes
On several occasions, Mullenweg has publicly challenged competitors to WordPress and WordPress.com. He has stated that he prefers to settle disputes in the court of public opinion and described his approach as "brinksmanship", noting that the potential cost of legal action could put Automattic in a "tough spot".
In 2008, shortly before WordPress 2.5's release, Six Apart's Movable Type published "A WordPress 2.5 Upgrade Guide"—a comparison of their CMS with their rival, WordPress—as a company blog article that Mullenweg characterized as "desperate and dirty". In 2013, developers on the digital marketplace Envato were banned from speaking at WordPress events after he criticized the platform for selling WordPress themes with the graphics and CSS components under a proprietary license instead of the GPL.
In 2016, Mullenweg accused Wix.com, a competitor to WordPress.com, of reusing WordPress's mobile text editor code in Wix's own mobile app without adhering to the terms of the GPL. Despite the license's requirement to publish anything built with GPL code under the GPL, Wix's CEO claimed that the company open-sourced their forked version of the component and satisfied the license's terms before the app switched to its own fork of the MIT-licensed text editor that the WordPress editor was based upon. The new fork added a clause to the MIT license that forbids redistribution under any other license.
In 2022, Mullenweg criticized GoDaddy for not reinvesting in the WordPress project sufficiently.
On January 9, 2025, the representative of the WordPress Sustainability team, Thijs Buijs, resigned via WordPress.org's Slack channel, citing dissatisfaction with Matt Mullenweg's December 24, 2024, Reddit post titled "What drama should I create in 2025?" and highlighting concerns about what he described as "unsustainable leadership". In response, Mullenweg thanked Buijs for reminding him of the existence of a sustainability team, announced that the team would be disbanded, and subsequently closed WordPress.org's #sustainability Slack channel.
WP Engine dispute
Audrey Capital
Mullenweg is a principal at angel investment firm Audrey Capital, which he co-founded in 2008 alongside Naveen Selvadurai and Audrey Kim.
The company lists investments in companies such as CoinDesk, MakerBot, Sonos, SpaceX, Ring, as well as software companies including Calm, Chartbeat, DailyBurn, Memrise, Genius, Nord Security and Telegram. It has also funded startups that provide services to web developers including Creative Market, GitLab, NPM, SendGrid, Stripe and Typekit. From 2017 to 2019, Mullenweg also served as a board member for GitLab.
Mullenweg has employed a team of contributors to WordPress through Audrey Capital since 2010, who work separately from Automattic.
On the 20th anniversary of WordPress' initial release, Mullenweg announced a scholarship program aimed at the children of significant contributors to open-source projects. To remain in the program, participants must commit annually to a set of principles.
See also
Browse Happy
References
External links
1984 births
21st-century American businesspeople
American computer programmers
American male bloggers
American social entrepreneurs
American technology chief executives
American technology company founders
Angel investors
Automattic
Businesspeople from Houston
Free software programmers
Henry Crown Fellows
High School for the Performing and Visual Arts alumni
Living people
Open source advocates
People from Houston
Tumblr
University of Houston alumni
Web developers
Web development
WordPress | Matt Mullenweg | [
"Engineering"
] | 1,560 | [
"Software engineering",
"Web development"
] |
1,585,348 | https://en.wikipedia.org/wiki/RL%20circuit | A resistor–inductor circuit (RL circuit), or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor, either in series driven by a voltage source or in parallel driven by a current source. It is one of the simplest analogue infinite impulse response electronic filters.
Introduction
The fundamental passive linear circuit elements are the resistor (R), capacitor (C) and inductor (L). These circuit elements can be combined to form an electrical circuit in four distinct ways: the RC circuit, the RL circuit, the LC circuit and the RLC circuit, with the abbreviations indicating which components are used. These circuits exhibit important types of behaviour that are fundamental to analogue electronics. In particular, they are able to act as passive filters.
In practice, however, capacitors (and RC circuits) are usually preferred to inductors since they can be more easily manufactured and are generally physically smaller, particularly for higher values of components.
Both RC and RL circuits form a single-pole filter. Depending on whether the reactive element (C or L) is in series with the load, or parallel with the load will dictate whether the filter is low-pass or high-pass.
Frequently RL circuits are used as DC power supplies for RF amplifiers, where the inductor is used to pass DC bias current and block the RF getting back into the power supply.
Complex impedance
The complex impedance Z_L (in ohms) of an inductor with inductance L (in henries) is Z_L = Ls.
The complex frequency s is a complex number,
s = σ + jω,
where
j represents the imaginary unit: j² = −1,
σ is the exponential decay constant (in radians per second), and
ω is the angular frequency (in radians per second).
Eigenfunctions
The complex-valued eigenfunctions of any linear time-invariant (LTI) system are of the following forms:
From Euler's formula, the real-part of these eigenfunctions are exponentially-decaying sinusoids:
Sinusoidal steady state
Sinusoidal steady state is a special case in which the input voltage consists of a pure sinusoid (with no exponential decay). As a result, σ = 0
and the complex frequency becomes s = jω.
Series circuit
By viewing the circuit as a voltage divider, we see that the voltage across the inductor is:
V_L(s) = (Ls / (R + Ls)) · V_in(s),
and the voltage across the resistor is:
V_R(s) = (R / (R + Ls)) · V_in(s).
Current
The current in the circuit is the same everywhere since the circuit is in series:
I(s) = V_in(s) / (R + Ls).
Transfer functions
The transfer function to the inductor voltage is
H_L(s) = V_L(s) / V_in(s) = Ls / (R + Ls).
Similarly, the transfer function to the resistor voltage is
H_R(s) = V_R(s) / V_in(s) = R / (R + Ls).
The transfer function to the current is
H_I(s) = I(s) / V_in(s) = 1 / (R + Ls).
Poles and zeros
The transfer functions have a single pole located at
s = −R / L.
In addition, the transfer function for the inductor has a zero located at the origin.
Gain and phase angle
The gains across the two components are found by taking the magnitudes of the above expressions:
G_L = |H_L(jω)| = ωL / √(R² + (ωL)²)
and
G_R = |H_R(jω)| = R / √(R² + (ωL)²),
and the phase angles are:
φ_L = 90° − tan⁻¹(ωL / R)
and
φ_R = −tan⁻¹(ωL / R).
Phasor notation
These expressions together may be substituted into the usual expression for the phasor representing the output:
Impulse response
The impulse response for each voltage is the inverse Laplace transform of the corresponding transfer function. It represents the response of the circuit to an input voltage consisting of an impulse or Dirac delta function.
The impulse response for the inductor voltage is
h_L(t) = δ(t) − (R / L) e^(−t/τ) u(t),
where u(t) is the Heaviside step function and τ = L / R is the time constant.
Similarly, the impulse response for the resistor voltage is
h_R(t) = (R / L) e^(−t/τ) u(t).
Zero-input response
The zero-input response (ZIR), also called the natural response, of an RL circuit describes the behavior of the circuit after it has reached constant voltages and currents and is disconnected from any power source. It is called the zero-input response because it requires no input.
The ZIR of an RL circuit is:
i(t) = i(0) e^(−(R/L) t).
Frequency domain considerations
These are frequency domain expressions. Analysis of them will show which frequencies the circuits (or filters) pass and reject. This analysis rests on a consideration of what happens to these gains as the frequency becomes very large and very small.
As ω → ∞: G_L → 1 and G_R → 0.
As ω → 0: G_L → 0 and G_R → 1.
This shows that, if the output is taken across the inductor, high frequencies are passed and low frequencies are attenuated (rejected). Thus, the circuit behaves as a high-pass filter. If, though, the output is taken across the resistor, high frequencies are rejected and low frequencies are passed. In this configuration, the circuit behaves as a low-pass filter. Compare this with the behaviour of the resistor output in an RC circuit, where the reverse is the case.
The range of frequencies that the filter passes is called its bandwidth. The point at which the filter attenuates the signal to half its unfiltered power is termed its cutoff frequency. This requires that the gain of the circuit be reduced to
G_L = G_R = 1/√2.
Solving the above equation yields
ω_c = R / L (in rad/s), or equivalently f_c = R / (2πL) (in Hz),
which is the frequency that the filter will attenuate to half its original power.
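For illustration, a short Python sketch (component values are arbitrary) evaluates the two gains and confirms that they meet at 1/√2 at the cutoff frequency:

```python
import math

R = 100.0      # ohms
L = 10e-3      # henries
f_c = R / (2 * math.pi * L)          # cutoff frequency in Hz

def gains(f):
    w = 2 * math.pi * f
    g_l = w * L / math.sqrt(R**2 + (w * L)**2)   # output across the inductor (high-pass)
    g_r = R / math.sqrt(R**2 + (w * L)**2)       # output across the resistor (low-pass)
    return g_l, g_r

print(f_c)           # about 1591.5 Hz for these values
print(gains(f_c))    # both gains equal 1/sqrt(2), roughly 0.707, at the cutoff
```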
Clearly, the phases also depend on frequency, although this effect is less interesting generally than the gain variations.
As ω → 0: φ_L → 90° and φ_R → 0°.
As ω → ∞: φ_L → 0° and φ_R → −90°.
So at DC (0 Hz), the resistor voltage is in phase with the signal voltage while the inductor voltage leads it by 90°. As frequency increases, the resistor voltage comes to have a 90° lag relative to the signal and the inductor voltage comes to be in-phase with the signal.
Time domain considerations
This section relies on knowledge of e, the natural logarithmic constant.
The most straightforward way to derive the time domain behaviour is to use the Laplace transforms of the expressions for V_L and V_R given above. This effectively transforms jω → s. Assuming a step input (i.e., V_in = 0 before t = 0 and then V_in = V afterwards):
V_L(s) = V · (Ls / (R + Ls)) · (1/s) = V · L / (R + Ls),
V_R(s) = V · (R / (R + Ls)) · (1/s).
Partial fractions expansions and the inverse Laplace transform yield:
V_L(t) = V e^(−t/τ),
V_R(t) = V (1 − e^(−t/τ)),
where τ = L / R is the time constant.
Thus, the voltage across the inductor tends towards 0 as time passes, while the voltage across the resistor tends towards , as shown in the figures. This is in keeping with the intuitive point that the inductor will only have a voltage across as long as the current in the circuit is changing — as the circuit reaches its steady-state, there is no further current change and ultimately no inductor voltage.
These equations show that a series RL circuit has a time constant, usually denoted τ = L / R, being the time it takes the voltage across the component to either fall (across the inductor) or rise (across the resistor) to within 1/e of its final value. That is, τ is the time it takes V_L to reach V(1/e) and V_R to reach V(1 − 1/e).
The rate of change is a fractional (1 − 1/e) per τ. Thus, in going from t = Nτ to t = (N + 1)τ, the voltage will have moved about 63% of the way from its level at t = Nτ toward its final value. So the voltage across the inductor will have dropped to about 37% after τ, and essentially to zero (0.7%) after about 5τ. Kirchhoff's voltage law implies that the voltage across the resistor will rise at the same rate. When the voltage source is then replaced with a short circuit, the voltage across the resistor drops exponentially with t from V towards 0. The resistor will be discharged to about 37% after τ, and essentially fully discharged (0.7%) after about 5τ. Note that the current, I, in the circuit behaves as the voltage across the resistor does, via Ohm's Law.
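A minimal Python sketch of the step response (component values are arbitrary) shows this 63%/37% behaviour at multiples of the time constant:

```python
import math

V = 5.0        # step amplitude in volts
R = 100.0      # ohms
L = 10e-3      # henries
tau = L / R

def v_r(t):
    return V * (1 - math.exp(-t / tau))   # voltage across the resistor

def v_l(t):
    return V * math.exp(-t / tau)         # voltage across the inductor

for k in (1, 2, 3, 4, 5):
    t = k * tau
    print(f"t = {k}*tau: v_R = {v_r(t):.3f} V ({100 * v_r(t) / V:.1f}%), v_L = {v_l(t):.3f} V")
# after one time constant v_R has reached about 63.2% of V and v_L has fallen to about 36.8%
```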
The delay in the rise or fall time of the circuit is in this case caused by the back-EMF from the inductor which, as the current flowing through it tries to change, prevents the current (and hence the voltage across the resistor) from rising or falling much faster than the time-constant of the circuit. Since all wires have some self-inductance and resistance, all circuits have a time constant. As a result, when the power supply is switched on, the current does not instantaneously reach its steady-state value, V/R. The rise instead takes several time-constants to complete. If this were not the case, and the current were to reach steady-state immediately, extremely strong inductive electric fields would be generated by the sharp change in the magnetic field — this would lead to breakdown of the air in the circuit and electric arcing, probably damaging components (and users).
These results may also be derived by solving the differential equation describing the circuit:
The first equation is solved by using an integrating factor and yields the current which must be differentiated to give ; the second equation is straightforward. The solutions are exactly the same as those obtained via Laplace transforms.
Short circuit equation
For short circuit evaluation, RL circuit is considered. The more general equation is:
With initial condition:
Which can be solved by Laplace transform:
Thus:
Then antitransform returns:
In case the source voltage is a Heaviside step function (DC):
Returns:
In case the source voltage is a sinusoidal function (AC):
Returns:
Parallel circuit
When both the resistor and the inductor are connected in parallel connection and supplied through a voltage source, this is known as a RL parallel circuit. The parallel RL circuit is generally of less interest than the series circuit unless fed by a current source. This is largely because the output voltage () is equal to the input voltage (); as a result, this circuit does not act as a filter for a voltage input signal.
With complex impedances:
This shows that the inductor lags the resistor (and source) current by 90°.
The parallel circuit is seen on the output of many amplifier circuits, and is used to isolate the amplifier from capacitive loading effects at high frequencies. Because of the phase shift introduced by capacitance, some amplifiers become unstable at very high frequencies, and tend to oscillate. This affects sound quality and component life, especially the transistors.
See also
LC circuit
RC circuit
RLC circuit
Electrical network
List of electronics topics
References
Analog circuits
Electronic filter topology | RL circuit | [
"Engineering"
] | 2,053 | [
"Analog circuits",
"Electronic engineering"
] |
1,585,406 | https://en.wikipedia.org/wiki/Boarding%20pass | A boarding pass or boarding card is a document provided by an airline during airport check-in, giving a passenger permission to enter the restricted area of an airport (also known as the airside portion of the airport) and to board the airplane for a particular flight. At a minimum, it identifies the passenger, the flight number, the date, and scheduled time for departure. A boarding pass may also indicate details of the perks a passenger is entitled to (e.g., lounge access, priority boarding) and is thus presented at the entrance of such facilities to show eligibility.
In some cases, flyers can check in online and print the boarding passes themselves. There are also codes that can be saved to an electronic device or from the airline's app that are scanned during boarding. A boarding pass may be required for a passenger to enter a secure area of an airport.
Generally, a passenger with an electronic ticket will only need a boarding pass. If a passenger has a paper airline ticket, that ticket (or flight coupon) may be required to be attached to the boarding pass for the passenger to board the aircraft. For "connecting flights", a boarding pass is required for each new leg (distinguished by a different flight number), regardless of whether a different aircraft is boarded or not.
The paper boarding pass (and ticket, if any), or portions thereof, are sometimes collected and counted for cross-check of passenger counts by gate agents, but more frequently are scanned (via barcode or magnetic strip) and returned to the passengers in their entirety. The standards for bar codes and magnetic stripes on boarding passes are published by the IATA. The bar code standard (Bar Coded Boarding Pass) defines the 2D bar code printed on paper boarding passes or sent to mobile phones for electronic boarding passes. The magnetic stripe standard (ATB2) expired in 2010.
Most airports and airlines have automatic readers that will verify the validity of the boarding pass at the jetway door or boarding gate. This also automatically updates the airline's database to show the passenger has boarded and the seat is used, and that the checked baggage for that passenger may stay aboard. This speeds up the paperwork process at the gate.
During security screenings, the personnel will also scan the boarding pass to authenticate the passenger.
Once an airline has scanned all boarding passes presented at the gate for a particular flight and knows which passengers actually boarded the aircraft, its database system can compile the passenger manifest for that flight.
Bar-codes
BCBP (bar-coded boarding pass) is the name of the standard used by more than 200 airlines. BCBP defines the 2-dimensional (2D) bar code printed on a boarding pass or sent to a mobile phone for electronic boarding passes.
BCBP was part of the IATA Simplifying the Business program, which issued an industry mandate for all boarding passes to be barcoded. This was achieved in 2010.
Airlines and third parties use a barcode reader to read the bar codes and capture the data. Reading the bar code usually takes place in the boarding process but can also happen when entering the airport security checkpoints, while paying for items at the check-out tills of airport stores or trying to access airline lounges.
The standard was originally published in 2005 by IATA and updated in 2008 to include symbologies for mobile phones and in 2009 to include a field for a digital signature in the mobile bar codes. Future developments of the standard will include a near field communication format.
Security concerns
In recent years concerns have been raised both to the security of the boarding pass bar-codes, the data they contain and the PNR (Passenger Name Record) data that they link to. Some airline barcodes can be scanned by mobile phone applications to reveal names, dates of birth, source and destination airports and the PNR locator code, a 6-digit alphanumeric code also sometimes referred to as a booking reference number. This code plus the surname of the traveller can be used to log in to the airline's website, and access information on the traveller. In 2020, a photograph of a boarding pass posted by former Australian Prime Minister Tony Abbott on Instagram provided sufficient information to log in to Qantas's website. While not in and of itself problematic as the flight had happened in the past, the website (through its source code) unintentionally leaked private data not intended to be displayed directly, such as Abbott's passport number and Qantas's internal PNR remarks.
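For illustration, the leading fixed-width portion of a BCBP payload can be sliced into fields with a few lines of Python. The field names and widths below follow a common reading of the published IATA layout and are given only as a sketch; the current IATA Resolution 792 specification is authoritative, and the sample payload is made up:

```python
# Field names/widths reflect our understanding of the mandatory BCBP items and are illustrative only.
FIELDS = [
    ("format_code", 1),        # typically 'M'
    ("legs_encoded", 1),
    ("passenger_name", 20),
    ("eticket_indicator", 1),
    ("pnr_code", 7),           # booking reference; with the surname it may unlock the reservation online
    ("from_airport", 3),
    ("to_airport", 3),
    ("operating_carrier", 3),
    ("flight_number", 5),
    ("date_of_flight_julian", 3),
    ("compartment_code", 1),
    ("seat_number", 4),
    ("check_in_sequence", 5),
    ("passenger_status", 1),
]

def parse_bcbp_header(data: str) -> dict:
    """Slice the leading fixed-width fields of a bar-coded boarding pass payload."""
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = data[pos:pos + width].strip()
        pos += width
    return out

# A made-up example payload, padded to the widths above.
sample = ("M1" + "DOE/JANE".ljust(20) + "E" + "ABC123".ljust(7)
          + "LHR" + "JFK" + "BA".ljust(3) + "0117".ljust(5)
          + "226" + "Y" + "012A" + "0025".ljust(5) + "1")
print(parse_bcbp_header(sample))
```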
Paper boarding passes
Paper boarding passes are issued either by agents at a check-in counter, self-service kiosks, or by the airline's web check-in site. BCBP can be printed at the airport by an ATB (Automated Ticket & Boarding Pass) printer or a direct thermal printer, or by a personal inkjet or laser printer. The symbology for paper boarding passes is PDF417.
IATA's Board of Governors' mandate stated that all the IATA member airlines would be capable of issuing BCBP by the end of 2008, and all boarding passes would contain the 2D bar code by the end of 2010. The BCBP standard was published in 2005. It has been progressively adopted by airlines: By the end of 2005, 9 airlines were BCBP capable; 32 by the end of 2006; 101 by the end of 2007; and 200 by the end of 2008.
Mobile boarding passes
Electronic boarding passes were 'the industry's next major technological innovation after e-ticketing'. According to SITA's Airline IT Trend Survey 2009, mobile BCBP accounted for 2.1% of boarding pass use (versus paper), and was forecast to rise to 11.6% in 2012.
Overview
Many airlines have moved to issuing electronic boarding passes, whereby the passenger checks in either online or via a mobile device, and the boarding pass is then sent to the mobile device as an SMS or e-mail. Upon completing an online reservation, the passenger can tick a box offering a mobile boarding pass. Most carriers offer two ways to get it: have one sent to their mobile device (via e-mail or text message) when checking in online, or use an airline app to check in, and the boarding pass will appear within the application. In many cases, a passenger with a smartphone can add their boarding pass to their primary digital wallet app, such as Google Wallet, Samsung Wallet, or Apple Wallet. This way the passenger does not need to open the airline's dedicated app, and shortly before the flight the boarding pass appears on their device's home screen. Furthermore, mobile boarding passes can be loaded onto smartwatches through the phones they are paired with.
The mobile pass is equipped with the same bar code as a standard paper boarding pass, and it is completely machine readable. The gate attendant simply scans the code displayed on the phone. IATA's BCBP standard defines the three symbologies accepted for mobile phones: Aztec code, Datamatrix and QR code. The United Nations International Telecommunication Union expected mobile phone subscribers to hit the 4 billion mark by the end of 2008.
Airlines using mobile boarding passes
In September 2006, All Nippon Airways first began offering mobile boarding passes in Japan. Today, most major carriers offer mobile boarding passes at many airports. Airlines that issue electronic boarding passes include:
In Europe, Lufthansa was one of the first airlines to launch Mobile BCBP in April 2008. In the US, the Transportation Security Administration runs a pilot program of a Boarding Pass Scanning System, using the IATA BCBP standard.
On October 15, 2008, the TSA announced that scanners would be deployed within a year and that scanning mobile BCBP would enable it to better track wait times. The TSA continued to add new pilot airports, including Cleveland on October 23, 2008.
On October 14, 2008, Alaska Airlines started piloting mobile boarding passes at Seattle Seatac Airport.
On November 3, 2008, Air New Zealand launched the mpass, a boarding pass received on the mobile phone.
On November 10, 2008, Qatar Airways launched their online check-in: passengers can have their boarding passes sent directly to their mobile phones.
On November 13, 2008, American Airlines started offering mobile boarding passes at Chicago O'Hare Airport.
On December 18, 2008, Cathay Pacific launched its mobile Check-in service, including the delivery of the barcode to the mobile phone.
On February 24, 2009, Austrian Airlines began offering paperless boarding passes to customers on selected routes.
On April 16, 2009, SAS began offering mobile boarding passes.
On May 26, 2009, Air China began offering its customers a two-dimensional bar-code e-boarding pass on their mobile phones, with which they can go through security procedures at any channel at Beijing Airport Terminal 3, enabling a completely paperless check-in service.
On October 1, 2009, Swiss introduced mobile boarding pass to its customers.
On November 12, 2009, Finnair explained that "The mobile boarding pass system cuts passengers’ carbon footprint by removing the need for passengers to print out and keep track of a paper boarding pass".
On March 15, 2010, United began to offer mobile boarding passes to customers equipped with smartphones.
In July/August 2014, Ryanair became the latest airline to offer mobile boarding passes to customers equipped with smartphones.
Benefits
Practical: Travelers don't always have access to a printer, and not all airlines automatically print boarding passes during check-in, so choosing a mobile boarding pass eliminates the hassle of stopping at a kiosk at the airport.
Ecological: Issuing electronic boarding passes is much more environmentally friendly than constantly using paper for boarding passes.
Drawbacks
Using a mobile boarding pass is risky if one's phone battery runs out (rendering the boarding pass inaccessible) or if there are any problems reading the e-boarding pass.
Using a mobile boarding pass can also be a challenge when traveling with multiple passengers on one reservation, because not all airline apps handle multiple mobile boarding passes. (However, some airlines, like Alaska Airlines, do allow users to switch between multiple boarding passes within their apps.)
Some airlines (and even a few government authorities) may still require some paper portions of the boarding cards to be retained by staff. This is obviously not possible with a mobile boarding card.
Some airlines need to stamp a boarding card after performing document verification checks on some passengers (e.g. Ryanair). Some airport authorities (e.g. Philippine immigration officers) also stamp the boarding card with the departure date. Passengers in turn have to present to staff their stamped boarding card at the gate to be allowed to board. As such, airlines may not extend the mobile boarding card feature to all its passengers within certain flights.
Print-at-home boarding passes
A print-at-home boarding pass is a document that a traveller can print at home, at their office, or anywhere with an Internet connection and printer, giving them permission to board an airplane for a particular flight.
British Airways CitiExpress, the first to pioneer this self-service initiative, piloted it on its London City Airport routes in 1999 to minimize queues at check-in desks. The CAA (Civil Aviation Authority) approved the introduction of the 3D boarding pass in February 2000. Early adoption among passengers was slow, except for business travellers. However, the advent of low-cost carriers that charged for not using print-at-home boarding passes was the catalyst that shifted consumers away from traditional at-airport check-in. This paved the way for British Airways to become the first global airline to deploy self-service boarding passes using this now ubiquitous technology.
Many airlines encourage travellers to check in online up to a month before their flight and obtain their boarding pass before arriving at the airport. Some carriers offer incentives for doing so (e.g., in 2015, US Airways offered 1000 bonus miles to anyone checking in online), while others charge fees for checking in or printing one's boarding pass at the airport.
Benefits
Cost efficient for the airline – Passengers who print their own boarding passes reduce airline and airport staffing and infrastructure costs for check-in
Passengers without baggage to drop do not have to stop at the check-in desk or self-service check-in machines at the airport and can go straight to security checks. Exceptions to this may be international passengers who require document verification (e.g. those who require a visa for their destination).
Problems
Passengers have to remember to check-in in advance of their flight.
Passengers need access to a printer, and must either provide the paper and ink themselves or find printing points that already have them, to avoid being charged to print their boarding passes at the airport. Affordable access to a printer stocked with paper and ink can be difficult to find while travelling away from home or the office, although some airlines have responded by allowing passengers to check in further in advance. Additionally, some hotels have computer terminals that allow passengers to access their airline's website and print out boarding cards, or passengers can email the boarding cards to the hotel's reception, which can print them out for them.
Some kinds of printers such as older dot matrix printers may not print the QR barcode portion legibly enough to be read accurately by the scanners.
Some budget airlines which have moved towards passengers printing their boarding passes in advance may charge an unexpected hidden fee to print the boarding pass at the airport, often in excess of the cost of the flight itself. This, along with other such hidden costs, has led to allegations of false advertising and drip pricing being levelled towards the budget airlines in question.
Print-at-home boarding pass advertising
In a bid to boost ancillary revenue from other sources of in-flight advertising, many airlines have turned to targeted advertising technologies aimed at passengers from their departure city to their destination.
Print-at-home boarding passes display adverts chosen specifically for given travellers based on their anonymised passenger information, which does not contain any personally identifiable data. Advertisers are able to target specific demographic information (age range, gender, nationality) and route information (origin and destination of flight). The same technology can also be used to serve advertising on airline booking confirmation emails, itinerary emails, and pre-departure reminders.
Advantages of print-at-home boarding pass advertising
Ability to use targeted advertising technologies to target messaging to relevant demographics and routes – providing travellers with offers that are likely to be relevant and useful
High engagement level – research by the Global Passenger Survey has shown that on average, travellers look at their boarding pass over four times across 12 key touch points in their journey
The revenues airlines gain from advertising can help to offset operating costs and reduce ticket price rises for passengers
Concerns of print-at-home boarding pass advertising
Some passengers find the advertising intrusive
The advertising uses additional quantities of the passenger's ink when printing at home
See also
Airline ticket
Auto check-in
Secondary Security Screening Selection (SSSS rating)
References
Bibliography
Qantas boosts mobile device check-in options
Northwest Airlines offer E-Boarding Pass functionality for their passengers
Vueling: Now You Can Use Your Mobile as a Boarding Pass!
Lufthansa offers mobile boarding pass worldwide
Bar Coded Boarding Passes – Secure, Mobile and On the way
Qatar launch mobile boarding pass service
Mobile Boarding Pass Innovation Takes off with Qatar
TSA Expands Paperless Boarding Pass Pilot Program to Additional Airports and Airlines
Mobile boarding passes come to Barcelona Airport
Spanair extend their mobile boarding pass service
External links
History of paper boarding passes from CNN (with photos)
The Latest Development of paperless boarding pass technology
International Air Transport Association (IATA)
Airline tickets
Civil aviation
Encodings
Automatic identification and data capture | Boarding pass | [
"Technology"
] | 3,195 | [
"Data",
"Automatic identification and data capture"
] |
1,585,426 | https://en.wikipedia.org/wiki/Traffic%20contract | If a network service (or application) wishes to use a broadband network (an ATM network in particular) to transport a particular kind of traffic, it must first inform the network about what kind of traffic is to be transported, and the performance requirements of that traffic. The application presents this information to the network in the form of a traffic contract.
The Traffic descriptor
When a connection is requested by an application, the application indicates to the network:
The Type of Service required.
The Traffic Parameters of each data flow in both directions.
The quality of service (QoS) Parameters requested in each direction.
These parameters form the traffic descriptor for the connection.
Type of Service
Currently, five ATM Forum-defined service categories exist. The basic differences among these service categories are described in the following sub-sections. These service categories provide a method to relate traffic characteristics and QoS requirements to network behaviour. The service categories are characterised as being real-time or non-real-time. CBR and rt-VBR are the real-time service categories. The remaining three service categories (nrt-VBR, UBR and ABR) are considered non-real-time service categories.
Constant Bit Rate (CBR)
The CBR service category is used for connections that transport traffic at a constant bit rate, where there is an inherent reliance on time synchronisation between the traffic source and destination. CBR is tailored for any type of data for which the end-systems require predictable response time and a static amount of bandwidth continuously available for the life-time of the connection. The amount of bandwidth is characterized by a Peak Cell Rate (PCR). These applications include services such as video conferencing, telephony (voice services) or any type of on-demand service, such as interactive voice and audio. For telephony and native voice applications CBR provides low-latency traffic with predictable delivery characteristics, and is therefore typically used for circuit emulation.
Real-Time Variable Bit Rate (rt-VBR)
The rt-VBR service category is used for connections that transport traffic at variable rates — traffic that relies on accurate timing between the traffic source and destination. An example of traffic that requires this type of service category are variable rate, compressed video streams. Sources that use rt-VBR connections are expected to transmit at a rate that varies with time (for example, traffic that can be considered bursty). Real-time VBR connections can be characterized by a Peak Cell Rate (PCR), Sustained Cell Rate (SCR), and Maximum Burst Size (MBS). Cells delayed beyond the value specified by the maximum CTD (Cell Transfer Delay) are assumed to be of significantly reduced value to the application.
Non-Real-Time Variable Bit Rate (nrt-VBR)
The nrt-VBR service category is used for connections that transport variable bit rate traffic for which there is no inherent reliance on time synchronisation between the traffic source and destination, but there is a need for an attempt at a guaranteed bandwidth or latency. An application that might require an nrt-VBR service category is Frame Relay interworking, where the Frame Relay CIR (Committed Information Rate) is mapped to a bandwidth guarantee in the ATM network. No delay bounds are associated with nrt-VBR service.
Available Bit Rate (ABR)
The ABR service category is similar to nrt-VBR, because it also is used for connections that transport variable bit rate traffic for which there is no reliance on time synchronisation between the traffic source and destination, and for which no required guarantees of bandwidth or latency exist. ABR provides a best-effort transport service, in which flow-control mechanisms are used to adjust the amount of bandwidth available to the traffic originator. The ABR service category is designed primarily for any type of traffic that is not time sensitive and expects no guarantees of service. ABR service generally is considered preferable for TCP/IP traffic, as well as other LAN-based protocols, which can modify their transmission behaviour in response to ABR's rate-control mechanics.
ABR uses Resource Management (RM) cells to provide feedback that controls the traffic source in response to fluctuations in available resources within the interior ATM network. The specification for ABR flow control uses these RM cells to control the flow of cell traffic on ABR connections. The ABR service expects the end-system to adapt its traffic rate in accordance with the feedback so that it may obtain its fair share of available network resources. The goal of ABR service is to provide fast access to available network resources at up to the specified Peak Cell Rate (PCR).
Unspecified Bit Rate (UBR)
The UBR service category also is similar to nrt-VBR, because it is used for connections that transport variable bit rate traffic for which there is no reliance on time synchronization between the traffic source and destination. However, unlike ABR, there are no flow-control mechanisms to dynamically adjust the amount of bandwidth available to the user. UBR generally is used for applications that are very tolerant of delay and cell loss. UBR has enjoyed success in the Internet LAN and WAN environments for store-and-forward traffic, such as file-transfers and e-mail. Similar to the way in which upper-layer protocols react to ABR’s traffic-control mechanisms, TCP/IP and other LAN-based traffic protocols can modify their transmission behaviour in response to latency or cell loss in the ATM network.
Traffic parameters
Each ATM connection contains a set of parameters that describes the traffic characteristics of the source. These parameters are called source traffic parameters. They are [2][5]:
Peak Cell Rate (PCR). The maximum allowable rate at which cells can be transported along a connection in the ATM network. The PCR is the determining factor in how often cells are sent in relation to time in an effort to minimize jitter. PCR generally is coupled with the CDVT (Cell Delay Variation Tolerance), which indicates how much jitter is allowable.
Sustainable Cell Rate (SCR). A calculation of the average allowable, long-term cell transfer rate on a specific connection.
Maximum Burst Size (MBS). The maximum allowable burst size of cells that can be transmitted contiguously on a particular connection.
Minimum Cell Rate (MCR). The minimum allowable rate at which cells can be transported along an ATM connection.
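As an illustration of how the service category and these source traffic parameters travel together in a connection request, the sketch below bundles them into a single record; the class and field names are assumptions for illustration and do not correspond to any actual ATM signalling API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceTrafficDescriptor:
    """Illustrative per-direction traffic descriptor for an ATM connection."""
    service_category: str                       # "CBR", "rt-VBR", "nrt-VBR", "ABR" or "UBR"
    pcr_cells_per_s: float                      # Peak Cell Rate
    scr_cells_per_s: Optional[float] = None     # Sustainable Cell Rate (VBR categories)
    mbs_cells: Optional[int] = None             # Maximum Burst Size (VBR categories)
    mcr_cells_per_s: Optional[float] = None     # Minimum Cell Rate (ABR)

# A real-time VBR request for a compressed video stream might look like:
video = SourceTrafficDescriptor("rt-VBR", pcr_cells_per_s=25_000,
                                scr_cells_per_s=10_000, mbs_cells=200)
```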
Quality of service parameters
A set of parameters are negotiated when a connection is set up in an ATM network. These parameters are used to measure the QoS of a connection and quantify end-to-end network performance at the ATM layer. The network should guarantee the negotiated QoS by meeting certain values of these parameters.
Cell Transfer Delay (CTD). The delay experienced by a cell between the time it takes for the first bit of the cell to be transmitted by the source and the last bit of the cell to be received by the destination. Maximum Cell Transfer Delay (Max CTD) and Mean Cell Transfer Delay (Mean CTD) are used.
Peak-to-peak Cell Delay Variation (CDV). The difference between the maximum and minimum CTD experienced during the connection. Peak-to-peak CDV and Instantaneous CDV are used.
Cell Loss Ratio (CLR). The percentage of cells that are lost in the network due to error or congestion and are not received by the destination.
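The three QoS parameters can be made concrete with a small measurement sketch that derives them from matched per-cell send and receive timestamps; the function name and the microsecond units are assumptions for illustration.

```python
def qos_metrics(send_times_us, recv_times_us):
    """Compute CTD, peak-to-peak CDV and CLR from matched cell timestamps.
    A receive time of None marks a cell lost in the network."""
    delays = [r - s for s, r in zip(send_times_us, recv_times_us) if r is not None]
    lost = sum(1 for r in recv_times_us if r is None)
    return {
        "mean_ctd_us": sum(delays) / len(delays),
        "max_ctd_us": max(delays),
        "peak_to_peak_cdv_us": max(delays) - min(delays),
        "clr": lost / len(send_times_us),
    }

# Four cells sent 100 microseconds apart; the third one is lost.
print(qos_metrics([0, 100, 200, 300], [450, 560, None, 780]))
```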
See also
Broadband Networks
Teletraffic engineering
Asynchronous Transfer Mode (ATM)
Teletraffic engineering in broadband networks
References
Broadband
Teletraffic
Asynchronous Transfer Mode | Traffic contract | [
"Engineering"
] | 1,586 | [
"Asynchronous Transfer Mode",
"Computer networks engineering"
] |
1,585,444 | https://en.wikipedia.org/wiki/Traffic%20policing%20%28communications%29 | In communications, traffic policing is the process of monitoring network traffic for compliance with a traffic contract and taking steps to enforce that contract. Traffic sources which are aware of a traffic contract may apply traffic shaping to ensure their output stays within the contract and is thus not discarded. Traffic exceeding a traffic contract may be discarded immediately, marked as non-compliant, or left as-is, depending on administrative policy and the characteristics of the excess traffic.
Effects
The recipient of traffic that has been policed will observe packet loss distributed throughout periods when incoming traffic exceeded the contract. If the source does not limit its sending rate (for example, through a feedback mechanism), this will continue, and may appear to the recipient as if link errors or some other disruption is causing random packet loss. The received traffic, which has experienced policing en route, will typically comply with the contract, although jitter may be introduced by elements in the network downstream of the policer.
With reliable protocols, such as TCP as opposed to UDP, the dropped packets will not be acknowledged by the receiver, and therefore will be resent by the emitter, thus generating more traffic.
Impact on congestion-controlled sources
Sources with feedback-based congestion control mechanisms (for example TCP) typically adapt rapidly to static policing, converging on a rate just below the policed sustained rate.
Co-operative policing mechanisms, such as packet-based discard, facilitate more rapid convergence, higher stability and more efficient resource sharing. As a result, it may be hard for endpoints to distinguish TCP traffic that has been merely policed from TCP traffic that has been shaped.
Impact in the case of ATM
Where cell-level dropping is enforced (as opposed to that achieved through packet-based policing) the impact is particularly severe on longer packets. Since cells are typically much shorter than the maximum packet size, conventional policers discard cells which do not respect packet boundaries, and hence the total amount of traffic dropped will typically be distributed throughout a number of packets. Almost all known packet reassembly mechanisms will respond to a missing cell by dropping the packet entirely, and consequently a very large number of packet losses can result from moderately exceeding the policed contract.
Process
RFC 2475 describes traffic policing elements like a meter and a dropper. They may also optionally include a marker. The meter measures the traffic and determines whether or not it exceeds the contract (for example by GCRA). Where it exceeds the contract, some policy determines if any given PDU is dropped, or if marking is implemented, if and how it is to be marked. Marking can comprise setting a congestion flag (such as ECN flag of TCP or CLP bit of ATM) or setting a traffic aggregate indication (such as Differentiated Services Code Point of IP).
In simple implementations, traffic is classified into two categories, or "colors" : compliant (green) and in excess (red). RFC 2697 proposes a more precise classification, with three "colors". In this document, the contract is described through three parameters: Committed Information Rate (CIR), Committed Burst Size (CBS), and Excess Burst Size (EBS). A packet is "green" if it doesn't exceed the CBS, "yellow" if it does exceed the CBS, but not the EBS, and "red" otherwise.
The "single-rate three-color marker" described by RFC 2697 allows for temporary bursts. The bursts are allowed when the line was under-used before they appeared. A more predictable algorithm is described in RFC 2698, which proposes a "double-rate three-color marker". RFC 2698 defines a new parameter, the Peak Information Rate (PIR). RFC 2859 describes the "Time Sliding Window Three Colour Marker" which meters a traffic stream and marks packets based on measured throughput relative to two specified rates: Committed Target Rate (CTR) and Peak Target Rate (PTR).
Implementations
On Cisco equipment, both traffic policing and shaping are implemented through the token bucket algorithm.
Traffic policing in ATM networks is known as Usage/Network Parameter Control. The network can also discard non-conformant traffic in the network (using Priority Control). The reference for both traffic policing and traffic shaping in ATM (given by the ATM Forum and the ITU-T) is the Generic Cell Rate Algorithm (GCRA), which is described as a version of the leaky bucket algorithm.
However, comparison of the leaky bucket and token bucket algorithms shows that they are simply mirror images of one another, one adding bucket content where the other takes it away and taking away bucket content where the other adds it. Hence, given equivalent parameters, implementations of both algorithms will see exactly the same traffic as conforming and non-conforming.
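A compact way to see this equivalence is the virtual-scheduling formulation of the GCRA, which keeps only one state variable per connection, the theoretical arrival time (TAT). The sketch below assumes that the arrival times, the increment I (the reciprocal of the policed cell rate) and the limit L (the tolerance) are expressed in the same time units; the class and method names are illustrative.

```python
class GCRA:
    """Virtual-scheduling Generic Cell Rate Algorithm GCRA(I, L).
    increment I = 1 / policed cell rate, limit L = tolerance (e.g. CDVT)."""

    def __init__(self, increment, limit):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0                       # theoretical arrival time of the next cell

    def conforming(self, arrival_time):
        if arrival_time < self.tat - self.limit:
            return False                     # cell arrived too early: non-conforming
        self.tat = max(arrival_time, self.tat) + self.increment
        return True

gcra = GCRA(increment=10.0, limit=2.0)       # one cell every 10 time units, 2 units of slack
print([gcra.conforming(t) for t in (0, 9, 18, 27, 50)])   # [True, True, True, False, True]
```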
Traffic policing requires maintenance of numerical statistics and measures for each policed traffic flow, but it does not require implementation or management of significant volumes of packet buffer. Consequently, it is significantly less complex to implement than traffic shaping.
Connection Admission Control as an alternative
Connection-oriented networks (for example ATM systems) can perform Connection Admission Control (CAC) based on traffic contracts. In the context of Voice over IP (VoIP), this is also known as Call Admission Control (CAC).
An application that wishes to use a connection-oriented network to transport traffic must first request a connection (through signalling, for example Q.2931), which involves informing the network about the characteristics of the traffic and the quality of service (QoS) required by the application. This information is matched against a traffic contract. If the connection request is accepted, the application is permitted to use the network to transport traffic.
This function protects the network resources from malicious connections and enforces the compliance of every connection to its negotiated traffic contract.
The difference between CAC and traffic policing is that CAC is an a priori verification (performed before the transfer occurs), while traffic policing is an a posteriori verification (performed during the transfer).
See also
Broadband Networks
Queuing discipline
Teletraffic engineering in broadband networks
References
Teletraffic
Telecommunications engineering
Network performance | Traffic policing (communications) | [
"Engineering"
] | 1,247 | [
"Electrical engineering",
"Telecommunications engineering"
] |
1,585,648 | https://en.wikipedia.org/wiki/Metabolic%20engineering | Metabolic engineering is the practice of optimizing genetic and regulatory processes within cells to increase the cell's production of a certain substance. These processes are chemical networks that use a series of biochemical reactions and enzymes that allow cells to convert raw materials into molecules necessary for the cell's survival. Metabolic engineering specifically seeks to mathematically model these networks, calculate a yield of useful products, and pin point parts of the network that constrain the production of these products. Genetic engineering techniques can then be used to modify the network in order to relieve these constraints. Once again this modified network can be modeled to calculate the new product yield.
The ultimate goal of metabolic engineering is to be able to use these organisms to produce valuable substances on an industrial scale in a cost-effective manner. Current examples include producing beer, wine, cheese, pharmaceuticals, and other biotechnology products. Another possible area of use is the development of oil crops whose composition has been modified to improve their nutritional value. Some of the common strategies used for metabolic engineering are (1) overexpressing the gene encoding the rate-limiting enzyme of the biosynthetic pathway, (2) blocking the competing metabolic pathways, (3) heterologous gene expression, and (4) enzyme engineering.
Since cells use these metabolic networks for their survival, changes can have drastic effects on the cells' viability. Trade-offs in metabolic engineering therefore arise between the cell's ability to produce the desired substance and its natural survival needs. Consequently, instead of directly deleting and/or overexpressing the genes that encode metabolic enzymes, the current focus is to target the regulatory networks in a cell to efficiently engineer the metabolism.
History and applications
In the past, to increase the productivity of a desired metabolite, a microorganism was genetically modified by chemically induced mutation, and the mutant strain that overexpressed the desired metabolite was then chosen. However, one of the main problems with this technique was that the metabolic pathway for the production of that metabolite was not analyzed, and as a result, the constraints to production and relevant pathway enzymes to be modified were unknown.
In the 1990s, a new technique called metabolic engineering emerged. This technique analyzes the metabolic pathway of a microorganism, and determines the constraints and their effects on the production of desired compounds. It then uses genetic engineering to relieve these constraints. Some examples of successful metabolic engineering are the following: (i) Identification of constraints to lysine production in Corynebacterium glutamicum and insertion of new genes to relieve these constraints to improve production (ii) Engineering of a new fatty acid biosynthesis pathway, called reversed beta oxidation pathway, that is more efficient than the native pathway in producing fatty acids and alcohols which can potentially be catalytically converted to chemicals and fuels (iii) Improved production of DAHP, an aromatic metabolite produced by E. coli that is an intermediate in the production of aromatic amino acids. It was determined through metabolic flux analysis that the theoretical maximal yield of DAHP per glucose molecule utilized was 3/7. This is because some of the carbon from glucose is lost as carbon dioxide, instead of being utilized to produce DAHP. Also, one of the metabolites (PEP, or phosphoenolpyruvate) used to produce DAHP was being converted to pyruvate (PYR) to transport glucose into the cell, and therefore was no longer available to produce DAHP. In order to relieve the shortage of PEP and increase yield, Patnaik et al. used genetic engineering on E. coli to introduce a reaction that converts PYR back to PEP. Thus, the PEP used to transport glucose into the cell is regenerated, and can be used to make DAHP. This resulted in a new theoretical maximal yield of 6/7 – double that of the native E. coli system.
At the industrial scale, metabolic engineering is becoming more convenient and cost-effective. According to the Biotechnology Industry Organization, "more than 50 biorefinery facilities are being built across North America to apply metabolic engineering to produce biofuels and chemicals from renewable biomass which can help reduce greenhouse gas emissions". Potential biofuels include short-chain alcohols and alkanes (to replace gasoline), fatty acid methyl esters and fatty alcohols (to replace diesel), and fatty acid-and isoprenoid-based biofuels (to replace diesel).
Metabolic engineering continues to evolve in efficiency and processes aided by breakthroughs in the field of synthetic biology and progress in understanding metabolite damage and its repair or preemption. Early metabolic engineering experiments showed that accumulation of reactive intermediates can limit flux in engineered pathways and be deleterious to host cells if matching damage control systems are missing or inadequate. Researchers in synthetic biology optimize genetic pathways, which in turn influence cellular metabolic outputs. Recent decreases in cost of synthesized DNA and developments in genetic circuits help to influence the ability of metabolic engineering to produce desired outputs.
Metabolic flux analysis
An analysis of metabolic flux can be found at Flux balance analysis
Setting up a metabolic pathway for analysis
The first step in the process is to identify a desired goal to achieve through the improvement or modification of an organism's metabolism. Reference books and online databases are used to research reactions and metabolic pathways that are able to produce this product or result. These databases contain copious genomic and chemical information including pathways for metabolism and other cellular processes. Using this research, an organism is chosen that will be used to create the desired product or result. Considerations that are taken into account when making this decision are how close the organism's metabolic pathway is to the desired pathway, the maintenance costs associated with the organism, and how easy it is to modify the pathway of the organism. Escherichia coli (E. coli) is widely used in metabolic engineering to synthesize a wide variety of products such as amino acids because it is relatively easy to maintain and modify. If the organism does not contain the complete pathway for the desired product or result, then genes that produce the missing enzymes must be incorporated into the organism.
Analyzing a metabolic pathway
The completed metabolic pathway is modeled mathematically to find the theoretical yield of the product or the reaction fluxes in the cell. A flux is the rate at which a given reaction in the network occurs. Simple metabolic pathway analysis can be done by hand, but most require the use of software to perform the computations. These programs use complex linear algebra algorithms to solve these models. To solve a network using the equation for determined systems shown below, one must input the necessary information about the relevant reactions and their fluxes. Information about the reaction (such as the reactants and stoichiometry) are contained in the matrices Gx and Gm. Matrices Vm and Vx contain the fluxes of the relevant reactions. When solved, the equation yields the values of all the unknown fluxes (contained in Vx).
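In matrix form, and assuming the common convention in which the stoichiometric matrices left-multiply the flux vectors, the determined-system balance referred to above can be written as:

```latex
% Steady-state metabolite balance split into measured (m) and unknown (x) fluxes
G_m V_m + G_x V_x = 0
\quad\Longrightarrow\quad
V_x = -\,G_x^{-1}\, G_m\, V_m
\qquad \text{(determined system: } G_x \text{ square and invertible)}
```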
Determining the optimal genetic manipulations
After solving for the fluxes of reactions in the network, it is necessary to determine which reactions may be altered in order to maximize the yield of the desired product. To determine what specific genetic manipulations to perform, it is necessary to use computational algorithms, such as OptGene or OptFlux. They provide recommendations for which genes should be overexpressed, knocked out, or introduced in a cell to allow increased production of the desired product. For example, if a given reaction has particularly low flux and is limiting the amount of product, the software may recommend that the enzyme catalyzing this reaction should be overexpressed in the cell to increase the reaction flux. The necessary genetic manipulations can be performed using standard molecular biology techniques. Genes may be overexpressed or knocked out from an organism, depending on their effect on the pathway and the ultimate goal.
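Tools of this kind typically build on flux balance analysis, which poses the flux calculation as a linear program over the steady-state constraint. The toy model below, with one uptake, one conversion and one export reaction, is an assumption for illustration only and does not correspond to any real pathway or to the interfaces of OptGene or OptFlux.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v1): A_ext -> A, conversion (v2): A -> B, export (v3): B -> product.
# Rows of S are the internal metabolites A and B; columns are the fluxes v1..v3.
S = np.array([[1, -1,  0],     # A: produced by uptake, consumed by conversion
              [0,  1, -1]])    # B: produced by conversion, consumed by export

bounds = [(0, 10),             # uptake limited to 10 flux units
          (0, None),           # conversion
          (0, None)]           # export

c = np.array([0, 0, -1])       # maximize v3  ->  minimize -v3

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal product flux:", result.x[2])   # equals the uptake limit, 10
```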
Experimental measurements
In order to create a solvable model, it is often necessary to have certain fluxes already known or experimentally measured. In addition, in order to verify the effect of genetic manipulations on the metabolic network (to ensure they align with the model), it is necessary to experimentally measure the fluxes in the network. To measure reaction fluxes, carbon flux measurements are made using carbon-13 isotopic labeling. The organism is fed a mixture that contains molecules where specific carbons are engineered to be carbon-13 atoms, instead of carbon-12. After these molecules are used in the network, downstream metabolites also become labeled with carbon-13, as they incorporate those atoms in their structures. The specific labeling pattern of the various metabolites is determined by the reaction fluxes in the network. Labeling patterns may be measured using techniques such as gas chromatography-mass spectrometry (GC-MS) along with computational algorithms to determine reaction fluxes.
See also
Bacterial transformation
Bioreactor
Genetic engineering
Synthetic biological circuit
Synthetic biology
References
External links
Biotechnology Industry Organization(BIO) website:
BIO Website
Biological engineering | Metabolic engineering | [
"Engineering",
"Biology"
] | 1,836 | [
"Biological engineering"
] |
1,585,824 | https://en.wikipedia.org/wiki/Antinous%20%28constellation%29 | Antinous is an obsolete constellation no longer in use by astronomers, having been merged into Aquila, which it bordered to the north.
The constellation was created by the emperor Hadrian in 132. Antinous was a beautiful youth and the lover of Hadrian. Cassius Dio, who had access to Hadrian's now-lost diary, reports that Antinous died either by drowning or (as Dio himself believed) as a voluntary human sacrifice, an interpretation supported by Lambert (1984). His elevation to divinity meant that Antinous was to be a god in the heavens forever, Hadrian having named an asterism in the sky after him.
Tycho Brahe was originally credited with inventing Antinous, but more recent finds include a 1536 celestial globe by the cartographer Caspar Vopel that already contains Antinous; Brahe thus simply mapped the sky according to contemporary traditions and decided to give Antinous a separate table in his star catalogue.
In modern times, Antinous was variously considered either an asterism within Aquila or a separate constellation, until the International Astronomical Union discarded it when formalizing the 88 constellations in 1922.
References
External links
http://www.ianridpath.com/startales/antinous.html
https://web.pa.msu.edu/people/horvatin/Astronomy_Facts/obsolete_pages/antinous.htm
Former constellations | Antinous (constellation) | [
"Astronomy"
] | 300 | [
"Former constellations",
"Constellations"
] |
1,585,845 | https://en.wikipedia.org/wiki/Felis%20%28constellation%29 | Felis (Latin for cat) was a constellation created by French astronomer Jérôme Lalande in 1799. He chose the name partly because, as a cat lover, he felt sorry that there was not yet a cat among the constellations (although there are two lions and a lynx). It was between the constellations of Antlia and Hydra.
This constellation was first depicted in the Uranographia sive Astrorum Descriptio (1801) of Johann Elert Bode. It is now obsolete.
Its brightest star, HD 85951, was named Felis by the International Astronomical Union on 1 June 2018 and is now included in the List of IAU-approved Star Names.
See also
Former constellations
External links
Felis, Ian Ridpath's Star Tales
References
Former constellations
"Astronomy"
] | 170 | [
"Former constellations",
"Astronomy stubs",
"Constellations"
] |
1,585,855 | https://en.wikipedia.org/wiki/Honores%20Friderici | Honores Friderici or Frederici Honores, (Latin, "the Honours, or Regalia, of Frederic") also called Gloria Frederica or Frederici ("Glory of Frederick") was a constellation created by Johann Bode in 1787 to honor Frederick the Great, the king of Prussia who had died in the previous year. It was between the constellations of Cepheus, Andromeda, Cassiopeia and Cygnus. Its most important stars were Iota, Kappa, Lambda, Omicron, and Psi Andromedae. The constellation is no longer in use.
History
Johann Bode first introduced the constellation in his 1787 publication Astronomisches Jahrbuch, calling it Friedrichs Ehre, to honour Frederick the Great, who had just died the previous year. He latinized its name to Honores Friderici in his 1801 work Uranographia.
He illustrated it as a crown above a sword, pen and olive branch, based on his perception of Frederick as a "hero, sage and peacemaker".
The constellation was taken up by some cartographers and not by others, but was increasingly ignored from the latter half of the 19th century, and is no longer in use. Most of it lies within the borders of modern Andromeda, with parts in Cassiopeia, Cepheus and Pegasus.
Stars
Bode incorporated 76 stars into his new constellation, made up of 26 from Andromeda, 9 from Lacerta, 6 from Cepheus, 5 from Pegasus, and 3 from Cassiopeia. The three brightest stars—all of magnitude 4—that lay within its borders are Omicron, Lambda, and Kappa Andromedae. With an apparent magnitude of 3.62, Omicron Andromedae is a multiple star system, the brightest star of which is a blue-white subgiant of spectral type B6 IIIpe and its visible companion a white star of spectral type A2. Each appears to have a close companion, making it a quadruple system. It is approximately 690 light-years from Earth. Lambda Andromedae is a yellow subgiant star of spectral type G8IVk around 1.3 times as massive as the Sun that has used up its core hydrogen and expanded to around 7 times its diameter. It is a spectroscopic binary composed of two stars close together orbiting each other every 20 days, the brighter component a RS Canum Venaticorum variable. Kappa Andromedae is a blue white star of apparent magnitude 4.14, that was found to have a substellar companion by direct imaging in 2012. Initially thought to be a planet, it is now thought to be a brown dwarf around 22 times as massive as Jupiter.
Iota and Psi Andromedae make up the asterism. Shining at magnitude 4.29, Iota Andromedae is a blue-white main sequence star of spectral type B8V around 500 light-years distant from Earth.
References
Former constellations | Honores Friderici | [
"Astronomy"
] | 619 | [
"Former constellations",
"Constellations"
] |
1,585,857 | https://en.wikipedia.org/wiki/Globus%20Aerostaticus | Globus Aerostaticus (Latin for hot air balloon) or Ballon Aerostatique (the French equivalent) was a constellation created by Jérôme Lalande in 1798. It lay between the constellations Piscis Austrinus, Capricornus and Microscopium. It is no longer in use.
The constellation was created to honor the invention of the Montgolfier brothers, who launched the first hot air balloon in the late eighteenth century.
References
External links
Globus Aerostaticus: Ian Ridpath's Star Tales
Globus Aerostaticus: Shane Horvatin, Abrams Planetarium
Former constellations | Globus Aerostaticus | [
"Astronomy"
] | 132 | [
"Former constellations",
"Astronomy stubs",
"Constellations"
] |
1,585,868 | https://en.wikipedia.org/wiki/Abuse%20prevention%20program | An abuse prevention program is a social program designed to help parents and teachers recognize the signs of violence in an abused child and teaches how to explain abuse protection to them. These programs also help children in establishing self-esteem.
An alternate definition of abuse prevention programme describes those projects which identify risk indicators such as poverty, poor housing, and inadequate educational facilities, and aim to reduce the impact of such indicators, either through social reform or through developing parents' and children's coping strategies.
See also
Abuse
Substance abuse prevention
External links
Prevent Child Abuse America
Prevention As A Cure
Abuse | Abuse prevention program | [
"Biology"
] | 114 | [
"Abuse",
"Behavior",
"Aggression",
"Human behavior"
] |
1,585,869 | https://en.wikipedia.org/wiki/Machina%20Electrica | Machina Electrica (Latin for electricity generator) was a constellation created by Johann Bode in 1800. He created it from faint stars between Fornax and Sculptor, to the south of Cetus. It represented an electrostatic generator. The constellation was somewhat popular during the 19th century and had appeared in a number of star charts, but was eventually rendered obsolete when the International Astronomical Union standardized constellation boundaries in 1930 and is now no longer in use.
External links
Star Tales: Machina Electrica by Ian Ridpath
Astronomy Facts: Machina Electrica, by Shane Horvatin
Former constellations | Machina Electrica | [
"Astronomy"
] | 121 | [
"Former constellations",
"Constellations"
] |
1,585,881 | https://en.wikipedia.org/wiki/Mons%20Maenalus | Mons Maenalus (Latin for Mount Maenalus) was a constellation created by Johannes Hevelius in 1687. It was located between the constellations of Boötes and Virgo, and depicts a mountain in Greece that the herdsman is stepping upon. It was increasingly considered obsolete by the latter half of the 19th century. Its brightest star is 31 Boötis, a G-type giant of apparent magnitude 4.86m.
Stars
The main stars that make up the constellation are 14, 15, 18, 31 Boötis and 71 Virginis.
References
Former constellations
Constellations listed by Johannes Hevelius | Mons Maenalus | [
"Astronomy"
] | 126 | [
"Former constellations",
"Astronomy stubs",
"Constellations",
"Constellations listed by Johannes Hevelius"
] |
1,585,889 | https://en.wikipedia.org/wiki/Musca%20Borealis | Musca Borealis (Latin for northern fly) was a constellation, now discarded, located between the constellations of Aries and Perseus. It was originally called Apes (plural of Apis, Latin for bee) by Petrus Plancius when he created it in 1612. It was made up of a small group of stars, now called 33 Arietis, 35 Arietis, 39 Arietis, and 41 Arietis, in the north of the constellation of Aries.
The brightest star is now known as 41 Arietis (Bharani). At magnitude 3.63, it is a blue-white main sequence star of spectral type B8V around 166 light-years distant. 39 Arietis (Lilii Borea) is an orange giant star of magnitude 4.51 and spectral type K1.5III that is around 171 light-years distant.
The constellation was renamed Vespa by Jakob Bartsch in 1624. The renaming by Bartsch may have been intended to avoid confusion with another constellation, created by Plancius in 1598, that was called Apis by Bayer in 1603. Plancius called this earlier constellation Muia (Greek for fly) in 1612, and it had been called Musca (Latin for fly) by Blaeu in 1602, although Bayer was evidently unaware of this.
In 1679 Augustin Royer used these stars for his constellation Lilium (the Lily, representing the fleur-de-lis and in honour of his patron, King Louis XIV).
It was first described as "Musca" by Hevelius in his catalogue of 1690. Subsequent astronomers renamed it into "Musca Borealis", to distinguish it from the southern fly, Musca Australis.
This constellation is no longer in use; the stars it contained are now included in Aries. The Southern Fly, Musca Australis, is now simply known as Musca.
See also
Apis
Musca
Obsolete constellations
References
External links
Former constellations
Constellations listed by Petrus Plancius
zh:雀蜂座 | Musca Borealis | [
"Astronomy"
] | 427 | [
"Former constellations",
"Constellations listed by Petrus Plancius",
"Constellations"
] |
1,585,922 | https://en.wikipedia.org/wiki/Dessin%20d%27enfant | In mathematics, a dessin d'enfant is a type of graph embedding used to study Riemann surfaces and to provide combinatorial invariants for the action of the absolute Galois group of the rational numbers. The name of these embeddings is French for a "child's drawing"; its plural is either dessins d'enfant, "child's drawings", or dessins d'enfants, "children's drawings".
A dessin d'enfant is a graph, with its vertices colored alternately black and white, embedded in an oriented surface that, in many cases, is simply a plane. For the coloring to exist, the graph must be bipartite. The faces of the embedding are required to be topological disks. The surface and the embedding may be described combinatorially using a rotation system, a cyclic order of the edges surrounding each vertex of the graph that describes the order in which the edges would be crossed by a path that travels clockwise on the surface in a small loop around the vertex.
Any dessin can provide the surface it is embedded in with a structure as a Riemann surface. It is natural to ask which Riemann surfaces arise in this way. The answer is provided by Belyi's theorem, which states that the Riemann surfaces that can be described by dessins are precisely those that can be defined as algebraic curves over the field of algebraic numbers. The absolute Galois group transforms these particular curves into each other, and thereby also transforms the underlying dessins.
History
19th century
Early proto-forms of dessins d'enfants appeared as early as 1856 in the icosian calculus of William Rowan Hamilton; in modern terms, these are Hamiltonian paths on the icosahedral graph.
Recognizable modern dessins d'enfants and Belyi functions were used by Felix Klein. Klein called these diagrams Linienzüge (German, plural of Linienzug "line-track", also used as a term for polygon); he used a white circle for the preimage of 0 and a '+' for the preimage of 1, rather than a black circle for 0 and white circle for 1 as in modern notation. He used these diagrams to construct an 11-fold cover of the Riemann sphere by itself, with monodromy group PSL(2,11), following earlier constructions of a 7-fold cover with monodromy PSL(2,7) connected to the Klein quartic. These were all related to his investigations of the geometry of the quintic equation and the group A5 ≅ PSL(2,5), collected in his famous 1884/88 Lectures on the Icosahedron. The three surfaces constructed in this way from these three groups were much later shown to be closely related through the phenomenon of trinity.
20th century
Dessins d'enfant in their modern form were then rediscovered over a century later and named by Alexander Grothendieck in 1984 in his Esquisse d'un Programme, in which Grothendieck describes his discovery of the Galois action on dessins d'enfants.
Part of the theory had already been developed independently by other authors some time before Grothendieck. They outline the correspondence between maps on topological surfaces, maps on Riemann surfaces, and groups with certain distinguished generators, but do not consider the Galois action. Later work extends the treatment to surfaces with a boundary.
Riemann surfaces and Belyi pairs
The complex numbers, together with a special point designated as ∞, form a topological space known as the Riemann sphere. Any polynomial, and more generally any rational function p(x)/q(x), where p and q are polynomials, transforms the Riemann sphere by mapping it to itself.
Consider, for example, the rational function
f(x) = −(x − 1)³(x − 9) / (64x).
At most points of the Riemann sphere, this transformation is a local homeomorphism: it maps a small disk centered at any point in a one-to-one way into another disk. However, at certain critical points, the mapping is more complicated, and maps a disk centered at the point in a k-to-one way onto its image. The number k is known as the degree of the critical point and the transformed image of a critical point is known as a critical value.
The example f given above has the following critical points and critical values. (Some points of the Riemann sphere that, while not themselves critical, map to one of the critical values, are also included; these are indicated by having degree one.)
x = 0: degree 1, critical value ∞
x = 1: degree 3, critical value 0
x = 9: degree 1, critical value 0
x = 3 − 2√3: degree 2, critical value 1
x = 3 + 2√3: degree 2, critical value 1
x = ∞: degree 3, critical value ∞
One may form a dessin d'enfant from f by placing black points at the preimages of 0 (that is, at 1 and 9), white points at the preimages of 1 (that is, at 3 ± 2√3), and arcs at the preimages of the line segment [0, 1]. This line segment has four preimages, two along the line segment from 1 to 9 and two forming a simple closed curve that loops from 1 to itself, surrounding 0; the resulting dessin is shown in the figure.
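The critical points and values of this example can be checked numerically. The short sketch below relies on the factorization of the numerator of the derivative, 3(x − 1)²(x² − 6x − 3), worked out by hand from the formula above; NumPy is used only for convenience.

```python
import numpy as np

def f(x):
    return -(x - 1)**3 * (x - 9) / (64 * x)

# Finite critical points: x = 1 (a zero of order 3) and the roots of x^2 - 6x - 3,
# namely x = 3 +/- 2*sqrt(3); the remaining critical point lies at infinity.
crit = np.roots([1.0, -6.0, -3.0])
print(crit)              # approximately  6.4641 and -0.4641
print(f(crit))           # both equal 1, the second critical value
print(f(1.0), f(9.0))    # both equal 0, the preimages of the critical value 0
```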
In the other direction, from this dessin, described as a combinatorial object without specifying the locations of the critical points, one may form a compact Riemann surface, and a map from that surface to the Riemann sphere, equivalent to the map from which the dessin was originally constructed. To do so, place a point labeled ∞ within each region of the dessin (shown as the red points in the second figure), and triangulate each region by connecting this point to the black and white points forming the boundary of the region, connecting multiple times to the same black or white point if it appears multiple times on the boundary of the region. Each triangle in the triangulation has three vertices labeled 0 (for the black points), 1 (for the white points), or ∞. For each triangle, substitute a half-plane, either the upper half-plane for a triangle that has 0, 1, and ∞ in counterclockwise order or the lower half-plane for a triangle that has them in clockwise order, and for every adjacent pair of triangles glue the corresponding half-planes together along the portion of their boundaries indicated by the vertex labels. The resulting Riemann surface can be mapped to the Riemann sphere by using the identity map within each half-plane. Thus, the dessin d'enfant formed from f is sufficient to describe f itself up to biholomorphism. However, this construction identifies the Riemann surface only as a manifold with complex structure; it does not construct an embedding of this manifold as an algebraic curve in the complex projective plane, although such an embedding always exists.
The same construction applies more generally when X is any Riemann surface and f is a Belyi function; that is, a holomorphic function from X to the Riemann sphere having only 0, 1, and ∞ as critical values. A pair (X, f) of this type is known as a Belyi pair. From any Belyi pair one can form a dessin d'enfant, drawn on the surface X, that has its black points at the preimages of 0, its white points at the preimages of 1, and its edges placed along the preimages of the line segment [0, 1]. Conversely, any dessin d'enfant on any surface X can be used to define gluing instructions for a collection of halfspaces that together form a Riemann surface homeomorphic to X; mapping each halfspace by the identity to the Riemann sphere produces a Belyi function f on X, and therefore leads to a Belyi pair (X, f). Any two Belyi pairs that lead to combinatorially equivalent dessins d'enfants are biholomorphic, and Belyi's theorem implies that, for any compact Riemann surface X defined over the algebraic numbers, there are a Belyi function f and a dessin d'enfant that provide a combinatorial description of both X and f.
Maps and hypermaps
A vertex in a dessin has a graph-theoretic degree, the number of incident edges, that equals its degree as a critical point of the Belyi function. In the example above, all white points have degree two; dessins with the property that each white point has two edges are known as clean, and their corresponding Belyi functions are called pure. When this happens, one can describe the dessin by a simpler embedded graph, one that has only the black points as its vertices and that has an edge for each white point with endpoints at the white point's two black neighbors. For instance, the dessin shown in the figure could be drawn more simply in this way as a pair of black points with an edge between them and a self-loop on one of the points.
It is common to draw only the black points of a clean dessin and to leave the white points unmarked; one can recover the full dessin by adding a white point at the midpoint of each edge of the map.
Thus, any embedding of a graph in a surface in which each face is a disk (that is, a topological map) gives rise to a dessin by treating the graph vertices as black points of a dessin, and placing white points at the midpoint of each embedded graph edge.
If a map corresponds to a Belyi function f, its dual map (the dessin formed from the preimages of the line segment [1, ∞]) corresponds to the multiplicative inverse 1/f.
A dessin that is not clean can be transformed into a clean dessin in the same surface, by recoloring all of its points as black and adding new white points on each of its edges. The corresponding transformation of Belyi pairs is to replace a Belyi function β by the pure Belyi function γ = 4β(1 − β). One may calculate the critical points of γ directly from this formula: γ = 0 where β is 0 or 1, γ = ∞ where β is ∞, and γ = 1 where β is 1/2. Thus, the preimage of 1 under γ is the preimage under β of the midpoint of the line segment [0, 1], and the edges of the dessin formed from γ subdivide the edges of the dessin formed from β.
Under the interpretation of a clean dessin as a map, an arbitrary dessin is a hypermap: that is, a drawing of a hypergraph in which the black points represent vertices and the white points represent hyperedges.
Regular maps and triangle groups
The five Platonic solids – the regular tetrahedron, cube, octahedron, dodecahedron, and icosahedron – viewed as two-dimensional surfaces, have the property that any flag (a triple of a vertex, edge, and face that all meet each other) can be taken to any other flag by a symmetry of the surface. More generally, a map embedded in a surface with the same property, that any flag can be transformed to any other flag by a symmetry, is called a regular map.
If a regular map is used to generate a clean dessin, and the resulting dessin is used to generate a triangulated Riemann surface, then the edges of the triangles lie along lines of symmetry of the surface, and the reflections across those lines generate a symmetry group called a triangle group, for which the triangles form the fundamental domains. For example, the figure shows the set of triangles generated in this way starting from a regular dodecahedron. When the regular map lies in a surface whose genus is greater than one, the universal cover of the surface is the hyperbolic plane, and the triangle group in the hyperbolic plane formed from the lifted triangulation is a (cocompact) Fuchsian group representing a discrete set of isometries of the hyperbolic plane. In this case, the starting surface is the quotient of the hyperbolic plane by a finite index subgroup Γ in this group.
Conversely, given a Riemann surface that is a quotient of a (2, 3, n) tiling (a tiling of the sphere, Euclidean plane, or hyperbolic plane by triangles with angles π/2, π/3, and π/n), the associated dessin is the Cayley graph given by the order two and order three generators of the group, or equivalently, the tiling of the same surface by n-gons meeting three per vertex. Vertices of this tiling give black dots of the dessin, centers of edges give white dots, and centers of faces give the points over infinity.
Trees and Shabat polynomials
The simplest bipartite graphs are the trees. Any embedding of a tree has a single region, and therefore by Euler's formula lies in a spherical surface. The corresponding Belyi pair forms a transformation of the Riemann sphere that, if one places the pole at ∞, can be represented as a polynomial. Conversely, any polynomial with 0 and 1 as its finite critical values forms a Belyi function from the Riemann sphere to itself, having a single infinite-valued critical point, and corresponding to a dessin d'enfant that is a tree. The degree of the polynomial equals the number of edges in the corresponding tree. Such a polynomial Belyi function is known as a Shabat polynomial, after George Shabat.
For example, take p to be the monomial p(x) = x^d, having only one finite critical point and critical value, both zero. Although 1 is not a critical value for p, it is still possible to interpret p as a Belyi function from the Riemann sphere to itself because its critical values all lie in the set {0, 1, ∞}. The corresponding dessin d'enfant is a star having one central black vertex connected to d white leaves (a complete bipartite graph K1,d).
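A tiny sketch of this star dessin (the degree d is a hypothetical choice): the single black point is the preimage of 0 and the d white leaves are the d-th roots of unity, the preimages of 1.

```python
# Sketch: the dessin of p(x) = x**d is a star K_{1,d}.
import cmath

d = 5                                                   # hypothetical degree
black_points = [0.0]                                    # p(x) = 0  =>  x = 0
white_points = [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]   # p(x) = 1
print(len(black_points), "black point,", len(white_points), "white leaves")
```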
More generally, a polynomial p(x) having two critical values y1 and y2 may be termed a Shabat polynomial. Such a polynomial may be normalized into a Belyi function, with its critical values at 0 and 1, by the formula
β(x) = (p(x) − y1) / (y2 − y1),
but it may be more convenient to leave p in its un-normalized form.
An important family of examples of Shabat polynomials are given by the Chebyshev polynomials of the first kind, Tn(x), which have −1 and 1 as critical values. The corresponding dessins take the form of path graphs, alternating between black and white vertices, with n edges in the path. Due to the connection between Shabat polynomials and Chebyshev polynomials, Shabat polynomials themselves are sometimes called generalized Chebyshev polynomials.
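A short check (the degree n is a hypothetical choice) that a Chebyshev polynomial of the first kind has only −1 and 1 as finite critical values, as a Shabat polynomial should:

```python
# Sketch: the finite critical values of T_n are exactly -1 and 1.
import sympy as sp

x = sp.symbols('x')
n = 6                                          # hypothetical degree
T = sp.chebyshevt(n, x)
crit_points = sp.solve(sp.diff(T, x), x)       # finite critical points
crit_values = {sp.simplify(T.subs(x, c)) for c in crit_points}
print(crit_values)                             # {-1, 1}
```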
Different trees will, in general, correspond to different Shabat polynomials, as will different embeddings or colorings of the same tree. Up to normalization and linear transformations of its argument, the Shabat polynomial is uniquely determined from a coloring of an embedded tree, but it is not always straightforward to find a Shabat polynomial that has a given embedded tree as its dessin d'enfant.
The absolute Galois group and its invariants
The polynomial
may be made into a Shabat polynomial by choosing
The two choices of the parameter lead to two Belyi functions. These functions, though closely related to each other, are not equivalent, as they are described by the two nonisomorphic trees shown in the figure.
However, as these polynomials are defined over an algebraic number field, they may be transformed by the action of the absolute Galois group Γ of the rational numbers. An element of Γ that transforms one choice of the parameter into the other will transform one of these Belyi functions into the other and vice versa, and thus can also be said to transform each of the two trees shown in the figure into the other tree. More generally, due to the fact that the critical values of any Belyi function are 0, 1, and ∞, these critical values are unchanged by the Galois action, so this action takes Belyi pairs to other Belyi pairs. One may define an action of Γ on any dessin d'enfant by the corresponding action on Belyi pairs; this action, for instance, permutes the two trees shown in the figure.
Due to Belyi's theorem, the action of Γ on dessins is faithful (that is, every two elements of Γ define different permutations on the set of dessins), so the study of dessins d'enfants can tell us much about Γ itself. In this light, it is of great interest to understand which dessins may be transformed into each other by this action and which may not. For instance, one may observe that the two trees shown have the same degree sequences for their black nodes and white nodes: both have a black node with degree three, two black nodes with degree two, two white nodes with degree two, and three white nodes with degree one. This equality is not a coincidence: whenever the Galois action transforms one dessin into another, both will have the same degree sequence. The degree sequence is one known invariant of the Galois action, but not the only invariant.
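A small sketch of how such degree sequences can be read off from the standard permutation-pair encoding of a dessin. The particular tree below is constructed for illustration with the same degree sequences as described above; it is not claimed to be either tree in the figure.

```python
# A dessin with n edges (labelled 0..n-1) can be encoded by two permutations:
# one gives the cyclic order of edges around each black point, the other around
# each white point. Vertex degrees are the cycle lengths of these permutations.
def cycle_lengths(perm):
    """Sorted cycle-length multiset of a permutation given as a list i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return sorted(lengths)

# Hypothetical 7-edge tree with the degree sequences described in the text:
sigma_black = [1, 2, 0, 4, 3, 6, 5]   # edge rotation around black points
sigma_white = [3, 5, 2, 0, 4, 1, 6]   # edge rotation around white points

print("black degrees:", cycle_lengths(sigma_black))   # [2, 2, 3]
print("white degrees:", cycle_lengths(sigma_white))   # [1, 1, 1, 2, 2]
```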
The stabilizer of a dessin is the subgroup of Γ consisting of group elements that leave the dessin unchanged. Due to the Galois correspondence between subgroups of Γ and algebraic number fields, the stabilizer corresponds to a field, the field of moduli of the dessin. An orbit of a dessin is the set of all other dessins into which it may be transformed; due to the degree invariant, orbits are necessarily finite and stabilizers are of finite index. One may similarly define the stabilizer of an orbit (the subgroup that fixes all elements of the orbit) and the corresponding field of moduli of the orbit, another invariant of the dessin. The stabilizer of the orbit is the maximal normal subgroup of Γ contained in the stabilizer of the dessin, and the field of moduli of the orbit corresponds to the smallest normal extension of the rationals that contains the field of moduli of the dessin. For instance, for the two conjugate dessins considered in this section, the field of moduli of the orbit is a quadratic extension of the rationals. The two Belyi functions of this example are defined over the field of moduli, but there exist dessins for which the field of definition of the Belyi function must be larger than the field of moduli.
Notes
References
Collected as pp. 140–165 in Oeuvres, Tome 3.
See especially chapter 2, "Dessins d'Enfants", pp. 79–153.
Complex analysis
Algebraic geometry
Topological graph theory | Dessin d'enfant | [
"Mathematics"
] | 3,727 | [
"Graph theory",
"Fields of abstract algebra",
"Topology",
"Mathematical relations",
"Algebraic geometry",
"Topological graph theory"
] |
1,586,039 | https://en.wikipedia.org/wiki/Menstrual%20synchrony | Menstrual synchrony, also called the McClintock effect, or the Wellesley effect, is a contested process whereby women who begin living together in close proximity would experience their menstrual cycle onsets (the onset of menstruation or menses) becoming more synchronized together in time than when previously living apart. "For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period."
Martha McClintock's 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier.
After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony, including McClintock's study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. Reviews in 2006 and 2013 concluded that menstrual synchrony likely does not exist.
Overview
Original study by Martha McClintock
Martha McClintock published the first study on menstrual synchrony among women living together in dormitories at Wellesley College, a women's liberal arts college in Massachusetts, US.
Proposed causes
McClintock hypothesized that pheromones could cause menstrual cycle synchronization. However, other mechanisms have been proposed, most prominently synchronization with lunar phases.
Efforts to replicate McClintock's results
No scientific evidence supports the lunar hypothesis, and doubt has been cast on pheromone mechanisms.
After the initial studies reporting menstrual synchrony began to appear in the scientific literature, other researchers began reporting the failure to find menstrual synchrony.
These studies were followed by critiques of the methods used in early studies, which argued that biases in the methods used produced menstrual synchrony as an artifact.
More recent studies, which took into account some of these methodological criticisms, failed to find menstrual synchrony.
Terminology
The term synchrony has been argued to be misleading because no study has ever found that menstrual cycles become strictly concordant; nevertheless, menstrual synchrony is used to refer to the phenomenon of menstrual cycle onsets becoming closer to each other over time.
Status of the hypothesis
In a 2013 systematic review of menstrual synchrony, Harris and Vitzthum concluded, "In light of the lack of empirical evidence for MS [menstrual synchrony] sensu stricto, it seems there should be more widespread doubt than acceptance of this hypothesis" (pp. 238–239).
The experience of synchrony may be the result of the mathematical fact that menstrual cycles of different frequencies repeatedly converge and diverge over time and not due to a process of synchronization, and the probability of encountering such overlaps by chance is high.
Evolutionary perspective
Researchers are divided on whether menstrual synchrony would be adaptive. McClintock has suggested that menstrual synchrony may not be adaptive but rather epiphenomenal, lacking any biological function. Among those who postulate an adaptive function, one argument is that menstrual synchrony is only a particular aspect of the much more general phenomenon of reproductive synchrony, an occurrence familiar to ecologists studying animal populations in the wild. Whether seasonal, tidal, or lunar, reproductive synchrony is a relatively common mechanism through which co-cycling females can increase the number of males included in the local breeding system.
Conversely, it has been argued that if there are too many females cycling together, they would be competing for the highest quality males, forcing female–female competition for high quality mates and thereby lowering fitness. In such cases, selection should favor avoiding synchrony. Divergent climate regimes differentiating Neanderthal reproductive strategies from those of modern Homo sapiens have recently been analysed in these terms.
Turning to the evolutionary past, a possible adaptive basis for the biological capacity would be reproductive levelling: among primates, synchronising to any natural clock makes it difficult for an alpha male to monopolise fertile sex with multiple females. This would be consistent with the striking gender egalitarianism of extant non-storage hunter-gatherer societies. When early Pleistocene hominids in Africa were attempting to survive by robbing big cats of their kills, according to some evolutionary scientists, it may have been adaptive to restrict overnight journeys—including sexual liaisons—to times when there was a moon in the sky.
Media attention
The question of whether those who live together do in fact synchronize their menstrual cycles has also received attention in the popular media.
Traditional myth and ritual
The idea that menstruation is – or ideally ought to be – in harmony with wider cosmic rhythms is one of the most tenacious ideas central to the myths and rituals of traditional communities across the world.
The !Kung (or Ju|'hoansi) hunter-gatherers of the Kalahari "believe ... that if a woman sees traces of menstrual blood on another woman's leg or even is told that another woman has started her period, she will begin menstruating as well". Among the Yurok people of northwestern California, according to one ethnographic study, "all of a household's fertile women who were not pregnant menstruated at the same time...".
Scientific details
The phenomenon of menstrual synchrony is the closeness in time of the menstrual cycle onsets of two or more women. The phenomenon is not synchronization in the strict sense of concordance of menstrual cycle onsets but the term menstrual synchrony is still used perhaps misleadingly. As an undergraduate, Martha McClintock published the first study on menstrual synchrony; her report detailed the menstrual synchrony of undergraduate women living in a dormitory in Wellesley College. Since then, there have been attempts to replicate her findings and to determine the conditions under which synchrony occurs, if it exists. Her work was followed up by studies reporting menstrual synchrony and by other studies that failed to find synchrony.
Thus, a number of studies were published from the 1980s to the mid 2000s, which attempted to replicate menstrual synchrony in college women, determine the conditions under which menstrual synchrony occurred, and to address methodological issues that were raised as these studies were published. The rest of this section discusses these studies in chronological order, briefly presenting their findings and main conclusions grouped by decade followed by general methodological issues in menstrual synchrony research.
Studies
1970s
McClintock's study consisted of 135 female college students who were 17 to 22 years old at the time of the study. They were all residents of a single dormitory, which had four main corridors. The women were asked when their last and second to last menstrual period had started three times during the academic year (which ranged from September to April). They also were asked who (other women in the dormitory) they associated with most and how often each week they associated with males. From these data, McClintock placed women into pairs of close friends and roommates and she also placed them into groups of friends ranging in size from 5 to 10 women. She reported statistically significant synchrony for both her pairwise sorting of women and her group sorting of women. That is, whether women were placed into pairs of close friends and roommates or whether they were placed into larger groups of friends, she reported that they synchronized their menstrual cycles. She also reported that the more often women associated with males, the shorter their menstrual cycles were. She speculated that this may be a pheromone effect paralleling the Whitten effect in mice but that it could not explain menstrual synchrony among women. Finally, she speculated that there could be a pheromone mechanism of menstrual synchrony similar to the Lee-Boot effect in mice.
1980s
Graham and McGrew were the first researchers to attempt to replicate McClintock's study. There were 79 women living in halls of residence or apartments on the campus of a college in Scotland. The women were 17 to 21 years old at the time of the study and the procedures followed were similar to those used in McClintock's study. They partially replicated McClintock's findings, reporting that close friends but not neighbors synchronized their cycles. Unlike in McClintock's study, close friends did not synchronize in groups. They considered a pheromone mechanism a possible explanation of synchrony, but noted that if pheromones were the cause, neighbors should have synchronized as well. They concluded that the mechanism of synchrony remains unknown, but emotional attachment may play a role.
Quadagno et al. conducted the second replication of McClintock's study. There were 85 women living in dormitories, sorority houses, and apartments who attended a large midwestern university in the United States. Their study used methods similar to McClintock's study except in addition to two women living together, there were also groups of three and four women living together. They reported that the women synchronized their menstrual cycles and concluded that pheromones may have played a role in synchronization.
Jarett's study was the third to attempt to replicate McClintock's original study using college roommates. There were 144 women who attended two colleges. The women were 17 to 22 years old and the procedures followed were similar to McClintock's study except only pairs of roommates were used. She reported that the women did not synchronize. Jarett concluded that whether menstrual synchrony occurs in a group of women may depend on the variability of their menstrual cycles. She conjectured that the reason the women in her study did not synchronize their menstrual cycles was because they happened to have longer and more irregular menstrual cycles than in McClintock's original study.
1990s
Wilson, Kiefhabe, and Gravel conducted two studies with college women. The first study consisted of 132 women who were members of a sorority or roommates of members at the University of Missouri. The women were 18 to 22 years of age and the study aimed to replicate McClintock's original study. However, instead of asking women to recall when their last and next to last menstrual onsets occurred, one of the researchers visited the sorority daily to record the occurrence of menstrual onsets and to collect other biographical data. The second study consisted of 24 women living in a cooperative house near the University of Missouri. The women were 18 to 31 years of age. One of the researchers visited the house three times a week recording menstrual onset and collecting more extensive biographical and psychological test data than in the first study. They found no menstrual synchrony in either study. They considered the possibility that women with irregular cycles may reduce the likelihood of detecting synchrony, so they re-analyzed their data after they removed women with irregular cycles, but again there was no statistically significant effect of synchrony. They concluded that "It is clear no meaningful process of selection or exclusion of pairs can produce a significant level of menstrual synchrony in our samples... Therefore, whether or not menstrual synchrony occurs among women who spend time together must remain a hypothesis requiring further investigation" (p. 358).
Weller and Weller conducted a study with 20 lesbian couples. They hypothesized that contact within couples should be maximal and contact with men minimal compared to previous studies, which should maximize the likelihood of detecting synchrony. The women ranged in age from 19 to 34 years. This was the first study that did not explicitly use college women; instead, the women were recruited by a research assistant who was the proprietor of a bar. Unlike previous studies, they only asked the women for the date of their last menstrual onset. They then assumed that all the women had menstrual cycles that were exactly 28 days long. Based on this assumption and one menstrual onset for each woman in a couple, they calculated the degree of synchrony. They reported that more than half of the couples had synchronized to within two days of each other.
Trevathan, Burleson, and Gregory also conducted a study with 29 lesbian couples (22 to 48 years of age), but they incorporated the methodological critique of Wilson into the design of their study. In particular, Wilson emphasized the importance of using actual menstrual cycle lengths with their inherent variability. The lesbian couples were drawn from a larger sample of women who had kept daily records of their menstrual cycles for three months and who had participated in a previous study. They found no evidence of synchrony. They discussed several factors that could have prevented synchrony in their study but they strongly suggested that menstrual synchrony may not be a real phenomenon because of the methodological issues Wilson raised and because menstrual synchrony appears to lack adaptive significance.
In addition to the study they conducted with lesbian couples, Weller and Weller conducted a number of other studies on menstrual synchrony during the 1990s. In most studies they reported finding menstrual synchrony, but in some studies they did not find synchrony. In a methodological review article in 1997, they refined their approach to measuring synchrony to better handle the problem of cycle variability. Specifically, they concluded that several menstrual cycles should be measured from each woman and that the longest average cycle length in a pair or group of women should be the basis for calculating the expected cycle onset difference. Thus, their research falls into the pre-1997 methodology and post-1997 methodology.
In 1997, Weller and Weller published one of the first studies to investigate whether menstrual synchrony occurs in complete families. Their study was conducted in Bedouin villages in northern Israel. Twenty-seven families, each with two to seven sisters aged 13 years or older, participated, and data on menstrual cycle onsets were collected over a three-month period. Using their refined methods, they reported that menstrual synchrony occurred for the first two months, but not for the third month, for roommate sisters, close friend roommates, and for families as a whole.
Strassmann investigated whether menstrual synchrony occurred in a natural fertility population of Dogon village women. Her study consisted of 122 Dogon women with an average lifetime fertility rate of 8.6 ± .3 live births per woman. Their median cycle length was 30 days, which is indistinguishable from western women. In analyzing whether menstrual synchrony occurs among Dogon women, she was aware of Wilson's methodological criticisms of previous studies and aware that menstrual synchrony isn't synchrony per se, but rather the closeness of menstruation among women. She used Cox regression to determine whether the likelihood of menstruating was influenced by other women. She considered the levels of all the women in the village, all the women in the same lineage, and all the women in the same economic unit (i.e., they worked together). She found no significant relationship at any level, which means that there was no evidence of synchronization. She concluded that this result undermined the view that menstrual synchrony is adaptive and the view held by many anthropologists at the time that menstrual synchrony occurred in preindustrial societies.
2000s
Menstrual synchrony research declined after the published critiques in the 1990s and around the turn of the century. The two studies published during this decade incorporated the methodological critiques into their designs and used more appropriate statistical methods.
Yang and Schank conducted the largest study to date with 186 Chinese college women. Ninety-three of the women lived in 13 dorm rooms, 5 to 8 women per room. The other ninety-three women lived in 16 dorm rooms, 4 to 8 women per room, for a total of 29 rooms. The women were given notebooks to record the onset of each of their cycles, and data were collected for over a year for most of the women.
Following the statistical critiques of Schank, they argued that circular statistics were required to analyze periodic data for the existence of synchrony. However, menstrual cycles are variable in frequency (e.g., 28 or 31 day cycles) and in length. They pointed out that there are no statistical methods for analyzing messy data like this, so they developed Monte Carlo methods for detecting synchrony.
They found that in 9 of the 29 groups, women's cycles converged for one cycle closer than expected by chance, but then they diverged again. Upon further analysis, they found that for women with the cycle variability reported in this study, on average 10 out of 29 groups of women would show this pattern of convergence followed by divergence. They concluded that finding 9 out of 29 groups with convergence and then divergence is about what would be expected by chance and concluded that there was no evidence the women in this study synchronized their menstrual cycles.
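A minimal sketch of the kind of Monte Carlo null model described above (the distributional assumptions and parameters are illustrative choices; this is not Yang and Schank's actual code): simulate groups of women with variable, independent cycles and ask how close their onsets would be by chance.

```python
# Crude null model: Gaussian cycle lengths, uniformly random initial phases.
import random

def simulate_group_spread(n_women, mean_len=29.5, sd_len=3.0, cycle_index=6):
    """Mean pairwise onset difference (days) at a given cycle, under the null."""
    onsets = []
    for _ in range(n_women):
        t = random.uniform(0, mean_len)            # random initial phase
        for _ in range(cycle_index):
            t += random.gauss(mean_len, sd_len)    # independent, variable cycles
        onsets.append(t)
    diffs = [abs(a - b) % mean_len for a in onsets for b in onsets if a < b]
    diffs = [min(d, mean_len - d) for d in diffs]  # fold to at most half a cycle
    return sum(diffs) / len(diffs)

null = [simulate_group_spread(6) for _ in range(2000)]
print("expected spread by chance:", round(sum(null) / len(null), 1), "days")
```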
Ziomkiewicz conducted a study with 99 Polish college women living in two dormitories. Thirty six of the women lived in 18 double rooms and sixty three lived in 21 triple rooms. Women recorded their menstrual cycle onsets on menstrual calendars provided to them and 181 days' worth of menstrual cycle data were collected. The mean menstrual cycle length was 30.5 days (SD = 4.56).
Based on the mean cycle length of the women in this study, the expected difference by chance in menstrual cycle onset was approximately 7.5 days. The mean difference in cycle onset was calculated for the beginning, middle, and end of the study for the pairs and triples of women. Ziomkiewicz found no statistically significant difference from the 7.5 day expected difference at either the beginning, middle, or end of the study. She concluded that there was no evidence of menstrual synchrony.
Methodological issues
Initial onset differences
H. Clyde Wilson argued that at the start of any menstrual synchrony study, the minimum cycle onset difference must be calculated by using two onsets from each woman in a group. For example, suppose two women have exactly 28-day cycles. The greatest distance that one cycle onset can be from another is 14 days. Suppose the first two onsets for woman A are July 1 and July 29 and for woman B, they are July 24 and August 21. If only the first recorded onsets of A and B are compared, the difference between onsets is 23 days, which is greater than the 14 days that can actually occur. Wilson argued that McClintock did not correctly calculate the initial onset difference among women and concluded that the initial onset difference among women in a group was biased towards asynchrony.
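A small sketch of Wilson's point using the dates from the example above (the year is arbitrary):

```python
# With 28-day cycles the true onset difference can never exceed 14 days, so
# onsets must be compared against the nearest onset of the other woman.
from datetime import date

onsets_A = [date(1990, 7, 1), date(1990, 7, 29)]
onsets_B = [date(1990, 7, 24), date(1990, 8, 21)]

naive = abs((onsets_B[0] - onsets_A[0]).days)
nearest = min(abs((b - a).days) for a in onsets_A for b in onsets_B)
print(naive, "days if only the first onsets are compared")   # 23
print(nearest, "days using the nearest onsets")              # 5
```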
Yang and Schank followed up on this point by using computer simulations to estimate the average onset difference that would occur by chance among women with variable cycle lengths and a mean cycle length of 29.5 days reported by McClintock. They reported that the average onset difference by chance among women with cycle characteristics reported by McClintock was about 5 days. They also calculated the expected onset difference at the beginning of the study using McClintock's method for calculating initial cycle onset differences. They reported that the initial cycle onset difference for the groups of women using McClintock's method was about 6.5 days. McClintock reported that groups of women had an initial cycle onset difference at the beginning of her study of about 6.5 days and then subsequently synchronized to an average difference of a little less than 5 days. Yang and Schank point out that since the expected cycle onset differences they calculated were so close to the differences reported by McClintock, there may be no statistical difference. They concluded that if their analysis is correct, it implies that synchrony did not occur in McClintock's original study.
Hypothesized mechanisms of synchronization
Lunar synchronization
Cutler and Law hypothesized that menstrual synchrony is caused by menstrual cycles synchronizing with lunar phases. However, they do not agree on which phase of the lunar cycle menstrual cycles synchronize with: Cutler hypothesizes synchronization with the full moon and Law with the new moon. Neither offers a hypothesis regarding how lunar phases could cause menstrual synchrony, and neither is consistent with previous studies that found no relationship between menstrual cycles and lunar cycles. More recently, Strassmann investigated menstrual synchrony among Dogon village women. The women were outdoors most nights and did not have electrical lighting. She hypothesized that Dogon women would be ideal for detecting a lunar influence on menstrual cycles, but she found no relationship.
Social affiliation
Jarett hypothesized that women who were more affiliative and concerned with how others viewed them would synchronize more. In her study, however, women with low affiliation scores were associated with greater synchrony. She found that women with a need for social recognition and approval from others were associated with synchrony, which is partially consistent with her hypothesis. Nevertheless, the group of women she studied did not synchronize their menstrual cycles.
Coupled oscillators
When McClintock published her study on menstrual synchrony, she speculated that pheromones may cause menstrual synchrony. In a study on Norway rats, McClintock proposed and tested a coupled-oscillator hypothesis (see the section on rats below). The coupled-oscillator hypothesis proposed that estrous cycles in rats were modulated by two estrous-phase-dependent pheromones that mutually lengthened or shortened the cycles of group members, thereby causing synchrony.
This idea was extended to humans in a study by Stern and McClintock. They investigated whether a coupled-oscillator mechanism first reported for Norway rats (see section below on rats) could also exist in humans. The coupled-oscillator hypothesis in humans proposed that human females release and receive pheromones that regulate the length of their menstrual cycles. This was assumed to occur without consciously detecting any odor. The study was conducted by collecting compounds from axillae (underarms) of donor women at prescribed phases during their menstrual cycles (i.e., the follicular phase, ovulatory phase, and luteal phase), and applying the compounds daily under the noses of recipient women. In order to collect the axillary compounds, the donor women wore cotton pads under their arms for at least 8 hours, and then the pads were cut into smaller squares, frozen to preserve the scent, and readied for distribution to the recipients. The recipients were split into two groups, and were exposed to the compounds via application of the thawed axillary pad under their noses daily.
The researchers concluded that odorless compounds collected from women during the late follicular phase of their menstrual cycles triggered hormonal events that shortened the menstrual cycles of the recipient women, and that odorless compounds collected from women during the time of ovulation triggered a hormonal event in the recipient women that lengthened their menstrual cycles. Stern and McClintock concluded that these findings "proved the existence of human pheromones" as well as illustrated manipulation of the human menstrual cycle.
Researchers pointed out several flaws in their study. Whitten's main critique was their use of only the first cycles as a control for the subsequent conditions. He argued that this eliminated all within-subject variance. Control conditions should have been run between each experimental condition and not just at the beginning of the study. He was also skeptical about whether the coupled-oscillator model from rat research could be applied to humans.
Perception and awareness of synchrony
Arden and Dye investigated women's awareness and perception of menstrual synchrony. Their study consisted of 122 women (students and staff) at Leeds University. A four-page questionnaire was sent to each participant. After providing personal details, they were given a description of menstrual synchrony: "Menstrual synchrony occurs when two or more women, who spend time with each other, have their periods at approximately the same time" (p. 257). After reading the description they were asked whether they were aware of menstrual synchrony and whether they had experienced it. They were then asked details about their experience of synchrony such as how many times they experienced and how long it lasted.
They found that 84% of the women were aware of the phenomenon of menstrual synchrony and 70% reported the personal experience of synchrony. The experience of synchrony occurred most commonly with close friends followed by roommates. There was considerable variation in the reported time spent together before synchrony occurred ranging from zero to four weeks to 12 months or more. The most common time was 12 months or more. The duration of menstrual synchrony also was highly variable with responses ranging from one to two months to 12 months or more. They conclude that "Whether or not future research concludes that menstrual synchrony is an objective phenomenon, subjective experiences, which are apparently widespread, need to be given careful consideration." (p. 265)
Both Wilson and Arden and Dye pointed out that menstrual synchrony can occur by chance when there is menstrual cycle variability. Yang and Schank argued that when there is cycle variability (i.e., either women have irregular cycles, have cycles of different frequencies, or both), most women will have the opportunity to experience synchrony even though it is a result of cycle variability and not a result of a mechanism such as the exchange of pheromones. For example, consider two women A and B. Suppose A has menstrual cycles that are 28 days long and B has cycles that are 30 days long. Suppose further that when A and B become close friends, B has a cycle onset 14 days before A's next onset. The next time both of them have menstrual cycle onsets, B will have a cycle onset 12 days before A. B will continue to gain two days on A until their onsets coincide, then their cycles will begin to diverge again. The cycles of A and B will repeatedly converge and diverge creating the appearance of synchrony during convergence. This is a mathematical property of cycles of different frequencies and not due to the interaction of A and B. If, in addition, the duration of menstruation is considered (typically 3 to 5 days with a range of 2 to 7 days), then the experience of synchrony may last a number of months.
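A short sketch of the worked example above (28- and 30-day cycles starting 14 days apart), showing onsets converging and then diverging without any synchronizing mechanism:

```python
# B gains 2 days per cycle on A, so their onsets drift together and then apart.
a_onset, b_onset = 14.0, 0.0      # B's onset is 14 days before A's next onset
for cycle in range(10):
    gap = abs(a_onset - b_onset)
    print(f"cycle {cycle}: onsets {gap:.0f} days apart")
    a_onset += 28
    b_onset += 30
```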
Strassmann argued that menstrual synchrony defined as menstruation overlap should be quite common. For example, the expected difference by chance between two women with 28-day cycles—approximately the average menstrual cycle length—is 7 days. Considering that the mean duration of menses is 5 days and the range is 2 to 7 days, the probability of menstruation overlap by chance should be high.
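A minimal Monte Carlo sketch of this arithmetic (assuming uniformly random, independent onsets on a 28-day cycle and 5 days of menses; illustrative only):

```python
import random

cycle, menses = 28, 5
trials, overlaps, total_diff = 100_000, 0, 0.0
for _ in range(trials):
    a, b = random.uniform(0, cycle), random.uniform(0, cycle)
    d = abs(a - b)
    d = min(d, cycle - d)              # distance around the 28-day circle
    total_diff += d
    if d < menses:                     # onsets within 5 days => bleeding overlaps
        overlaps += 1
print("mean onset difference:", round(total_diff / trials, 1), "days")   # ~7.0
print("overlap probability per cycle:", round(overlaps / trials, 2))     # ~0.36
```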
Adaptivity of menstrual synchrony
In order to work out why menstrual synchrony might have evolved, it is necessary to investigate why individuals who synchronized their cycles might have had increased survival and reproduction in the evolutionary past. The relevant field in this case is behavioral ecology.
In mammalian mating systems generally, and among primates in particular, female spatio-temporal distribution – how clumped females are in the environment and how much they overlap their fertile periods – affects the ability of any single male to monopolize matings. The basic principle is that the more females are fertile at any one time, the harder it is for any single male to monopolize access to them, impregnating all simultaneously at the expense of rival males. In the case of nonhuman primates, once the number of co-cycling females rises above a critical threshold, a harem-holder may be unable to prevent other males from invading and mating with his females. A dominant male can maintain his monopoly only if his females stagger their fertile periods, so that he can impregnate them one at a time (see figure a, right). Suppose a group of female baboons need between them just one dominant male, desirable in view of his high-quality genes. Then, logically, they should avoid synchronizing their cycles. By the same token, if males during the course of human evolution became valued by females for additional purposes – hunting and bringing home food, for example – then females should resist being controlled by dominant male harem-holders. If males are useful partners to have and keep around, then ideally each female should have at least one for herself. Under those circumstances, according to this argument, the logical strategy would be for females to synchronize as tightly as they can (see figure b, right).
One implication is that there may be a link between the degree of synchrony in a population (whether seasonal, lunar or both), and the degree of reproductive egalitarianism among males. Foley and Fitzgerald objected to the idea that synchrony could have been a factor in human evolution on the grounds that for hominins with inter-birth intervals of 3–5 years, achieving synchrony was unrealistic. Infant mortality would disrupt synchrony since it would be too costly for a mother who had miscarried or lost her baby to wait until everyone else had weaned their babies and resumed cycling before having sex and getting pregnant herself. On the other hand, while conceding that it would be impossible to get clockwork synchrony throughout an inter-birth interval, Power et al. argued that once we take account of birth seasonality – enhancing the effects of menstrual synchrony by clumping fertile cycles within a relatively brief time-window – it emerges that reproductive synchrony can be effective as a female strategy to undermine primate-style sexual monopolization by dominant males. The controversy remains unresolved.
Adopting a compromise position, one school of Darwinian thought sets out from the fact that the mean length of the human menstrual cycle is 29.3 days, which is strikingly close to the 29.5 day periodicity of the moon. It is suggested that the human female may once have had adaptive reasons for evolving such a cycle length – implying some theoretical potential for synchrony to a lunar clock – but did so in an African setting under prehistoric conditions which today no longer exist. Not all archaeologists accept that lunar periodicity was ever relevant to human evolution. On the other hand, according to Curtis Marean (head of excavations at the important Middle Stone Age site of Pinnacle Point, South Africa), anatomically modern humans around 165,000 years ago – when inland regions of the continent were dry, arid and uninhabitable – became restricted to small populations clustered around coastal refugia, reliant on marine resources including shellfish whose safe harvesting at spring low tides presupposed careful tracking of lunar phase.
Olfactory influences on menstrual synchrony
College students' menstrual periods can become synchronized when they live together as roommates, according to research by McClintock (McClintock, 1971). Since then, numerous investigations have supported the existence of menstrual synchronization among women, including close friends, mothers and daughters, and coworkers [reviewed by Weller and Weller in 1993]. In each of these investigations, women who spent the most time together were more likely to exhibit menstrual synchrony. Scents from the axillary region have been shown to be capable of mediating these effects (Preti et al., 1987; Russell et al., 1980; Stern and McClintock, 1998), but their active ingredients have not yet been identified.
Most mammals have two olfactory systems: the main olfactory system, which receives sensory inputs from the olfactory mucosa and connects to the rest of the central nervous system via the main olfactory bulbs, and the accessory system, which receives inputs from the vomeronasal organ and connects to other brain centres via the accessory olfactory bulbs (Scalia and Winans, 1976). In both systems there are connections from the olfactory bulbs to the hypothalamus, the brain region responsible for regulating the release of luteinizing hormone.
In rats, the pheromonal action is mediated by the accessory system [reviewed by Marchewska-Koj, 1984]. In ewes and pigs, however, the pheromonal action appears to be largely mediated via the main olfactory system (Martin et al., 1986; Dorries et al., 1997). If the pheromones that mediate menstrual synchrony act through the main olfactory system, a comparison of synchronized and non-synchronized women's ability to smell a particular pheromone can be used to infer a relationship between the ability to smell a pheromone and a potential role for the pheromone in mediating synchrony. On this basis, researchers have examined how menstrual synchrony relates to the sense of smell for the putative pheromones 3α-androstenol and 5α-androstenone.
Non-human species
Estrous synchrony, a phenomenon similar to menstrual synchrony, has been reported in several other mammalian species.
Menstrual or estrous synchrony has been reported in other species including Norway rats, hamsters, chimpanzees, and golden lion tamarins. In non-human primates, the term may also refer to the degree of overlap of menstrual or estrous cycles, which is the overlap of estrous or menses of two or more females in a group due, for example, to seasonal breeding.
However, as with early human studies on menstrual synchrony, non-human estrous synchrony studies also were criticized for methodological problems.
Subsequent studies failed to find estrous synchrony in rats, hamsters, chimpanzees, and golden lion tamarins.
Rats
McClintock also conducted a 1978 study of estrous synchrony in Norway rats (Rattus norvegicus). She reported that the estrous cycles of female rats living in groups of five were more regular than those of rats housed singly. She also reported that social interaction, and more importantly a shared air supply that allowed for olfactory communication, enhanced the regularity of the rats' cycles and synchronized their estrous phases after two or three cycles. McClintock hypothesized that estrous synchrony was caused by pheromones and that a coupled-oscillator mechanism produced estrous synchrony in rats. This observation of estrous synchrony in Norway rats is not the same as the Whitten effect because it was the result of the continuous interactions of ongoing cycles within a female group, rather than the result of an exposure to a single external stimulus such as male odor, which in the Whitten effect releases all exposed females simultaneously from an acyclic condition.
The coupled-oscillator hypothesis asserted that female rats release two pheromone signals. One signal is released during the follicular phase of the estrous cycle and it shortens estrous cycles. The second signal is released during the ovulatory phase of the estrous cycle and it lengthens estrous cycles. When rats live together or share the same air supply, the pheromones released by each female in a group as a function of the phase of her estrous cycle cause other females in the group to either lengthen or shorten their estrous cycles. This mutual lengthening and shortening of estrous cycles was theorized to produce synchronization of estrous cycles over time.
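A toy sketch of the coupled-oscillator idea (illustrative only; the phase boundaries and coupling strengths are assumptions, and this is not McClintock's model or the later simulation studies):

```python
# Each female's cycle is nudged shorter by group-mates in their "follicular"
# phase and longer by group-mates in their "ovulatory" phase.
import random

N, BASE, DAYS = 5, 4.0, 60                              # five females, 4-day cycle
phase = [random.uniform(0, BASE) for _ in range(N)]     # days since last onset

def coupling_signal(p):
    """Assumed phase boundaries within the 4-day cycle, chosen for illustration."""
    if p < 1.5:
        return -0.05    # "follicular" signal: shortens group-mates' cycles
    if p < 2.5:
        return +0.05    # "ovulatory" signal: lengthens group-mates' cycles
    return 0.0          # "luteal": no effect

for day in range(DAYS):
    new_phase = []
    for i in range(N):
        cycle_len = BASE + sum(coupling_signal(phase[j]) for j in range(N) if j != i)
        new_phase.append((phase[i] + 1.0) % cycle_len)  # advance one day, wrap at onset
    phase = new_phase

print([round(p, 2) for p in phase])   # final phases; clustering would suggest synchrony
```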
McClintock investigated the coupled oscillator hypothesis experimentally. She provided three groups of rats with airborne odors from female rats in three different phases of the estrous cycle: ovulatory phase, follicular phase, and luteal phase. She hypothesized that ovulatory phase odors would lengthen cycles, follicular phase odors would shorten cycles, and luteal phase odors would have no effect. Her results showed a lengthening of estrous cycles for females who received ovulatory odors, shortening of cycles for females who received follicular odors, and no effect for females who received luteal phase odors.
The coupled-oscillator hypothesis was also investigated using a computer simulation model, which was compared with data from McClintock's 1978 study. They found that a coupled-oscillator mechanism could produce estrous synchrony in female rats, but the effect was very weak. The proposed mechanisms of this model were more precisely tested by controlling the airborne odors received by individual females. They found support for the hypothesis that follicular-phase odors shorten the length of estrous cycles, but they did not find that ovulatory-phase odors lengthened cycles as the earlier study by McClintock had found.
Schank conducted another experiment to test whether female rats could synchronize their cycles. He found that female rats did not synchronize their cycles and he argued that in the original McClintock study, the random control group was more asynchronous than expected by chance. When the experimental group was compared to the control group in McClintock's 1978 study, the experimental group was more synchronous than the control group but only because the control group was too asynchronous and not because the experimental group had synchronized their cycles. In a follow-up study, Schank again found no effect of estrous synchrony in rats.
Hamsters
In 1980, estrous synchrony was reported in female hamsters. In their study, hamsters were housed in four colony rooms, with the females in each room kept at a different phase of the estrous cycle; the researchers monitored the females in each room and removed those that did not stay in phase. They placed a wire metal cage (a "condo" consisting of four equally sized rectangular compartments) in the corner of each room. For each room, three animals were randomly selected and placed in three of the condo compartments. A fourth female was randomly selected from another room and placed in the remaining condo compartment. In the control condition, all four females placed in the condos came from the same room. Females were kept in the condos until all four animals exhibited 4 consecutive days of synchrony. They were then removed and a new group was formed until all combinations were tested. They found that the fourth female in the experimental condition always synchronized with the remaining three.
Their study was criticized as methodologically flawed because females were left together until the fourth female synchronized with the others. When female hamsters are subjected to the stress of unfamiliar hamsters, their cycles become irregular. If the cycles of the female from another room become irregular, then the longer she is left with the other three, the more likely it is that she will synchronize with them simply by chance. In a follow-up experimental study motivated by this methodological critique, no evidence for estrous synchrony was found for female hamsters.
Chimpanzees
In 1985, estrous synchrony was reported in female chimpanzees. In her study, 10 female chimpanzees were caged, at different times, in two groups of four and six in the same building. The anogenital swelling of each female was recorded daily. Synchrony was measured by calculating the absolute differences in days between (1) the day of swelling onset and (2) the day of maximum swelling. She reported a statistically significant average difference of 5.7 days for onset of swelling and 8.0 days for maximum swelling. Schank, however, noted that due to females who became pregnant and who stopped cycling, most of the data were based on only four animals. He performed a computer simulation study to calculate the expected swelling onset and maximal swelling onset difference for female chimpanzees with the reported mean estrous cycle lengths of 36.7 (with a standard deviation of 4.3) days. He reported an expected difference of 7.7 days. Thus, a maximum swelling difference of 8.0 days is about what would be expected by chance and given that only four animals contributed data to the study, a 5.7 day onset difference is not significantly less than 7.7 days.
Since then Matsumoto and colleagues have reported estrous asynchrony in groups of free-living chimpanzees in Mahale Mountains National Park, Tanzania. They subsequently investigated whether estrous asynchrony was adaptive for female chimpanzees. They tested three hypotheses about the adaptiveness of estrous asynchrony: (1) females become asynchronous to increase copulation frequency and opportunities for giving birth; (2) paternity confusion to reduce infanticide; and (3) sperm competition. They found no support for hypothesis (1) and partial support for hypotheses (2) and (3).
Golden lion tamarins
In 1987, estrous synchrony was reported in female golden lion tamarins by French and Stribley. Their subjects consisted of five adult female golden lion tamarins that were housed in two groups. Two females were housed with adult males and three females (a mother and two daughters) were housed with an adult male and an infant male. They reported a 2.11 day difference in peak cycle estrogen for the two groups, which was less than the 4.5 day difference they calculated would be expected based on golden lion tamarins having a 19-day estrous cycle. Schank reanalyzed their study with the help of computer simulation and reported that a 2.11 day difference was likely not statistically significant. Monfort and colleagues conducted a study with eight females housed in pairs and found no evidence of synchrony.
Mandrills
Setchell, Kendal, and Tyniec investigated whether menstrual synchrony occurred in a semi-free-ranging population of mandrills over 10 group-years. They reported that the mandrills did not synchronize their menstrual cycles and concluded that cycle synchrony does not occur in non-human primates.
Lions
Estrous synchrony has also been reported in lions in the wild.
See also
Culture and menstruation
References and notes
External links
The story of menstrual synchrony and suppression
"The Claim: Menstrual Cycles Can Synchronize Over Time" – The New York Times, February 5, 2008
Dr. Harriet Hall, Menstrual Synchrony: Do Girls Who Go Together Flow Together? Science-Based Medicine, September 6, 2011
Ethology
Menstrual cycle
Periodic phenomena
Synchronization | Menstrual synchrony | [
"Engineering",
"Biology"
] | 9,177 | [
"Behavior",
"Telecommunications engineering",
"Behavioural sciences",
"Ethology",
"Synchronization"
] |
8,844 | https://en.wikipedia.org/wiki/Digital%20cinema | Digital cinema is the digital technology used within the film industry to distribute or project motion pictures as opposed to the historical use of reels of motion picture film, such as 35 mm film. Whereas film reels have to be shipped to movie theaters, a digital movie can be distributed to cinemas in a number of ways: over the Internet or dedicated satellite links, or by sending hard drives or optical discs such as Blu-ray discs.
Digital movies are projected using a digital video projector instead of a film projector, are shot using digital movie cameras or in animation transferred from a file and are edited using a non-linear editing system (NLE). The NLE is often a video editing application installed in one or more computers that may be networked to access the original footage from a remote server, share or gain access to computing resources for rendering the final video, and allow several editors to work on the same timeline or project.
Alternatively a digital movie could be a film reel that has been digitized using a motion picture film scanner and then restored, or, a digital movie could be recorded using a film recorder onto film stock for projection using a traditional film projector.
Digital cinema is distinct from high-definition television and does not necessarily use traditional television or other traditional high-definition video standards, aspect ratios, or frame rates. In digital cinema, resolutions are represented by the horizontal pixel count, usually 2K (2048×1080 or 2.2 megapixels) or 4K (4096×2160 or 8.8 megapixels). The 2K and 4K resolutions used in digital cinema projection are often referred to as DCI 2K and DCI 4K. DCI stands for Digital Cinema Initiatives.
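A quick arithmetic check of the pixel counts quoted above:

```python
# DCI 2K and 4K container resolutions and their approximate megapixel counts.
dci_2k = 2048 * 1080
dci_4k = 4096 * 2160
print(dci_2k, "pixels, about", round(dci_2k / 1_000_000, 1), "megapixels")   # ~2.2
print(dci_4k, "pixels, about", round(dci_4k / 1_000_000, 1), "megapixels")   # ~8.8
```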
As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection. Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, providing moviegoers with more immersive experiences.
History
The transition from film to digital video was preceded by cinema's transition from analog to digital audio, with the release of the Dolby Digital (AC-3) audio coding standard in 1991. Its main basis is the modified discrete cosine transform (MDCT), a lossy audio compression algorithm. It is a modification of the discrete cosine transform (DCT) algorithm, which was first proposed by Nasir Ahmed in 1972 and was originally intended for image compression. The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, and then Dolby Laboratories adapted the MDCT algorithm along with perceptual coding principles to develop the AC-3 audio format for cinema needs. Cinema in the 1990s typically combined analog photochemical images with digital audio.
Digital media playback of high-resolution 2K files has at least a 20-year history. Early video data storage units (RAIDs) fed custom frame buffer systems with large memories. In early digital video units, the content was usually restricted to several minutes of material. Transfer of content between remote locations was slow and had limited capacity. It was not until the late 1990s that feature-length films could be sent over the "wire" (Internet or dedicated fiber links). On October 23, 1998, Digital light processing (DLP) projector technology was publicly demonstrated with the release of The Last Broadcast, the first feature-length movie, shot, edited and distributed digitally. In conjunction with Texas Instruments, the movie was publicly demonstrated in five theaters across the United States (Philadelphia, Portland (Oregon), Minneapolis, Providence, and Orlando).
Foundations
In the United States, on June 18, 1999, Texas Instruments' DLP Cinema projector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm's Star Wars Episode I: The Phantom Menace. In Europe, on February 2, 2000, Texas Instruments' DLP Cinema projector technology was publicly demonstrated, by Philippe Binant, on one screen in Paris for the release of Toy Story 2.
From 1997 to 2000, the JPEG 2000 image compression standard was developed by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the original 1992 JPEG standard, which is a DCT-based lossy compression format for static digital images, JPEG 2000 is a discrete wavelet transform (DWT) based compression standard that could be adapted for motion imaging video compression with the Motion JPEG 2000 extension. JPEG 2000 technology was later selected as the video coding standard for digital cinema in 2004.
Initiatives
On January 19, 2000, the Society of Motion Picture and Television Engineers, in the United States, initiated the first standards group dedicated towards developing digital cinema. By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America. Digital Cinema Initiatives (DCI) was formed in March 2002 as a joint project of many motion picture studios (Disney, Fox, MGM, Paramount, Sony Pictures, Universal and Warner Bros.) to develop a system specification for digital cinema. The same month it was reported that the number of cinemas equipped with digital projectors had increased to about 50 in the US and 30 more in the rest of the world.
In April 2004, in cooperation with the American Society of Cinematographers, DCI created standard evaluation material (the ASC/DCI StEM material) for testing of 2K and 4K playback and compression technologies. DCI selected JPEG 2000 as the basis for the compression in the system the same year. Initial tests with JPEG 2000 produced bit rates of around 75–125 Mbit/s for 2K resolution and 100–200 Mbit/s for 4K resolution.
Worldwide deployment
In China, in June 2005, an e-cinema system called "dMs" was established and was used in over 15,000 screens spread across China's 30 provinces. dMs estimated that the system would expand to 40,000 screens in 2009. In 2005 the UK Film Council Digital Screen Network launched in the UK by Arts Alliance Media creating a chain of 250 2K digital cinema systems. The roll-out was completed in 2006. This was the first mass roll-out in Europe. AccessIT/Christie Digital also started a roll-out in the United States and Canada. By mid 2006, about 400 theaters were equipped with 2K digital projectors with the number increasing every month. In August 2006, the Malayalam digital movie Moonnamathoral, produced by Benzy Martin, was distributed via satellite to cinemas, thus becoming the first Indian digital cinema. This was done by Emil and Eric Digital Films, a company based at Thrissur using the end-to-end digital cinema system developed by Singapore-based DG2L Technologies.
In January 2007, Guru became the first Indian film mastered in the DCI-compliant JPEG 2000 Interop format and also the first Indian film to be previewed digitally, internationally, at the Elgin Winter Garden in Toronto. This film was digitally mastered at Real Image Media Technologies in India. In 2007, the UK became home to Europe's first DCI-compliant fully digital multiplex cinemas; Odeon Hatfield and Odeon Surrey Quays (in London), with a total of 18 digital screens, were launched on 9 February 2007. By March 2007, with the release of Disney's Meet the Robinsons, about 600 screens had been equipped with digital projectors. In June 2007, Arts Alliance Media announced the first European commercial digital cinema Virtual Print Fee (VPF) agreements (with 20th Century Fox and Universal Pictures). In March 2009 AMC Theatres announced that it closed a $315 million deal with Sony to replace all of its movie projectors with 4K digital projectors starting in the second quarter of 2009; it was anticipated that this replacement would be finished by 2012.
As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection. In January 2011, the total number of digital screens worldwide was 36,242, up from 16,339 at the end of 2009, a growth rate of 121.8 percent during the year. There were 10,083 d-screens in Europe as a whole (28.2 percent of the global figure), 16,522 in the United States and Canada (46.2 percent of the global figure) and 7,703 in Asia (21.6 percent of the global figure). Progress was slower in some territories, particularly Latin America and Africa. As of 31 March 2015, 38,719 screens (out of a total of 39,789 screens) in the United States had been converted to digital, 3,007 screens in Canada had been converted, and 93,147 screens internationally had been converted. By the end of 2017, virtually all of the world's cinema screens were digital (98%). Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, offering moviegoers more immersive experiences.
Although virtually all of the world's movie theaters have converted their screens to digital cinema, some major motion pictures were still being shot on film as of 2019. For example, Quentin Tarantino released Once Upon a Time in Hollywood (2019) in 70 mm and 35 mm in selected theaters across the United States and Canada.
Elements
In addition to the equipment already found in a film-based movie theatre (e.g., a sound reinforcement system, screen, etc.), a DCI-compliant digital cinema requires a DCI-compliant digital projector and a powerful computer known as a server. Movies are supplied to the theatre as a set of digital files called a Digital Cinema Package (DCP). For a typical feature film, these files will be anywhere between 90 GB and 300 GB of data (roughly two to six times the information of a Blu-ray disc) and may arrive as a physical delivery on a conventional computer hard drive or via satellite or fibre-optic broadband Internet. As of 2013, physical deliveries of hard drives were most common in the industry. Promotional trailers arrive on a separate hard drive and range between 200 MB and 400 MB in size.
Regardless of how the DCP arrives, it first needs to be copied onto the internal hard drives of the server, either via an eSATA connection, or via a closed network, a process known as "ingesting." DCPs can be, and in the case of feature films almost always are, encrypted, to prevent illegal copying and piracy. The necessary decryption keys are supplied separately, usually as email attachments or via download, and then "ingested" via USB. Keys are time-limited and will expire after the end of the period for which the title has been booked. They are also locked to the hardware (server and projector) that is to screen the film, so if the theatre wishes to move the title to another screen or extend the run, a new key must be obtained from the distributor. Several versions of the same feature can be sent together. The original version (OV) is used as the basis of all the other playback options. Version files (VF) may have a different sound format (e.g. 7.1 as opposed to 5.1 surround sound) or subtitles. 2D and 3D versions are often distributed on the same hard drive.
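As a rough illustration of the key constraints described above, the following Python sketch models the checks a server would apply before using a key; the Kdm structure and its field names are hypothetical simplifications for illustration, not the actual SMPTE KDM schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Kdm:
    """Hypothetical, simplified key-delivery-message record (not the real SMPTE schema)."""
    content_title: str
    server_serial: str       # server the key is locked to
    projector_serial: str    # projector the key is locked to
    not_valid_before: datetime
    not_valid_after: datetime

def kdm_is_usable(kdm: Kdm, server_serial: str, projector_serial: str,
                  now: datetime) -> bool:
    """A KDM only unlocks playback on the exact hardware it was issued for,
    and only inside its booking window."""
    if (server_serial, projector_serial) != (kdm.server_serial, kdm.projector_serial):
        return False  # moving the title to another screen needs a new KDM
    return kdm.not_valid_before <= now <= kdm.not_valid_after

# Example: a key issued for one screen will not open the DCP on another.
kdm = Kdm("EXAMPLE-FTR", "SRV-001", "PRJ-001",
          datetime(2024, 1, 5), datetime(2024, 1, 19))
print(kdm_is_usable(kdm, "SRV-001", "PRJ-001", datetime(2024, 1, 10)))  # True
print(kdm_is_usable(kdm, "SRV-002", "PRJ-001", datetime(2024, 1, 10)))  # False
```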
The playback of the content is controlled by the server using a "playlist". As the name implies, this is a list of all the content that is to be played as part of the performance. The playlist will be created by a member of the theatre's staff using proprietary software that runs on the server. In addition to listing the content to be played the playlist also includes automation cues that allow the playlist to control the projector, the sound system, auditorium lighting, tab curtains and screen masking (if present), etc. The playlist can be started manually, by clicking the "play" button on the server's monitor screen, or automatically at pre-set times.
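To make the playlist idea concrete, here is a minimal Python sketch of a show playlist holding content items and automation cues; the cue names and data layout are illustrative assumptions rather than any server vendor's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Cue:
    offset_seconds: float   # time into the show when the cue fires
    action: str             # e.g. "lights_half", "masking_scope"

@dataclass
class ShowPlaylist:
    title: str
    content: list = field(default_factory=list)   # ordered compositions to play
    cues: list = field(default_factory=list)      # automation events

    def add_content(self, name: str):
        self.content.append(name)

    def add_cue(self, offset_seconds: float, action: str):
        self.cues.append(Cue(offset_seconds, action))

# A typical show: trailers, then the feature, with lighting/masking automation.
show = ShowPlaylist("Evening show, screen 3")
show.add_content("TRAILER-A_TLR")
show.add_content("EXAMPLE-FTR_2D_51")
show.add_cue(0.0, "lights_half")
show.add_cue(600.0, "lights_off")       # assume 10 minutes of trailers
show.add_cue(600.0, "masking_scope")    # widen the masking for the feature
print([c.action for c in show.cues])
```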
Technology and standards
Digital Cinema Initiatives
Digital Cinema Initiatives (DCI), a joint venture of the six major studios, published the first version (V1.0) of a system specification for digital cinema in July 2005. The main declared objectives of the specification were to define a digital cinema system that would "present a theatrical experience that is better than what one could achieve now with a traditional 35mm Answer Print", to provide global standards for interoperability such that any DCI-compliant content could play on any DCI-compliant hardware anywhere in the world and to provide robust protection for the intellectual property of the content providers.
The DCI specification calls for picture encoding using the ISO/IEC 15444-1 "JPEG2000" (.j2c) standard and use of the CIE XYZ color space at 12 bits per component encoded with a 2.6 gamma applied at projection. Two levels of resolution for both content and projectors are supported: 2K (2048×1080) or 2.2 MP at 24 or 48 frames per second, and 4K (4096×2160) or 8.85 MP at 24 frames per second. The specification ensures that 2K content can play on 4K projectors and vice versa. Smaller resolutions in one direction are also supported (the image gets automatically centered). Later versions of the standard added additional playback rates (like 25 fps in SMPTE mode). For the sound component of the content the specification provides for up to 16 channels of uncompressed audio using the "Broadcast Wave" (.wav) format at 24 bits and 48 kHz or 96 kHz sampling.
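The 12-bit, gamma-2.6 encoding mentioned above can be illustrated with a short Python sketch. The normalization is simplified here (code values map to a 0-1 range rather than to absolute luminance), so it only shows the shape of the transfer function.

```python
def dci_code_to_linear(code_value: int, bit_depth: int = 12, gamma: float = 2.6) -> float:
    """Map a 12-bit X'Y'Z' code value to a normalized linear value (0..1).
    Simplified: the real DCI encoding also ties code values to absolute luminance."""
    max_code = (1 << bit_depth) - 1          # 4095 for 12-bit
    return (code_value / max_code) ** gamma

def linear_to_dci_code(linear: float, bit_depth: int = 12, gamma: float = 2.6) -> int:
    max_code = (1 << bit_depth) - 1
    return round(max_code * linear ** (1.0 / gamma))

# Mid-grey in linear light uses far more than half of the code range,
# which is the point of the gamma encoding.
print(linear_to_dci_code(0.18))   # roughly 2117 of 4095
print(dci_code_to_linear(2048))   # roughly 0.165
```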
Playback is controlled by an XML-format Composition Playlist, with the picture and audio essence wrapped into MXF-compliant files at a maximum data rate of 250 Mbit/s. Details about encryption, key management, and logging are all discussed in the specification, as are the minimum specifications for the projectors employed, including the color gamut, the contrast ratio and the brightness of the image. While much of the specification codifies work that had already been ongoing in the Society of Motion Picture and Television Engineers (SMPTE), the specification is important in establishing a content owner framework for the distribution and security of first-release motion-picture content.
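As a worked example of what the 250 Mbit/s ceiling implies, the following Python snippet computes the average compressed-frame budget at 24 frames per second and the approximate size of a two-hour feature at the full rate (a back-of-the-envelope calculation that ignores per-frame overheads and audio).

```python
MAX_RATE_BITS_PER_S = 250_000_000   # DCI maximum picture data rate
FRAME_RATE = 24                     # frames per second

bits_per_frame = MAX_RATE_BITS_PER_S / FRAME_RATE
megabytes_per_frame = bits_per_frame / 8 / 1_000_000
print(f"{megabytes_per_frame:.2f} MB per frame on average")    # ~1.30 MB

# A two-hour feature at the full rate occupies roughly:
feature_gb = MAX_RATE_BITS_PER_S * 2 * 3600 / 8 / 1_000_000_000
print(f"{feature_gb:.0f} GB for 2 hours at the maximum rate")  # ~225 GB
```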
National Association of Theatre Owners
In addition to DCI's work, the National Association of Theatre Owners (NATO) released its Digital Cinema System Requirements. The document addresses the requirements of digital cinema systems from the operational needs of the exhibitor, focusing on areas not addressed by DCI, including access for the visually impaired and hearing impaired, workflow inside the cinema, and equipment interoperability. In particular, NATO's document details requirements for the Theatre Management System (TMS), the governing software for digital cinema systems within a theatre complex, and provides direction for the development of security key management systems. As with DCI's document, NATO's document is also important to the SMPTE standards effort.
E-Cinema
The Society of Motion Picture and Television Engineers (SMPTE) began work on standards for digital cinema in 2000. It was clear by that point in time that HDTV did not provide a sufficient technological basis for the foundation of digital cinema playback. In Europe, India and Japan however, there is still a significant presence of HDTV for theatrical presentations. Agreements within the ISO standards body have led to these non-compliant systems being referred to as Electronic Cinema Systems (E-Cinema).
Projectors
Only three manufacturers make DCI-approved digital cinema projectors; these are Barco, Christie and Sharp/NEC. Except for Sony, who used to use their own SXRD technology, all use the Digital light processing (DLP) technology developed by Texas Instruments (TI). D-Cinema projectors are similar in principle to digital projectors used in industry, education, and domestic home cinemas, but differ in two important respects. First, projectors must conform to the strict performance requirements of the DCI specification. Second, projectors must incorporate anti-piracy devices intended to enforce copyright compliance such as licensing limits. For these reasons all projectors intended to be sold to theaters for screening current release movies must be approved by the DCI before being put on sale. They now pass through a process called CTP (compliance test plan). Because feature films in digital form are encrypted and the decryption keys (KDMs) are locked to the serial number of the server used (linking to both the projector serial number and server is planned in the future), a system will allow playback of a protected feature only with the required KDM.
DLP Cinema
Three manufacturers have licensed the DLP Cinema technology developed by Texas Instruments (TI): Christie Digital Systems, Barco, and NEC. While NEC is a relative newcomer to Digital Cinema, Christie is the main player in the U.S. and Barco takes the lead in Europe and Asia. Initially DCI-compliant DLP projectors were available in 2K only, but from early 2012, when TI's 4K DLP chip went into full production, DLP projectors have been available in both 2K and 4K versions. Manufacturers of DLP-based cinema projectors can now also offer 4K upgrades to some of the more recent 2K models. Early DLP Cinema projectors, which were deployed primarily in the United States, used limited 1280×1024 resolution or the equivalent of 1.3 MP (megapixels). Digital Projection Incorporated (DPI) designed and sold a few DLP Cinema units (is8-2K) when TI's 2K technology debuted but then abandoned the D-Cinema market while continuing to offer DLP-based projectors for non-cinema purposes. Although based on the same 2K TI "light engine" as those of the major players they are so rare as to be virtually unknown in the industry. They are still widely used for pre-show advertising but not usually for feature presentations.
TI's technology is based on the use of digital micromirror devices (DMDs). These are MEMS devices that are manufactured from silicon using similar technology to that of computer chips. The surface of these devices is covered by a very large number of microscopic mirrors, one for each pixel, so a 2K device has about 2.2 million mirrors and a 4K device about 8.8 million. Each mirror vibrates several thousand times a second between two positions: In one, light from the projector's lamp is reflected towards the screen, in the other away from it. The proportion of the time the mirror is in each position varies according to the required brightness of each pixel. Three DMD devices are used, one for each of the primary colors. Light from the lamp, usually a Xenon arc lamp similar to those used in film projectors with a power between 1 kW and 7 kW, is split by colored filters into red, green and blue beams which are directed at the appropriate DMD. The 'forward' reflected beam from the three DMDs is then re-combined and focused by the lens onto the cinema screen. Later projectors may use lasers instead of xenon lamps.
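The brightness-by-duty-cycle principle can be sketched in a few lines of Python; the flip rate and frame timing used here are illustrative assumptions, not TI's actual bit-plane modulation scheme.

```python
def mirror_on_time(pixel_brightness: float, frame_time_s: float = 1 / 24,
                   flips_per_second: int = 10_000) -> tuple:
    """Approximate how long a micromirror spends in the 'towards the screen'
    position for a pixel of given relative brightness (0.0 to 1.0).
    Simplified pulse-width model; real DLP modulation is bit-plane based."""
    on_fraction = max(0.0, min(1.0, pixel_brightness))
    on_time = on_fraction * frame_time_s
    flips_in_frame = int(flips_per_second * frame_time_s)
    return on_time, flips_in_frame

on_time, flips = mirror_on_time(0.25)          # a quarter-brightness pixel
print(f"{on_time * 1000:.1f} ms towards the screen out of {1000 / 24:.1f} ms")
print(f"split across roughly {flips} mirror transitions per frame")
```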
Sony SXRD
Alone amongst the manufacturers of DCI-compliant cinema projectors Sony decided to develop its own technology rather than use TI's DLP technology. SXRD (Silicon X-tal (Crystal) Reflective Display) projectors have only ever been manufactured in 4K form and, until the launch of the 4K DLP chip by TI, Sony SXRD projectors were the only 4K DCI-compatible projectors on the market. Unlike DLP projectors, however, SXRD projectors do not present the left and right eye images of stereoscopic movies sequentially, instead they use half the available area on the SXRD chip for each eye image. Thus during stereoscopic presentations the SXRD projector functions as a sub 2K projector, the same for HFR 3D Content.
However, Sony decided in late April, 2020 that they would no longer manufacture digital cinema projectors.
Stereo 3D images
In late 2005, interest in digital 3D stereoscopic projection led to a new willingness on the part of theaters to co-operate in installing 2K stereo installations to show Disney's Chicken Little in 3D. Six more digital 3D movies were released in 2006 and 2007 (including Beowulf, Monster House and Meet the Robinsons). The technology pairs a single digital projector with either a polarizing filter (for use with polarized glasses and silver screens), a filter wheel or an emitter for LCD glasses. RealD uses a "ZScreen" for polarisation and MasterImage uses a filter wheel that changes the polarity of the projector's light output several times per second to quickly alternate the left- and right-eye views. Another system that uses a filter wheel is Dolby 3D. The wheel changes the wavelengths of the colours being displayed, and tinted glasses filter these changes so the incorrect wavelength cannot enter the wrong eye. XpanD makes use of an external emitter that sends a signal to the 3D glasses to block out the wrong image from the wrong eye.
Laser
RGB laser projection produces the purest BT.2020 colors and the brightest images.
LED screens
In Asia, on July 13, 2017, an LED screen for digital cinema developed by Samsung Electronics was publicly demonstrated on one screen at Lotte Cinema World Tower in Seoul. The first installation in Europe is in Arena Sihlcity Cinema in Zürich. These displays do not use a projector; instead they use a LED video wall, and can offer higher contrast ratios, higher resolutions, and overall improvements in image quality. Sony already sells MicroLED displays as a replacement for conventional cinema screens.
Effect on distribution
Digital distribution of movies has the potential to save money for film distributors. Making thousands of prints for a wide-release movie can be expensive. In contrast, at the maximum 250 megabit-per-second data rate (as defined by DCI for digital cinema), a feature-length movie can be stored on an off-the-shelf 300 GB hard drive for $50 and a broad release of 4000 'digital prints' might cost $200,000. In addition hard drives can be returned to distributors for reuse. With several hundred movies distributed every year, the industry saves billions of dollars. The digital-cinema roll-out was stalled by the slow pace at which exhibitors acquired digital projectors, since the savings would be seen not by themselves but by distribution companies. The Virtual Print Fee model was created to address this by passing some of the saving on to the cinemas. As a consequence of the rapid conversion to digital projection, the number of theatrical releases exhibited on film is dwindling. As of 4 May 2014, 37,711 screens (out of a total of 40,048 screens) in the United States have been converted to digital, 3,013 screens in Canada have been converted, and 79,043 screens internationally have been converted.
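The savings claimed above follow from simple arithmetic, sketched here in Python using the article's own figures (a print cost of $1,200-2,000 per copy versus roughly $50 per reusable hard drive for a 4,000-copy wide release).

```python
PRINT_COST_LOW, PRINT_COST_HIGH = 1200, 2000   # 35 mm print cost per copy (USD)
DRIVE_COST = 50                                # off-the-shelf 300 GB hard drive (USD)
COPIES = 4000                                  # a typical wide release

film_cost_range = (COPIES * PRINT_COST_LOW, COPIES * PRINT_COST_HIGH)
digital_cost = COPIES * DRIVE_COST
print(f"Film prints: ${film_cost_range[0]:,} to ${film_cost_range[1]:,}")
print(f"Hard drives: ${digital_cost:,}")       # $200,000, before drive reuse
print(f"Saving per wide release: at least ${film_cost_range[0] - digital_cost:,}")
```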
Telecommunication
The first digital cinema transmission by satellite in Europe of a feature film was realized and demonstrated on October 29, 2001, by Bernard Pauchon, Alain Lorentz, Raymond Melwig and Philippe Binant.
Live broadcasting to cinemas or event cinema
Digital cinemas can deliver live broadcasts from performances or events. This began initially with live broadcasts from the New York Metropolitan Opera delivering regular live broadcasts into cinemas and has been widely imitated ever since. Leading territories providing the content are the UK, the US, France and Germany. The Royal Opera House, Sydney Opera House, English National Opera and others have found new and returning audiences captivated by the detail offered by a live digital broadcast featuring handheld and cameras on cranes positioned throughout the venue to capture the emotion that might be missed in a live venue situation. In addition these providers all offer additional value during the intervals e.g. interviews with choreographers, cast members, a backstage tour which would not be on offer at the live event itself. Other live events in this field include live theatre from NT Live, Branagh Live, Royal Shakespeare Company, Shakespeare's Globe, the Royal Ballet, Mariinsky Ballet, the Bolshoi Ballet and the Berlin Philharmoniker.
In the last ten years this initial offering of the arts has also expanded to include live and recorded music events such as Take That Live, One Direction Live, Andre Rieu, live musicals such as the recent Miss Saigon and a record-breaking Billy Elliot Live In Cinemas. Live sport, documentary with a live question and answer element such as the recent Oasis documentary, lectures, faith broadcasts, stand-up comedy, museum and gallery exhibitions, TV specials such as the record-breaking Doctor Who fiftieth anniversary special The Day Of The Doctor, have all contributed to creating a valuable revenue stream for cinemas large and small all over the world. Subsequently, live broadcasting, formerly known as Alternative Content, has become known as Event Cinema and a trade association now exists to that end. Ten years on the sector has become a sizeable revenue stream in its own right, earning a loyal following amongst fans of the arts, and the content limited only by the imagination of the producers it would seem. Theatre, ballet, sport, exhibitions, TV specials and documentaries are now established forms of Event Cinema. Worldwide estimations put the likely value of the Event Cinema industry at $1bn by 2019.
Event Cinema currently accounts for on average between 1% and 3% of overall box office for cinemas worldwide, but anecdotally it has been reported that some cinemas attribute as much as 25%, 48% and even 51% (the Rio Bio cinema in Stockholm) of their overall box office to it. It is envisaged that Event Cinema will ultimately account for around 5% of the overall box office globally. Event Cinema saw six worldwide records set and broken from 2013 to 2015, with notable successes including Doctor Who ($10.2m in three days at the box office – the event was also broadcast on terrestrial TV simultaneously), Pompeii Live by the British Museum, Billy Elliot, Andre Rieu, One Direction, and Richard III by the Royal Shakespeare Company.
Event Cinema is defined more by the frequency of events rather than by the content itself. Event Cinema events typically appear in cinemas during traditionally quieter times in the cinema week such as the Monday-Thursday daytime/evening slot and are characterised by the One Night Only release, followed by one or possibly more 'Encore' releases a few days or weeks later if the event is successful and sold out. On occasion more successful events have returned to cinemas some months or even years later in the case of NT Live where the audience loyalty and company branding is so strong the content owner can be assured of a good showing at the box office.
Pros and cons
Pros
An advantage of the digital formation of sets and locations, especially in a time of growing film series and sequels, is that virtual sets, once computer generated and stored, can easily be revived for future films.
Considering digital film images are documented as data files on hard disk or flash memory, varying systems of edits can be executed with the alteration of a few settings on the editing console with the structure being composed virtually in the computer's memory. A broad choice of effects can be sampled simply and rapidly, without the physical constraints posed by traditional cut-and-stick editing. Digital cinema allows national cinemas to construct films specific to their cultures in ways that the more constricting configurations and economics of customary film-making prevented. Low-cost cameras and computer-based editing software have gradually enabled films to be produced for minimal cost. The ability of digital cameras to allow film-makers to shoot limitless footage without wasting costly film has transformed film production in some Third World countries. From consumers' perspective digital prints do not deteriorate with the number of showings. Unlike film, there is no projection mechanism or manual handling to add scratches or other physically generated artefacts. Provincial cinemas that would have received old prints can give consumers the same cinematographic experience (all other things being equal) as those attending the premiere.
The use of NLEs in movies allows for edits and cuts to be made non-destructively, without actually discarding any footage.
Cons
A number of high-profile film directors, including Christopher Nolan, Paul Thomas Anderson, David O. Russell and Quentin Tarantino, have publicly criticized digital cinema and advocated the use of film and film prints. Most famously, Tarantino has suggested he may retire because, though he can still shoot on film, the rapid conversion to digital means he cannot project from 35 mm prints in the majority of American cinemas. Steven Spielberg has stated that though digital projection produces a much better image than film if originally shot in digital, it is "inferior" when it has been converted to digital. He attempted at one stage to release Indiana Jones and the Kingdom of the Crystal Skull solely on film. Paul Thomas Anderson was able to create 70 mm film prints for his film The Master.
Film critic Roger Ebert criticized the use of DCPs after a screening of Brian De Palma's film Passion at the New York Film Festival was cancelled as a result of a lockup caused by the coding system.
The theoretical resolution of 35 mm film is greater than that of 2K digital cinema. 2K resolution (2048×1080) is also only slightly greater than that of consumer based 1080p HD (1920x1080). However, since digital post-production techniques became the standard in the early 2000s, the majority of movies, whether photographed digitally or on 35 mm film, have been mastered and edited at the 2K resolution. Moreover, 4K post production was becoming more common as of 2013. As projectors are replaced with 4K models the difference in resolution between digital and 35 mm film is somewhat reduced. Digital cinema servers utilize far greater bandwidth over domestic "HD", allowing for a difference in quality (e.g., Blu-ray colour encoding 4:2:0 48 Mbit/s MAX datarate, DCI D-Cinema 4:4:4 250 Mbit/s 2D/3D, 500 Mbit/s HFR3D). Each frame has greater detail.
Owing to the smaller dynamic range of digital cameras, correcting poor digital exposures is more difficult than correcting poor film exposures during post-production. A partial solution to this problem is to add complex video-assist technology during the shooting process. However, such technologies are typically available only to high-budget production companies. Digital cinema's efficiency at storing images also has a downside. The speed and ease of modern digital editing processes threaten to give editors and their directors, if not an embarrassment of choice then at least a confusion of options, potentially making the editing process, with this 'try it and see' philosophy, lengthier rather than shorter. Because the equipment needed to produce digital feature films can be obtained more easily than film projectors, producers could inundate the market with cheap productions and potentially dominate the efforts of serious directors. Because of the speed at which they are filmed, these productions sometimes lack essential narrative structure.
Costs
Pros
The electronic transferring of digital film, from central servers to servers in cinema projection booths, is an inexpensive process of supplying copies of newest releases to the vast number of cinema screens demanded by prevailing saturation-release strategies. There is a significant saving on print expenses in such cases: at a minimum cost per print of $1200–2000, the cost of film print production is between $5–8 million per movie. With several thousand releases a year, the probable savings offered by digital distribution and projection are over $1 billion. The cost savings and ease, together with the ability to store film rather than having to send a print on to the next cinema, allows a larger scope of films to be screened and watched by the public; minority and small-budget films that would not otherwise get such a chance.
Cons
The initial costs for converting theaters to digital are high: $100,000 per screen, on average. Theaters have been reluctant to switch without a cost-sharing arrangement with film distributors. A solution is a temporary Virtual Print Fee system, where the distributor (who saves the money of producing and transporting a film print) pays a fee per copy to help finance the digital systems of the theaters. A theater can purchase a film projector for as little as $10,000 (though projectors intended for commercial cinemas cost two to three times that; to which must be added the cost of a long-play system, which also costs around $10,000, making a total of around $30,000–$40,000) from which they could expect an average life of 30–40 years. By contrast, a digital cinema playback system—including server, media block, and projector—can cost two to three times as much, and would have a greater risk of component failure and obsolescence. (In Britain the cost of an entry-level projector including server, installation, etc., would be £31,000 [$50,000].)
Archiving digital masters has also turned out to be both tricky and costly. In a 2007 study, the Academy of Motion Picture Arts and Sciences found the cost of long-term storage of 4K digital masters to be "enormously higher—up to 11 times that of the cost of storing film masters." This is because of the limited or uncertain lifespan of digital storage: No current digital medium—be it optical disc, magnetic hard drive or digital tape—can reliably store a motion picture for as long as a hundred years or more (a timeframe for film properly stored). The short history of digital storage media has been one of innovation and, therefore, of obsolescence. Archived digital content must be periodically removed from obsolete physical media to up-to-date media. The expense of digital image capture is not necessarily less than the capture of images onto film; indeed, it is sometimes greater.
See also
Cinematography
JPEG 2000
3D film
4K resolution
Digital cinematography
Digital projector
Digital intermediate
Digital Cinema Initiatives
Display resolution
Digital 3D
Color suite
List of film-related topics (extensive alphabetical listing)
References
Bibliography
Charles S. Swartz (editor), Understanding digital cinema. A professional handbook, Elsevier / Focal Press, Burlington, Oxford, 2005, xvi + 327 p.
Philippe Binant (propos recueillis par Dominique Maillet), "Kodak. Au cœur de la projection numérique", Actions, n° 29, Division Cinéma et Télévision Kodak, Paris, 2007, p. 12–13.
Filmography
Christopher Kenneally, Side by Side, 2012. IMDb
External links
Side by Side : Q & A with Keanu Reeves, Le Royal Monceau, Paris, April 11–12, 2016.
Film and video technology
Digital media
Cinematography
Filmmaking | Digital cinema | [
"Technology"
] | 7,189 | [
"Multimedia",
"Digital media"
] |
8,847 | https://en.wikipedia.org/wiki/Commutator%20subgroup | In mathematics, more specifically in abstract algebra, the commutator subgroup or derived subgroup of a group is the subgroup generated by all the commutators of the group.
The commutator subgroup is important because it is the smallest normal subgroup such that the quotient group of the original group by this subgroup is abelian. In other words, is abelian if and only if contains the commutator subgroup of . So in some sense it provides a measure of how far the group is from being abelian; the larger the commutator subgroup is, the "less abelian" the group is.
Commutators
For elements g and h of a group G, the commutator of g and h is [g, h] = g^(-1)h^(-1)gh. The commutator [g, h] is equal to the identity element e if and only if gh = hg, that is, if and only if g and h commute. In general, gh = hg[g, h].
However, the notation is somewhat arbitrary and there is a non-equivalent variant definition for the commutator that has the inverses on the right hand side of the equation: [g, h] = ghg^(-1)h^(-1), in which case gh ≠ hg[g, h] but instead gh = [g, h]hg.
An element of G of the form for some g and h is called a commutator. The identity element e = [e,e] is always a commutator, and it is the only commutator if and only if G is abelian.
Here are some simple but useful commutator identities, true for any elements s, g, h of a group G:
[g, h]^(-1) = [h, g],
[g, h]^s = [g^s, h^s], where g^s = s^(-1)gs (or, respectively, g^s = sgs^(-1)) is the conjugate of g by s,
for any homomorphism f: G → H, f([g, h]) = [f(g), f(h)].
The first and second identities imply that the set of commutators in G is closed under inversion and conjugation. If in the third identity we take H = G, we get that the set of commutators is stable under any endomorphism of G. This is in fact a generalization of the second identity, since we can take f to be the conjugation automorphism on G, x ↦ x^s, to get the second identity.
However, the product of two or more commutators need not be a commutator. A generic example is [a,b][c,d] in the free group on a,b,c,d. It is known that the least order of a finite group for which there exists two commutators whose product is not a commutator is 96; in fact there are two nonisomorphic groups of order 96 with this property.
Definition
This motivates the definition of the commutator subgroup (also called the derived subgroup, and denoted G′ or [G, G]) of G: it is the subgroup generated by all the commutators.
It follows from this definition that any element of [G, G] is of the form
[g1, h1][g2, h2] ⋯ [gn, hn]
for some natural number n, where the gi and hi are elements of G. Moreover, since ([g1, h1] ⋯ [gn, hn])^s = [g1^s, h1^s] ⋯ [gn^s, hn^s], the commutator subgroup is normal in G. For any homomorphism f: G → H,
f([g1, h1] ⋯ [gn, hn]) = [f(g1), f(h1)] ⋯ [f(gn), f(hn)],
so that f([G, G]) ≤ [H, H].
This shows that the commutator subgroup can be viewed as a functor on the category of groups, some implications of which are explored below. Moreover, taking G = H it shows that the commutator subgroup is stable under every endomorphism of G: that is, [G,G] is a fully characteristic subgroup of G, a property considerably stronger than normality.
The commutator subgroup can also be defined as the set of elements g of the group that have an expression as a product g = g1 g2 ... gk that can be rearranged to give the identity.
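As a concrete illustration of this definition, the following Python sketch computes the commutator subgroup of the symmetric group S3 by brute force, representing permutations as tuples; the helper names are ad hoc, and the output recovers the alternating group A3, consistent with the example for Sn given further below.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def commutator(g, h):
    """[g, h] = g^-1 h^-1 g h, following the convention used in the article."""
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

G = list(permutations(range(3)))                 # the symmetric group S3
commutators = {commutator(g, h) for g in G for h in G}

# Close the set of commutators under the group operation to get [G, G].
subgroup = set(commutators)
changed = True
while changed:
    changed = False
    for a in list(subgroup):
        for b in list(subgroup):
            c = compose(a, b)
            if c not in subgroup:
                subgroup.add(c)
                changed = True

print(sorted(subgroup))   # the three even permutations: A3
```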
Derived series
This construction can be iterated: define G^(0) := G and G^(n) := [G^(n−1), G^(n−1)] for n ≥ 1.
The groups G^(2), G^(3), … are called the second derived subgroup, third derived subgroup, and so forth, and the descending normal series
⋯ ⊴ G^(2) ⊴ G^(1) ⊴ G^(0) = G
is called the derived series. This should not be confused with the lower central series, whose terms are G_n := [G_(n−1), G].
For a finite group, the derived series terminates in a perfect group, which may or may not be trivial. For an infinite group, the derived series need not terminate at a finite stage, and one can continue it to infinite ordinal numbers via transfinite recursion, thereby obtaining the transfinite derived series, which eventually terminates at the perfect core of the group.
Abelianization
Given a group G, a quotient group G/N is abelian if and only if [G, G] ⊆ N.
The quotient G/[G, G] is an abelian group called the abelianization of G or G made abelian. It is usually denoted by G^ab or G_ab.
There is a useful categorical interpretation of the map φ: G → G^ab. Namely φ is universal for homomorphisms from G to an abelian group H: for any abelian group H and homomorphism of groups f: G → H there exists a unique homomorphism F: G^ab → H such that f = F ∘ φ. As usual for objects defined by universal mapping properties, this shows the uniqueness of the abelianization G^ab up to canonical isomorphism, whereas the explicit construction G → G/[G, G] shows existence.
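Stated compactly in LaTeX notation, the universal property above reads as follows (a standard formulation; π denotes the quotient map and A an arbitrary abelian group):

```latex
\pi : G \twoheadrightarrow G^{\mathrm{ab}} = G/[G,G], \qquad
\forall f : G \to A \ \text{with } A \text{ abelian}, \quad
\exists!\, F : G^{\mathrm{ab}} \to A \ \text{such that} \ f = F \circ \pi .
```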
The abelianization functor is the left adjoint of the inclusion functor from the category of abelian groups to the category of groups. The existence of the abelianization functor Grp → Ab makes the category Ab a reflective subcategory of the category of groups, defined as a full subcategory whose inclusion functor has a left adjoint.
Another important interpretation of G^ab is as H1(G, Z), the first homology group of G with integral coefficients.
Classes of groups
A group is an abelian group if and only if the derived group is trivial: [G,G] = {e}. Equivalently, if and only if the group equals its abelianization. See above for the definition of a group's abelianization.
A group is a perfect group if and only if the derived group equals the group itself: [G,G] = G. Equivalently, if and only if the abelianization of the group is trivial. This is "opposite" to abelian.
A group with G^(n) = {e} for some n in N is called a solvable group; this is weaker than abelian, which is the case n = 1.
A group with G^(n) ≠ {e} for all n in N is called a non-solvable group.
A group with G^(α) = {e} for some ordinal number α, possibly infinite, is called a hypoabelian group; this is weaker than solvable, which is the case in which α is finite (a natural number).
Perfect group
Whenever a group G has derived subgroup equal to itself, G = [G, G], it is called a perfect group. This includes non-abelian simple groups and the special linear groups SL_n(k) for a fixed field k.
Examples
The commutator subgroup of any abelian group is trivial.
The commutator subgroup of the general linear group GL_n(k) over a field or a division ring k equals the special linear group SL_n(k) provided that n ≠ 2 or k is not the field with two elements.
The commutator subgroup of the alternating group A4 is the Klein four group.
The commutator subgroup of the symmetric group Sn is the alternating group An.
The commutator subgroup of the quaternion group Q = {1, −1, i, −i, j, −j, k, −k} is [Q,Q] = {1, −1}.
Map from Out
Since the derived subgroup is characteristic, any automorphism of G induces an automorphism of the abelianization. Since the abelianization is abelian, inner automorphisms act trivially, hence this yields a map Out(G) → Aut(G^ab).
See also
Solvable group
Nilpotent group
The abelianization H/H' of a subgroup H < G of finite index (G:H) is the target of the Artin transfer T(G,H).
Notes
References
External links
Group theory
Functional subgroups
Articles containing proofs
Subgroup properties | Commutator subgroup | [
"Mathematics"
] | 1,544 | [
"Group theory",
"Articles containing proofs",
"Fields of abstract algebra"
] |
8,859 | https://en.wikipedia.org/wiki/Dandy | A dandy is a man who places particular importance upon physical appearance and personal grooming, refined language and leisurely hobbies. A dandy could be a self-made man both in person and persona, who emulated the aristocratic style of life regardless of his middle-class origin, birth, and background, especially during the late 18th and early 19th centuries in Britain.
Early manifestations of dandyism were Le petit-maître (the Little Master) and the musk-wearing Muscadin ruffians of the middle-class Thermidorean reaction (1794–1795). Modern dandyism, however, emerged in stratified societies of Europe during the 1790s revolution periods, especially in London and Paris. Within social settings, the dandy cultivated a persona characterized by extreme posed cynicism, or "intellectual dandyism" as defined by Victorian novelist George Meredith; whereas Thomas Carlyle, in his novel Sartor Resartus (1831), dismissed the dandy as "a clothes-wearing man"; Honoré de Balzac's La fille aux yeux d'or (1835) chronicled the idle life of Henri de Marsay, a model French dandy whose downfall stemmed from his obsessive Romanticism in the pursuit of love, which led him to yield to sexual passion and murderous jealousy.
In the metaphysical phase of dandyism, the poet Charles Baudelaire portrayed the dandy as an existential reproach to the conformity of contemporary middle-class men, cultivating the idea of beauty and aesthetics akin to a living religion. The dandy lifestyle, in certain respects, "comes close to spirituality and to stoicism" as an approach to living daily life, while its followers "have no other status, but that of cultivating the idea of beauty in their own persons, of satisfying their passions, of feeling and thinking … [because] Dandyism is a form of Romanticism. Contrary to what many thoughtless people seem to believe, dandyism is not even an excessive delight in clothes and material elegance. For the perfect dandy, these [material] things are no more than the symbol of the aristocratic superiority of mind."
The linkage of clothing and political protest was a particularly English characteristic in 18th-century Britain; the sociologic connotation was that dandyism embodied a reactionary form of protest against social equality and the leveling effects of egalitarian principles. Thus, the dandy represented a nostalgic yearning for feudal values and the ideals of the perfect gentleman as well as the autonomous aristocratreferring to men of self-made person and persona. The social existence of the dandy, paradoxically, required the gaze of spectators, an audience, and readers who consumed their "successfully marketed lives" in the public sphere. Figures such as playwright Oscar Wilde and poet Lord Byron personified the dual social roles of the dandy: the dandy-as-writer, and the dandy-as-persona; each role a source of gossip and scandal, confining each man to the realm of entertaining high society.
Etymology
The earliest record of the word dandy dates back to the late 1700s, in a Scottish song. Since the late 18th century, the word dandy has been rumored to be an abbreviated usage of the 17th-century British jack-a-dandy, used to describe a conceited man. In British North America, prior to the American Revolution (1765–1791), a British version of the song "Yankee Doodle" in its first verse: "Yankee Doodle went to town, / Upon a little pony; / He stuck a feather in his hat, / And called it Macaroni … ." and chorus: "Yankee Doodle, keep it up, / Yankee Doodle Dandy, / Mind the music and the step, / And with the girls be handy … ." derided the rustic manner and perceived poverty of colonial Americans. The lyrics, particularly the reference to "stuck a feather in his hat" and "called it Macaroni," suggested that donning fashionable attire (a fine horse and gold-braided clothing) was what set the dandy apart from colonial society. In other cultural contexts, an Anglo–Scottish border ballad dated around 1780 used dandy in its Scottish connotation and not the derisive British usage popular in colonial North America. Since the 18th century, contemporary British usage has drawn a distinction between a dandy and a fop, with the former characterized by a more restrained and refined wardrobe compared to the flamboyant and ostentatious attire of the latter.
British dandyism
Beau Brummell (George Bryan Brummell, 1778–1840) was the model British dandy since his days as an undergraduate at Oriel College, Oxford, and later as an associate of the Prince Regent (George IV)all despite not being an aristocrat. Always bathed and shaved, always powdered and perfumed, always groomed and immaculately dressed in a dark-blue coat of plain style. Sartorially, the look of Brummell's tailoring was perfectly fitted, clean, and displayed much linen; an elaborately knotted cravat completed the aesthetics of Brummell's suite of clothes. During the mid-1790s, the handsome Beau Brummell became a personable man-about-town in Regency London's high society, who was famous for being famous and celebrated "based on nothing at all" but personal charm and social connections.
During the national politics of the Regency era (1795–1837), by the time that Prime Minister William Pitt the Younger had introduced the Duty on Hair Powder Act 1795 in order to fund the Britain's war efforts against France and discouraged the use of foodstuffs as hair powder, the dandy Brummell already had abandoned wearing a powdered wig and wore his hair cut à la Brutus, in the Roman fashion. Moreover, Brummell also led the sartorial transition from breeches to tailored pantaloons, which eventually evolved into modern trousers.
Upon coming of age in 1799, Brummell received a paternal inheritance of thirty thousand pounds sterling, which he squandered on a high life of gambling, lavish tailors, and visits to brothels. Eventually declaring bankruptcy in 1816, Brummell fled England to France, where he lived in destitution, pursued by creditors; in 1840, at the age of sixty-one years, Beau Brummell died in a lunatic asylum in Caen, a tragic end to his once-glamorous legacy. Nonetheless, despite his ignominious end, Brummell's influence on European fashion endured, with men across the continent seeking to emulate his dandyism. Among them was the poetical persona of Lord Byron (George Gordon Byron, 1788–1824), who wore a poet's shirt featuring a lace-collar, a lace-placket, and lace-cuffs in a portrait of himself in Albanian national costume in 1813; Count d'Orsay (Alfred Guillaume Gabriel Grimod d'Orsay, 1801–1852), himself a prominent figure in upper-class social circles and an acquaintance of Lord Byron, likewise embodied the spirit of dandyism within elite British society.
In chapter "The Dandiacal Body" of the novel Sartor Resartus (Carlyle, 1831), Thomas Carlyle described the dandy's symbolic social function as a man and a persona of refined masculinity:A Dandy is a Clothes-wearing Man, a Man whose trade, office, and existence consists in the wearing of Clothes. Every faculty of his soul, spirit, purse, and person is heroically consecrated to this one object, the wearing of Clothes wisely and well: so that as others dress to live, he lives to dress. . . . And now, for all this perennial Martyrdom, and Poesy, and even Prophecy, what is it that the Dandy asks in return? Solely, we may say, that you would recognise his existence; would admit him to be a living object; or even failing this, a visual object, or thing that will reflect rays of light.
In the mid-19th century, amidst the restricted palette of muted colors for men's clothing, the English dandy dedicated meticulous attention to the finer details of sartorial refinement (design, cut, and style), including: "The quality of the fine woollen cloth, the slope of a pocket flap or coat revers, exactly the right colour for the gloves, the correct amount of shine on boots and shoes, and so on. It was an image of a well-dressed man who, while taking infinite pains about his appearance, affected indifference to it. This refined dandyism continued to be regarded as an essential strand of male Englishness."
French dandyism
In monarchic France, dandyism was ideologically bound to the egalitarian politics of the French Revolution (1789–1799); thus the dandyism of the jeunesse dorée (the Gilded Youth) was their political statement of aristocratic style in effort to differentiate and distinguish themselves from the working-class sans-culottes, from the poor men who owned no stylish knee-breeches made of silk.
In the late 18th century, British and French men abided Beau Brummell's dictates about fashion and etiquette, especially the French bohemians who closely imitated Brummell's habits of dress, manner, and style. In that time of political progress, French dandies were celebrated as social revolutionaries who were self-created men possessed of a consciously designed personality, men whose way of being broke with inflexible tradition that limited the social progress of greater French society; thus, with their elaborate dress and decadent styles of life, the French dandies conveyed their moral superiority to and political contempt for the conformist bourgeoisie.
Regarding the social function of the dandy in a stratified society, like the British writer Carlyle, in Sartor Resartus, the French poet Baudelaire said that dandies have "no profession other than elegance … no other [social] status, but that of cultivating the idea of beauty in their own persons. … The dandy must aspire to be sublime without interruption; he must live and sleep before a mirror." Likewise, French intellectuals investigated the sociology of the dandies (flâneurs) who strolled Parisian boulevards; in the essay "On Dandyism and George Brummell" (1845) Jules Amédée Barbey d'Aurevilly analysed the personal and social career of Beau Brummell as a man-about-town who arbitrated what was fashionable and what was unfashionable in polite society.
In the late 19th century, dandified bohemianism was characteristic of the artists who were the Symbolist movement in French poetry and literature, wherein the "Truth of Art" included the artist to the work of art.
Black dandyism
Black dandies have existed since the beginnings of dandyism and have been formative for its aesthetics in many ways. Maria Weilandt in "The Black Dandy and Neo-Victorianism: Re-fashioning a Stereotype" (2021) critiques the history of Western European dandyism as primarily centered around white individuals and the homogenization whiteness as the figurehead of the movement. It is important to acknowledge Black dandyism as distinct and a highly political effort at challenging stereotypes of race, class, gender, and nationality.
British-Nigerian artist Yinka Shonibare (b. 1962) employs neo-Victorian dandy stereotypes to illustrate the Black man's experiences in Western European societies. Shonibare's photographic suite Dorian Gray (2001) refers to Oscar Wilde's literary creation of the same name, The Picture of Dorian Gray (1890), but with the substitution of a disfigured Black protagonist. As the series progresses, readers soon notice that there exists no real picture of "Dorian Gray" but only illustrations of other white protagonists. It is through this theme of isolation and Otherness that the Black Dorian Gray becomes Shonibare's comment on the absence of Black representation in Victorian Britain.
Shonibare's artwork Diary of a Victorian Dandy (1998) reimagines one day in the life of a dandy in Victorian England, through which the author challenges conventional Victorian depictions of race, class, and British identity by depicting the Victorian dandy as Black, surrounded by white servants. By reversing concepts of the Victorian master-servant relationship, by rewriting stereotypings of the Victorian dandy to include Black masculinities, and by positioning his dandy figure as a noble man who is the leader of his social circle, Shonibare uses neo-Victorianism as a genre to interrogate and counter normative historical narratives and the power hierarchies they produce(d). Black dandyism serves as a catalyst for contemporary Black identities to explore self-fashioning and expressions of neo-Victorian Blacks: The Black dandy's look is highly tailored – the antithesis of baggy wear. [...] Black dandyism rejects this. In fact, the Black dandy is often making a concerted effort to juxtapose himself against racist stereotyping seen in mass media and popular culture [...] For dandies, dress becomes a strategy for negotiating the complexities of Black male identity [...].
"Dandy Jim of Carolina" is a minstrel song that originated in the United States during the 19th century. It tells the story of a character named Dandy Jim, who is depicted as a stylish and flamboyant individual from the state of Carolina. The song often highlights Dandy Jim's extravagant clothing, his charm, and his prowess with the ladies. While the song does not explicitly address race, Dandy Jim's stylish and flamboyant persona aligns with aspects of Black dandyism, a cultural phenomenon characterized by sharp dressing, self-assurance, and individuality within Black communities.
According to the standards of the day, it was ludicrous and hilarious to see a person of perceived lower social standing donning fashionable attire and "putting on airs." In most of racist 19th-century America, a well-dressed African American was an odd thing, and naturally someone of that ilk would be seen as acting out of place. The representation of Dandy Jim, while potentially rooted in caricature or exaggeration, nonetheless contributes to the broader cultural landscape surrounding Black dandyism and its portrayal in American folk music.
Dandy sociology
Regarding the existence and the political and cultural functions of the dandy in a society, in the essay L'Homme révolté (1951), Albert Camus said that:
The dandy creates his own unity by aesthetic means. But it is an aesthetic of negation. To live and die before a mirror: that, according to Baudelaire, was the dandy's slogan. It is indeed a coherent slogan. The dandy is, by occupation, always in opposition [to society]. He can only exist by defiance … The dandy, therefore, is always compelled to astonish. Singularity is his vocation, excess his way to perfection. Perpetually incomplete, always on the fringe of things, he compels others to create him, while denying their values. He plays at life because he is unable to live [life].
Further addressing that vein of male narcissism, in the book Simulacra and Simulation (1981), Jean Baudrillard said that dandyism is "an aesthetic form of nihilism" that is centred upon the Self as the centre of the world.
Elizabeth Amann's Dandyism in the Age of Revolution: The Art of the Cut (2015) notes that "Dandyism has always been a cross-cultural phenomenon". Male self-fashioning carries socio-political implications beyond its superficial and opulent exterior. Through the analysis of clothing, aesthetics, and societal norms, Amann examines how dandyism emerged as a means of asserting identity, power, and autonomy in the midst of revolutionary change. Male self-fashioning, in particular, was wielded as an expression of resistance through the denial of itself, owing to the influence of the French Revolution on British discussions of masculinity. British prime minister William Pitt proposed an unusual measure: the Duty on Hair Powder Act 1795, which aimed to levy a tax on affluent consumers of hair powder to raise money for the war. Critics of the act expressed fear regarding the association between wearing hair powder and "a tendency to produce a famine," warning that those who did so would "run the further risque of being knocked on the head". In August 1795, journalists and news reports complained that "the papers had misled the poor and encouraged them to consider powdered heads their enemies," which was "calculated to excite riots." With the new legislation, the powdered look became a marker of class in English society and a much more exclusive one, polarizing those who used the products and those who did not. Those who feared making class boundaries too visible considered the distinctions to be deep and significant and therefore wished to protect them by making them less evident, by allowing a self-fashioning that created an illusion of mobility in a highly stratified society.
In the early discussion of the tax, the London Packet posed the question, “Is an actor, who in his own private character uniformly appears in a scratch wig, or wears his hair without powder, liable to pay the tax imposed by the new act, for any of the parts which he is necessitated to dress with powder on the stage?” This seemingly trivial inquiry unveils a profound aspect of the legislation: By paying the tax, citizens were essentially purchasing the right to craft a persona, akin to an actor who took on a stage role. Exaggerated self-fashioning was no longer an oppositional strategy and instead became the prevailing norm. To protest the tax and the war against France was to embrace a new aesthetic of invisibility, wherein individuals favored natural attire and simplicity in order to blend into the social fabric rather than stand out.
Dandyism and capitalism
Dandyism is intricately linked with modern capitalism, embodying both a product of and a critique against it. According to Elisa Glick, the dandy's attention to their appearance and their engagement "consumption and display of luxury goods" can be read as an expression of capitalist commodification. However, interestingly, this meticulous attention to personal appearance can also be seen as an assertion of individuality and thus a revolt against capitalism's emphasis on mass production and utilitarianism.
Underscoring this somewhat paradoxical nature, philosopher Thorsten Botz-Bornstein describes the dandy as "an anarchist who does not claim anarchy." He argues that this simultaneous abiding by, and disregard of, capitalist social pressures speaks to what he calls a "playful attitude towards life's conventions." Not only does the dandy play with traditional conceptions of gender, but also with the socioeconomic norms of the society they inhabit; he argues that the importance dandyism places on uniquely personal style directly opposes capitalism's call for conformity.
Thomas Spence Smith highlights the function of style in maintaining social boundaries and individual status, particularly as traditional social structures have decrystallized in modernity. He notes that "style becomes a crucial element in maintaining social boundaries and individual status." This process "creates a market for new social models, with the dandy as a prime example of how individuals navigate and resist the pressures of a capitalist society." Here, another paradoxical relation between dandyism and capitalism emerges: dandyism's emphasis on individuality and on forming an idiomatic sense of style can be read as a sort of marketing or commodification of the self.
Quaintrelle
The counterpart to the dandy is the quaintrelle, a woman whose life is dedicated to the passionate expression of personal charm and style, to enjoying leisurely pastimes, and the dedicated cultivation of the pleasures of life.
In the 12th century, cointerrels (male) and cointrelles (female) emerged, based upon coint, a word applied to things skillfully made, later indicating a person of beautiful dress and refined speech. By the 18th century, coint became quaint, indicating elegant speech and beauty. Middle English dictionaries note quaintrelle as a beautifully dressed woman (or overly dressed), but do not include the favorable personality elements of grace and charm. The notion of a quaintrelle sharing the major philosophical components of refinement with dandies is a modern development that returns quaintrelles to their historic roots.
Female dandies did overlap with male dandies for a brief period during the early 19th century, when dandy had a derisive definition of "fop" or "over-the-top fellow"; the female equivalents were dandyess or dandizette. Charles Dickens, in All the Year Round (1869), comments, "The dandies and dandizettes of 1819–20 must have been a strange race. "Dandizette" was a term applied to the feminine devotees to dress, and their absurdities were fully equal to those of the dandies." In 1819, Charms of Dandyism, in three volumes, was published by Olivia Moreland, Chief of the Female Dandies; most likely one of many pseudonyms used by Thomas Ashe. Olivia Moreland may have existed, as Ashe did write several novels about living persons. Throughout the novel, dandyism is associated with "living in style". Later, as the word dandy evolved to denote refinement, it became applied solely to men. Popular Culture and Performance in the Victorian City (2003) notes this evolution in the latter 19th century: " … or dandizette, although the term was increasingly reserved for men."
See also
Adonis
Dandy and Dedicated Follower of Fashion, songs by the Kinks that parody modern (1960s) dandyism.
Dude
Effeminacy
Flâneur
Fop
Gentleman
Hipster (contemporary subculture)
Incroyables and Merveilleuses
La Sape
Macaroni (fashion)
Metrosexual
Narcissus (mythology)
Personal branding
Preppy
Risqué
Swenkas
Zoot suit (a style of clothing)
References
Further reading
Barbey d'Aurevilly, Jules. Of Dandyism and of George Brummell. Translated by Douglas Ainslie. New York: PAJ Publications, 1988.
Botz-Bornstein, Thorsten. 'Rulefollowing in Dandyism: Style as an Overcoming of Rule and Structure' in The Modern Language Review 90, April 1995, pp. 285–295.
Carassus, Émile. Le Mythe du Dandy 1971.
Carlyle, Thomas. Sartor Resartus. In A Carlyle Reader: Selections from the Writings of Thomas Carlyle. Edited by G.B. Tennyson. London: Cambridge University Press, 1984.
Jesse, Captain William. The Life of Beau Brummell. London: The Navarre Society Limited, 1927.
Lytton, Edward Bulwer, Lord Lytton. Pelham or the Adventures of a Gentleman. Edited by Jerome McGann. Lincoln: University of Nebraska Press, 1972.
Moers, Ellen. The Dandy: Brummell to Beerbohm. London: Secker and Warburg, 1960.
Murray, Venetia. An Elegant Madness: High Society in Regency England. New York: Viking, 1998.
Nicolay, Claire. Origins and Reception of Regency Dandyism: Brummell to Baudelaire. PhD diss., Loyola U of Chicago, 1998.
Prevost, John C., Le Dandysme en France (1817–1839) (Geneva and Paris) 1957.
Nigel Rodgers The Dandy: Peacock or Enigma? (London) 2012
Stanton, Domna. The Aristocrat as Art 1980.
Wharton, Grace and Philip. Wits and Beaux of Society. New York: Harper and Brothers, 1861.
External links
La Loge d'Apollon
"Bohemianism and Counter-Culture": The Dandy
Il Dandy (in Italian)
Dandyism.net
"The Dandy"
Walter Thornbury, Dandysme.eu "London Parks: IV. Hyde Park" , Belgravia: A London Magazine 1868
1790s fashion
19th-century fashion
Androgyny
History of clothing (Western fashion)
Human appearance
Middle class culture
Narcissism
Terms for men
Upper class culture
Art Nouveau
Male beauty
Lifestyles
1790s neologisms | Dandy | [
"Biology"
] | 5,162 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
8,864 | https://en.wikipedia.org/wiki/Delaunay%20triangulation | In computational geometry, a Delaunay triangulation or Delone triangulation of a set of points in the plane subdivides their convex hull into triangles whose circumcircles do not contain any of the points. This maximizes the size of the smallest angle in any of the triangles, and tends to avoid sliver triangles.
The triangulation is named after Boris Delaunay for his work on it from 1934.
If the points all lie on a straight line, the notion of triangulation becomes degenerate and there is no Delaunay triangulation. For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors.
By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique.
Relationship with the Voronoi diagram
The Delaunay triangulation of a discrete point set P in general position corresponds to the dual graph of the Voronoi diagram for P.
The circumcenters of Delaunay triangles are the vertices of the Voronoi diagram.
In the 2D case, the Voronoi vertices are connected via edges that can be derived from adjacency relationships of the Delaunay triangles: if two triangles share an edge in the Delaunay triangulation, their circumcenters are to be connected with an edge in the Voronoi tessellation.
Special cases where this relationship does not hold, or is ambiguous, include cases like:
Three or more collinear points, where the circumcircles are of infinite radii.
Four or more points on a perfect circle, where the triangulation is ambiguous and all circumcenters are trivially identical. In this case the Voronoi diagram contains vertices of degree four or greater and its dual graph contains polygonal faces with four or more sides. The various triangulations of these faces complete the various possible Delaunay triangulations.
Edges of the Voronoi diagram going to infinity are not defined by this relation in case of a finite set P. If the Delaunay triangulation is calculated using the Bowyer–Watson algorithm then the circumcenters of triangles having a common vertex with the "super" triangle should be ignored. Edges going to infinity start from a circumcenter and they are perpendicular to the common edge between the kept and ignored triangle.
d-dimensional Delaunay
For a set P of points in the d-dimensional Euclidean space, a Delaunay triangulation is a triangulation DT(P) such that no point in P is inside the circum-hypersphere of any d-simplex in DT(P). It is known that there exists a unique Delaunay triangulation for P if P is a set of points in general position; that is, the affine hull of P is d-dimensional and no set of d + 2 points in P lie on the boundary of a ball whose interior does not intersect P.
The problem of finding the Delaunay triangulation of a set of points in d-dimensional Euclidean space can be converted to the problem of finding the convex hull of a set of points in (d + 1)-dimensional space. This may be done by giving each point p an extra coordinate equal to |p|², thus turning it into a hyper-paraboloid (this is termed "lifting"); taking the bottom side of the convex hull (as the top end-cap faces upwards away from the origin, and must be discarded); and mapping back to d-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull are simplices. Nonsimplicial facets only occur when d + 2 of the original points lie on the same d-hypersphere, i.e., the points are not in general position.
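As an illustration of the lifting construction just described, the sketch below computes a two-dimensional Delaunay triangulation by lifting each point onto the paraboloid z = x² + y² and keeping the downward-facing facets of the resulting 3D convex hull. It is a minimal sketch, assuming SciPy is available; the function name is illustrative and degenerate inputs (e.g., collinear points) are not handled.

```python
# Sketch: 2D Delaunay triangulation via the paraboloid lifting described above.
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_via_lifting(points_2d):
    """Return index triples of the Delaunay triangles of a 2D point set."""
    pts = np.asarray(points_2d, dtype=float)
    # Lift each point (x, y) to (x, y, x**2 + y**2) on the paraboloid.
    lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])
    hull = ConvexHull(lifted)
    # Keep only the lower hull: facets whose outward normal has a negative z component.
    return [tuple(facet) for facet, eq in zip(hull.simplices, hull.equations) if eq[2] < 0]

# Example: delaunay_via_lifting([(0, 0), (1, 0), (0, 1), (1, 1), (0.4, 0.6)])
```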
Properties
Let n be the number of points and d the number of dimensions.
The union of all simplices in the triangulation is the convex hull of the points.
The Delaunay triangulation contains O(n^⌈d/2⌉) simplices.
In the plane (d = 2), if there are b vertices on the convex hull, then any triangulation of the points has at most 2n − 2 − b triangles, plus one exterior face (see Euler characteristic).
If points are distributed according to a Poisson process in the plane with constant intensity, then each vertex has on average six surrounding triangles. More generally for the same process in d dimensions the average number of neighbors is a constant depending only on d.
In the plane, the Delaunay triangulation maximizes the minimum angle. Compared to any other triangulation of the points, the smallest angle in the Delaunay triangulation is at least as large as the smallest angle in any other. However, the Delaunay triangulation does not necessarily minimize the maximum angle. The Delaunay triangulation also does not necessarily minimize the length of the edges.
A circle circumscribing any Delaunay triangle does not contain any other input points in its interior.
If a circle passing through two of the input points doesn't contain any other input points in its interior, then the segment connecting the two points is an edge of a Delaunay triangulation of the given points.
Each triangle of the Delaunay triangulation of a set of points in d-dimensional space corresponds to a facet of the convex hull of the projection of the points onto a (d + 1)-dimensional paraboloid, and vice versa.
The closest neighbor to any point is on an edge in the Delaunay triangulation since the nearest neighbor graph is a subgraph of the Delaunay triangulation.
The Delaunay triangulation is a geometric spanner: in the plane (d = 2), the shortest path between two vertices, along Delaunay edges, is known to be no longer than 1.998 times the Euclidean distance between them.
Visual Delaunay definition: Flipping
From the above properties an important feature arises: Looking at two triangles ABD and BCD with the common edge BD (see figures), if the sum of the angles α + γ ≤ 180° (where α is the angle at A and γ the angle at C, the angles opposite the shared edge), the triangles meet the Delaunay condition.
This is an important property because it allows the use of a flipping technique. If two triangles do not meet the Delaunay condition, switching the common edge BD for the common edge AC produces two triangles that do meet the Delaunay condition:
This operation is called a flip, and can be generalised to three and higher dimensions.
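A minimal computational sketch of this local criterion follows, assuming only the standard library; the point names match the description above and are otherwise illustrative.

```python
# Sketch of the flip criterion above: triangles ABD and BCD sharing edge BD are
# locally Delaunay when the angles at A and C (opposite the shared edge) sum to
# at most 180 degrees.
import math

def angle_at(p, q, r):
    """Angle at vertex p in triangle pqr, in degrees."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def locally_delaunay(a, b, c, d):
    """True if triangles abd and cbd, sharing edge bd, satisfy the Delaunay condition."""
    return angle_at(a, b, d) + angle_at(c, b, d) <= 180.0

# If the check fails, the flip replaces edge bd with edge ac,
# giving triangles abc and acd, which then satisfy the condition.
```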
Algorithms
Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if point D lies in the circumcircle of A, B, C is to evaluate the determinant:

$$\begin{vmatrix}
A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\
B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\
C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2
\end{vmatrix}$$

When A, B and C are sorted in a counterclockwise order, this determinant is positive only if D lies inside the circumcircle.
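The sketch below transcribes this determinant test directly, assuming nothing beyond the standard library; in production code, exact or adaptive-precision predicates would be preferred, because floating-point round-off can flip the sign for nearly cocircular points.

```python
# Sketch of the in-circle test above: for A, B, C in counterclockwise order, the
# determinant is positive when D lies strictly inside their circumcircle.
def in_circumcircle(a, b, c, d):
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```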
Flip algorithms
As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can take Ω(n²) edge flips. While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it is conditioned to the connectedness of the underlying flip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.
Incremental
The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex v is added, we split in three the triangle that contains v, then we apply the flip algorithm. Done naïvely, this will take O(n) time: we search through all the triangles to find the one that contains v, then we potentially flip away every triangle. Then the overall runtime is O(n²).
If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, only O(1) triangles – although sometimes it will flip many more. This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that contains v, we start at a root triangle, and follow the pointer that points to a triangle that contains v, until we find a triangle that has not yet been replaced. On average, this will also take O(log n) time. Over all vertices, then, this takes O(n log n) time. While the technique extends to higher dimension (as proved by Edelsbrunner and Shah), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small.
The Bowyer–Watson algorithm provides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex.
Unfortunately the flipping-based algorithms are generally hard to parallelize, since adding a certain point (e.g. the center point of a wagon wheel) can lead to up to O(n) consecutive flips. Blelloch et al. proposed another version of the incremental algorithm based on rip-and-tent, which is practical and highly parallelized with polylogarithmic span.
Divide and conquer
A divide and conquer algorithm for triangulations in two dimensions was developed by Lee and Schachter and improved by Guibas and Stolfi and later by Dwyer. In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in time O(n), so the total running time is O(n log n).
For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O(n log log n) while still maintaining worst-case performance.
A divide and conquer paradigm to performing a triangulation in dimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in Ed" by P. Cignoni, C. Montani, R. Scopigno.
The divide and conquer algorithm has been shown to be the fastest DT generation technique sequentially.
Sweephull
Sweephull is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull, and a flipping algorithm. The sweep-hull is created sequentially by iterating a radially-sorted set of 2D points, and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees no point would fall within the triangle. But, radially sorting should minimize flipping by being highly Delaunay to start. This is then paired with a final iterative triangle flipping step.
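In practice, one often relies on library implementations rather than writing these algorithms from scratch. The brief sketch below is hedged: it assumes SciPy is installed (whose Delaunay class wraps the Qhull library) and simply shows how a triangulation and a point-location query might be obtained.

```python
# Brief usage sketch (a library call, not an implementation of the algorithms above).
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.3, 0.7]])
tri = Delaunay(points)

print(tri.simplices)                   # index triples of the Delaunay triangles
print(tri.find_simplex([[0.5, 0.5]]))  # index of the containing triangle, -1 if outside
```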
Applications
The Euclidean minimum spanning tree of a set of points is a subset of the Delaunay triangulation of the same points, and this can be exploited to compute it efficiently.
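A hedged sketch of how this property might be exploited follows; it assumes SciPy is available and the helper name is illustrative. Only the O(n) Delaunay edges are weighted, rather than all O(n²) point pairs, before running a standard minimum-spanning-tree routine.

```python
# Sketch: Euclidean minimum spanning tree restricted to Delaunay edges.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def euclidean_mst_edges(points):
    pts = np.asarray(points, dtype=float)
    weights = lil_matrix((len(pts), len(pts)))
    # Every edge of the Euclidean MST is a Delaunay edge, so weight only those.
    for simplex in Delaunay(pts).simplices:
        for k in range(3):
            i, j = sorted((simplex[k], simplex[(k + 1) % 3]))
            weights[i, j] = np.linalg.norm(pts[i] - pts[j])
    mst = minimum_spanning_tree(weights.tocsr())
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))
```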
For modelling terrain or other objects given a point cloud, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). See triangulated irregular network.
Delaunay triangulations can be used to determine the density or intensity of points samplings by means of the Delaunay tessellation field estimator (DTFE).
Delaunay triangulations are often used to generate meshes for space-discretised solvers such as the finite element method and the finite volume method of physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarse simplicial complex; for the mesh to be numerically stable, it must be refined, for instance by using Ruppert's algorithm.
The increasing popularity of finite element method and boundary element method techniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes to minimize element distortion. The stretched grid method allows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution.
Constrained Delaunay triangulation has found applications in path planning in automated driving and topographic surveying.
See also
Beta skeleton
Centroidal Voronoi tessellation
Convex hull algorithms
Delaunay refinement
Delone set – also known as a Delaunay set
Disordered hyperuniformity
Farthest-first traversal – incremental Voronoi insertion
Gabriel graph
Giant's Causeway
Gradient pattern analysis
Hamming bound – sphere-packing bound
Linde–Buzo–Gray algorithm
Lloyd's algorithm – Voronoi iteration
Meyer set
Pisot–Vijayaraghavan number
Pitteway triangulation
Plesiohedron
Quasicrystal
Quasitriangulation
Salem number
Steiner point (triangle)
Triangle mesh
Urquhart graph
Voronoi diagram
References
External links
Delaunay triangulation in CGAL, the Computational Geometry Algorithms Library:
Mariette Yvinec. 2D Triangulation. Retrieved April 2010.
Pion, Sylvain; Teillaud, Monique. 3D Triangulations. Retrieved April 2010.
Hornus, Samuel; Devillers, Olivier; Jamin, Clément. dD Triangulations.
Hert, Susan; Seel, Michael. dD Convex Hulls and Delaunay Triangulations. Retrieved April 2010.
"Poly2Tri: Incremental constrained Delaunay triangulation. Open source C++ implementation. Retrieved April 2019.
"Divide & Conquer Delaunay triangulation construction". Open source C99 implementation. Retrieved April 2019.
"CDT: Constrained Delaunay Triangulation in C++". Open source C++ implementation. Retrieved August 2022.
Triangulation (geometry)
Geometric algorithms | Delaunay triangulation | [
"Mathematics"
] | 3,072 | [
"Triangulation (geometry)",
"Planes (geometry)",
"Planar graphs"
] |
8,887 | https://en.wikipedia.org/wiki/Direct%20product | In mathematics, one can often define a direct product of objects already known, giving a new one. This induces a structure on the Cartesian product of the underlying sets from that of the contributing objects. More abstractly, one talks about the product in category theory, which formalizes these notions.
Examples are the product of sets, groups (described below), rings, and other algebraic structures. The product of topological spaces is another instance.
There is also the direct sum – in some areas this is used interchangeably, while in others it is a different concept.
Examples
If we think of ℝ as the set of real numbers without further structure, then the direct product ℝ × ℝ is just the Cartesian product {(x, y) : x, y ∈ ℝ}.
If we think of ℝ as the group of real numbers under addition, then the direct product ℝ × ℝ still has {(x, y) : x, y ∈ ℝ} as its underlying set. The difference between this and the preceding example is that ℝ × ℝ is now a group, and so we have to also say how to add their elements. This is done by defining (a, b) + (c, d) = (a + c, b + d).
If we think of ℝ as the ring of real numbers, then the direct product ℝ × ℝ again has {(x, y) : x, y ∈ ℝ} as its underlying set. The ring structure consists of addition defined by (a, b) + (c, d) = (a + c, b + d) and multiplication defined by (a, b)(c, d) = (ac, bd).
Although the ring ℝ is a field, ℝ × ℝ is not, because the nonzero element (1, 0) does not have a multiplicative inverse.
In a similar manner, we can talk about the direct product of finitely many algebraic structures, for example, ℝ × ℝ × ℝ. This relies on the direct product being associative up to isomorphism. That is, (A × B) × C ≅ A × (B × C) for any algebraic structures A, B, and C of the same kind. The direct product is also commutative up to isomorphism, that is, A × B ≅ B × A for any algebraic structures A and B of the same kind. We can even talk about the direct product of infinitely many algebraic structures; for example we can take the direct product of countably many copies of ℝ, which we write as ℝ × ℝ × ℝ × ⋯.
Direct product of groups
In group theory one can define the direct product of two groups G and H, denoted by G × H. For abelian groups that are written additively, it may also be called the direct sum of two groups, denoted by G ⊕ H.
It is defined as follows:
the set of the elements of the new group is the Cartesian product of the sets of elements of G and H, that is {(g, h) : g ∈ G, h ∈ H};
on these elements put an operation, defined element-wise: (g, h) · (g′, h′) = (g · g′, h · h′)
Note that G may be the same as H.
This construction gives a new group. It has a normal subgroup isomorphic to G (given by the elements of the form (g, 1)), and one isomorphic to H (comprising the elements (1, h)).
The reverse also holds. There is the following recognition theorem: If a group K contains two normal subgroups G and H such that K = GH and the intersection of G and H contains only the identity, then K is isomorphic to G × H. A relaxation of these conditions, requiring only one subgroup to be normal, gives the semidirect product.
As an example, take as G and H two copies of the unique (up to isomorphisms) group of order 2, C₂: say {1, a} and {1, b}. Then C₂ × C₂ = {(1, 1), (1, b), (a, 1), (a, b)}, with the operation element by element. For instance, (1, b) · (a, 1) = (1 · a, b · 1) = (a, b), and (1, b) · (1, b) = (1, b²) = (1, 1).
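A minimal computational sketch of this construction follows; it uses only the standard library, and the modelling of the order-2 group as integers under addition modulo 2 is an illustrative choice rather than anything from the original text.

```python
# Sketch: the direct product of two finite groups with the componentwise operation
# described above.
from itertools import product

def direct_product(elems_g, op_g, elems_h, op_h):
    """Return the element list and the componentwise operation of G x H."""
    elems = list(product(elems_g, elems_h))
    def op(x, y):
        return (op_g(x[0], y[0]), op_h(x[1], y[1]))
    return elems, op

# The group of order 2 modelled as {0, 1} under addition modulo 2.
c2 = [0, 1]
add_mod2 = lambda x, y: (x + y) % 2

elems, op = direct_product(c2, add_mod2, c2, add_mod2)
print(elems)               # [(0, 0), (0, 1), (1, 0), (1, 1)] -- the Klein four-group
print(op((0, 1), (1, 0)))  # (1, 1), matching (1, b) * (a, 1) = (a, b) above
print(op((0, 1), (0, 1)))  # (0, 0): every element is its own inverse
```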
With a direct product, we get some natural group homomorphisms for free: the projection maps
π₁ : G × H → G defined by π₁(g, h) = g, and π₂ : G × H → H defined by π₂(g, h) = h,
are called the coordinate functions.
Also, every homomorphism f to the direct product is totally determined by its component functions fᵢ = πᵢ ∘ f.
For any group G and any integer n ≥ 0, repeated application of the direct product gives the group of all n-tuples Gⁿ (for n = 0 this is the trivial group), for example ℤⁿ and ℝⁿ.
Direct product of modules
The direct product for modules (not to be confused with the tensor product) is very similar to the one defined for groups above, using the Cartesian product with the operation of addition being componentwise, and the scalar multiplication just distributing over all the components. Starting from ℝ we get Euclidean space ℝⁿ, the prototypical example of a real n-dimensional vector space. The direct product of ℝᵐ and ℝⁿ is ℝ^(m+n).
Note that a direct product X₁ × ⋯ × Xₙ for a finite index set is canonically isomorphic to the direct sum X₁ ⊕ ⋯ ⊕ Xₙ. The direct sum and direct product are not isomorphic for infinite indices, where the elements of a direct sum are zero for all but a finite number of entries. They are dual in the sense of category theory: the direct sum is the coproduct, while the direct product is the product.
For example, consider X = ℝ × ℝ × ℝ × ⋯ and Y = ℝ ⊕ ℝ ⊕ ℝ ⊕ ⋯, the infinite direct product and direct sum of the real numbers. Only sequences with a finite number of non-zero elements are in Y. For example, (1, 0, 0, 0, ...) is in Y but (1, 1, 1, 1, ...) is not. Both of these sequences are in the direct product X; in fact, Y is a proper subset of X (that is, Y ⊂ X).
Topological space direct product
The direct product for a collection of topological spaces Xᵢ, for i in some index set I, once again makes use of the Cartesian product of the underlying sets.
Defining the topology is a little tricky. For finitely many factors, this is the obvious and natural thing to do: simply take as a basis of open sets the collection of all Cartesian products of open subsets from each factor: U₁ × U₂ × ⋯ × Uₙ, where each Uᵢ is an open subset of Xᵢ.
This topology is called the product topology. For example, directly defining the product topology on ℝ² by the open sets of ℝ (disjoint unions of open intervals), the basis for this topology would consist of all disjoint unions of open rectangles in the plane (as it turns out, it coincides with the usual metric topology).
The product topology for infinite products has a twist, and this has to do with being able to make all the projection maps continuous and to make all functions into the product continuous if and only if all its component functions are continuous (that is, to satisfy the categorical definition of product: the morphisms here are continuous functions): we take as a basis of open sets the collection of all Cartesian products of open subsets from each factor, as before, with the proviso that all but finitely many of the open subsets are the entire factor.
The more natural-sounding topology would be, in this case, to take products of infinitely many open subsets as before, and this does yield a somewhat interesting topology, the box topology. However it is not too difficult to find an example of a bunch of continuous component functions whose product function is not continuous (see the separate entry box topology for an example and more). The problem that makes the twist necessary is ultimately rooted in the fact that the intersection of open sets is only guaranteed to be open for finitely many sets in the definition of topology.
Products (with the product topology) are nice with respect to preserving properties of their factors; for example, the product of Hausdorff spaces is Hausdorff; the product of connected spaces is connected, and the product of compact spaces is compact. That last one, called Tychonoff's theorem, is yet another equivalence to the axiom of choice.
For more properties and equivalent formulations, see the separate entry product topology.
Direct product of binary relations
On the Cartesian product of two sets with binary relations R and S, define (a, b) T (c, d) as a R c and b S d. If R and S are both reflexive, irreflexive, transitive, symmetric, or antisymmetric, then T will be also. Similarly, totality of T is inherited from R and S. Combining properties, it follows that this also applies for being a preorder and being an equivalence relation. However, if R and S are connected relations, T need not be connected; for example, the direct product of ≤ on ℕ with itself does not relate (1, 2) and (2, 1).
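To make the definition concrete, the following minimal sketch (standard library only; names are illustrative) builds the product relation on small finite carrier sets and checks that reflexivity is inherited while connectedness is not.

```python
# Sketch of the product relation above: (a, b) T (c, d) iff a R c and b S d.
from itertools import product

def product_relation(r, s):
    """Given relations r and s as sets of ordered pairs, return the product relation T."""
    return {((a, b), (c, d)) for (a, c) in r for (b, d) in s}

def is_reflexive(rel, carrier):
    return all((x, x) in rel for x in carrier)

le = {(x, y) for x in range(3) for y in range(3) if x <= y}   # <= on {0, 1, 2}
t = product_relation(le, le)
pairs = list(product(range(3), repeat=2))

print(is_reflexive(t, pairs))                        # True: reflexivity is inherited
print(((1, 2), (2, 1)) in t, ((2, 1), (1, 2)) in t)  # False False: T is not connected
```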
Direct product in universal algebra
If Σ is a fixed signature, I is an arbitrary (possibly infinite) index set, and (Aᵢ), for i in I, is an indexed family of Σ-algebras, the direct product A (the product of the Aᵢ over I) is a Σ-algebra defined as follows:
The universe set A of the direct product is the Cartesian product of the universe sets of the Aᵢ; formally, an element a of A assigns to each index i an element a(i) of Aᵢ.
For each n and each n-ary operation symbol f in Σ, its interpretation in A is defined componentwise; formally, for all a₁, …, aₙ in A and each i in I, the ith component of f(a₁, …, aₙ) is defined as f(a₁(i), …, aₙ(i)), computed in Aᵢ.
For each i in I, the ith projection πᵢ : A → Aᵢ is defined by πᵢ(a) = a(i). It is a surjective homomorphism between the Σ-algebras A and Aᵢ.
As a special case, if the index set is I = {1, 2}, the direct product of two Σ-algebras A₁ and A₂ is obtained, written as A = A₁ × A₂. If Σ just contains one binary operation, the above definition of the direct product of groups is obtained, using the notation G × H. Similarly, the definition of the direct product of modules is subsumed here.
Categorical product
The direct product can be abstracted to an arbitrary category. In a category, given a collection of objects (Aᵢ) indexed by a set I, a product of these objects is an object A together with morphisms pᵢ : A → Aᵢ for all i in I, such that if B is any other object with morphisms fᵢ : B → Aᵢ for all i in I, there exists a unique morphism B → A whose composition with pᵢ equals fᵢ for every i.
Such A and (pᵢ) do not always exist. If they do exist, then A is unique up to isomorphism, and is denoted ∏ Aᵢ.
In the special case of the category of groups, a product always exists: the underlying set of ∏ Aᵢ is the Cartesian product of the underlying sets of the Aᵢ, the group operation is componentwise multiplication, and the (homo)morphism pᵢ is the projection sending each tuple to its ith coordinate.
Internal and external direct product
Some authors draw a distinction between an internal direct product and an external direct product. For example, if G and H are subgroups of an additive abelian group X, such that G + H = X and G ∩ H = {0}, then X ≅ G × H, and we say that X is the internal direct product of G and H. To avoid ambiguity, we can refer to the set G × H as the external direct product of G and H.
See also
Notes
References
Abstract algebra
"Mathematics"
] | 1,884 | [
"Abstract algebra",
"Algebra"
] |
8,888 | https://en.wikipedia.org/wiki/D%C3%A9j%C3%A0%20vu | Déjà vu (French for "already seen") is the phenomenon of feeling as though one has lived through the present situation before. It is an illusion of memory whereby—despite a strong sense of recollection—the time, place, and context of the "previous" experience are uncertain or impossible. Approximately two-thirds of surveyed populations report experiencing déjà vu at least one time in their lives. The phenomenon manifests occasionally as a symptom of seizure auras, and some researchers have associated chronic/frequent "pathological" déjà vu with neurological or psychiatric illness. Experiencing déjà vu has been correlated with higher socioeconomic status, better educational attainment, and lower ages. People who travel often, frequently watch films, or frequently remember their dreams are also more likely to report experiencing déjà vu than others.
History
The term was first used by Émile Boirac in 1876. Boirac was a French philosopher whose book L'avenir des sciences psychiques (The Future of Psychic Sciences) discussed the sensation of déjà vu. Boirac presented déjà vu as a reminiscence of memories: "These experiments have led scientists to suspect that déjà vu is a memory phenomenon. We encounter a situation that is similar to an actual memory but we can't fully recall that memory." This account helps explain what déjà vu can entail in the average brain. It was also stated, "Our brain recognizes the similarities between our current experience and one in the past ... left with a feeling of familiarity that we can't quite place."
Throughout history, there have been many theories on what causes déjà vu.
Medical disorders
Déjà vu is associated with temporal lobe epilepsy. This experience is a neurological anomaly related to epileptic electrical discharge in the brain, creating a strong sensation that an event or experience currently being experienced has already been experienced in the past.
Migraines with aura are also associated with déjà vu.
Early researchers tried to establish a link between déjà vu and mental disorders such as anxiety, dissociative identity disorder and schizophrenia but failed to find correlations of any diagnostic value. No special association has been found between déjà vu and schizophrenia. A 2008 study found that déjà vu experiences are unlikely to be pathological dissociative experiences.
Some research has looked into genetics when considering déjà vu. Although there is not currently a gene associated with déjà vu, the LGI1 gene on chromosome 10 is being studied for a possible link. Certain forms of the gene are associated with a mild form of epilepsy, and, though by no means a certainty, déjà vu, along with jamais vu, occurs often enough during seizures (such as simple partial seizures) that researchers have reason to suspect a link.
Pharmacology
Certain combinations of medical drugs have been reported to increase the chances of déjà vu occurring in the user. Taiminen and Jääskeläinen (2001) explored the case of an otherwise healthy person who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain (e.g. Bancaud, Brunet-Bourgin, Chauvel, & Halgren, 1994), Taiminen and Jääskeläinen speculated that déjà vu occurs as a result of hyperdopaminergic action in the medial temporal areas of the brain. A similar case study by Karla, Chancellor, and Zeman (2007) suggests a link between déjà vu and the serotonergic system, after an otherwise healthy woman began experiencing similar symptoms while taking a combination of 5-hydroxytryptophan and carbidopa.
Explanations
Split perception explanation
Déjà vu may happen if a person experienced the current sensory experience twice successively. The first input experience is brief, degraded, occluded, or distracted. Immediately following that, the second perception might be familiar because the person naturally related it to the first input. One possibility behind this mechanism is that the first input experience involves shallow processing, which means that only some superficial physical attributes are extracted from the stimulus.
Memory-based explanation
Implicit memory
Research has associated déjà vu experiences with good memory functions, particularly long-term implicit memory. Recognition memory enables people to realize the event or activity that they are experiencing has happened before. When people experience déjà vu, they may have their recognition memory triggered by certain situations which they have never encountered.
The similarity between a déjà-vu-eliciting stimulus and an existing, or non-existing but different, memory trace may lead to the sensation that an event or experience currently being experienced has already been experienced in the past. Thus, encountering something that evokes the implicit associations of an experience or sensation that cannot be remembered may lead to déjà vu. In an effort to reproduce the sensation experimentally, Banister and Zangwill (1941) used hypnosis to give participants posthypnotic amnesia for material they had already seen. When this was later re-encountered, the restricted activation caused thereafter by the posthypnotic amnesia resulted in three of the 10 participants reporting what the authors termed "paramnesias".
Two approaches are used by researchers to study feelings of previous experience, with the process of recollection and familiarity. Recollection-based recognition refers to an ostensible realization that the current situation has occurred before. Familiarity-based recognition refers to the feeling of familiarity with the current situation without being able to identify any specific memory or previous event that could be associated with the sensation.
In 2010, O'Connor, Moulin, and Conway developed another laboratory analog of déjà vu based on two contrast groups of carefully selected participants, a group under posthypnotic amnesia condition (PHA) and a group under posthypnotic familiarity condition (PHF). The idea of PHA group was based on the work done by Banister and Zangwill (1941), and the PHF group was built on the research results of O'Connor, Moulin, and Conway (2007). They applied the same puzzle game for both groups, "Railroad Rush Hour", a game in which one aims to slide a red car through the exit by rearranging and shifting other blocking trucks and cars on the road. After completing the puzzle, each participant in the PHA group received a posthypnotic amnesia suggestion to forget the game in the hypnosis. Then, each participant in the PHF group was not given the puzzle but received a posthypnotic familiarity suggestion that they would feel familiar with this game during the hypnosis. After the hypnosis, all participants were asked to play the puzzle (the second time for PHA group) and reported the feelings of playing.
In the PHA condition, if a participant reported no memory of completing the puzzle game during hypnosis, researchers scored the participant as passing the suggestion. In the PHF condition, if participants reported that the puzzle game felt familiar, researchers scored the participant as passing the suggestion. It turned out that, both in the PHA and PHF conditions, five participants passed the suggestion and one did not, which is 83.33% of the total sample. More participants in PHF group felt a strong sense of familiarity, for instance, comments like "I think I have done this several years ago." Furthermore, more participants in PHF group experienced a strong déjà vu, for example, "I think I have done the exact puzzle before." Three out of six participants in the PHA group felt a sense of déjà vu, and none of them experienced a strong sense of it. These figures are consistent with Banister and Zangwill's findings. Some participants in PHA group related the familiarity when completing the puzzle with an exact event that happened before, which is more likely to be a phenomenon of source amnesia. Other participants started to realize that they may have completed the puzzle game during hypnosis, which is more akin to the phenomenon of breaching. In contrast, participants in the PHF group reported that they felt confused about the strong familiarity of this puzzle, with the feeling of playing it just sliding across their minds. Overall, the experiences of participants in the PHF group is more likely to be the déjà vu in life, while the experiences of participants in the PHA group is unlikely to be real déjà vu.
A 2012 study in the journal Consciousness and Cognition, that used virtual reality technology to study reported déjà vu experiences, supported this idea. This virtual reality investigation suggested that similarity between a new scene's spatial layout and the layout of a previously experienced scene in memory (but which fails to be recalled) may contribute to the déjà vu experience. When the previously experienced scene fails to come to mind in response to viewing the new scene, that previously experienced scene in memory can still exert an effect—that effect may be a feeling of familiarity with the new scene that is subjectively experienced as a feeling that an event or experience currently being experienced has already been experienced in the past, or of having been there before despite knowing otherwise.
In 2018 a study examined volunteers' brains under experimentally induced déjà vu through the use of fMRI brain scans. The induced "deja vu" state was created by getting them to look at a series of logically related and unrelated words. The researchers would then ask the participants how many words starting with a specific letter they saw. With related words such as "door, shutter, screen, breeze", the participants would be asked if they saw any words that started with "W" (i.e. Window, a term that was not presented to the participants). If they did note that they thought they saw a word that wasn't presented to them, then déjà vu was induced. The researchers would then examine the volunteers' brains at the moment of induced déjà vu. From these scans, they noticed that there was visible activity in regions of the brain associated with mnemonic conflict. This finding suggests that more research regarding memory conflict may be important in better understanding déjà vu.
Cryptomnesia
Another possible explanation for the phenomenon of déjà vu is the occurrence of cryptomnesia, which is where information learned is forgotten but nevertheless stored in the brain, and similar occurrences invoke the contained knowledge, leading to a feeling of familiarity because the event or experience being experienced has already been experienced in the past, known as "déjà vu". Some experts suggest that memory is a process of reconstruction, rather than a recollection of fixed, established events. This reconstruction comes from stored components, involving emotions, distortions, and omissions. Each successive recall of an event is merely a recall of the last reconstruction. The proposed sense of recognition (déjà vu) involves achieving a good match between the present experience and the stored data. This reconstruction, however, may now differ so much from the original event it is as though it had never been experienced before, even though it seems similar.
Dual neurological processing
In 1965, Robert Efron of Boston's Veterans Hospital proposed that déjà vu is caused by dual neurological processing caused by delayed signals. Efron found that the brain's sorting of incoming signals is done in the temporal lobe of the brain's left hemisphere. However, signals enter the temporal lobe twice before processing, once from each hemisphere of the brain, normally with a slight delay of milliseconds between them. Efron proposed that if the two signals were occasionally not synchronized properly, then they would be processed as two separate experiences, with the second seeming to be a re-living of the first.
Dream-based explanation
Dreams can also be used to explain the experience of déjà vu, and they are related in three different aspects. Firstly, some déjà vu experiences duplicate the situation in dreams instead of waking conditions, according to the survey done by Brown (2004). Twenty percent of the respondents reported their déjà vu experiences were from dreams and 40% of the respondents reported from both reality and dreams. Secondly, people may experience déjà vu because some elements in their remembered dreams were shown. Research done by Zuger (1966) supported this idea by investigating the relationship between remembered dreams and déjà vu experiences, and suggested that there is a strong correlation. Thirdly, people may experience déjà vu during a dream state, which links déjà vu with dream frequency. Some researchers, including Swiss scientist Arthur Funkhouser, firmly believe that precognitive dreams are the source of many déjà vu experiences. Researchers also connected evidence of precognitive dreams experiences to déjà vu experiences that occurred anywhere from one day to eight years later.
Collective unconscious
Collective Unconscious is a controversial theory created by Carl Jung that has been used to explain the phenomenon of déjà vu. His theory was that all people have a shared pool of knowledge that has been passed down through generations and we can unconsciously access this knowledge. Some of said knowledge would be about certain archetypes like mother, father and hero or possibly about basic situations, emotions or other patterns. If we can access shared knowledge déjà vu could potentially be an effect of recognizing one of the collectively stored patterns.
Related terms
Jamais vu
Jamais vu (from French, meaning "never seen") is any familiar situation which is not recognized by the observer.
Often described as the opposite of déjà vu, jamais vu involves a sense of eeriness and the observer's impression of seeing the situation for the first time, despite rationally knowing that they have been in the situation before. Jamais vu is most commonly experienced when a person momentarily does not recognize a word, person or place that they already know. Jamais vu is sometimes associated with certain types of aphasia, amnesia, and epilepsy.
Theoretically, a jamais vu feeling in someone with a delirious disorder or intoxication could result in a delirious explanation of it, such as in the Capgras delusion, in which the patient takes a known person for a false double or impostor. If the impostor is himself, the clinical setting would be the same as the one described as depersonalization, hence jamais vus of oneself or of the "reality of reality", are termed depersonalization (or surreality) feelings.
The feeling has been evoked through semantic satiation. Chris Moulin of the University of Leeds asked 95 volunteers to write the word "door" 30 times in 60 seconds. Sixty-eight percent of the subjects reported symptoms of jamais vu, with some beginning to doubt that "door" was a real word.
Déjà vécu
Déjà vécu (from French, meaning "already lived") is an intense, but false, feeling of having already lived through the present situation. Recently, it has been considered a pathological form of déjà vu. However, unlike déjà vu, déjà vécu has behavioral consequences. Patients with déjà vécu often cannot tell that this feeling of familiarity is not real. Because of the intense feeling of familiarity, patients experiencing déjà vécu may withdraw from their current events or activities. Patients may justify their feelings of familiarity with beliefs bordering on delusion.
Presque vu
Presque vu (, from French, meaning "almost seen") is the intense feeling of being on the very brink of a powerful epiphany, insight, or revelation, without actually achieving the revelation. The feeling is often therefore associated with a frustrating, tantalizing sense of incompleteness or near-completeness.
Déjà rêvé
Déjà rêvé (from French, meaning "already dreamed") is the feeling of having already dreamed something that is currently being experienced.
Déjà entendu
Déjà entendu (literally "already heard") is the experience of feeling sure about having already heard something, even though the exact details are uncertain or were perhaps imagined.
See also
Intuition (knowledge)
Repression (psychology)
Scientific skepticism
Screen memory
Uncanny
References
Further reading
Neppe, Vernon. (1983). The Psychology of Déjà vu: Have We Been Here Before?. Witwatersrand University Press.
External links
Anne Cleary discussing a virtual reality investigation of déjà vu
Dream Déjà Vu - Psychology Today
Chronic déjà vu - quirks and quarks episode (mp3)
Déjà vu - The Skeptic's Dictionary
How Déjà Vu Works — a Howstuffworks article
Déjà Experience Research — a website dedicated to providing déjà experience information and research
Nikhil Swaminathan, Think You've Previously Read About This?, Scientific American, June 8, 2007
Deborah Halber, Research Deciphers Deja Vu Brain Mechanics, MIT Report, June 7, 2007
Memory
Cognitive science
Perception
French words and phrases
Time in life
Psychological concepts | Déjà vu | [
"Physics"
] | 3,491 | [
"Spacetime",
"Physical quantities",
"Time in life",
"Time"
] |
8,900 | https://en.wikipedia.org/wiki/Discrimination | Discrimination is the process of making unfair or prejudicial distinctions between people based on the groups, classes, or other categories to which they belong or are perceived to belong, such as race, gender, age, species, religion, physical attractiveness or sexual orientation. Discrimination typically leads to groups being unfairly treated on the basis of perceived statuses based on ethnic, racial, gender or religious categories. It involves depriving members of one group of opportunities or privileges that are available to members of another group.
Discriminatory traditions, policies, ideas, practices and laws exist in many countries and institutions in all parts of the world, including some, where such discrimination is generally decried. In some places, countervailing measures such as quotas have been used to redress the balance in favor of those who are believed to be current or past victims of discrimination. These attempts have often been met with controversy, and sometimes been called reverse discrimination.
Etymology
The term discriminate appeared in the early 17th century in the English language. It is from the Latin discriminat- 'distinguished between', from the verb discriminare, from discrimen 'distinction', from the verb discernere (corresponding to "to discern"). Since the American Civil War the term "discrimination" generally evolved in American English usage as an understanding of prejudicial treatment of an individual based solely on their race, later generalized as membership in a certain socially undesirable group or social category.
Before this sense of the word became almost universal, it was a synonym for discernment, tact and culture as in "taste and discrimination", generally a laudable attribute; to "discriminate against" being commonly disparaged.
Definitions
Moral philosophers have defined discrimination using a moralized definition. Under this approach, discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people he discriminates against by donating some money to them. Discrimination also develops into a source of oppression, the action of recognizing someone as 'different' so much that they are treated inhumanly and degraded.
This moralized definition of discrimination is distinct from a non-moralized definition - in the former, discrimination is wrong by definition, whereas in the latter, this is not the case.
The United Nations stance on discrimination includes the statement: "Discriminatory behaviors take many forms, but they all involve some form of exclusion or rejection." The United Nations Human Rights Council and other international bodies work towards helping ending discrimination around the world.
Types
Age
Ageism or age discrimination is discrimination and stereotyping based on the grounds of someone's age. It is a set of beliefs, norms, and values which used to justify discrimination or subordination based on a person's age. Ageism is most often directed toward elderly people, or adolescents and children.
Age discrimination in hiring has been shown to exist in the United States. Joanna Lahey, professor at The Bush School of Government and Public Service at Texas A&M, found that firms are more than 40% more likely to interview a young adult job applicant than an older job applicant. In Europe, Stijn Baert, Jennifer Norga, Yannick Thuy and Marieke Van Hecke, researchers at Ghent University, measured comparable ratios in Belgium. They found that age discrimination is heterogeneous with respect to the activity older candidates undertook during their additional post-educational years. In Belgium, they are only discriminated against if they have more years of inactivity or irrelevant employment.
In a survey for the University of Kent, England, 29% of respondents stated that they had suffered from age discrimination. This is a higher proportion than for gender or racial discrimination. Dominic Abrams, social psychology professor at the university, concluded that ageism is the most pervasive form of prejudice experienced in the UK population.
Caste
According to UNICEF and Human Rights Watch, caste discrimination affects an estimated 250 million people worldwide and is mainly prevalent in parts of Asia (India, Sri Lanka, Bangladesh, Pakistan, Nepal, Japan) and Africa. , there were 200 million Dalits or Scheduled Castes (formerly known as "untouchables") in India.
Disability
Discrimination against people with disabilities in favor of people who are not is called ableism or disablism. Disability discrimination, which treats non-disabled individuals as the standard of 'normal living', results in public and private places and services, educational settings, and social services that are built to serve 'standard' people, thereby excluding those with various disabilities. Studies have shown that disabled people not only need employment in order to be provided with the opportunity to earn a living but they also need employment in order to sustain their mental health and well-being. Work fulfils a number of basic needs for an individual such as collective purpose, social contact, status, and activity. A person with a disability is often found to be socially isolated and work is one way to reduce his or her isolation. In the United States, the Americans with Disabilities Act mandates the provision of equality of access to both buildings and services and is paralleled by similar acts in other countries, such as the Equality Act 2010 in the UK.
Excellence
Language
Name
Discrimination based on a person's name may also occur, with researchers suggesting that this form of discrimination is present based on a name's meaning, its pronunciation, its uniqueness, its gender affiliation, and its racial affiliation. Research has further shown that real world recruiters spend an average of just six seconds reviewing each résumé before making their initial "fit/no fit" screen-out decision and that a person's name is one of the six things they focus on most. France has made it illegal to view a person's name on a résumé when screening for the initial list of most qualified candidates. Great Britain, Germany, Sweden, and the Netherlands have also experimented with name-blind summary processes. Some apparent discrimination may be explained by other factors such as name frequency. The effects of name discrimination based on a name's fluency is subtle, small and subject to significantly changing norms.
Nationality
The Anti-discrimination laws of most countries allow and make exceptions for discrimination based on nationality and immigration status. The International Convention on the Elimination of All Forms of Racial Discrimination (CERD) does not prohibit discrimination by nationality, citizenship or naturalization but forbids discrimination "against any particular nationality".
Discrimination on the basis of nationality is usually included in employment laws (see above section for employment discrimination specifically). It is sometimes referred to as bound together with racial discrimination although it can be separate. It may vary from laws that stop refusals of hiring based on nationality, asking questions regarding origin, to prohibitions of firing, forced retirement, compensation and pay, etc., based on nationality.
Discrimination on the basis of nationality may show as a "level of acceptance" in a sport or work team regarding new team members and employees who differ from the nationality of the majority of team members.
In the GCC states, in the workplace, preferential treatment is given to full citizens, even though many of them lack experience or motivation to do the job. State benefits are also generally available for citizens only. Westerners might also get paid more than other expatriates.
Race or ethnicity
Racial and ethnic discrimination differentiates individuals on the basis of real and perceived racial and ethnic differences and leads to various forms of the ethnic penalty. It can also refer to the belief that groups of humans possess different behavioral traits corresponding to physical appearance and can be divided based on the superiority of one race over another. It may also mean prejudice, discrimination, or antagonism directed against other people because they are of a different race or ethnicity. Modern variants of racism are often based in social perceptions of biological differences between peoples. These views can take the form of social actions, practices or beliefs, or political systems in which different races are ranked as inherently superior or inferior to each other, based on presumed shared inheritable traits, abilities, or qualities. It has been official government policy in several countries, such as South Africa during the apartheid era. Discriminatory policies towards ethnic minorities include the race-based discrimination against ethnic Indians and Chinese in Malaysia. After the Vietnam War, many Vietnamese refugees moved to Australia and the United States, where they faced discrimination.
Region
Regional or geographic discrimination is a form of discrimination that is based on the region in which a person lives or the region in which a person was born. It differs from national discrimination because it may not be based on national borders or the country in which the victim lives, instead, it is based on prejudices against a specific region of one or more countries. Examples include discrimination against Chinese people who were born in regions of the countryside that are far away from cities that are located within China, and discrimination against Americans who are from the southern or northern regions of the United States. It is often accompanied by discrimination that is based on accent, dialect, or cultural differences.
Religious beliefs
Religious discrimination is valuing or treating people or groups differently because of what they do or do not believe in or because of their feelings towards a given religion. For instance, the Jewish population of Germany, and indeed a large portion of Europe, was subjected to discrimination under Adolf Hitler and his Nazi party between 1933 and 1945. They were forced to live in ghettos, wear an identifying star of David on their clothes, and sent to concentration and death camps in rural Germany and Poland, where they were to be tortured and killed, all because of their Jewish religion. Many laws (most prominently the Nuremberg Laws of 1935) separated those of Jewish faith as supposedly inferior to the Christian population.
Restrictions on the types of occupations that Jewish people could hold were imposed by Christian authorities. Local rulers and church officials closed many professions to religious Jews, pushing them into marginal roles that were considered socially inferior, such as tax and rent collecting and moneylending, occupations that were only tolerated as a "necessary evil". The number of Jews who were permitted to reside in different places was limited; they were concentrated in ghettos and banned from owning land. In Saudi Arabia, non-Muslims are not allowed to publicly practice their religions and they cannot enter Mecca and Medina. Furthermore, private non-Muslim religious gatherings might be raided by the religious police. In Maldives, non-Muslims living and visiting the country are prohibited from openly expressing their religious beliefs, holding public congregations to conduct religious activities, or involving Maldivians in such activities. Those expressing religious beliefs other than Islam may face imprisonment of up to five years or house arrest, fines ranging from 5,000 to 20,000 rufiyaa ($320 to $1,300), and deportation.
In a 1979 consultation on the issue, the United States commission on civil rights defined religious discrimination in relation to the civil rights which are guaranteed by the Fourteenth Amendment. Whereas religious civil liberties, such as the right to hold or not to hold a religious belief, are essential for Freedom of Religion (in the United States as secured by the First Amendment), religious discrimination occurs when someone is denied "equal protection under the law, equality of status under the law, equal treatment in the administration of justice, and equality of opportunity and access to employment, education, housing, public services and facilities, and public accommodation because of their exercise of their right to religious freedom".
Sex, sex characteristics, gender, and gender identity
Sexism is a form of discrimination based on a person's sex or gender. It has been linked to stereotypes and gender roles, and may include the belief that one sex or gender is intrinsically superior to another. Extreme sexism may foster sexual harassment, rape, and other forms of sexual violence. Gender discrimination may encompass sexism and is discrimination toward people based on their gender identity or their gender or sex differences. Gender discrimination is especially defined in terms of workplace inequality. It may arise from social or cultural customs and norms.
Intersex persons experience discrimination due to innate, atypical sex characteristics. Multiple jurisdictions now protect individuals on grounds of intersex status or sex characteristics. South Africa was the first country to explicitly add intersex to legislation, as part of the attribute of 'sex'. Australia was the first country to add an independent attribute, of 'intersex status'. Malta was the first to adopt a broader framework of 'sex characteristics', through legislation that also ended modifications to the sex characteristics of minors undertaken for social and cultural reasons. Global efforts such as the United Nations Sustainable Development Goal 5 is also aimed at ending all forms of discrimination on the basis of gender and sex.
Sexual orientation
One's sexual orientation is a "predilection for homosexuality, heterosexuality, or bisexuality". Like most minority groups, homosexuals and bisexuals are vulnerable to prejudice and discrimination from the majority group. They may experience hatred from others because of their sexuality; a term for such hatred based upon one's sexual orientation is often called homophobia. Many continue to hold negative feelings towards those with non-heterosexual orientations and will discriminate against people who have them or are thought to have them. People of other uncommon sexual orientations also experience discrimination. One study found its sample of heterosexuals to be more prejudiced against asexual people than against homosexual or bisexual people.
Employment discrimination based on sexual orientation varies by country. Revealing a lesbian sexual orientation (by means of mentioning an engagement in a rainbow organisation or by mentioning one's partner name) lowers employment opportunities in Cyprus and Greece but overall, it has no negative effect in Sweden and Belgium. In the latter country, even a positive effect of revealing a lesbian sexual orientation is found for women at their fertile ages.
Besides these academic studies, in 2009, ILGA published a report based on research carried out by Daniel Ottosson at Södertörn University College in Stockholm, Sweden. This research found that of the 80 countries around the world that continue to consider homosexuality illegal, five carry the death penalty for homosexual activity, and two do in some regions of the country. In the report, this is described as "State sponsored homophobia". This happens in Islamic states, or in two cases in regions under Islamic authority. On February 5, 2005, the IRIN issued a report titled "Iraq: Male homosexuality still a taboo". The article stated, among other things, that honor killings by Iraqis against a gay family member are common and given some legal protection. In August 2009, Human Rights Watch published an extensive report detailing torture of men accused of being gay in Iraq, including the blocking of men's anuses with glue and then giving the men laxatives. Although gay marriage has been legal in South Africa since 2006, same-sex unions are often condemned as "un-African". Research conducted in 2009 shows 86% of black lesbians from the Western Cape live in fear of sexual assault.
A number of countries, especially those in the Western world, have passed measures to alleviate discrimination against sexual minorities, including laws against anti-gay hate crimes and workplace discrimination. Some have also legalized same-sex marriage or civil unions in order to grant same-sex couples the same protections and benefits as opposite-sex couples. In 2011, the United Nations passed its first resolution recognizing LGBT rights.
Reverse discrimination
Reverse discrimination is discrimination against members of a dominant or majority group, in favor of members of a minority or historically disadvantaged group.
This discrimination may seek to redress social inequalities under which minority groups have had less access to privileges enjoyed by the majority group. In such cases it is intended to remove discrimination that minority groups may already face. Reverse discrimination can be defined as the unequal treatment of members of the majority groups resulting from preferential policies, as in college admissions or employment, intended to remedy earlier discrimination against minorities.
Conceptualizing affirmative action as reverse discrimination became popular in the early- to mid-1970s, a time period that focused on under-representation and action policies intended to remedy the effects of past discrimination in both government and the business world.
Anti-discrimination legislation
Australia
Racial Discrimination Act 1975
Sex Discrimination Act 1984
Disability Discrimination Act 1992
Age Discrimination Act 2004
Canada
Ontario Human Rights Code 1962
Canadian Human Rights Act 1977
Hong Kong
Sex Discrimination Ordinance (1996)
India
Article 15 of the Constitution of India prohibits discrimination against any citizen on grounds of caste, religion, sex, race or place of birth etc. Similarly, the Constitution of India guarantees several rights to all citizens irrespective of gender, such as right to equality under Article 14, right to life and personal liberty under Article 21.
Indian Penal Code, 1860 (Section 153 A) - Criminalises the use of language that promotes discrimination or violence against people on the basis of race, caste, sex, place of birth, religion, gender identity, sexual orientation or any other category.
Israel
Prohibition of Discrimination in Products, Services and Entry into Places of Entertainment and Public Places Law, 2000
Employment (Equal Opportunities) Law, 1988
Law of Equal Rights for Persons with Disabilities, 1998
Netherlands
Article 137c, part 1 of Wetboek van Strafrecht prohibits insults towards a group because of its race, religion, sexual orientation (straight or gay), or handicap (somatic, mental or psychiatric), made in public, by speech, by writing or by a picture. The maximum penalty is one year of imprisonment or a fine of the third category.
Part 2 increases the maximum imprisonment to two years and the maximum fine category to 4, when the crime is committed as a habit or is committed by two or more persons.
Article 137d prohibits provoking to discrimination or hate against the group described above. Same penalties apply as in article 137c.
Article 137e part 1 prohibits publishing a discriminatory statement, other than in the course of factual reporting, or handing over an object containing such discriminatory information other than at the recipient's request. The maximum penalty is 6 months of imprisonment or a fine of the third category.
Part 2 increases the maximum imprisonment to one year and the maximum fine category to 4, when the crime is committed as a habit or committed by two or more persons.
Article 137f prohibits supporting discriminatory activities by giving money or goods. Maximum imprisonment is 3 months or a fine of the second category.
United Kingdom
Equal Pay Act 1970 – provides for equal pay for comparable work.
Sex Discrimination Act 1975 – makes discrimination against women or men, including discrimination on the grounds of marital status, illegal in the workplace.
Human Rights Act 1998 – provides more scope for redressing all forms of discriminatory imbalances.
Equality Act 2010 – consolidates, updates and supplements the prior Acts and Regulations that formed the basis of anti-discrimination law.
United States
Equal Pay Act of 1963 – (part of the Fair Labor Standards Act) – prohibits wage discrimination by employers and labor organizations based on sex.
Civil Rights Act of 1964 – many provisions, including broadly prohibiting discrimination in the workplace including hiring, firing, workforce reduction, benefits, and sexually harassing conduct.
Fair Housing Act of 1968 prohibited discrimination in the sale or rental of housing based on race, color, national origin, religion, sex, familial status, or disability. The Office of Fair Housing and Equal Opportunity is charged with administering and enforcing the Act.
Pregnancy Discrimination Act of 1978, which amended Title VII of the Civil Rights Act of 1964 – covers discrimination based upon pregnancy in the workplace.
Violence Against Women Act of 1994
Racism still occurs in a widespread manner in real estate.
United Nations documents
Important UN documents addressing discrimination include:
The Universal Declaration of Human Rights is a declaration adopted by the United Nations General Assembly on December 10, 1948. It states that: "Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status."
The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) is a United Nations convention. The Convention commits its members to the elimination of racial discrimination. The convention was adopted and opened for signature by the United Nations General Assembly on December 21, 1965, and entered into force on January 4, 1969.
The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) is an international treaty adopted in 1979 by the United Nations General Assembly. Described as an international bill of rights for women, it came into force on September 3, 1981.
The Convention on the Rights of Persons with Disabilities is an international human rights instrument treaty of the United Nations. Parties to the convention are required to promote, protect, and ensure the full enjoyment of human rights by persons with disabilities and ensure that they enjoy full equality under the law. The text was adopted by the United Nations General Assembly on December 13, 2006, and opened for signature on March 30, 2007. Following ratification by the 20th party, it came into force on May 3, 2008.
International cooperation
Global Forum against Racism and Discrimination
The International Coalition of Inclusive and Sustainable Cities (ICCAR) launched by UNESCO in 2004
Routes of Enslaved Peoples project
Theories and philosophy
Social theories such as egalitarianism assert that social equality should prevail. In some societies, including most developed countries, each individual's civil rights include the right to be free from government sponsored social discrimination. Due to a belief in the capacity to perceive pain or suffering shared by all animals, abolitionist or vegan egalitarianism maintains that the interests of every individual (regardless of their species), warrant equal consideration with the interests of humans, and that not doing so is speciesist.
Philosophers have debated as to how inclusive the definition of discrimination should be. Some philosophers have argued that discrimination should only refer to wrongful or disadvantageous treatment in the context of a socially salient group (such as race, gender, sexuality etc.) within a given context. Under this view, failure to limit the concept of discrimination would lead to it being overinclusive; for example, since most murders occur because of some perceived difference between the perpetrator and the victim, many murders would constitute discrimination if the social salience requirement is not included. Thus this view argues that making the definition of discrimination overinclusive renders it meaningless. Conversely, other philosophers argue that discrimination should simply refer to wrongful disadvantageous treatment regardless of the social salience of the group, arguing that limiting the concept only to socially salient groups is arbitrary, as well as raising issues of determining which groups would count as socially salient. The issue of which groups should count has caused many political and social debates.
Based on realistic-conflict theory and social-identity theory, Rubin and Hewstone have highlighted a distinction among three types of discrimination:
Realistic competition is driven by self-interest and is aimed at obtaining material resources (e.g., food, territory, customers) for the in-group (e.g., favoring an in-group in order to obtain more resources for its members, including the self).
Social competition is driven by the need for self-esteem and is aimed at achieving a positive social status for the in-group relative to comparable out-groups (e.g., favoring an in-group in order to make it better than an out-group).
Consensual discrimination is driven by the need for accuracy and reflects stable and legitimate intergroup status hierarchies (e.g., favoring a high-status in-group because it is high status).
Labeling theory
Discrimination, in labeling theory, takes form as mental categorization of minorities and the use of stereotype. This theory describes difference as deviance from the norm, which results in internal devaluation and social stigma that may be seen as discrimination. It starts by describing a "natural" social order, and distinguishes between the fundamental principles of fascism and social democracy. The Nazis in 1930s-era Germany and the pre-1990 Apartheid government of South Africa used racially discriminatory agendas for their political ends. This practice continues with some present day governments.
Game theory
Economist Yanis Varoufakis (2013) argues that "discrimination based on utterly arbitrary characteristics evolves quickly and systematically in the experimental laboratory", and that neither classical game theory nor neoclassical economics can explain this.
In 2002, Varoufakis and Shaun Hargreaves-Heap ran an experiment where volunteers played a computer-mediated, multiround hawk-dove game. At the start of each session, each participant was assigned a color at random, either red or blue. At each round, each player learned the color assigned to his or her opponent, but nothing else about the opponent. Hargreaves-Heap and Varoufakis found that the players' behavior within a session frequently developed a discriminatory convention, giving a Nash equilibrium where players of one color (the "advantaged" color) consistently played the aggressive "hawk" strategy against players of the other, "disadvantaged" color, who played the acquiescent "dove" strategy against the advantaged color. Players of both colors used a mixed strategy when playing against players assigned the same color as their own. The experimenters then added a cooperation option to the game, and found that disadvantaged players usually cooperated with each other, while advantaged players usually did not. They state that while the equilibria reached in the original hawk-dove game are predicted by evolutionary game theory, game theory does not explain the emergence of cooperation in the disadvantaged group. Citing earlier psychological work of Matthew Rabin, they hypothesize that a norm of differing entitlements emerges across the two groups, and that this norm could define a "fairness" equilibrium within the disadvantaged group.
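To make the game-theoretic idea concrete, the following minimal Python sketch encodes a standard hawk-dove payoff matrix and checks which pure-strategy profiles are Nash equilibria. The payoff values (V = 2 for the contested resource, C = 4 for the cost of a fight) are illustrative assumptions and are not the parameters used in the Hargreaves-Heap and Varoufakis experiment; the point is only that, when fighting is costlier than the prize, the stable pure equilibria are the asymmetric ones, which is why an arbitrary label such as "red" or "blue" can settle into a discriminatory convention.
V, C = 2.0, 4.0  # value of the resource, cost of an escalated fight (assumed values)

def payoff(me, other):
    """Row player's payoff in a single hawk-dove encounter."""
    if me == "hawk" and other == "hawk":
        return (V - C) / 2   # both escalate: share the value, pay the cost
    if me == "hawk" and other == "dove":
        return V             # the aggressor takes everything
    if me == "dove" and other == "hawk":
        return 0.0           # the acquiescent player concedes
    return V / 2             # both acquiesce: split the resource

strategies = ["hawk", "dove"]

def is_nash(a, b):
    # Neither player can gain by switching strategy unilaterally.
    best_a = all(payoff(a, b) >= payoff(alt, b) for alt in strategies)
    best_b = all(payoff(b, a) >= payoff(alt, a) for alt in strategies)
    return best_a and best_b

for a in strategies:
    for b in strategies:
        if is_nash(a, b):
            print("pure equilibrium:", a, "vs", b)
# Prints only the two asymmetric profiles (hawk vs dove, dove vs hawk): once a label
# tells the players which role to take, the discriminatory convention is self-enforcing.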
Effects on health
See also
Adultism
Afrophobia
Allport's Scale
Anti-Arabism
Anti-Catholicism
Anti-intellectualism
Anti-Iranian sentiment
Anti-Mormonism
Anti-Protestantism
Antisemitism
Antiziganism
Aporophobia
Apostasy
Apostasy in Islam
Atlantic slave trade
Benevolent prejudice
Bias
Bumiputera (Malaysia)
Civil and political rights
Classicide
Cultural appropriation
Cultural assimilation
Cultural genocide
Dehumanization
Dignity
Discrimination against asexual people
Discrimination against atheists
Discrimination against drug addicts
Discrimination against members of the armed forces in the United Kingdom
Discrimination against people with HIV/AIDS
Discrimination based on skin color
Discrimination of excellence
Economic discrimination
Equal opportunity
Equal rights
Ethnic cleansing
Ethnocentrism
Figleaf
Genetic discrimination
Genocide
Hate group
Heightism
Homophobia
Identicide
In-group favoritism
Ingroups and outgroups
Institutional discrimination
Institutional racism
Intersectionality
Intersex human rights
Islamophobia
Jim Crow laws
List of anti-discrimination acts
List of global issues
Lookism
Microaggression
Minority stress
Nativism (politics)
Online hate speech
Oppression
Persecution
Politicide
Paradox of tolerance
Racial segregation
Religious intolerance
Religious persecution
Religious segregation
Second-class citizen
Sizeism
Slavery
Stigma management
Structural discrimination
Structural violence
Supremacism
Taste-based discrimination
Transphobia
Weightism
Xenophobia
References
External links
Employment discrimination – Topics.law.cornell.edu
Legal definitions
Australia
Canada
Russia
US
Discrimination Laws in Europe
Behavioral Biology and Racism
Barriers to critical thinking
Social justice
Concepts in social philosophy
Abuse | Discrimination | [
"Biology"
] | 5,620 | [
"Behavior",
"Abuse",
"Aggression",
"Discrimination",
"Human behavior"
] |
8,904 | https://en.wikipedia.org/wiki/Double-ended%20queue | In computer science, a double-ended queue (abbreviated to deque, ) is an abstract data type that generalizes a queue, for which elements can be added to or removed from either the front (head) or back (tail). It is also often called a head-tail linked list, though properly this refers to a specific data structure implementation of a deque (see below).
Naming conventions
Deque is sometimes written dequeue, but this use is generally deprecated in technical literature or technical writing because dequeue is also a verb meaning "to remove from a queue". Nevertheless, several libraries and some writers, such as Aho, Hopcroft, and Ullman in their textbook Data Structures and Algorithms, spell it dequeue. John Mitchell, author of Concepts in Programming Languages, also uses this terminology.
Distinctions and sub-types
This differs from the queue abstract data type or first in first out list (FIFO), where elements can only be added to one end and removed from the other. This general data class has some possible sub-types:
An input-restricted deque is one where deletion can be made from both ends, but insertion can be made at one end only.
An output-restricted deque is one where insertion can be made at both ends, but deletion can be made from one end only.
The basic and most common list types in computing, queues and stacks, can both be considered specializations of deques and can be implemented using deques. A deque is a data structure that allows users to perform push and pop operations at both ends, providing flexibility in managing the order of elements.
Operations
The basic operations on a deque are enqueue and dequeue on either end. Also generally implemented are peek operations, which return the value at that end without dequeuing it.
Names for these operations vary between languages and libraries.
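As a concrete illustration, the snippet below (a minimal sketch, not tied to any particular naming convention) shows the four basic operations plus the two peeks using Python's collections.deque, one of the implementations listed under Language support below.
from collections import deque

d = deque()

# enqueue at either end
d.append("back")        # insert at the back (tail)
d.appendleft("front")   # insert at the front (head)

# peek at either end without removing
assert d[0] == "front" and d[-1] == "back"

# dequeue at either end
assert d.popleft() == "front"   # remove from the front
assert d.pop() == "back"        # remove from the back
assert len(d) == 0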
Implementations
There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a doubly linked list.
The dynamic array approach uses a variant of a dynamic array that can grow from both ends, sometimes called array deques. These array deques have all the properties of a dynamic array, such as constant-time random access, good locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time insertion/removal at both ends, instead of just one end. Three common implementations include:
Storing deque contents in a circular buffer, and only resizing when the buffer becomes full. This decreases the frequency of resizings (a minimal sketch of this approach is given after this list).
Allocating deque contents from the center of the underlying array, and resizing the underlying array when either end is reached. This approach may require more frequent resizings and waste more space, particularly when elements are only inserted at one end.
Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed. Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays.
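The following is a minimal, illustrative Python sketch of the first (circular buffer) approach; the class name, the initial capacity of 8, and the doubling growth policy are assumptions made for the example rather than properties required of the technique.
class RingDeque:
    """Circular-buffer deque: amortized O(1) pushes and pops at both ends."""

    def __init__(self, capacity=8):
        self._buf = [None] * capacity
        self._head = 0      # index of the logical first element
        self._size = 0

    def _grow(self):
        # Resize only when the buffer is full; copy elements in logical order.
        old, n = self._buf, self._size
        self._buf = [None] * (2 * len(old))
        for i in range(n):
            self._buf[i] = old[(self._head + i) % len(old)]
        self._head = 0

    def push_back(self, x):
        if self._size == len(self._buf):
            self._grow()
        self._buf[(self._head + self._size) % len(self._buf)] = x
        self._size += 1

    def push_front(self, x):
        if self._size == len(self._buf):
            self._grow()
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = x
        self._size += 1

    def pop_front(self):
        if self._size == 0:
            raise IndexError("pop from an empty deque")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return x

    def pop_back(self):
        if self._size == 0:
            raise IndexError("pop from an empty deque")
        x = self._buf[(self._head + self._size - 1) % len(self._buf)]
        self._size -= 1
        return x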
Purely functional implementation
Double-ended queues can also be implemented as a purely functional data structure. Two versions of the implementation exist. The first one, called the 'real-time deque', is presented below. It allows the queue to be persistent with operations in O(1) worst-case time, but requires lazy lists with memoization. The second one, with no lazy lists nor memoization, is presented at the end of the section. Its amortized time is O(1) if the persistency is not used; but the worst-case time complexity of an operation is O(n), where n is the number of elements in the double-ended queue.
Let us recall that, for a list l, |l| denotes its length, that NIL represents an empty list and CONS(h, t) represents the list whose head is h and whose tail is t. The functions drop(i, l) and take(i, l) return the list l without its first i elements, and the first i elements of l, respectively. Or, if |l| < i, they return the empty list and l respectively.
Real-time deques via lazy rebuilding and scheduling
A double-ended queue is represented as a sextuple (len_front, front, tail_front, len_rear, rear, tail_rear) where front is a linked list which contains the front of the queue of length len_front. Similarly, rear is a linked list which represents the reverse of the rear of the queue, of length len_rear. Furthermore, it is assured that |front| ≤ 2|rear|+1 and |rear| ≤ 2|front|+1 - intuitively, it means that both the front and the rear contains between a third minus one and two thirds plus one of the elements. Finally, tail_front and tail_rear are tails of front and of rear, they allow scheduling the moment where some lazy operations are forced. Note that, when a double-ended queue contains n elements in the front list and n elements in the rear list, then the inequality invariant remains satisfied after i insertions and d deletions when (i+d) ≤ n/2. That is, at most n/2 operations can happen between each rebalancing.
Let us first give an implementation of the various operations that affect the front of the deque - cons, head and tail. Those implementations do not necessarily respect the invariant. In a second time we'll explain how to modify a deque which does not satisfy the invariant into one which satisfies it. However, they use the invariant, in that if the front is empty then the rear has at most one element. The operations affecting the rear of the list are defined similarly by symmetry.
empty = (0, NIL, NIL, 0, NIL, NIL)
fun insert'(x, (len_front, front, tail_front, len_rear, rear, tail_rear)) =
(len_front+1, CONS(x, front), drop(2, tail_front), len_rear, rear, drop(2, tail_rear))
fun head((_, CONS(h, _), _, _, _, _)) = h
fun head((_, NIL, _, _, CONS(h, NIL), _)) = h
fun tail'((len_front, CONS(head_front, front), tail_front, len_rear, rear, tail_rear)) =
(len_front - 1, front, drop(2, tail_front), len_rear, rear, drop(2, tail_rear))
fun tail'((_, NIL, _, _, CONS(h, NIL), _)) = empty
It remains to explain how to define a method balance that rebalances the deque if insert' or tail' broke the invariant. The methods insert and tail can be defined by first applying insert' or tail' and then applying balance.
fun balance(q as (len_front, front, tail_front, len_rear, rear, tail_rear)) =
let floor_half_len = (len_front + len_rear) / 2 in
let ceil_half_len = len_front + len_rear - floor_half_len in
if len_front > 2*len_rear+1 then
let val front' = take(ceil_half_len, front)
val rear' = rotateDrop(rear, ceil_half_len, front)
in (ceil_half_len, front', front', floor_half_len, rear', rear')
else if len_rear > 2*len_front+1 then
let val rear' = take(floor_half_len, rear)
val front' = rotateDrop(front, floor_half_len, rear)
in (ceil_half_len, front', front', floor_half_len, rear', rear')
else q
where rotateDrop(front, i, rear) returns the concatenation of front and of drop(i, rear). That is, front' = rotateDrop(front, floor_half_len, rear) puts into front' the content of front and the content of rear that is not already in rear'. Since dropping n elements takes O(n) time, we use laziness to ensure that elements are dropped two by two, with two drops being done during each tail' and each insert' operation.
fun rotateDrop(front, i, rear) =
if i < 2 then rotateRev(front, drop(i, rear), NIL)
else let CONS(x, front') = front in
CONS(x, rotateDrop(front', i-2, drop(2, rear)))
where rotateRev(front, middle, rear) is a function that returns the front, followed by the middle reversed, followed by the rear. This function is also defined using laziness to ensure that it can be computed step by step, with one step executed during each insert' and tail' and taking a constant time. This function uses the invariant that |rear|-2|front| is 2 or 3.
fun rotateRev(NIL, rear, a) =
reverse(rear)++a
fun rotateRev(CONS(x, front), rear, a) =
CONS(x, rotateRev(front, drop(2, rear), reverse(take(2, rear))++a))
where ++ is the function concatenating two lists.
Implementation without laziness
Note that, without the lazy part of the implementation, this would be a non-persistent implementation of a queue in O(1) amortized time. In this case, the lists tail_front and tail_rear could be removed from the representation of the double-ended queue.
Language support
Ada's containers provides the generic packages Ada.Containers.Vectors and Ada.Containers.Doubly_Linked_Lists, for the dynamic array and linked list implementations, respectively.
C++'s Standard Template Library provides the class templates std::deque and std::list, for the multiple array and linked list implementations, respectively.
As of Java 6, Java's Collections Framework provides a new Deque interface that provides the functionality of insertion and removal at both ends. It is implemented by classes such as ArrayDeque (also new in Java 6) and LinkedList, providing the dynamic array and linked list implementations, respectively. However, the ArrayDeque, contrary to its name, does not support random access.
Javascript's Array prototype & Perl's arrays have native support for both removing (shift and pop) and adding (unshift and push) elements on both ends.
Python 2.4 introduced the collections module with support for deque objects. It is implemented using a doubly linked list of fixed-length subarrays.
As of PHP 5.3, PHP's SPL extension contains the 'SplDoublyLinkedList' class that can be used to implement deque data structures. Previously, to make a deque structure, the array functions array_shift/unshift/pop/push had to be used instead.
GHC's Data.Sequence module implements an efficient, functional deque structure in Haskell. The implementation uses 2–3 finger trees annotated with sizes. There are other (fast) possibilities to implement purely functional (thus also persistent) double queues (most using heavily lazy evaluation). Kaplan and Tarjan were the first to implement optimal confluently persistent catenable deques. Their implementation was strictly purely functional in the sense that it did not use lazy evaluation. Okasaki simplified the data structure by using lazy evaluation with a bootstrapped data structure and degrading the performance bounds from worst-case to amortized. Kaplan, Okasaki, and Tarjan produced a simpler, non-bootstrapped, amortized version that can be implemented either using lazy evaluation or more efficiently using mutation in a broader but still restricted fashion. Mihaescu and Tarjan created a simpler (but still highly complex) strictly purely functional implementation of catenable deques, and also a much simpler implementation of strictly purely functional non-catenable deques, both of which have optimal worst-case bounds.
Rust's std::collections includes VecDeque which implements a double-ended queue using a growable ring buffer.
Complexity
In a doubly-linked list implementation and assuming no allocation/deallocation overhead, the time complexity of all deque operations is O(1). Additionally, the time complexity of insertion or deletion in the middle, given an iterator, is O(1); however, the time complexity of random access by index is O(n).
In a growing array, the amortized time complexity of all deque operations is O(1). Additionally, the time complexity of random access by index is O(1); but the time complexity of insertion or deletion in the middle is O(n).
Applications
One example where a deque can be used is the work stealing algorithm. This algorithm implements task scheduling for several processors. A separate deque with threads to be executed is maintained for each processor. To execute the next thread, the processor gets the first element from the deque (using the "remove first element" deque operation). If the current thread forks, it is put back to the front of the deque ("insert element at front") and a new thread is executed. When one of the processors finishes execution of its own threads (i.e. its deque is empty), it can "steal" a thread from another processor: it gets the last element from the deque of another processor ("remove last element") and executes it. The work stealing algorithm is used by Intel's Threading Building Blocks (TBB) library for parallel programming.
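The sketch below is a deliberately simplified, single-threaded Python illustration of that deque discipline (the worker setup and task labels are made up for the example): each worker takes work from the front of its own deque and, when idle, steals from the back of another worker's deque. Real work-stealing schedulers such as TBB use concurrent, lock-free deques rather than this toy loop.
from collections import deque

# Two "processors", each with its own deque of pending tasks (labels are arbitrary).
queues = [deque("A" + str(i) for i in range(6)), deque()]

def run_one(worker):
    own = queues[worker]
    if own:
        return own.popleft()      # take the next task from the front of its own deque
    # Own deque is empty: steal the *last* task from another worker's deque.
    for other, q in enumerate(queues):
        if other != worker and q:
            return q.pop()        # steal from the back
    return None

print(run_one(0))   # 'A0' -- worker 0 works from the front of its own deque
print(run_one(1))   # 'A5' -- worker 1 is idle, so it steals from the back of worker 0's deque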
See also
Pipe
Priority queue
References
External links
Type-safe open source deque implementation at Comprehensive C Archive Network
SGI STL Documentation: deque<T, Alloc>
Code Project: An In-Depth Study of the STL Deque Container
Deque implementation in C
VBScript implementation of stack, queue, deque, and Red-Black Tree
Multiple implementations of non-catenable deques in Haskell
Abstract data types | Double-ended queue | [
"Mathematics"
] | 3,024 | [
"Type theory",
"Mathematical structures",
"Abstract data types"
] |
8,912 | https://en.wikipedia.org/wiki/Drake%20equation | The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy.
The equation was formulated in 1961 by Frank Drake, not for purposes of quantifying the number of civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for extraterrestrial intelligence (SETI). The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life. It is more properly thought of as an approximation than as a serious attempt to determine a precise number.
Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the estimated values for several of its factors are highly conjectural, the combined multiplicative effect being that the uncertainty associated with any derived value is so large that the equation cannot be used to draw firm conclusions.
Equation
The Drake equation is:
N = R* · fp · ne · fl · fi · fc · L
where
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
R* = the average rate of star formation in our Galaxy.
fp = the fraction of those stars that have planets.
ne = the average number of planets that can potentially support life per star that has planets.
fl = the fraction of planets that could support life that actually develop life at some point.
fi = the fraction of planets with life that go on to develop intelligent life (civilizations).
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
L = the length of time for which such civilizations release detectable signals into space.
This form of the equation first appeared in Drake's 1965 paper.
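Since the equation is a plain product of its seven factors, it can be transcribed directly into a one-line function; the sketch below (function and argument names are our own) simply multiplies whatever estimates the reader supplies.
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* * fp * ne * fl * fi * fc * L"""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime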
History
In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal Nature with the provocative title "Searching for Interstellar Communications". Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum.
Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10 followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million million has the right combination of chemicals, temperature, water, days and nights to support planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where life has been forged by evolution."
Seven months after Cocconi and Morrison published their article, Drake began searching for extraterrestrial intelligence in an experiment called Project Ozma. It was the first systematic search for signals from communicative extraterrestrial civilizations. Using the dish of the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, Drake monitored two nearby Sun-like stars: Epsilon Eridani and Tau Ceti, slowly scanning frequencies close to the 21 cm wavelength for six hours per day from April to July 1960. The project was well designed, inexpensive, and simple by today's standards. It detected no signals.
Soon thereafter, Drake hosted the first search for extraterrestrial intelligence conference on detecting their radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting.
The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan, and radio-astronomer Otto Struve. These participants called themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall.
Usefulness
The Drake equation results in a summary of the factors affecting the likelihood that we might detect radio-communication from intelligent extraterrestrial life. The last three parameters, fi, fc, and L, are not known and are very difficult to estimate, with values ranging over many orders of magnitude (see Range of results below). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the contemplation of all the various concepts which scientists must incorporate when considering the question of life elsewhere, and gives the question of life elsewhere a basis for scientific analysis. The equation has helped draw attention to some particular scientific problems related to life in the universe, for example abiogenesis, the development of multi-cellular life, and the development of intelligence itself.
Within the limits of existing human technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved significantly since the early 1960s. SETI efforts since 1961 have conclusively ruled out widespread alien emissions near the 21 cm wavelength of the hydrogen frequency.
Estimates
Original estimates
There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were:
R* = 1 yr−1 (1 star formed per year, on the average over the life of the galaxy; this was regarded as conservative)
fp = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets)
ne = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life)
fl = 1 (100% of these planets will develop life)
fi = 1 (100% of which will develop intelligent life)
fc = 0.1 to 0.2 (10–20% of which will be able to communicate)
L = somewhere between 1000 and 100,000,000 years
Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results). Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that N ≈ L, and there were probably between 1000 and 100,000,000 planets with civilizations in the Milky Way Galaxy.
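Plugging the lower and upper ends of these 1961 guesses into the product reproduces the range just quoted; the worked example below reuses the drake_n sketch introduced in the Equation section above.
# lower bound: 1 * 0.2 * 1 * 1 * 1 * 0.1 * 1,000
print(drake_n(1, 0.2, 1, 1, 1, 0.1, 1_000))         # 20.0
# upper bound: 1 * 0.5 * 5 * 1 * 1 * 0.2 * 100,000,000
print(drake_n(1, 0.5, 5, 1, 1, 0.2, 100_000_000))   # 50,000,000.0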
Current estimates
This section discusses and attempts to list the best current estimates for the parameters of the Drake equation.
Rate of star creation in this Galaxy, R*
Calculations in 2010, from NASA and the European Space Agency indicate that the rate of star formation in this Galaxy is about of material per year. To get the number of stars per year, we divide this by the initial mass function (IMF) for stars, where the average new star's mass is about . This gives a star formation rate of about 1.5–3 stars per year.
Fraction of those stars that have planets, fp
Analysis of microlensing surveys, in 2012, found that fp may approach 1—that is, stars are orbited by planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way star.
Average number of planets that might support life per star that has planets, ne
In November 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy. 11 billion of these estimated planets may be orbiting sun-like stars. Since there are about 100 billion stars in the galaxy, this implies roughly 0.4 such planets per star. The nearest planet in the habitable zone is Proxima Centauri b, which is as close as about 4.2 light-years away.
The consensus at the Green Bank meeting was that ne had a minimum value between 3 and 5. Dutch science journalist Govert Schilling has opined that this is optimistic. Even if planets are in the habitable zone, the number of planets with the right proportion of elements is difficult to estimate. Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time.
The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets.
On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets. It is now estimated that even tidally locked planets close to red dwarf stars might have habitable zones, although the flaring behavior of these stars might speak against this. The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moons Titan and Enceladus) adds further uncertainty to this figure.
The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for planets, including being in galactic zones with suitably low radiation, high star metallicity, and low enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have a planetary system with large gas giants which provide bombardment protection without a hot Jupiter; and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to generate seasonal variation.
Fraction of the above that actually go on to develop life, fl
Geological evidence from the Earth suggests that fl may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, without assuming that the underlying distribution of fl is the same for all planets in the Milky Way, there are zero degrees of freedom, permitting no valid estimates to be made. If life (or evidence of past life) were to be found on Mars, Europa, Enceladus or Titan that developed independently from life on Earth, it would imply a value for fl close to 1. While this would raise the number of degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet. It is also possible that life arose more than once, but that other branches were out-competed, or died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms." As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship".
In 2020, a paper by scholars at the University of Nottingham proposed an "Astrobiological Copernican" principle, based on the Principle of Mediocrity, and speculated that "intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution". In the authors' framework, fl, fi, and fc are all set to a probability of 1 (certainty). Their resultant calculation concludes there are more than thirty current technological civilizations in the galaxy (disregarding error bars).
Fraction of the above that develops intelligent life, fi
This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent, and from this infer a tiny value for fi. Likewise, the Rare Earth hypothesis, notwithstanding its low value for ne above, also holds that a low value for fi dominates the analysis. Those who favor higher values note the generally increasing complexity of life over time, concluding that the appearance of intelligence is almost inevitable, implying an fi approaching 1. Skeptics point out that the large spread of values in this factor and others make all estimates unreliable. (See Criticism).
In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the snowball Earth or research into extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a discovery that life did form on Mars but ceased to exist might raise the estimate of fl but would indicate that in half the known cases, intelligent life did not develop.
Estimates of fi have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation.
There has been quantitative work to begin to define fi. One example is a Bayesian analysis published in 2020. In the conclusion, the author cautions that this study applies to Earth's conditions. In Bayesian terms, the study favors the formation of intelligence on a planet with identical conditions to Earth but does not do so with high confidence.
Planetary scientist Pascal Lee of the SETI Institute proposes that this fraction is very low (0.0002). He based this estimate on how long it took Earth to develop intelligent life (1 million years since Homo erectus evolved, compared to 4.6 billion years since Earth formed).
Fraction of the above revealing their existence via signal release into space, fc
For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts covering only a tiny fraction of the stars that might look for human presence. (See Arecibo message, for example). There is considerable speculation why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than present day humans. By this standard, the Earth is a communicating civilization.
Another question is what percentage of civilizations in the galaxy are close enough for us to detect, assuming that they send out signals. For example, existing Earth radio telescopes could only detect Earth radio transmissions from roughly a light year away.
Lifetime of such a civilization wherein it communicates its signals into space, L
Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly civilizations. Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of most of these civilizations was followed by later civilizations that carried on the technologies, so it is doubtful that they are separate civilizations in the context of the Drake equation. In the expanded version, including a reappearance number, this lack of specificity in defining single civilizations does not matter for the result, since such a civilization turnover could be described as an increase in the reappearance number rather than an increase in L, stating that a civilization reappears in the form of the succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid.
David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It will then last for an indefinite period of time, making the value for L potentially billions of years. If this is the case, then he proposes that the Milky Way Galaxy may have been steadily accumulating advanced civilizations since it formed. He proposes that the last factor L be replaced by the product of the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out) and the length of time during which this process has been going on. This has the advantage that the latter quantity would be a relatively easy-to-discover number, as it would simply be some fraction of the age of the universe.
It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other.
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare. Paleobiologist Olev Vinn suggests that the lifetime of most technological civilizations is brief due to inherited behavior patterns present in all intelligent organisms. These behaviors, incompatible with civilized conditions, inevitably lead to self-destruction soon after the emergence of advanced technologies.
An intelligent civilization might not be organic, as some have suggested that artificial general intelligence may replace humanity.
Range of results
As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions, as the values used in portions of the Drake equation are not well established. In particular, the result can be N ≪ 1, meaning we are likely alone in the galaxy, or N ≫ 1, implying there are many civilizations we might contact. One of the few points of wide agreement is that the presence of humanity implies a probability of intelligence arising of greater than zero.
As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value of , Mayr's view on intelligence arising, Drake's view of communication, and Shermer's estimate of lifetime:
, , , [Drake, above], and years
gives:
i.e., suggesting that we are probably alone in this galaxy, and possibly in the observable universe.
On the other hand, with larger values for each of the parameters above, values of can be derived that are greater than 1. The following higher values that have been proposed for each of the parameters:
, , , , , [Drake, above], and years
Use of these parameters gives:
Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.
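A toy version of such a Monte Carlo exercise can be written in a few lines of Python. The sampling ranges below are illustrative assumptions chosen only to span several orders of magnitude, not the stellar and planetary model used in the published simulations; the point is to show how modest per-factor uncertainty multiplies into an enormous spread in N.
import math
import random

def sample_log_uniform(lo, hi):
    """Draw a value whose logarithm is uniform between log(lo) and log(hi)."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def sample_n():
    r_star   = random.uniform(1.5, 3.0)        # stars formed per year
    f_p      = random.uniform(0.8, 1.0)        # fraction of stars with planets
    n_e      = sample_log_uniform(0.1, 5)      # habitable planets per such star
    f_l      = sample_log_uniform(1e-3, 1)     # ...that actually develop life
    f_i      = sample_log_uniform(1e-4, 1)     # ...that develop intelligence
    f_c      = sample_log_uniform(1e-2, 0.2)   # ...that emit detectable signals
    lifetime = sample_log_uniform(3e2, 1e8)    # years of detectable signalling
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

samples = sorted(sample_n() for _ in range(100_000))
print("median N:", samples[len(samples) // 2])
print("1st-99th percentile:", samples[len(samples) // 100], "-", samples[-len(samples) // 100])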
Possible former technological civilizations
In 2016, Adam Frank and Woodruff Sullivan modified the Drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be, to give the result that Earth hosts the only technological species that has ever arisen, for two cases: (a) this Galaxy, and (b) the universe as a whole. By asking this different question, one removes the lifetime and simultaneous communication uncertainties. Since the numbers of habitable planets per star can today be reasonably estimated, the only remaining unknown in the Drake equation is the probability that a habitable planet ever develops a technological species over its lifetime. For Earth to have the only technological species that has ever occurred in the universe, they calculate the probability of any given habitable planet ever developing a technological species must be less than . Similarly, for Earth to have been the only case of hosting a technological species over the history of this Galaxy, the odds of a habitable zone planet ever hosting a technological species must be less than (about 1 in 60 billion). The figure for the universe implies that it is extremely unlikely that Earth hosts the only technological species that has ever occurred. On the other hand, for this Galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this Galaxy.
Modifications
As many observers have pointed out, the Drake equation is a very simple model that omits potentially relevant parameters, and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms.
Combining the estimates of the original six factors by major researchers via a Monte Carlo procedure leads to a best value for the non-longevity factors of 0.85 1/years. This result differs insignificantly from the estimate of unity given both by Drake and the Cyclops report.
Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society". Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed.
Colonization It has been proposed to generalize the Drake equation to include additional effects of alien civilizations colonizing other star systems. Each original site expands with some expansion velocity and establishes additional sites that survive for a lifetime of their own. The result is a more complex set of 3 equations.
Reappearance factor The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the same planet. Thus, if a new civilization reappears, on average, a certain number of times on the same planet where a previous civilization once has appeared and ended, then the total number of civilizations on such a planet would be one plus that average, which is the actual reappearance factor added to the equation.
The factor depends on what generally is the cause of civilization extinction. If it is generally by temporary uninhabitability, for example a nuclear winter, then the factor may be relatively high. On the other hand, if it is generally by permanent uninhabitability, such as stellar evolution, then it may be almost zero. In the case of total life extinction, a similar factor may be applicable for the fraction of planets on which life develops, that is, how many times life may appear on a planet where it has appeared once.
METI factor Alexander Zaitsev said that to be in a communicative phase and emit dedicated messages are not the same. For example, humans, although being in a communicative phase, are not a communicative civilization; we do not practise such activities as the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (messaging to extraterrestrial intelligence) to the classical Drake equation. He defined the factor as "the fraction of communicative civilizations with clear and non-paranoid planetary consciousness", or alternatively expressed, the fraction of communicative civilizations that actually engage in deliberate interstellar transmission.
The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate.
Biogenic gases Astronomer Sara Seager proposed a revised equation that focuses on the search for planets with biosignature gases. These gases are produced by living organisms that can accumulate in a planet atmosphere to levels that can be detected with remote space telescopes.
The Seager equation looks like this:
N = N* · FQ · FHZ · FO · FL · FS
where:
N = the number of planets with detectable signs of life
N* = the number of stars observed
FQ = the fraction of stars that are quiet
FHZ = the fraction of stars with rocky planets in the habitable zone
FO = the fraction of those planets that can be observed
FL = the fraction that have life
FS = the fraction on which life produces a detectable signature gas
Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?"
Carl Sagan's version of the Drake equation American astronomer Carl Sagan made some modifications in the Drake equation and presented it in the 1980 program Cosmos: A Personal Voyage. The modified equation is shown below:
N = N* · fp · ne · fl · fi · fc · fL
where
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
N* = Number of stars in the Milky Way Galaxy
fp = the fraction of those stars that have planets.
ne = the average number of planets that can potentially support life per star that has planets.
fl = the fraction of planets that could support life that actually develop life at some point.
fi = the fraction of planets with life that go on to develop intelligent life (civilizations).
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
fL = fraction of a planetary lifetime graced by a technological civilization
Criticism
Criticism of the Drake equation is varied. Firstly, many of the terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around the present day understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.
Others, such as astrophysicist Ethan Siegel, point out that the equation was formulated before our understanding of the universe had matured.
One reply to such criticisms is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference.
Fermi paradox
A civilization lasting for tens of millions of years could be able to spread throughout the galaxy, even at the slow speeds foreseeable with present-day technology. However, no confirmed signs of civilizations or intelligent life elsewhere have been found, either in this Galaxy or in the observable universe of 2 trillion galaxies. According to this line of thinking, the tendency to fill (or at least explore) all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?".
A large number of explanations have been proposed to explain this lack of contact; a book published in 2015 elaborated on 75 different explanations. In terms of the Drake Equation, the explanations can be divided into three classes:
Few intelligent civilizations ever arise. This is an argument that at least one of the first few terms has a low value. The most common suspect is the fraction of planets that develop intelligent life, but explanations such as the rare Earth hypothesis argue that the number of life-supporting planets per star is the small term.
Intelligent civilizations exist, but we see no evidence, meaning fc is small. Typical arguments include that civilizations are too far apart, it is too expensive to spread throughout the galaxy, civilizations broadcast signals for only a brief period of time, communication is dangerous, and many others.
The lifetime of intelligent, communicative civilizations is short, meaning the value of is small. Drake suggested that a large number of extraterrestrial civilizations would form, and he further speculated that the lack of evidence of such civilizations may be because technological civilizations tend to disappear rather quickly. Typical explanations include it is the nature of intelligent life to destroy itself, it is the nature of intelligent life to destroy others, they tend to be destroyed by natural events, and others.
These lines of reasoning lead to the Great Filter hypothesis, which states that since there are no observed extraterrestrial civilizations despite the vast number of stars, at least one step in the process must be acting as a filter to reduce the final value. According to this view, either it is very difficult for intelligent life to arise, or the lifetime of technologically advanced civilizations, or the period of time they reveal their existence must be relatively short.
An analysis by Anders Sandberg, Eric Drexler and Toby Ord suggests "a substantial ex ante (predicted) probability of there being no other intelligent life in our observable universe".
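A minimal sketch of the uncertainty-propagation idea behind such analyses is shown below: each factor of the classic form of the equation is sampled from a broad distribution rather than fixed at a point estimate, and the share of draws implying an effectively empty galaxy is tallied. The log-uniform distributions and all parameter ranges here are illustrative assumptions and do not reproduce the distributions used by Sandberg, Drexler and Ord.

```python
import math
import random

def log_uniform(lo, hi):
    """Sample from a log-uniform distribution on [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def sample_n():
    """One Monte Carlo draw of N from assumed parameter ranges."""
    r_star = log_uniform(1, 100)     # star formation rate per year (assumed range)
    f_p    = log_uniform(0.1, 1)     # fraction of stars with planets (assumed range)
    n_e    = log_uniform(0.1, 10)    # habitable planets per such star (assumed range)
    f_l    = log_uniform(1e-6, 1)    # fraction developing life (assumed range)
    f_i    = log_uniform(1e-6, 1)    # fraction developing intelligence (assumed range)
    f_c    = log_uniform(1e-3, 1)    # fraction becoming detectable (assumed range)
    L      = log_uniform(1e2, 1e8)   # detectable lifetime in years (assumed range)
    return r_star * f_p * n_e * f_l * f_i * f_c * L

samples = [sample_n() for _ in range(100_000)]
frac_alone = sum(n < 1 for n in samples) / len(samples)
print(f"Fraction of draws with N < 1 ('galaxy looks empty'): {frac_alone:.2f}")
```

The point of such a sketch is that wide uncertainty in the biological terms can make "no other detectable civilizations" a common outcome even when the mean of N is large.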
In fiction and popular culture
The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown on Star Trek, the television series he created. However, Roddenberry did not have the equation with him, and he was forced to "invent" his own version of it for his original proposal.
Regarding Roddenberry's fictional version of the equation, Drake himself commented that a number raised to the first power is just the number itself.
A commemorative plate on NASA's Europa Clipper mission, launched in October 2024, features a poem by the U.S. Poet Laureate Ada Limón, waveforms of the word 'water' in 103 languages, a schematic of the water hole, the Drake equation, and a portrait of planetary scientist Ron Greeley.
The track Abiogenesis on the Carbon Based Lifeforms album World of Sleepers features the Drake equation in a spoken voice-over.
See also
The Search for Life: The Drake Equation, BBC documentary
Notes
References
Further reading
External links
Interactive Drake Equation Calculator
Frank Drake's 2010 article on "The Origin of the Drake Equation"
"Only a matter of time, says Frank Drake". A Q&A with Frank Drake in February 2010
Macromedia Flash page allowing the user to modify Drake's values from PBS's Nova
"The Drake Equation", Astronomy Cast episode #23; includes full transcript
Animated simulation of the Drake equation.
"The Alien Equation", BBC Radio program Discovery (22 September 2010)
"Reflections on the Equation" (PDF), by Frank Drake, 2013
1961 introductions
Astrobiology
Astronomical controversies
Astronomical hypotheses
Eponymous equations of physics
Fermi paradox
Interstellar messages
Search for extraterrestrial intelligence | Drake equation | [
"Physics",
"Astronomy",
"Biology"
] | 6,582 | [
"Astronomical hypotheses",
"Origin of life",
"Equations of physics",
"History of astronomy",
"Speculative evolution",
"Eponymous equations of physics",
"Astrobiology",
"Astronomical controversies",
"Fermi paradox",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
8,946 | https://en.wikipedia.org/wiki/Decipherment | In philology and linguistics, decipherment is the discovery of the meaning of the symbols found in extinct languages and/or alphabets. Decipherment is possible with respect to languages and scripts. One can also study or try to decipher how spoken languages that no longer exist were once pronounced, or how living languages used to be pronounced in prior eras.
Notable examples of decipherment include the decipherment of ancient Egyptian scripts and the decipherment of cuneiform. A notable decipherment in recent years is that of the Linear Elamite script. Today, at least a dozen languages remain undeciphered. Historically speaking, decipherments do not come suddenly through single individuals who "crack" ancient scripts. Instead, they emerge from the incremental progress brought about by a broader community of researchers.
Decipherment should not be confused with cryptanalysis, which aims to decipher special written codes or ciphers used in intentionally concealed secret communication (especially during war). It should also not be confused with determining the meaning of ambiguous text in a known language (interpretation).
Categories
According to Gelb and Whiting, the approach of decipherment depends on four categories of situations in an undeciphered language:
Type O: known writing and known language. Although decipherment in this case is trivial, useful information can be gleaned when a known language is written in an alphabet other than the one it is commonly written in. Studying the writing of the Phoenician or Sumerian languages in the Greek alphabet allows information about pronunciation and vocalization to be gleaned that cannot be obtained when studying the expression of these languages in their normal writing system.
Type I: unknown writing and known language. Deciphered languages in this category include Phoenician, Ugaritic, Cypriot, and Linear B. In this situation, alphabetic systems are the easiest to decipher, followed by syllabic languages, and finally the most difficult being logo-syllabic.
Type II: known writing and unknown language. An example is Linear A. Strictly speaking, this situation is not one of decipherment but of linguistic analysis. Decipherment in this category is considered extremely difficult to achieve on the basis of internal information only.
Type III: unknown writing and unknown language. Examples include the Archanes script and the Archanes formula, Phaistos disk, Cretan hieroglyphs, and Cypro-Minoan syllabary. When this situation occurs in an isolated culture and without the availability of outside information, decipherment is typically considered impossible.
Methods
There is no single recipe or linear method for decipherment; instead, philologists and linguists must rely on a set of established heuristic devices. Broadly, it is important to be familiar with the texts in which the script or language occurs, to have access to accurate drawings or photographs of these texts, to know their relative chronology, and to have background information on the contexts in which the texts occur (their geography, whether they were found on a funerary monument, and so on).
These methods can be divided into approaches utilizing external or internal information.
External information
Many successful decipherments have proceeded from the discovery of external information, a common example being the use of multilingual inscriptions, such as the Rosetta Stone (with the same text in three scripts: Demotic, hieroglyphic, and Greek), which enabled the decipherment of Egyptian hieroglyphs. In principle, a multilingual text may be insufficient for a decipherment, as translation is not a linear and reversible process but instead represents an encoding of the message in a different symbolic system. Translating a text from one language into a second, and then from the second language back into the first, rarely reproduces exactly the original writing. Likewise, unless a significant number of words are contained in the multilingual text, only limited information can be gleaned from it.
Internal information
Internal approaches are multi-step: one must first ensure that the writing they are looking at represents real writing, as opposed to a grouping of pictorial representations or a modern-day forgery without further meaning. This is commonly approached with methods from the field of grammatology. Prior to decipherment of meaning, one can then determine the number of distinct graphemes (which, in turn, allows one to tell if the writing system is alphabetic, syllabic, or logo-syllabic; this is because such writing systems typically do not overlap in the number of graphemes they use), the sequence of writing (whether it be from left to right, right to left, top to bottom, etc.), and the determination of whether individual words are properly segmented when the alphabet is written (such as with the use of a space or a different special mark) or not. If a repetitive schematic arrangement can be identified, this can help in decipherment. For example, if the last line of a text has a small number, it can be reasonably guessed to be referring to the date, where one of the words means "year" and, sometimes, a royal name also appears. Another case is when the text contains many small numbers, followed by a word, followed by a larger number; here, the word likely means "total" or "sum". After one has exhausted the information that can be inferentially derived from probable content, they must transition to the systematic application of statistical tools. These include methods concerning the frequency of appearance of each symbol, the order in which these symbols typically appear, whether some symbols appear at the beginning or end of words, etc. There are situations where orthographic features of a language make it difficult if not impossible to decipher specific features (especially without certain outside information), such as when an alphabet does not express double consonants. Additional, and more complex methods, also exist. Eventually, the application of such statistical methods becomes exceedingly laborious, in which computers might be used to apply them automatically.
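As a minimal sketch of the statistical profiling described above (counting distinct graphemes and checking which signs favour word-initial or word-final position), the following Python snippet computes such counts for a hypothetical transliterated corpus; both the corpus and the sign inventory are invented for illustration.

```python
from collections import Counter

# Hypothetical transliterated corpus of an undeciphered script:
# '-' separates signs within a word, spaces separate words.
corpus = "ka-ru-te so-ka me-te-ru-ka so-ka-ru te-me so-ru-te-ka"
words = [w.split("-") for w in corpus.split()]

sign_freq = Counter(sign for word in words for sign in word)
initials = Counter(word[0] for word in words)    # signs favouring word-initial position
finals = Counter(word[-1] for word in words)     # signs favouring word-final position

print("Distinct signs:", len(sign_freq))         # a rough guide to the type of system
print("Most common signs:", sign_freq.most_common(3))
print("Common word-initial signs:", initials.most_common(3))
print("Common word-final signs:", finals.most_common(3))
```

On a real corpus the count of distinct signs helps discriminate alphabets (tens of signs) from syllabaries (roughly a hundred) and logo-syllabic systems (hundreds or more), while positional counts hint at grammatical affixes.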
Computational approaches
Computational approaches towards the decipherment of unknown languages began to appear in the late 1990s. Typically, there are two types of computational approaches used in language decipherment: approaches meant to produce translations in known languages, and approaches used to detect new information that might enable future efforts at translation. The second approach is more common, and includes things such as the detection of cognates or related words, discovery of the closest known language, word alignments, and more.
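One of the simpler computational aids mentioned above, cognate detection, is often approximated by scoring candidate word pairs across languages with a normalized edit distance; the sketch below illustrates the idea with invented word pairs rather than real research data.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def similarity(a: str, b: str) -> float:
    """1.0 means identical strings; values near 0 mean very different strings."""
    return 1 - edit_distance(a, b) / max(len(a), len(b))

# Hypothetical word pairs between an undeciphered language and a known relative.
pairs = [("malku", "melek"), ("shamu", "shamayim"), ("kalbu", "kelev")]
for unknown, known in pairs:
    print(f"{unknown!r} vs {known!r}: similarity {similarity(unknown, known):.2f}")
```

Real systems refine this with sound-correspondence models rather than raw character edits, but the ranking idea is the same.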
Artificial intelligence
In recent years, there has been a growing emphasis on methods utilizing artificial intelligence for the decipherment of lost languages, especially through natural language processing (NLP) methods. Proof-of-concept methods have independently re-deciphered Ugaritic and Linear B using data from similar languages, in this case Hebrew and Ancient Greek.
Deciphering pronunciation
Related to attempts to decipher the meaning of languages and alphabets are attempts to determine how extinct writing systems, or older versions of contemporary writing systems (such as English in the 1600s), were pronounced. Several methods and criteria have been developed in this regard. Important criteria include (1) rhymes and the testimony of poetry, (2) evidence from occasional spellings and misspellings, (3) interpretations of material in one language by authors writing in foreign languages, (4) information obtained from related languages, and (5) grammatical changes in spelling over time.
For example, analysis of poetry focuses on the use of wordplay or literary techniques between words that have a similar sound. Shakespeare's play Romeo and Juliet contains wordplay that relies on a similar sound between the words "soul" and "soles", allowing confidence that the similar pronunciation between the terms today also existed in Shakespeare's time. Another common source of information on pronunciation is earlier texts' use of rhyme, such as when consecutive lines in poetry end in similar or identical sounds. This method does have some limitations, however, as texts may use rhymes that rely on visual similarities between words (such as 'love' and 'remove') rather than auditory similarities, and rhymes can be imperfect. Another source of information about pronunciation comes from explicit descriptions of pronunciation in earlier texts, as in the case of the Grammatica Anglicana, such as in the following comment about the letter <o>: "In the long time it naturally soundeth sharp, and high; as in chósen, hósen, hóly, fólly [. . .] In the short time more flat, and a kin to u; as còsen, dòsen, mòther, bròther, lòve, pròve". Another example comes from detailed comments on the pronunciation of Sanskrit in the surviving works of Sanskrit grammarians.
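As a small illustration of how rhyme evidence might be gathered in practice, the sketch below extracts the final words of consecutive line pairs from a poem assumed to rhyme in couplets and flags pairs whose modern pronunciations differ, marking them as candidates for historical-pronunciation analysis. The verse, the couplet assumption, and the simplified pronunciation transcriptions are all invented for demonstration.

```python
# Rhymed couplets whose final words no longer rhyme in modern pronunciation
# are candidates for evidence that pronunciation has shifted over time.

# Hypothetical, simplified transcriptions of modern pronunciations (assumed).
modern_pron = {
    "prove": "uwv",
    "love": "ahv",
    "down": "awn",
    "town": "awn",
}

poem = [
    "So shall my faith for ever prove",    # invented couplet 1
    "As constant as my lasting love",
    "The bells ring out across the down",  # invented couplet 2
    "Their echoes settle on the town",
]

def final_word(line: str) -> str:
    return line.lower().split()[-1].strip(".,;:!?")

# Assume an AABB (couplet) rhyme scheme for this illustration.
for first, second in zip(poem[0::2], poem[1::2]):
    a, b = final_word(first), final_word(second)
    rhymes_today = modern_pron.get(a) == modern_pron.get(b)
    note = "still rhymes" if rhymes_today else "no longer rhymes: possible pronunciation shift"
    print(f"{a!r} / {b!r}: {note}")
```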
Challenges
Many challenges exist in the decipherment of languages, including when:
When it is not known which known language is closest to the undeciphered one.
When the words in the script are not clearly segmented, like in some Iberian languages.
When the writing system is not known. Specifically, if there is little certainty about the number of graphemes that exist in a writing system, it cannot be determined whether that system is an alphabet, a syllabary, a logosyllabary, or something else.
When the reading direction is not known. For example, it may not be clear if a writing system is meant to be read from left to right, or from right to left.
When it is not known if a script uses punctuation or spaces between words.
When the language of a script subject to decipherment efforts is not known.
When there is a small dataset available to learn about the properties of a script. This could lead to issues such as an incomplete vocabulary being known for the script.
When the typical order between subjects, objects, and verbs is not known.
When it is not known whether or how certain words can change their form.
When it is not known when multiple symbols are used to represent the same sound, syllable, word, concept, or idea (allographs).
When it is not clear how the penmanship or the style of writing of a particular scribe relates to the style of writing of another scribe working in the same text (the same letters or words might be written in a way that looks different), in which case it is difficult to correlate information across multiple examples of the use of the writing system.
When it is not known if certain words change their meaning depending on the context they appear in (homonyms).
When the context of discovery of a writing is not known. This is because information about the location out of which a writing system came from can provide valuable information about its relationship to known languages.
When adequate digital datasets for documented writing systems are not available, limiting the ability to use computational methods for decipherment.
When sufficient hardware resources, such as high performance computing, are not available (which might be necessary for more energy-intensive computational methods).
Relationship to cryptanalysis
Decipherment overlaps with another technical field known as cryptanalysis, a field that aims to decipher writings used in secret communication, known as ciphertext. A famous case of this was the cryptanalysis of the Enigma during World War II. Many other ciphers from past wars have only recently been cracked. Unlike in language decipherment, however, actors using ciphertext intentionally lay obstacles to prevent outsiders from uncovering the meaning of the communication.
History
Interest in ancient scripts and dead languages began to arise by the Renaissance, if not earlier. Extensive information began to be collected about these scripts in the 16th and 17th centuries, and a typology of writing was established in the 17th century. The first serious decipherments, however, did not take place until the 18th century. In 1754, Swinton and Barthélemy independently deciphered the Aramaic script as represented in Palmyrene inscriptions from the 3rd century AD. In 1787, Silvestre de Sacy deciphered the Sasanian script, which was the script used in Ancient Persia to write down the Middle Iranian language used in the Sasanian empire. Both decipherments relied on bilingual texts where Greek was included as the second script. It was also in the 18th century when the methodological framework for deciphering scripts and languages began to be established. For example, in 1714, Leibniz advocated that parallel content in bilingual inscriptions could be specified by correlating where personal names occur in both inscriptions. By the 19th century, the prerequisites for decipherment began to become widely available. These included extensive knowledge about the scripts themselves, adequate editions of known texts from that script, philological skills, and the ability to reconstruct linguistic forms from the limited available evidence. The 19th century saw two major successes in decipherment: that of Egyptian hieroglyphic and cuneiform.
Notable decipherers
See also
Deciphered scripts
Cuneiform
Egyptian hieroglyphs
Kharoshthi
Linear B
Mayan
Staveless Runes
Cypriot Syllabary
Undeciphered scripts
Rongorongo (Decipherment of rongorongo)
Indus script
Cretan hieroglyphs
Byblos syllabary
Linear A
Cypro-Minoan syllabary
Espanca
Numidian language
Undeciphered texts
Phaistos Disc
Rohonc Codex
Voynich Manuscript
References
Further reading
Cryptography
Writing systems
Genetics terms
Philology
Decipherment | Decipherment | [
"Mathematics",
"Engineering",
"Biology"
] | 2,783 | [
"Applied mathematics",
"Genetics terms",
"Cryptography",
"Cybersecurity engineering"
] |
8,957 | https://en.wikipedia.org/wiki/DARPA | The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. Originally known as the Advanced Research Projects Agency (ARPA), the agency was created on February 7, 1958, by President Dwight D. Eisenhower in response to the Soviet launching of Sputnik 1 in 1957. By collaborating with academia, industry, and government partners, DARPA formulates and executes research and development projects to expand the frontiers of technology and science, often beyond immediate U.S. military requirements. The name of the organization first changed from its founding name, ARPA, to DARPA, in March 1972, changing back to ARPA in February 1993, then reverted to DARPA in March 1996.
The Economist has called DARPA "the agency that shaped the modern world," with technologies like "Moderna's COVID-19 vaccine ... weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet on the list of innovations for which DARPA can claim at least partial credit." Its track record of success has inspired governments around the world to launch similar research and development agencies.
DARPA is independent of other military research and development and reports directly to senior Department of Defense management. DARPA comprises approximately 220 government employees in six technical offices, including nearly 100 program managers, who together oversee about 250 research and development programs. The agency's current director, appointed in March 2021, is Stefanie Tompkins.
Mission
Their mission statement is "to make pivotal investments in breakthrough technologies for national security".
History
Early history (1958–1969)
The Advanced Research Projects Agency (ARPA) was suggested by the President's Scientific Advisory Committee to President Dwight D. Eisenhower in a meeting called after the launch of Sputnik. ARPA was formally authorized by President Eisenhower in 1958 for the purpose of forming and executing research and development projects to expand the frontiers of technology and science, and able to reach far beyond immediate military requirements. The two relevant acts are the Supplemental Military Construction Authorization (Air Force) (Public Law 85-325) and Department of Defense Directive 5105.15, in February 1958. It was placed within the Office of the Secretary of Defense (OSD) and counted approximately 150 people. Its creation was directly attributed to the launching of Sputnik and to U.S. realization that the Soviet Union had developed the capacity to rapidly exploit military technology. Initial funding of ARPA was $520 million. ARPA's first director, Roy Johnson, left a $160,000 management job at General Electric for an $18,000 job at ARPA. Herbert York from Lawrence Livermore National Laboratory was hired as his scientific assistant.
Johnson and York were both keen on space projects, but when NASA was established later in 1958 all space projects and most of ARPA's funding were transferred to it. Johnson resigned and ARPA was repurposed to do "high-risk", "high-gain", "far out" basic research, a posture that was enthusiastically embraced by the nation's scientists and research universities. ARPA's second director was Brigadier General Austin W. Betts, who resigned in early 1961 and was succeeded by Jack Ruina who served until 1963. Ruina, the first scientist to administer ARPA, managed to raise its budget to $250 million. It was Ruina who hired J. C. R. Licklider as the first administrator of the Information Processing Techniques Office, which played a vital role in creation of ARPANET, the basis for the future Internet.
Additionally, the political and defense communities recognized the need for a high-level Department of Defense organization to formulate and execute R&D projects that would expand the frontiers of technology beyond the immediate and specific requirements of the Military Services and their laboratories. In pursuit of this mission, DARPA has developed and transferred technology programs encompassing a wide range of scientific disciplines that address the full spectrum of national security needs.
From 1958 to 1965, ARPA's emphasis centered on major national issues, including space, ballistic missile defense, and nuclear test detection. During 1960, all of its civilian space programs were transferred to the National Aeronautics and Space Administration (NASA) and the military space programs to the individual services.
This allowed ARPA to concentrate its efforts on the Project Defender (defense against ballistic missiles), Project Vela (nuclear test detection), and Project AGILE (counterinsurgency R&D) programs, and to begin work on computer processing, behavioral sciences, and materials sciences. The DEFENDER and AGILE programs formed the foundation of DARPA sensor, surveillance, and directed energy R&D, particularly in the study of radar, infrared sensing, and x-ray/gamma ray detection.
ARPA at this point (1959) played an early role in Transit (also called NavSat) a predecessor to the Global Positioning System (GPS). "Fast-forward to 1959 when a joint effort between DARPA and the Johns Hopkins Applied Physics Laboratory began to fine-tune the early explorers' discoveries. TRANSIT, sponsored by the Navy and developed under the leadership of Richard Kirschner at Johns Hopkins, was the first satellite positioning system."
During the late 1960s, with the transfer of these mature programs to the Services, ARPA redefined its role and concentrated on a diverse set of relatively small, essentially exploratory research programs. The agency was renamed the Defense Advanced Research Projects Agency (DARPA) in 1972, and during the early 1970s, it emphasized directed energy programs, information processing, and tactical technologies.
Concerning information processing, DARPA made great progress, initially through its support of the development of time-sharing. All modern operating systems rely on concepts invented for the Multics system, developed by a cooperation among Bell Labs, General Electric and MIT, which DARPA supported by funding Project MAC at MIT with an initial two-million-dollar grant.
DARPA supported the evolution of the ARPANET (the first wide-area packet switching network), Packet Radio Network, Packet Satellite Network and ultimately, the Internet and research in the artificial intelligence fields of speech recognition and signal processing, including parts of Shakey the robot. DARPA also supported the early development of both hypertext and hypermedia. DARPA funded one of the first two hypertext systems, Douglas Engelbart's NLS computer system, as well as The Mother of All Demos. DARPA later funded the development of the Aspen Movie Map, which is generally seen as the first hypermedia system and an important precursor of virtual reality.
Later history (1970–1980)
The Mansfield Amendment of 1973 expressly limited appropriations for defense research (through ARPA/DARPA) only to projects with direct military application.
The resulting "brain drain" is credited with boosting the development of the fledgling personal computer industry. Some young computer scientists left the universities to startups and private research laboratories such as Xerox PARC.
Between 1976 and 1981, DARPA's major projects were dominated by air, land, sea, and space technology, tactical armor and anti-armor programs, infrared sensing for space-based surveillance, high-energy laser technology for space-based missile defense, antisubmarine warfare, advanced cruise missiles, advanced aircraft, and defense applications of advanced computing.
Many of the successful programs were transitioned to the Services, such as the foundation technologies in automatic target recognition, space-based sensing, propulsion, and materials that were transferred to the Strategic Defense Initiative Organization (SDIO), later known as the Ballistic Missile Defense Organization (BMDO), now titled the Missile Defense Agency (MDA).
Recent history (1981–present)
During the 1980s, the attention of the Agency was centered on information processing and aircraft-related programs, including the National Aerospace Plane (NASP) or Hypersonic Research Program. The Strategic Computing Program enabled DARPA to exploit advanced processing and networking technologies and to rebuild and strengthen relationships with universities after the Vietnam War. In addition, DARPA began to pursue new concepts for small, lightweight satellites (LIGHTSAT) and directed new programs regarding defense manufacturing, submarine technology, and armor/anti-armor.
In 1981, two engineers, Robert McGhee and Kenneth Waldron, began developing the Adaptive Suspension Vehicle (ASV), nicknamed the "Walker", at the Ohio State University under a research contract from DARPA. The vehicle was 17 feet long, 8 feet wide, and 10.5 feet high, with six legs supporting its three-ton aluminum body, and was designed to carry cargo over difficult terrain. However, DARPA lost interest in the ASV after problems with cold-weather tests.
On February 4, 2004, the agency shut down its so-called "LifeLog Project". The project's aim would have been "to gather in a single place just about everything an individual says, sees or does".
On October 28, 2009, the agency broke ground on a new facility in Arlington County, Virginia, a few miles from the Pentagon.
In fall 2011, DARPA hosted the 100-Year Starship Symposium with the aim of getting the public to start thinking seriously about interstellar travel.
On June 5, 2016, NASA and DARPA announced that they planned to build new X-planes, with NASA setting out to create a whole series of X-planes over the next 10 years.
Between 2014 and 2016, DARPA shepherded the first machine-to-machine computer security competition, the Cyber Grand Challenge (CGC), bringing a group of top-notch computer security experts to search for security vulnerabilities, exploit them, and create fixes that patch those vulnerabilities in a fully automated fashion. It is one of DARPA's prize competitions to spur innovation.
In June 2018, DARPA leaders demonstrated a number of new technologies developed within the framework of the GXV-T program. The goal of this program is to create a relatively small, lightly armored combat vehicle that, through maneuverability and other measures, can successfully resist modern anti-tank weapon systems.
In September 2020, DARPA and the US Air Force announced that the Hypersonic Air-breathing Weapon Concept (HAWC) was ready for free-flight tests within the next year.
Victoria Coleman became the director of DARPA in November 2020.
In recent years, DARPA officials have contracted out core functions to corporations. For example, during fiscal year 2020, Chenega ran physical security on DARPA's premises, System High Corp. carried out program security, and Agile Defense ran unclassified IT services. General Dynamics runs classified IT services. Strategic Analysis Inc. provided support services regarding engineering, science, mathematics, and front office and administrative work.
Organization
Current program offices
DARPA has six technical offices that manage the agency's research portfolio, and two additional offices that manage special projects. All offices report to the DARPA director, including:
The Defense Sciences Office (DSO): DSO identifies and pursues high-risk, high-payoff research initiatives across a broad spectrum of science and engineering disciplines and transforms them into important, new game-changing technologies for U.S. national security. Current DSO themes include novel materials and structures, sensing and measurement, computation and processing, enabling operations, collective intelligence, and global change.
The Information Innovation Office (I2O) aims to ensure U.S. technological superiority in all areas where information can provide a decisive military advantage.
The Microsystems Technology Office (MTO) core mission is the development of high-performance, intelligent microsystems and next-generation components to ensure U.S. dominance in Command, Control, Communications, Computer, Intelligence, Surveillance, and Reconnaissance (C4ISR), Electronic Warfare (EW), and Directed Energy (DE). The effectiveness, survivability, and lethality of systems that relate to these applications depend critically on microsystems and components.
The Strategic Technology Office (STO) mission is to focus on technologies that have a global theater-wide impact and that involve multiple Services.
The Tactical Technology Office (TTO) engages in high-risk, high-payoff advanced military research, emphasizing the "system" and "subsystem" approach to the development of aeronautic, space, and land systems as well as embedded processors and control systems
The Biological Technologies Office (BTO) fosters, demonstrates, and transitions breakthrough fundamental research, discoveries, and applications that integrate biology, engineering, and computer science for national security. It was created in April 2014 by then-Director Arati Prabhakar, taking programs from the MTO and DSO offices.
Former offices
The Adaptive Execution Office (AEO) was created in 2009 by the DARPA Director, Regina Dugan. The office's four project areas included technology transition, assessment, rapid productivity and adaptive systems. AEO provided the agency with robust connections to the warfighter community and assisted the agency with the planning and execution of technology demonstrations and field trials to promote adoption by the warfighter, accelerating the transition of new technologies into DoD capabilities.
Information Awareness Office: 2002–2003
The Advanced Technology Office (ATO) researched, demonstrated, and developed high payoff projects in maritime, communications, special operations, command and control, and information assurance and survivability mission areas.
The Special Projects Office (SPO) researched, developed, demonstrated, and transitioned technologies focused on addressing present and emerging national challenges. SPO investments ranged from the development of enabling technologies to the demonstration of large prototype systems. SPO developed technologies to counter the emerging threat of underground facilities used for purposes ranging from command-and-control, to weapons storage and staging, to the manufacture of weapons of mass destruction. SPO developed significantly more cost-effective ways to counter proliferated, inexpensive cruise missiles, UAVs, and other platforms used for weapon delivery, jamming, and surveillance. SPO invested in novel space technologies across the spectrum of space control applications including rapid access, space situational awareness, counterspace, and persistent tactical grade sensing approaches including extremely large space apertures and structures.
The Office of Special Development (OSD) in the 1960s developed a real-time remote sensing, monitoring, and predictive activity system on trails used by insurgents in Laos, Cambodia, and the Republic of Vietnam. This was done from an office in Bangkok, Thailand, that was ostensibly established to catalog and support the Thai fishing fleet, of which two volumes were published.
A 1991 reorganization created several offices which existed throughout the early 1990s:
The Electronic Systems Technology Office combined areas of the Defense Sciences Office and the Defense Manufacturing Office. This new office focused on the boundary between general-purpose computers and the physical world, such as sensors, displays, and the first few layers of specialized signal processing that couple these modules to standard computer interfaces.
The Software and Intelligent Systems Technology Office and the Computing Systems Office had responsibilities associated with the Presidential High-Performance Computing Initiative. The Software office was also responsible for "software systems technology, machine intelligence and software engineering."
The Land Systems Office was created to develop advanced land vehicle and anti-armor systems, once the domain of the Tactical Technology Office.
The Undersea Warfare Office combined areas of the Advanced Vehicle Systems and Tactical Technology offices to develop and demonstrate submarine stealth and counter-stealth and automation.
A 2010 reorganization merged two offices:
The Transformational Convergence Technology Office (TCTO) and the Information Processing Techniques Office (IPTO) were combined in 2010 to form the Information Innovation Office (I2O).
TCTO's mission was to develop new crosscutting capabilities from a broad range of emerging technological and social trends, particularly in areas related to computing and computing-reliant subareas of the life sciences, social sciences, manufacturing, and commerce.
IPTO focused on inventing the sensing, networking, computing, and software technologies vital to ensuring DOD military superiority.
Projects
A list of DARPA's active and archived projects is available on the agency's website. Because of the agency's fast pace, programs constantly start and stop based on the needs of the U.S. government. Structured information about some of the DARPA's contracts and projects is publicly available.
Active projects
AdvaNced airCraft Infrastructure-Less Launch And RecoverY X-Plane (ANCILLARY) (2022): The program is to develop and demonstrate a vertical takeoff and landing (VTOL) plane that can launch without the supporting infrastructure, with low-weight, high-payload, and long-endurance capabilities. In June 2023, DARPA selected nine companies to produce initial operational system and demonstration system conceptual designs for an uncrewed aerial system (UAS).
AI Cyber Challenge (AIxCC) (2023): It is a two-year competition to identify and fix software vulnerabilities using AI, in partnership with Anthropic, Google, Microsoft, and OpenAI, which will provide their expertise and their platforms for the competition. There will be a semifinal phase and a final phase, held at DEF CON in Las Vegas in 2024 and 2025, respectively.
Air Combat Evolution (ACE) (2019): The goal of ACE is to automate air-to-air combat, enabling reaction times at machine speeds. By using human-machine collaborative dogfighting as its challenge problem, ACE seeks to increase trust in combat autonomy. Eight teams from academia and industry were selected in October 2019. In April 2024, DARPA and U.S. Air Force announced that ACE conducted the first-ever in-air dogfighting tests of AI algorithms autonomously flying an F-16 against a human-piloted F-16.
Air Space Total Awareness for Rapid Tactical Execution (ASTARTE) (2020): The program is conducted in partnership with the Army and Air Force on sensors, artificial intelligence algorithms, and virtual testing environments in order to create an understandable common operating picture when troops are spread out across battlefields
Atmospheric Water Extraction (AWE) program
Biomanufacturing: Survival, Utility, and Reliability beyond Earth (B-SURE) (2021): This program aims to address foundational scientific questions to determine how well industrial bio-manufacturing microorganisms perform in space conditions. In April 2023, it was announced that the Rhodium-DARPA Biomanufacturing 01 investigation had launched on a SpaceX flight to the International Space Station (ISS), where crew members are carrying out the project, which examines gravity's effect on the production of drugs and nutrients from bacteria and yeast.
Big Mechanism: Cancer research. (2015) The program aims to develop technology to read research abstracts and papers to extract pieces of causal mechanisms, assemble these pieces into more complete causal models, and reason over these models to produce explanations. The domain of the program is cancer biology with an emphasis on signaling pathways. It has a successor program called World Modelers.
Binary structure inference system: extract software properties from binary code to support repository-based reverse engineering for micro-patching that minimizes lifecycle maintenance and costs (2020).
Blackjack (2017): a program to develop and test military satellite constellation technologies with a variety of "military-unique sensors and payloads [attached to] commercial satellite buses. ...as an 'architecture demonstration intending to show the high military utility of global LEO constellations and mesh networks of lower size, weight, and cost spacecraft nodes.' ... The idea is to demonstrate that 'good enough' payloads in LEO can perform military missions, augment existing programs, and potentially perform 'on par or better than currently deployed exquisite space systems." Blue Canyon Technologies, Raytheon, and SA Photonics Inc. were working on phases 2 and 3 as of fiscal year 2020. On June 12, 2023, DARPA launched four satellites for a technology demonstration in low Earth orbit on the SpaceX Transporter-8 rideshare.
broadband, electro-magnetic spectrum receiver system: prototype and demonstration
BlockADE: Rapidly constructed barrier. (2014)
Captive Air Amphibious Transporter (CAAT)
Causal Exploration of Complex Operational Environments ("Causal Exploration") – computerized aid to military planning. (2018)
Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH), a DARPA Transformation Convergence Technology Office (TCTO) initiative
Collaborative Operations in Denied Environment (CODE): Modular software architecture for UAVs to pass information to each other in contested environments to identify and engage targets with limited operator direction. (2015)
Control of Revolutionary Aircraft with Novel Effectors (CRANE) (2019): The program seeks to demonstrate an experimental aircraft design based on active flow control (AFC), which is defined as on-demand addition of energy into a boundary layer in order to maintain, recover, or improve aerodynamic performance. The aim is for CRANE to generally improve aircraft performance and reliability while reducing cost. In May 2023, DARPA designated the experimental uncrewed aircraft the X-65 which will use banks of compressed air nozzles to execute maneuvers without traditional, exterior-moving flight controls.
Computational Weapon Optic (CWO) (2015): Computer rifle scope that combines various features into one optic.
DARPA Triage Challenge (DTC) (2023): The DTC will use a series of challenge events to spur development of novel physiological features for medical triage. The three-year competition focuses on improving emergency medical response in military and civilian mass casualty incidents.
DARPA XG (2005): technology for Dynamic Spectrum Access for assured military communications.
Demonstration Rocket for Agile Cislunar Operations (DRACO) (2021): The program is to demonstrate a nuclear thermal rocket (NTR) in orbit by 2027 in collaboration with NASA (nuclear thermal engine) and U.S. Space Force (launch).
Detection system consisting of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-based assays paired with reconfigurable point-of-need and massively multiplexed devices for diagnostics and surveillance
Electronics Resurgence Initiative (ERI) (2019): Started in 2019, the initiative aims at both national security capabilities and commercial economic competitiveness and sustainability. These programs emphasize forward-looking partnerships with U.S. industry, the defense industrial base, and university researchers. In 2023, DARPA expanded ERI's focus with the announcement of ERI 2.0 seeking to reinvent domestic microelectronics manufacturing.
Experimental Spaceplane 1 (formerly XS-1): In 2017, Boeing was selected for Phases 2 and 3 for the fabrication and flight of a reusable unmanned space transport after it completed the initial design in Phase 1 as one of the three teams. In January 2020, Boeing ended its role in the program.
Fast Lightweight Autonomy: Software algorithms that enable small UAVs to fly fast in cluttered environments without GPS or external communications. (2014)
Fast Network Interface Cards (FastNICs): develop and integrate new, clean-slate network subsystems in order to speed up applications, such as the distributed training of machine learning classifiers by 100x. Perspecta Labs and Raytheon BBN were working on FastNICs as of fiscal year 2020.
Force Application and Launch from Continental United States (FALCON): a research effort to develop a small satellite launch vehicle. (2008) This vehicle is under development by AirLaunch LLC.
Gamma Ray Inspection Technology (GRIT) program: research and develop high-intensity, tunable, and narrow-bandwidth gamma ray production in compact, transportable form. This technology can be utilized for discovering smuggled nuclear material in cargo via new inspection techniques, and enabling new medical diagnostics and therapies. RadiaBeam Technologies LLC was working on a phase 1 of the program, Laser-Compton approach, in fiscal year 2020.
Glide Breaker program: technology for an advanced interceptor capable of engaging maneuvering hypersonic vehicles or missiles in the upper atmosphere. Northrop Grumman and Aerojet Rocketdyne were working on this program as of fiscal year 2020.
Gremlins (2015): Air-launched and recoverable UAVs with distributed capabilities to provide low-cost flexibility over expensive multirole platforms. In October 2021, two X-61 Gremlin air vehicles were tested at the Army's Dugway Proving Ground, Utah.
Ground X-Vehicle Technology (GXV-T) (2015): This program aims to improve mobility, survivability, safety, and effectiveness of future combat vehicles without piling on armor.
High Productivity Computing Systems
High Operational Temperature Sensors (HOTS) (2023): The program is to develop sensor microelectronics consisting of transducers, signal conditioning microelectronics, and integration that operate with high bandwidth (>1 MHz) and dynamic range (>90 dB) at extreme temperatures (i.e., at least 800 °C).
HIVE (Hierarchical Identify Verify Exploit) CPU architecture. (2017)
Hypersonic Air-breathing Weapon Concept (HAWC). This program is a joint DARPA/U.S. Air Force effort that seeks to develop and demonstrate critical technologies to enable an effective and affordable air-launched hypersonic cruise missile.
Hypersonic Boost Glide Systems Research
Insect Allies (2017–2021)
Integrated Sensor Is Structure (ISIS): This was a joint DARPA and U.S. Air Force program to develop a sensor of unprecedented proportions to be fully integrated into a stratospheric airship.
Intelligent Integration of Information (I3) in SISTO, 1994–2000: supported database research and, with ARPA CISTO and NASA, funded the NSF Digital Library program, which led, among other things, to Google.
Joint All-Domain Warfighting Software (JAWS): software suite featuring automation and predictive analytics for battle management and command & control with tactical coordination for capture ("target custody") and kill missions. Systems & Technology Research of Woburn, Massachusetts, is working on this project, with an expected completion date of March 2022. Raytheon is also working on this project, with an expected completion date of April 2022.
Lasers for Universal Microscale Optical Systems (LUMOS): integrate heterogeneous materials to bring high performance lasers and amplifiers to manufacturable photonics platforms. As of fiscal year 2020, the Research Foundation for the State University of New York (SUNY) was working to enable "on-chip optical gain" to integrated photonics platforms, and enable complete photonics functionality "on a single substrate for disruptive optical microsystems."
LongShot (2021): The program is to demonstrate an unmanned air-launched vehicle (UAV) capable of employing air-to-air weapons. Phase 1 design work started in early 2021. In June 2023, DARPA awarded a Phase 3 contract to General Atomics for the manufacturing and a flight demonstration in 2025 of an air-launched, flying and potentially recoverable missile carrier.
Manta Ray: A 2020 DARPA program to develop a series of autonomous, large-size, unmanned underwater vehicles (UUVs) capable of long-duration missions and having large payload capacities. In December 2021, DARPA awarded Phase 2 contracts to Northrop Grumman Systems Corporation and Martin Defense Group to work on subsystem testing followed by fabrication and in-water demonstrations of full-scale integrated vehicles.
By May 2024, Manta Ray was not only the descriptor for the DARPA R&D program, but was also the name of a specific prototype UUV built by Northrop Grumman, with initial tests conducted in the Pacific Ocean during 1Q2024. Manta Ray has been designed to be broken down and fit into 5 standard shipping containers, shipped to where it will be deployed, and be reassembled in the theatre of operations where it will be used. DARPA is working with the US Navy to further test and then transition the technology.
Media Forensics (MediFor): A project aimed at automatically spotting digital manipulation in images and videos, including Deepfakes. (2018). MediFor largely ended in 2020 and DARPA launched a follow-on program in 2021 called the semantic forensics, or SemaFor.
MEMS Exchange: Microelectromechanical systems (MEMS) Implementation Environment (MX)
Millimeter-wave GaN Maturation (MGM) program: develop new GaN transistor technology to attain high-speed and large voltage swing at the same time. HRL Laboratories LLC, a joint venture between Boeing and General Motors, is working on phase 2 as of fiscal year 2020.
Modular Optical Aperture Building Blocks (MOABB) program (2015): design free-space optical components (e.g., telescope, bulk lasers with mechanical beam-steering, detectors, electronics) in a single device. Create a wafer-scale system that is one hundred times smaller and lighter than existing systems and can steer the optical beam far faster than mechanical components. Research and design electronic-photonic unit cells that can be tiled together to form large-scale planar apertures (up to 10 centimeters in diameter) that can run at 100 watts of optical power. The overall goals of such technology are (1) rapid 3D scanning using devices smaller than a cell-phone camera; (2) high-speed laser communications without mechanical steering; (3) and foliage-penetrating perimeter sensing, remote wind sensing, and long-range 3-D mapping. As of fiscal year 2020, Analog Photonics LLC of Boston, Massachusetts, was working on phase 3 of the program and is expected to finish by May 2022.
Multi-Azimuth Defense Fast Intercept Round Engagement System (MAD-FIRES) program: develop technologies that combine advantages of a missile (guidance, precision, accuracy) with advantages of a bullet (speed, rapid-fire, large ammunition capacity) to be used on a medium-caliber guided projectile in defending ships. Raytheon is currently working on MAD-FIRES phase 3 (enhance seeker performance, and develop a functional demonstration illuminator and engagement manager to engage and defeat a representative surrogate target) and is expected to be finished by November 2022.
Near Zero Power RF and Sensor Operations (N-ZERO): Reducing or eliminating the standby power unattended ground sensors consume. (2015)
Neural implants for soldiers. (2014)
Novel, nonsurgical, bi-directional brain-computer interface with high spatio-temporal resolution and low latency for potential human use.
Open, Programmable, Secure 5G (OPS-5G) (2020): The program is to address security risks of 5G networks by pursuing research leading to the development of a portable standards-compliant network stack for 5G mobile that is open source and secure by design. OPS-5G seeks to create open source software and systems that enable secure 5G and subsequent mobile networks such as 6G.
Operational Fires (OpFires): developing a new mobile ground-launched booster that helps hypersonic boost glide weapons penetrate enemy air defenses. As of 17 July 2020, Lockheed Martin was working on phase 3 of the program (develop propulsion components for the missile's Stage 2 section) to be completed by January 2022. The system was successfully tested in July 2022.
Persistent Close Air Support (PCAS): DARPA created the program in 2010 to seek to fundamentally increase Close Air Support effectiveness by enabling dismounted ground agents—Joint Terminal Attack Controllers—and combat aircrews to share real-time situational awareness and weapons systems data.
PREventing EMerging Pathogenic Threats (PREEMPT)
QuASAR: Quantum Assisted Sensing and Readout
QuBE: Quantum Effects in Biological Environments
QUEST: Quantum Entanglement Science and Technology
Quiness: Macroscopic Quantum Communications
QUIST: Quantum Information Science and Technology
RADICS: Rapid Attack Detection, Isolation and Characterization Systems
Rational Integrated Design of Energetics (RIDE): developing tools that speed up and facilitate energetics research.
Remote-controlled insects
Robotic Servicing of Geosynchronous Satellites program (RSGS): a telerobotic and autonomous robotic satellite-servicing project, conceived in 2017. In 2020, DARPA selected Northrop Grumman's SpaceLogistics as its RSGS partner. The U.S. Naval Research Laboratory designed and developed the RSGS robotic arm with DARPA funding. The RSGS system is anticipated to start servicing satellites in space in 2025.
Robotic Autonomy in Complex Environments with Resiliency (RACER) (2020): This is a four-year program and aims to make sure algorithms aren't the limiting part of the system and that autonomous combat vehicles can meet or exceed soldier driving abilities. RACER conducted its third experiment to assess the performance of off-road unmanned vehicles March 12–27, 2023.
SafeGenes: a synthetic biology project to program "undo" sequences into gene editing programs (2016)
Sea Train (2019): The program goal is to develop and demonstrate ways to overcome range limitations in medium unmanned surface vessels by exploiting wave-making resistance reductions. Applied Physical Sciences Corp. of Groton, Connecticut, is undertaking Phase 1 of the Sea Train program, with an expected completion date of March 2022. Sea Train, NOMARS and Manta Ray are the three programs that could significantly impact naval operations by extending the range and payloads for unmanned vessels on and below the surface.
Secure Advanced Framework for Simulation & Modeling (SAFE-SiM) program: build a rapid modeling and simulation environment to enable quick analysis in support of senior-level decision-making. As of fiscal year 2020, Radiance Technologies and L3Harris were working on portions of the program, with expected completion in August and September 2021, respectively.
Securing Information for Encrypted Verification and Evaluation (SIEVE) program: use zero knowledge proofs to enable the verification of capabilities for the US military "without revealing the sensitive details associated with those capabilities." Galois Inc. of Portland, Oregon, and Stealth Software Technologies of Los Angeles, California, are currently working on the SIEVE program, with a projected completion date of May 2024.
Semantic Forensics (SemaFor) program: develop technologies to automatically detect, attribute, and characterize falsified media (e.g., text, audio, image, video) to defend against automated disinformation. SRI International of Menlo Park, California, and Kitware Inc. of Clifton, New York, are working on the SemaFor program, with an expected completion date of July 2024.
Sensor plants: DARPA "is working on a plan to use plants to gather intelligence information" through DARPA's Advanced Plant Technologies (APT) program, which aims to control the physiology of plants in order to detect chemical, biological, radiological and nuclear threats. (2017)
Synthetic Hemo-technologIEs to Locate and Disinfect (SHIELD) (2023): The program aims to develop prophylaxes and prevent bloodstream infections (BSI) caused by bacterial/fungal agents, a threat to military and civilian populations.
SIGMA: A network of radiological detection devices the size of smart phones that can detect small amounts of radioactive materials. The devices are paired with larger detector devices along major roads and bridges. (2016)
SIGMA+ program (2018): by building on concepts theorized in the SIGMA program, develop new sensors and analytics to detect small traces of explosives and chemical and biological weaponry throughout any given large metropolitan area. In October 2021, SIGMA+ program, in collaboration with the Indianapolis Metropolitan Police Department (IMPD), concluded a three-month-long pilot study with new sensors to support early detection and interdictions of weapons of mass destruction (WMD) threats.
SoSITE: System of Systems Integration Technology and Experimentation: Combinations of aircraft, weapons, sensors, and mission systems that distribute air warfare capabilities across a large number of interoperable manned and unmanned platforms. (2015)
SSITH: System Security Integrated Through Hardware and Firmware - secure hardware platform (2017); basis for open-source, hack-proof voting system project and 2019 system prototype contract
SXCT: Squad X Core Technologies: Digitized, integrated technologies that improve infantry squads' awareness, precision, and influence. (2015)
SyNAPSE: Systems of Neuromorphic Adaptive Plastic Scalable Electronics
Tactical Boost Glide (TBG): Air-launched hypersonic boost glide missile. (2016)
Tactically Exploited Reconnaissance Node (Tern)(2014): The program seeks to develop ship based UAS systems and technologies to enable a future air vehicle that could provide persistent ISR and strike capabilities beyond the limited range and endurance provided by existing helicopter platforms.
TransApps (Transformative Applications), rapid development and fielding of secure mobile apps in the battlefield
ULTRA-Vis (Urban Leader Tactical Response, Awareness and Visualization): Heads-up display for individual soldiers. (2014)
underwater network, heterogeneous: develop concepts and reconfigurable architecture, leveraging advancement in undersea communications and autonomous ocean systems, to demonstrate utility at sea. Raytheon BBN is currently working on this program, with work expected through 4 May 2021, though if the government exercises all options on the contract then work will continue through 4 February 2024.
Upward Falling Payloads: Payloads stored on the ocean floor that can be activated and retrieved when needed. (2014)
Urban Reconnaissance through Supervised Autonomy (URSA) program: develop technology for use in cities to enable autonomous systems that U.S. infantry and ground forces operate to detect and identify enemies before U.S. troops come across them. The program will factor in algorithms, multiple sensors, and scientific knowledge about human behavior to determine subtle differences between hostiles and innocent civilians. Soar Technology Inc. of Ann Arbor, Michigan, is currently working on pertinent vehicle autonomy technology, with work expected to be completed by March 2022.
Warrior Web: Soft exosuit to alleviate musculoskeletal stress on soldiers when carrying heavy loads. (2014)
Waste Upcycling for Defense (WUD) (2023): to turn scrap wood, cardboard, paper, and other cellulose-derived matter into sustainable materials such as building materials for re-use.
Past or transitioned projects
4MM (4-minute mile): Wearable jetpack to enable soldiers to run at increased speed.
Air Dominance Initiative: a 2015 program to develop technologies to be used in sixth-generation jet fighters. The Air Dominance Initiative study led to the U.S. Air Force's sixth-generation air superiority initiative, the Next Generation Air Dominance.
Anti-submarine warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV) (2010): A project to build an unmanned anti-submarine warfare vessel.
AGM-158C LRASM: Anti-ship cruise missile.
Adaptive Vehicle Make: Revolutionary approaches to the design, verification, and manufacturing of complex defense systems and vehicles.
ARPA Midcourse Optical Station (AMOS), a research facility that now forms part of the Haleakala Observatory.
ArcLight: Ship-based weapon system capable of striking targets nearly anywhere on the globe, based on the Standard Missile 3.
ARPANET, earliest predecessor of the Internet.
Assault Breaker: technology integration to defeat armored attacks
ASTOVL, precursor of the Joint Strike Fighter program
The Aspen Movie Map allowed one to virtually tour the streets of Aspen, Colorado. Developed in 1978, it is the earliest predecessor to products like Google Street View.
Atlas: A humanoid robot.
Battlefield Illusion
BigDog/Legged Squad Support System (2012): legged robots.
Boeing Pelican
Boeing X-37 (2004): The X-37 program was transferred from NASA to DARPA in September 2004.
The Boeing X-45 unmanned combat aerial vehicle refers to a mid-2000s concept demonstrator for autonomous military aircraft.
Boomerang (mobile shooter detection system): an acoustic gunfire locator developed by BBN Technologies for detecting snipers on military combat vehicles.
CALO or "Cognitive Assistant that Learns and Organizes": software
Combat Zones That See (CTS): "track everything that moves" in a city by linking up a massive network of surveillance cameras
Cognitive Technology Threat Warning System (CT2WS) (2011)
Consortium for Execution of Rendezvous and Servicing Operations (CONFERS) (2017).
CPOF: the command post of the future—networked information system for Command control.
DAML
ALASA (Airborne Launch Assist Space Access): A rocket capable of launching a 100-pound satellite into low Earth orbit for less than $1 million.
FALCON
DARPA Grand Challenge: driverless car competitions
DARPA GXV-T: Ground X Vehicle
Hydra: Undersea network of mobile unmanned sensors. (2013)
DARPA Network Challenge (before 2010)
DARPA Shredder Challenge 2011 – Reconstruction of shredded documents
DARPA Silent Talk: A planned program attempting to identify EEG patterns for words and transmit these for covert communications.
DARPA Spectrum Challenge (2014)
DEFENDER
Defense Simulation Internet, a wide-area network supporting Distributed Interactive Simulation
Discoverer II radar satellite constellation
EATR
EXACTO: Sniper rifle firing guided smart bullets.
GALE: Global Autonomous Language Exploitation
High Frequency Active Auroral Research Program (HAARP): An ionospheric research program jointly funded by DARPA, the U.S. Air Force's AFRL and the U.S. Navy's NRL. Its most prominent element was the Ionospheric Research Instrument (IRI), a high-power radio frequency transmitter facility used to study the ionosphere.
High Energy Liquid Laser Area Defense System (HELLADS): The goal of the HELLADS program was to develop a 150 kilowatt (kW) laser weapon system. In 2015, DARPA's contractor, General Atomics, successfully demonstrated a prototype. In 2020, General Atomics and Boeing announced plans to develop a 100 kW liquid laser system, with the intention of scaling it up to 250 kW.
High Performance Knowledge Bases
HISSS
Human Universal Load Carrier: battery-powered human exoskeleton.
Hypersonic Research Program
Luke Arm, a DEKA creation produced under the Revolutionizing Prosthetics program.
MAHEM: Molten penetrating munition.
MEMEX (2014–2017): an online search tool to fight human trafficking crimes on the dark web. In 2016, the DARPA Memex program received the 2016 Presidential Award for Extraordinary Efforts to Combat Trafficking in Persons for the development of the anti-trafficking technology tool. The program was named after and inspired by Vannevar Bush's hypothetical device described in his 1945 article.
MeshWorm: an earthworm-like robot.
Mind's Eye: A visual intelligence system capable of detecting and analyzing activity from video feeds.
MOSIS
MQ-1 Predator
Multics
Next Generation Tactical Wearable Night Vision: Smaller and lighter sunglass-sized night vision devices that can switch between different viewing bands.
NLS/Augment: the origin of the canonical contemporary computer user interface
Northrop Grumman Switchblade: an unmanned oblique-wing flying aircraft for high speed, long range and long endurance flight
One Shot: Sniper scope that automatically measures crosswind and range to ensure accuracy in field conditions.
Onion routing, a technique developed in the mid-1990s and later employed by Tor to anonymize communications over a computer network.
Passive radar
Phoenix: A 2012–early-2015 satellite project with the aim to recycle retired satellite parts into new on-orbit assets. The project was initiated in July 2012 with plans for system launches no earlier than 2016. At the time, Satlet tests in low Earth orbit were projected to occur as early as 2015.
Policy Analysis Market, evaluating the trading of information futures contracts based on possible political developments in several Middle Eastern countries. An application of prediction markets.
POSSE
Project AGILE, a Vietnam War-era investigation into methods of remote, asymmetric warfare for use in conflicts with Communist insurgents.
Project MAC
Proto 2: a thought-controlled prosthetic arm
Rapid Knowledge Formation
Sea Shadow
SIMNET: Wide area network with vehicle simulators and displays for real-time distributed combat simulation: tanks, helicopters and airplanes in a virtual battlefield.
System F6—Future, Fast, Flexible, Fractionated Free-flying Spacecraft United by Information Exchange—technology demonstrator: a 2006–2012, $226 million technology development program, cancelled in 2013 before the notionally planned 2015 launch date.
I3 (Intelligent Integration of Information), supported the Digital Library research effort through NSF
Strategic Computing Program
Synthetic Aperture Ladar for Tactical Applications (SALTI)
XOS: powered military exoskeleton.
SURAN (1983–87)
Project Vela (1963)
UAVForge (2011)
Vertical Take-Off and Landing Experimental Aircraft (VTOL X-Plane) (2013)
Viet Cong Motivation and Morale Project (1964–1968)
Vulture: Long endurance, high-altitude unmanned aerial vehicle.
VLSI Project (1978) – Its offspring include BSD Unix, the RISC processor concept, and many CAD tools still in use today.
Walrus HULA: high-capacity, long range cargo airship.
Wireless Network after Next (WNaN), advanced tactical mobile ad hoc network
WolfPack (2010)
XDATA: Processing and analyzing vast amounts of information. (2012)
Rockwell-MBB X-31
Grumman X-29
Notable fiction
DARPA is well known as a high-tech government agency, and as such has many appearances in popular fiction. Some realistic references to DARPA in fiction are as "ARPA" in Tom Swift and the Visitor from Planet X (DARPA consults on a technical threat), in episodes of the television program The West Wing (the ARPA-DARPA distinction), the television program Numb3rs, and the Netflix film Spectral.
See also
Air Force Nuclear Weapons Center (NWC)
Air Force Research Laboratory (AFRL)
Advanced Research Projects Agency–Energy (ARPA-E)
Advanced Research Projects Agency for Health (ARPA-H)
Advanced Research Projects Agency–Infrastructure (ARPA-I)
Engineer Research and Development Center (ERDC)
Homeland Security Advanced Research Projects Agency (HSARPA)
Intelligence Advanced Research Projects Activity (IARPA)
Joint European Disruptive Initiative (JEDI)
Lawrence Berkeley National Laboratory (LBNL or LBL)
Lawrence Livermore National Laboratory (LLNL)
Los Alamos National Laboratory (LANL)
Marine Corps Combat Development Command (MCCDC)
Naval Air Weapons Station China Lake (NAWS)
Naval Research Laboratory (NRL)
Office of Naval Research (ONR)
Pacific Northwest National Laboratory (PNNL)
Sandia National Laboratories (SNL)
United States Army Armament Research, Development and Engineering Center (ARDEC)
United States Army Research, Development and Engineering Command (RDECOM)
United States Army Research Laboratory (ARL)
United States Marine Corps Warfighting Laboratory (MCWL)
References
Further reading
The Advanced Research Projects Agency, 1958–1974, Barber Associates, December 1975.
DARPA Technical Accomplishments: 1958–1990, Volumes 1–3, Richard H. Van Atta, Sidney G. Reed, Seymour J. Deitchman, et al., Institute for Defense Analyses, January 1990 – March 1991.
William Saletan, reviewing Michael Belfiore's book on DARPA, writes that "His tone is reverential and at times breathless, but he captures the agency's essential virtues: boldness, creativity, agility, practicality and speed."
Castell, Manuel, The Network Society: A Cross-cultural Perspective, Edward Elgar Publishing Limited, Cheltenham, UK, 2004.
Weinberger, Sharon, The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency that Changed the World, New York, Alfred A. Knopf, 2017.
External links
Official website
1958 establishments in Virginia
Articles containing video clips
Ballston, Virginia
Collier Trophy recipients
Corporate spin-offs
Government agencies established in 1958
Life sciences industry
Military units and formations established in 1958
Research and development in the United States
Research projects
United States Department of Defense agencies
Government research | DARPA | [
"Biology"
] | 9,850 | [
"Life sciences industry"
] |
8,958 | https://en.wikipedia.org/wiki/Dunstan | Dunstan ( – 19 May 988) was an English bishop and Benedictine monk. He was successively Abbot of Glastonbury Abbey, Bishop of Worcester, Bishop of London and Archbishop of Canterbury, later canonised. His work restored monastic life in England and reformed the English Church. His 11th-century biographer Osbern, himself an artist and scribe, states that Dunstan was skilled in "making a picture and forming letters", as were other clergy of his age who reached senior rank.
Dunstan served as an important minister of state to several English kings. He was the most popular saint in England for nearly two centuries, having gained fame for the many stories of his greatness, not least among which were those concerning his famed cunning in defeating the Devil.
Early life (909–943)
Birth and relatives
According to Dunstan's earliest biographer, known only as 'B', his parents were called Heorstan and Cynethryth and they lived near Glastonbury. B states that Dunstan was "oritur" in the days of King Æthelstan, 924 to 939. "Oritur" has often been taken to mean "born", but this is unlikely as another source states that he was ordained during Æthelstan's reign, and he would have been under the minimum age of 30 if he was born no earlier than 924. It is more likely that "oritur" should be taken as "emerged" and that he was born around 910. B states that he was related to Ælfheah the Bald, the Bishop of Winchester and Cynesige, Bishop of Lichfield. According to a later biographer, Adelard of Ghent, he was a nephew of Athelm, Archbishop of Canterbury, but this is less certain as it is not mentioned by B, who should have known as he had been a member of Dunstan's household.
School to the king's court
As a young boy, Dunstan studied under the Irish monks who then occupied the ruins of Glastonbury Abbey. Accounts tell of his youthful optimism and of his vision of the abbey being restored. While still a boy, Dunstan was stricken with a near-fatal illness and effected a seemingly miraculous recovery. Even as a child, he was noted for his devotion to learning and for his mastery of many kinds of artistic craftsmanship. With his parents' consent he was tonsured, received minor orders and served in the ancient church of St Mary. He became so well known for his devotion to learning that he is said to have been summoned by Athelm to enter his service. He was later appointed to the court of King Æthelstan.
Dunstan soon became a favourite of the king and was the envy of other members of the court. A plot was hatched to disgrace him and Dunstan was accused of being involved with witchcraft and black magic. The king ordered him to leave the court and as Dunstan was leaving the palace his enemies physically attacked him, beat him severely, bound him, and threw him into a cesspool. He managed to crawl out and make his way to the house of a friend. From there, he journeyed to Winchester and entered the service of his kinsman Ælfheah, Bishop of Winchester.
The bishop tried to persuade him to become a monk, but Dunstan was doubtful whether he had a vocation to a celibate life. The answer came in the form of an attack of swelling tumours all over Dunstan's body. This ailment was so severe that it was thought to be leprosy. It was more probably some form of blood poisoning caused by being beaten and thrown in the cesspool. Whatever the cause, it changed Dunstan's mind. He took Holy Orders in 943, in the presence of Ælfheah, and returned to live the life of a hermit at Glastonbury. Against the old church of St Mary he built a small cell long and deep. It was there that Dunstan studied, worked at his art, and played on his harp. It is at this time, according to a late 11th-century legend, that the Devil is said to have tempted Dunstan and to have been held by the face with Dunstan's tongs.
Monk and abbot (943–957)
Life as a monk
Dunstan worked as a silversmith and in the scriptorium while he was living at Glastonbury. It is thought likely that he was the artist who drew the well-known image of Christ with a small kneeling monk beside him in the Glastonbury Classbook, "one of the first of a series of outline drawings which were to become a special feature of Anglo-Saxon art of this period." Dunstan became famous as a musician, illuminator, and metalworker. Lady Æthelflæd, King Æthelstan's niece, made Dunstan a trusted adviser and on her death, she left a considerable fortune to him. He used this money later in life to foster and encourage a monastic revival in England. About the same time, his father Heorstan died and Dunstan inherited his fortune as well. He became a person of great influence, and on the death of King Æthelstan in 940, the new King, Edmund, summoned him to his court at Cheddar and made him a minister.
Again, royal favour fostered jealousy among other courtiers and again Dunstan's enemies succeeded in their plots. The King was prepared to send Dunstan away. There were then at Cheddar certain envoys from the "Eastern Kingdom", which probably meant East Anglia. Dunstan implored the envoys to take him with them when they returned to their homes. They agreed to do so, but it never happened. The story is recorded:
Abbot of Glastonbury
Dunstan, now Abbot of Glastonbury, went to work at once on the task of reform. He had to re-create monastic life and to rebuild the abbey. He began by establishing Benedictine monasticism at Glastonbury. The Rule of St. Benedict was the basis of his restoration according to the author of 'Edgar's Establishment of the Monasteries' (written in the 960s or 970s) and according to Dunstan's first biographer, who had been a member of the community at Glastonbury. Their statements are also in accordance with the nature of his first measures as abbot, with the significance of his first buildings, and with the Benedictine leanings of his most prominent disciples.
Nevertheless, not all the members of Dunstan's community at Glastonbury were monks who followed the Benedictine Rule. In fact, Dunstan's first biographer, 'B.', was a cleric who eventually joined a community of canons at Liège after leaving Glastonbury.
Dunstan's first care was to rebuild the Church of St. Peter, rebuild the cloister, and re-establish the monastic enclosure. The secular affairs of the house were committed to his brother, Wulfric, "so that neither himself nor any of the professed monks might break enclosure." A school for the local youth was founded and soon became the most famous of its time in England. A substantial extension of the irrigation system on the surrounding Somerset Levels was also completed.
Within two years of Dunstan's appointment, in 946, King Edmund was assassinated. His successor was Eadred. The policy of the new government was supported by the Queen mother, Eadgifu of Kent, by the Archbishop of Canterbury, Oda, and by the East Anglian nobles, at whose head was the powerful ealdorman Æthelstan the "Half-king". It was a policy of unification and conciliation with the Danish half of the kingdom. The goal was a firm establishment of royal authority. In ecclesiastical matters it favoured the spread of Catholic observance, the rebuilding of churches, the moral reform of the clergy and laity, and the end of the religion of the Danes in England. These policies made Dunstan popular in the North of England, but unpopular in the South. Against all these reforms were the nobles of Wessex, who included most of Dunstan's own relatives, and who had an interest in maintaining established customs. For nine years Dunstan's influence was dominant, during which time he twice refused the office of bishop (that of Winchester in 951 and Crediton in 953), affirming that he would not leave the king's side so long as the king lived and needed him.
Changes in fortune
In 955, Eadred died, and the situation was at once changed. Eadwig, the elder son of Edmund, who then came to the throne, was a headstrong youth wholly devoted to the reactionary nobles. According to one legend, the feud with Dunstan began on the day of Eadwig's coronation, when he failed to attend a meeting of nobles. When Dunstan eventually found the young monarch, he was cavorting with a noblewoman named Ælfgifu and her mother, and refused to return with the bishop. Infuriated by this, Dunstan dragged Eadwig back to the royal gathering.
Later realising that he had provoked the king, Dunstan saw that his life was in danger. He fled England and crossed the channel to Flanders, where he found himself ignorant of the language and of the customs of the locals. The count of Flanders, Arnulf I, received him with honour and lodged him in the Abbey of Mont Blandin, near Ghent. This was one of the centres of the Benedictine revival in that country, and Dunstan was able for the first time to observe the strict observance that had seen its rebirth at Cluny at the beginning of the century. His exile was not of long duration. Before the end of 957, the Mercians and Northumbrians revolted and drove out Eadwig, choosing his brother Edgar as king of the country north of the Thames. The south remained faithful to Eadwig. At once Edgar's advisers recalled Dunstan.
Bishop and archbishop (957–978)
Bishop of Worcester and of London
On Dunstan's return, Archbishop Oda consecrated him a bishop and, on the death of Coenwald of Worcester at the end of 957, Oda appointed Dunstan to the see.
In the following year the see of London became vacant and was conferred on Dunstan, who held it simultaneously with Worcester. In October 959, Eadwig died and his brother Edgar was readily accepted as ruler of Wessex. One of Eadwig's final acts had been to appoint a successor to Archbishop Oda, who died on 2 June 958. The chosen candidate was Ælfsige of Winchester, but he died of cold in the Alps as he journeyed to Rome for the pallium. In his place Eadwig then nominated one of his supporters, the Bishop of Wells, Byrhthelm. As soon as Edgar became king, he reversed this second choice on the ground that Byrhthelm had not been able to govern even his first diocese properly. The archbishopric was then conferred on Dunstan.
Archbishop of Canterbury
Dunstan went to Rome in 960, and received the pallium from Pope John XII. On his journey there, Dunstan's acts of charity were so lavish as to leave nothing for himself and his attendants. His steward complained, but Dunstan seems to have suggested that they trust in Jesus Christ.
On his return from Rome, Dunstan at once regained his position as virtual prime minister of the kingdom. By his advice Ælfstan was appointed to the Bishopric of London, and Oswald to that of Worcester. In 963, Æthelwold, the Abbot of Abingdon, was appointed to the See of Winchester. With their aid and with the ready support of King Edgar, Dunstan pushed forward his reforms in the English Church. The monks in his communities were taught to live in a spirit of self-sacrifice, and Dunstan actively enforced the law of celibacy whenever possible. He forbade the practices of simony (selling ecclesiastical offices for money) and ended the custom of clerics appointing relatives to offices under their jurisdiction. Monasteries were built, and in some of the great cathedrals, monks took the place of the secular canons; in the rest the canons were obliged to live according to rule. The parish priests were compelled to be qualified for their office; they were urged to teach parishioners not only the truths of the Christian faith, but also trades to improve their position. The state saw reforms as well. Good order was maintained throughout the realm and there was respect for the law. Trained bands policed the north, and a navy guarded the shores from Viking raids.
In 973, Dunstan's statesmanship reached its zenith when he officiated at the coronation of King Edgar. Edgar was crowned at Bath in an imperial ceremony planned not as the initiation, but as the culmination of his reign (a move that must have taken a great deal of preliminary diplomacy). This service, devised by Dunstan himself and celebrated with a poem in the Anglo-Saxon Chronicle forms the basis of the present-day British coronation ceremony. There was a second symbolic coronation held later. This was an important step, as other kings of Britain came and gave their allegiance to Edgar at Chester. Six kings in Britain, including the kings of Scotland and of Strathclyde, pledged their faith that they would be the king's liege-men on sea and land.
Edgar ruled as a strong and popular king for 16 years. Edgar's reign, and implicitly his governing partnership with Dunstan, was praised by early chroniclers and historians who regarded it as a golden age. The Anglo-Saxon Chronicle caveated the acclaim with one complaint, criticising the high level of immigration that took place at that time. It would appear from William of Malmesbury's later history that the objection was limited to the mercenary seaman, employed from around the North Sea littoral, to assist in the defence of the country.
In 975, Edgar was succeeded by his eldest son Edward "the Martyr". His accession was disputed by his stepmother, Ælfthryth, who wished her own son Æthelred to reign. Through the influence of Dunstan, Edward was chosen and crowned at Winchester. Edgar's death had encouraged the reactionary nobles, and at once there was a determined attack upon the monks, the protagonists of reform. Throughout Mercia they were persecuted and deprived of their possessions. Their cause, however, was supported by Æthelwine, the ealdorman of East Anglia, and the realm was in serious danger of civil war. Three meetings of the Witan were held to settle these disputes, at Kyrtlington, at Calne, and at Amesbury. At the second of them the floor of the hall where the Witan was sitting gave way, and all except Dunstan, who clung to a beam, fell into the room below; several men were killed.
Final years (978–88)
In March 978, King Edward was assassinated at Corfe Castle, possibly at the instigation of his stepmother, and Æthelred the Unready became king. The coronation took place on Low Sunday, 31 March 978. According to William of Malmesbury, writing over a century later, when the young king took the usual oath to govern well, Dunstan addressed him in solemn warning. He criticised the violent act whereby he became king and prophesied the misfortunes that were shortly to fall on the kingdom, but Dunstan's influence at court was ended. Dunstan retired to Canterbury, to teach at the cathedral school.
Only three more public acts are known. In 980, Dunstan joined Ælfhere of Mercia in the solemn translation of the relics of King Edward, soon to be regarded as a saint, from their grave at Wareham to a shrine at Shaftesbury Abbey. In 984, he persuaded King Æthelred to appoint Ælfheah as Bishop of Winchester in succession to Æthelwold. In 986, Dunstan induced the king, by a donation of 100 pounds of silver, to stop his persecution of the See of Rochester.
Dunstan's retirement at Canterbury consisted of long hours, both day and night, spent in private prayer, as well as his regular attendance at Mass and the daily office. He visited the shrines of St Augustine and St Æthelberht. He worked to improve the spiritual and temporal well-being of his people, to build and restore churches, to establish schools, to judge suits, to defend widows and orphans, to promote peace, and to enforce respect for purity. He practised his crafts, made bells and organs and corrected the books in the cathedral library. He encouraged and protected European scholars who came to England, and was active as a teacher of boys in the cathedral school. On Ascension Day 988, Dunstan said Mass and preached three times to the people: at the Gospel, at the benediction, and after the Agnus Dei. In this last address, he announced his impending death and wished his congregation well. That afternoon he chose the spot for his tomb, then went to his bed. His strength failed rapidly, and on Saturday morning, 19 May, he caused the clergy to assemble. Mass was celebrated in his presence, then he received Extreme Unction and the Viaticum, and died. Dunstan's final words are reported to have been, "He hath made a remembrance of his wonderful works, being a merciful and gracious Lord: He hath given food to them that fear Him."
The English people accepted him as a saint shortly thereafter. He was formally canonised in 1029. That year at the Synod of Winchester, St Dunstan's feast was ordered to be kept solemnly throughout England.
Legacy
Until Thomas Becket's fame overshadowed Dunstan's, he was the favourite saint of the English people. Dunstan had been buried in his cathedral. In 1180 his relics were translated to a tomb on the south side of the high altar, when that building was restored after being partially destroyed by a fire in 1174.
The monks of Glastonbury used to claim that during the sack of Canterbury by the Danes in 1012, Dunstan's body had been carried for safety to their abbey. This story was disproved by Archbishop William Warham, who opened the tomb at Canterbury in 1508 and found Dunstan's relics still to be there. However, his shrine was destroyed during the English Reformation.
Patronage and feast day
Dunstan became patron saint of English goldsmiths and silversmiths because he worked as a silversmith making church plate. The Eastern Orthodox Church and the Roman Catholic Church mark his feast day on 19 May. Dunstan is also honoured in the Church of England and in the Episcopal Church on 19 May.
In 2023, a pastoral area of the Roman Catholic Diocese of Clifton was named in honour of Dunstan.
In literature and folklore
English literature contains many references to him: for example, in A Christmas Carol by Charles Dickens, and in this folk rhyme:
St Dunstan, as the story goes,
Once pull'd the devil by the nose
With red-hot tongs, which made him roar,
That he was heard three miles or more.
This folk story is already shown in an initial in the Life of Dunstan in the Canterbury Passionale, from the second quarter of the 12th century (British Library, Harley MS 315, f. 15v.).
Daniel Anlezark has tentatively suggested that Dunstan may be the medieval author of the poem Solomon and Saturn, citing the style, word choice, and Hiberno-Latin used in the texts. However, Clive Tolley examines this claim from a linguistic point-of-view and disagrees with Anlezark's claim.
Another story relates how Dunstan nailed a horseshoe to the Devil's foot when he was asked to re-shoe the Devil's cloven hoof. This caused the Devil great pain, and Dunstan only agreed to remove the shoe and release the Devil after he promised never to enter a place where a horseshoe is over the door. This is claimed as the origin of the lucky horseshoe.
A further legend relating to Dunstan and the Devil seeks to explain the phenomenon of Franklin nights, late frosts which occur around his feast day. The story goes that Dunstan was a great brewer and negotiated an agreement whereby the Devil could blast the blossom of local apple trees with frost, damaging the cider crop so that Dunstan's own beer would sell more readily.
An East London saint
As Bishop of London, Dunstan was also Lord of the Manor of Stepney, and may, like subsequent bishops, have lived there. Dunstan is recorded as having founded (or rebuilt) Stepney's church, in 952 AD. This church was dedicated to All Saints, but was rededicated to Dunstan after his canonisation in 1029, making Dunstan the patron saint of Stepney.
References
Notes
Citations
Sources
Further reading
Primary sources
'Author B', Vita S. Dunstani, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 3–52. Portions of the text are translated by Dorothy Whitelock in English Historical Documents c. 500–1042. 2nd ed. London, 1979. These have been superseded by the new edition and translation by Michael Lapidge and Michael Winterbottom, The Early Lives of St Dunstan, Oxford University Press, 2012.
Adelard of Ghent, Epistola Adelardi ad Elfegum Archiepiscopum de Vita Sancti Dunstani, Adelard's letter to Archbishop Ælfheah of Canterbury (1005–1012) on the Life of St Dunstan, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series 63. London, 1874. 53–68. Also in the new edition and translation by Michael Lapidge and Michael Winterbottom, The Early Lives of St Dunstan, Oxford University Press, 2012.
Wulfstan of Winchester, The Life of St Æthelwold, ed. and tr. M. Lapidge and M. Winterbottom, Wulfstan of Winchester. The Life of St Æthelwold. Oxford Medieval Texts. Oxford, 1991.
Reliquiae Dunstanianae, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 354–439.
Fragmenta ritualia de Dunstano, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 440–57.
Osbern of Canterbury, Vita sancti Dunstani and Liber Miraculorum Sancti Dunstani, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 69–161.
Eadmer, Vita S. Dunstani and Miracula S. Dunstani, ed. and tr. Bernard J. Muir and Andrew J. Turner, Eadmer of Canterbury. Lives and Miracles of Saints Oda, Dunstan, and Oswald. OMT. Oxford, 2006. 41–159 and 160–212; ed. W. Stubbs, Memorials of St Dunstan, archbishop of Canterbury. Rolls Series 63. London, 1874. 162–249, 412–25.
An Old English Account of the King Edgar's Establishment of the Monasteries, tr. D. Whitelock, English Historical Documents I. Oxford University Press, 1979.
Secondary sources
Dales, Douglas, Dunstan: Saint and Statesman, 3rd ed., James Clark & Co, 2023
Duckett, Eleanor. Saint Dunstan of Canterbury (1955).
Dunstan, St. Encyclopedia of World Biography, 2nd ed. 17 vols. Gale Research, 1998.
Knowles, David. The Monastic Orders in England (1940; 2d ed. 1963).
Ramsay, Nigel St Dunstan: his Life, Times, and Cult, Woodbridge, Suffolk, UK; Rochester, NY: Boydell Press, 1992.
Sayles, G. O., The Medieval Foundations of England (1948; 2d ed. 1950).
William of Malmesbury, Vita sancti Dunstani, ed. and tr. Bernard J. Muir and Andrew J. Turner, William of Malmesbury. Lives of SS. Wulfstan, Dunstan, Patrick, Benignus and Indract. Oxford Medieval Texts. Oxford, 2002; ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 250–324.
John Capgrave, Vita sancti Dunstani, ed. W. Stubbs, Memorials of St Dunstan, Archbishop of Canterbury. Rolls Series. London, 1874. 325–53.
External links
The True Legend of St. Dunstan and the Devil by Edward G. Flight, illustrated by George Cruikshank, published in 1871, and available from Project Gutenberg
Dunstan at the British Library, BL medieval manuscripts blogpost, May 2016
900s births
988 deaths
10th-century English archbishops
10th-century artists
10th-century Christian saints
10th-century English bishops
Abbots of Glastonbury
Angelic visionaries
Anglican saints
Anglo-Saxon artists
Anglo-Saxon Benedictines
Anglo-Saxon saints
Archbishops of Canterbury
Bishops of London
Bishops of Worcester
English blacksmiths
English folklore
Manuscript illuminators
Medieval European scribes
English scribes
People from Mendip District
Year of birth uncertain
English silversmiths
10th-century Christian abbots | Dunstan | [
"Physics"
] | 5,330 | [
"Weather",
"Physical phenomena",
"Weather lore"
] |
9,008 | https://en.wikipedia.org/wiki/Debit%20card | A debit card, also known as a check card or bank card, is a payment card that can be used in place of cash to make purchases. The card usually consists of the bank's name, a card number, the cardholder's name, and an expiration date, on either the front or the back. Many new cards now have a chip on them, which allows people to use their card by touch (contactless), or by inserting the card and keying in a PIN, rather than swiping the magnetic stripe. A debit card is similar to a credit card, but the money for the purchase must be in the cardholder's bank account at the time of the purchase and is immediately transferred directly from that account to the merchant's account to pay for the purchase.
Some debit cards carry a stored value with which a payment is made (prepaid cards), but most relay a message to the cardholder's bank to withdraw funds from the cardholder's designated bank account. In some cases, the payment card number is assigned exclusively for use on the Internet, and there is no physical card. This is referred to as a virtual card.
In many countries, the use of debit cards has become so widespread that they have overtaken checks in volume or have entirely replaced them; in some instances, debit cards have also largely replaced cash transactions. The development of debit cards, unlike credit cards and charge cards, has generally been country-specific, resulting in a number of different systems around the world that are often incompatible. Since the mid-2000s, a number of initiatives have allowed debit cards issued in one country to be used in other countries and allowed their use for internet and phone purchases.
Debit cards usually also allow an instant withdrawal of cash, acting as an ATM card for this purpose. Merchants may also offer cashback facilities to customers so that they can withdraw cash along with their purchase. There are usually daily limits on the amount of cash that can be withdrawn. Most debit cards are plastic, but there are cards made of metal and, rarely, wood.
Types of debit card systems
There are currently three ways that debit card transactions are processed: EFTPOS (also known as online debit or PIN debit), offline debit (also known as signature debit), and the Electronic Purse Card System. One physical card can include the functions of all three types, so it can be used in a number of different circumstances.
The five major debit card networks are UnionPay, American Express, Discover, Mastercard, and Visa. Other card networks are STAR, JCB, Pulse, etc. There are many types of debit cards, each accepted only within a particular country or region; for example, Switch (since merged with Maestro) and Solo in the United Kingdom; Interac in Canada; Carte Bleue in France; EC electronic cash (formerly Eurocheque) in Germany; Bancomat/PagoBancomat in Italy; UnionPay in China; RuPay in India; and EFTPOS cards in Australia and New Zealand. The need for cross-border compatibility and the advent of the euro recently led to many of these card networks (such as Switzerland's "EC direkt", Austria's "Bankomatkasse", and Switch in the United Kingdom) being re-branded with the internationally recognized Maestro logo, which is part of the Mastercard brand. Some debit cards are dual-branded with the logo of the (former) national card as well as Maestro (for example, EC cards in Germany, Switch and Solo in the UK, Pinpas cards in the Netherlands, Bancontact cards in Belgium, etc.). The use of a debit card system allows operators to package their products more effectively while monitoring customer spending.
Online debit system
Online debit cards require electronic authorization of every transaction, and the debits are reflected in the user's account immediately. The transaction may be additionally secured with the personal identification number (PIN) authentication system; some online cards require such authentication for every transaction, essentially becoming enhanced automatic teller machine (ATM) cards.
One difficulty with using online debit cards is the necessity of an electronic authorization device at the point of sale (POS) and sometimes also a separate PINpad to enter the PIN, although this is becoming commonplace for all card transactions in many countries.
Overall, the online debit card is generally viewed as superior to the offline debit card because of its more secure authentication system and live status, which alleviate problems with processing lag on transactions; some banks issue only online debit cards. Some online debit systems are using the normal authentication processes of Internet banking to provide real-time online debit transactions.
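The flow can be illustrated with a minimal sketch. The following Python model is purely hypothetical: the class, field names and approval rules are simplifications for exposition, not any real debit network's message format or API (real systems exchange ISO 8583 messages with encrypted PIN blocks).

```python
# Hypothetical model of an online (PIN) debit purchase: the issuer authorizes
# each transaction electronically and debits the account immediately.
from dataclasses import dataclass

@dataclass
class Account:
    balance_cents: int
    pin: str

def authorize_online_debit(account: Account, amount_cents: int, entered_pin: str) -> str:
    """Authorize and settle a PIN-debit purchase in a single step."""
    if entered_pin != account.pin:
        return "DECLINED: invalid PIN"
    if amount_cents > account.balance_cents:
        return "DECLINED: insufficient funds"
    account.balance_cents -= amount_cents   # funds leave the account at once
    return "APPROVED"

acct = Account(balance_cents=10_000, pin="1234")    # hypothetical account holding $100.00
print(authorize_online_debit(acct, 2_500, "1234"))  # APPROVED
print(acct.balance_cents)                           # 7500 -- balance reflects the purchase immediately
```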
Offline debit system
Offline debit cards have the logos of major credit cards (for example, Visa or Mastercard). These cards connect straight to a person's bank account, but there is a delay before the money is taken out.
Electronic purse card system
Smart-card-based electronic purse systems (in which value is stored on the card chip, not in an externally recorded account, so that machines accepting the card need no network connectivity) have been in use throughout Europe since the mid-1990s, most notably in Germany (Geldkarte), Austria (Quick Wertkarte), the Netherlands (Chipknip), Belgium (Proton), Switzerland (CASH), and France (Moneo, which is usually carried by a debit card). In Austria and Germany, almost all current bank cards now include electronic purses, whereas the electronic purse has been recently phased out in the Netherlands.
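By contrast with the account-based systems above, the stored-value model can be sketched as follows. This is an illustrative simplification only: real purse schemes such as Geldkarte or Proton rely on secure chip protocols and cryptographic checks, none of which are modelled here.

```python
# Hypothetical model of an electronic purse: the balance is recorded on the
# card's chip, so a terminal can deduct payment without any network connection.
class PurseCard:
    def __init__(self, stored_value_cents: int):
        self.stored_value_cents = stored_value_cents  # value held on the chip itself

    def load(self, amount_cents: int) -> None:
        """Top up the purse, e.g. at a bank terminal."""
        self.stored_value_cents += amount_cents

    def pay_offline(self, amount_cents: int) -> bool:
        """Deduct a purchase directly from the chip; no external account is consulted."""
        if amount_cents > self.stored_value_cents:
            return False
        self.stored_value_cents -= amount_cents
        return True

card = PurseCard(stored_value_cents=2_000)  # the equivalent of 20.00 loaded onto the chip
print(card.pay_offline(750))                # True
print(card.stored_value_cents)              # 1250 -- remaining value on the card
```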
Prepaid debit cards
Nomenclature
Prepaid debit cards are reloadable and can also be called reloadable or rechargeable debit cards.
Users
The primary market for prepaid debit cards has historically been unbanked people; that is, people who do not use banks or credit unions for their financial transactions.
Advantages
Advantages of prepaid debit cards include being safer than carrying cash, worldwide acceptance, not having to worry about paying a credit card bill or going into debt, the opportunity for anyone over the age of 18 to apply and be accepted without checks on creditworthiness, and the option to deposit paychecks and government benefits directly onto the card for free. A newer advantage is the use of EMV technology and even contactless functionality, which had previously been limited to bank debit cards and credit cards.
Risks
If the card provider offers an insecure website for the cardholder to check the balance on the card, this could give an attacker access to the card information.
If the user loses the card and has not somehow registered it, they will likely lose the money.
If a provider has technical issues, the money might not be accessible when a user needs it. Some companies' payment systems do not appear to accept prepaid debit cards.
Types
Prepaid cards vary by the issuer company: key and niche financial players (sometimes collaborations between businesses); purpose of usage (transit card, beauty gift cards, travel card, health savings card, business, insurance, etc.); and regions.
Governments
As of 2013, several city governments (including Oakland, California, and Chicago, Illinois) are now offering prepaid debit cards, either as part of a municipal ID card (for people such as illegal immigrants who are unable to obtain a state driver's license or DMV ID card) in the case of Oakland or in conjunction with a prepaid transit pass (in Chicago). These cards have been heavily criticized for their higher-than-average fees, such as excessive flat fees added onto every purchase made with the card.
The U.S. federal government uses prepaid debit cards to make benefit payments to people who do not have bank accounts.
In July 2013, the Association of Government Accountants released a report on government use of prepaid cards, concluding that such programs offer a number of advantages to governments and those who receive payments on a prepaid card rather than by check. Governments use prepaid card programs for benefit payments largely for the cost savings they offer; the programs also provide easier access to cash for recipients, as well as increased security. The report also advises that governments should consider replacing any remaining cheque-based payments with prepaid card programs in order to realize substantial savings for taxpayers as well as benefits for payees.
Impact of government-mandated fee-free bank accounts
In January 2016, the UK government introduced a requirement for banks to offer fee-free basic bank accounts for all, which had a significant impact on the prepaid industry, including the departure of a number of firms.
Consumer protection
Consumer protections vary depending on the network used. Visa and MasterCard, for instance, prohibit minimum and maximum purchase sizes, surcharges, and arbitrary security procedures on the part of merchants. Merchants are usually charged higher transaction fees for credit transactions since debit network transactions are less likely to be fraudulent. This may lead them to "steer" customers toward debit transactions. Consumers disputing charges may find it easier to do so with a credit card since the money will not immediately leave their control. Fraudulent charges on a debit card can also cause problems with a checking account because the money is withdrawn immediately and may thus result in an overdraft or bounced checks. In some cases, debit card-issuing banks will promptly refund any disputed charges until the matter can be settled, and in some jurisdictions, the consumer's liability for unauthorized charges is the same for both debit and credit cards.
In 2010, Bank of America announced that "it was doing away with overdraft fees for debit card purchases."
In some countries, such as India and Sweden, consumer protection is the same regardless of the network used. Some banks set minimum and maximum purchase sizes, mostly for online-only cards. However, this has nothing to do with the card networks but rather with the bank's judgment of the person's age and credit records. Any fees that the customers have to pay to the bank are the same regardless of whether the transaction is conducted as a credit or debit transaction, so there is no advantage for the customers to choose one transaction mode over another. Shops may add surcharges to the price of goods or services in accordance with laws allowing them to do so. Banks consider the purchases to have been made at the moment when the card was swiped, regardless of when the purchase settlement was made. Regardless of which transaction type was used, the purchase may result in an overdraft because the money is considered to have left the account at the moment of the card swipe.
Under Singapore's financial and banking laws and regulations, the magnetic stripes of all Singapore-issued Visa and MasterCard credit and debit cards are disabled by default for use outside of Singapore. The intent is to prevent fraudulent activity and protect the cardholder. Customers who want to use the magnetic stripe abroad must first activate and enable international card usage.
Financial access
Debit cards and secured credit cards are popular among college students who have not yet established a credit history. Debit cards may also be used by expatriate workers to send money home to their families holding an affiliated debit card.
Issues with deferred posting of offline debit
The consumer perceives a debit transaction as occurring in real time: the money is withdrawn from their account immediately after the authorization request from the merchant. In many countries, this is correct for online debit purchases. However, when a purchase is made using the "credit" (offline debit) option, the transaction merely places an authorization hold on the customer's account; funds are not actually withdrawn until the transaction is reconciled and hard-posted to the customer's account, usually a few days later. This is in contrast to a typical credit card transaction, in which, after a delay of a few days before the transaction is posted to the account, there is a further period of perhaps a month before the consumer makes repayment.
Because of this, in the case of an error by the merchant or issuer, a debit transaction may cause more serious problems (for example, overdraft/money not accessible/overdrawn account) than a credit card transaction (for example, credit not accessible due to being over one's credit limit). This is especially true in the United States, where check fraud is a crime in every state but exceeding one's credit limit is not.
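The difference between an authorization hold and the later hard posting can be sketched as follows. The Python model below is a hypothetical simplification, not a real core-banking API; the account class, method names and overdraft behaviour are illustrative assumptions only.

```python
# Hypothetical model of an offline (signature) debit purchase: authorization
# places a hold that lowers the *available* balance, while the *posted* balance
# changes only when the transaction settles, typically days later.
from dataclasses import dataclass, field

@dataclass
class CheckingAccount:
    posted_balance: float
    holds: dict = field(default_factory=dict)  # auth_id -> held amount

    def available_balance(self) -> float:
        return self.posted_balance - sum(self.holds.values())

    def authorize(self, auth_id: str, amount: float) -> bool:
        """Place an authorization hold; no money actually moves yet."""
        if amount > self.available_balance():
            return False
        self.holds[auth_id] = amount
        return True

    def settle(self, auth_id: str, final_amount: float) -> None:
        """Hard-post the transaction when it is reconciled. If the final amount
        differs from the hold (or the hold has expired), the posted balance can
        drop below zero -- the overdraft risk described above."""
        self.holds.pop(auth_id, None)
        self.posted_balance -= final_amount

acct = CheckingAccount(posted_balance=100.00)
acct.authorize("auth-1", 60.00)
print(acct.available_balance())  # 40.0 -- the customer sees reduced available funds
acct.settle("auth-1", 60.00)     # a few days later the purchase hard-posts
print(acct.posted_balance)       # 40.0
```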
Internet purchases
Debit cards may also be used on the Internet, either with or without using a PIN. Internet transactions may be conducted in either online or offline mode. Shops accepting online-only cards are rare in some countries (such as Sweden), while they are common in other countries (such as the Netherlands). For comparison, PayPal allows the customer to use an online-only Maestro card if the customer enters a Dutch address of residence, but not if the same customer enters a Swedish address of residence.
Internet purchases can be authenticated by the consumer entering their PIN if the merchant has enabled a secure online PIN pad, in which case the transaction is conducted in debit mode. Otherwise, transactions may be conducted in either credit or debit mode (which is sometimes, but not always, indicated on the receipt), and this has nothing to do with whether the transaction was conducted in online or offline mode, since both credit and debit transactions may be conducted in both modes.
Debit cards around the world
In some countries, banks tend to levy a small fee for each debit card transaction. In other countries (for example, New Zealand and the UK) the merchants bear all the costs and customers are not charged. There are many people who routinely use debit cards for all transactions, no matter how small. Some (small) retailers refuse to accept debit cards for small transactions, where paying the transaction fee would absorb the profit margin on the sale, making the transaction uneconomic for the retailer.
Some businesses do not accept card payments at all, even in an era with declining use of cash. This still happens for a variety of reasons, tax evasion by small business included.
In 2019, £35 billion in tax revenue was lost in the United Kingdom due to cash-only payments. Many businesses in the UK, such as barber shops, fish and chip shops, Chinese takeaways, the black market, and even some building sites, are known for cash-in-hand payments, meaning large amounts of money can go unaccounted for.
Angola
By official regulation, the banks in Angola issue only one brand of debit card: Multicaixa, which is also the brand name of the country's only network of ATMs and POS terminals.
Armenia
ArCa (Armenian Card) is a national system of debit (ArCa Debit and ArCa Classic) and credit (ArCa Gold, ArCa Business, ArCa Platinum, ArCa Affinity and ArCa Co-branded) cards popular in Armenia. It was established in 2000 by the 17 largest Armenian banks.
Australia
Debit cards in Australia are called different names depending on the issuing bank: Commonwealth Bank of Australia: Keycard; Westpac Banking Corporation: Handycard; National Australia Bank: FlexiCard; ANZ Bank: Access card; Bendigo Bank: Easy Money card.
A payment in Australia using a debit card can be processed by the local proprietary interbank network called EFTPOS, which is very popular and has been operating there since the 1980s, or via an international card scheme network (i.e., Visa or Mastercard). Debit cards that are solely EFTPOS-enabled can only be used domestically within Australia and are not accepted internationally due to the absence of other scheme networks.
EFTPOS cards can also be used to deposit and withdraw cash over the counter at Australia Post outlets participating in Giro Post, and to make withdrawals without purchase from certain major retailers, just as if the transaction were conducted at a bank branch, even if the bank branch is closed. Electronic transactions in Australia are generally processed via the Telstra Argent and Optus Transact Plus networks, which have superseded the old Transcend network in recent years. Most early keycards were only usable for EFTPOS and at ATMs or bank branches, whilst the new debit card system works in the same way as a credit card, except that it will only use funds in the specified bank account. This means that, among other advantages, the new system is suitable for electronic purchases without a delay of two to four days for bank-to-bank money transfers.
Australia operates both electronic credit card transaction authorization and traditional EFTPOS debit card authorization systems, the difference between the two being that EFTPOS transactions are authorized by a personal identification number (PIN) while credit card transactions can additionally be authorized using a contactless payment mechanism (requiring a PIN for purchases over $200). If the user fails to enter the correct PIN three times, the consequences range from the card being locked out for a minimum 24-hour period, to a phone call or trip to the branch to reactivate it with a new PIN, to the card being cut up by the merchant or, in the case of an ATM, kept inside the machine, either of which requires a new card to be ordered.
Generally credit card transaction costs are borne by the merchant with no fee applied to the end user (although a direct consumer surcharge of 0.5–3% is not uncommon) while EFTPOS transactions cost the consumer an applicable withdrawal fee charged by their bank.
The introduction of Visa and MasterCard debit cards, along with the Reserve Bank's regulation of the settlement fees charged by the operators of both EFTPOS and credit cards, has seen a continuation of the increasing ubiquity of credit card use among Australians and a general decline in the profile of EFTPOS. However, the regulation of settlement fees also removed the ability of banks, which typically provide merchant services to retailers on behalf of Visa or MasterCard, to stop those retailers from charging extra fees to take payment by credit card instead of cash or EFTPOS.
Bahrain
In Bahrain debit cards are under Benefit, the interbanking network for Bahrain. Benefit is also accepted in other countries though, mainly GCC, similar to the Saudi Payments Network and the Kuwaiti KNET.
Bangladesh
Bangladesh launched its first domestic card scheme, "Taka Pay", on 1 November 2023. Until then, banks were dependent on international card schemes such as Visa, Mastercard and UnionPay. Three banks have been issuing the "Taka Pay" card since launch: Sonali Bank PLC, BRAC Bank PLC and The City Bank Limited. Five more banks (Dutch Bangla Bank Limited, Eastern Bank PLC, Islami Bank Bangladesh PLC, Mutual Trust Bank Limited and United Commercial Bank PLC) have joined the scheme and will start issuing cards soon. Bangladesh Bank is working to bring all banks, mobile financial service providers and other financial institutions into the scheme.
Belgium
In Belgium, debit cards are widely accepted in most businesses, as well as in most hotels and restaurants. Smaller restaurants or small retailers often accept either debit cards or Payconiq, but generally not credit cards. All Belgian banks provide debit cards when a bank account is opened. It is usually free to use debit cards at national and EU ATMs, even those not owned by the issuing bank; since 2019, however, a few banks have charged a €0.50 fee for using ATMs not owned by the issuing bank. Debit cards in Belgium are branded with the logo of the national Bancontact system and also with an international debit system, Maestro (for the moment no banks issue V-Pay or Visa Electron cards, even though they are widely accepted). The Maestro system is used mostly for payments in other countries, but a few national card payment services also use it. Some banks, mostly online banks, also offer Visa and MasterCard debit cards.
Brazil
In Brazil debit cards are called cartão de débito (singular) or cartões de débito (plural) and became popular in 2008. In 2013, the 100 millionth Brazilian debit card was issued. Debit cards replaced cheques, common until the first decade of the 2000s.
Today, the majority of financial transactions (like buying food at a supermarket) are made using debit cards, and this system is quickly replacing cash payments in Brazil. Most debit card payments are processed using a card + PIN combination, and almost every card comes with an NFC chip for contactless transactions.
The major debit card flags in Brazil are Visa (with Electron cards), Mastercard (with Maestro cards), and Elo.
Tap-to-pay technology has become quite popular in Brazil: instead of inserting the chip card and entering a PIN, the customer simply holds the card near the payment terminal, and this works for both debit and credit cards. Virtual wallets such as Samsung Pay, Google Pay and Apple Pay can also be used at the time of purchase by holding a mobile phone or watch near the terminal. For security, the amount that can be paid without a PIN is generally quite low, but the feature is very convenient for inexpensive everyday purchases.
A recent development is the virtual card offered by some banks (such as Itaú, Bradesco, Mercado Pago and Nubank) through their internet banking platforms. The bank provides a card number, expiration date and CVV code to be used online. Some also offer a temporary virtual card number that is valid for only 48 hours; according to Itaú, it can be used to buy from unfamiliar websites for safety reasons, because in the case of a data leak the leaked card number would no longer work.
Benin
Bulgaria
In Bulgaria, debit cards are accepted in almost all stores and shops, as well as in most of the hotels and restaurants in the bigger cities. Smaller restaurants or small shops often accept cash only. All Bulgarian banks can provide debit cards when you open a bank account, for maintenance costs. The most common cards in Bulgaria are contactless (and Chip&PIN or Magnetic stripe and PIN) with the brands of Debit Mastercard and Visa Debit (the most common were Maestro and Visa Electron some years ago). All POS terminals and ATMs accept Visa, Visa Electron, Visa Debit, VPay, Mastercard, Debit Mastercard, Maestro and Bcard. Also some POS terminals and ATMs accept Discover, American Express, Diners Club, JCB and UnionPay. Almost all POS terminals in Bulgaria support contactless payments. Credit cards are also common in Bulgaria. Paying with smartphones/smartwatches at POS terminals is also getting common.
Burkina Faso
Canada
Canada has a nationwide EFTPOS system, called Interac Direct Payment (IDP). Since being introduced in 1994, IDP has become the most popular payment method in the country. Debit cards had previously been in use for ABM access since the late 1970s, with credit unions in Saskatchewan and Alberta introducing the first card-based, networked ATMs beginning in June 1977. Debit cards, which could be used anywhere a credit card was accepted, were first introduced in Canada by Saskatchewan Credit Unions in 1982. In the early 1990s, pilot projects were conducted among Canada's six largest banks to gauge security, accuracy and feasibility of the Interac system. By the latter half of the 1990s, it was estimated that approximately 50% of retailers offered Interac as a source of payment. Some retailers, particularly those handling many small transactions such as coffee shops, resisted offering IDP in order to keep service fast. By 2009, 99% of retailers offered IDP as an alternative payment form.
In Canada, the debit card is sometimes referred to as a "bank card". It is a client card issued by a bank that provides access to funds and other bank account transactions, such as transferring funds, checking balances, paying bills, etc., as well as point of purchase transactions connected on the Interac network. Since its national launch in 1994, Interac Direct Payment has become so widespread that, as of 2001, more transactions in Canada were completed using debit cards than cash. This popularity may be partially attributable to two main factors: the convenience of not having to carry cash, and the availability of automated bank machines (ABMs) and direct payment merchants on the network. Debit cards may be considered similar to stored-value cards in that they represent a finite amount of money owed by the card issuer to the holder. They are different in that stored-value cards are generally anonymous and are only usable at the issuer, while debit cards are generally associated with an individual's bank account and can be used anywhere on the Interac network.
In Canada, the bank cards can be used at POS and ATMs. Interac Online has also been introduced in recent years, allowing clients of most major Canadian banks to use their debit cards for online payment with certain merchants as well. Certain financial institutions also allow their clients to use their debit cards in the United States on the NYCE network. Several Canadian financial institutions that primarily offer VISA credit cards, including CIBC, RBC, Scotiabank, and TD, also issue a Visa Debit card in addition to their Interac debit card, either through dual-network co-branded cards (CIBC, Scotia, and TD), or as a "virtual" card used alongside the customer's existing Interac debit card (RBC). This allows customers to use Interlink for online, over-the-phone, and international transactions and Plus for international ATMs, since Interac isn't well supported in these situations.
Consumer protection in Canada
Consumers in Canada are protected under a voluntary code entered into by all providers of debit card services, The Canadian Code of Practice for Consumer Debit Card Services (sometimes called the "Debit Card Code"). Adherence to the Code is overseen by the Financial Consumer Agency of Canada (FCAC), which investigates consumer complaints.
According to the FCAC website, revisions to the code that came into effect in 2005 put the onus on the financial institution to prove that a consumer was responsible for a disputed transaction, and also place a limit on the number of days that an account can be frozen during the financial institution's investigation of a transaction.
Chile
Chile has an EFTPOS system called Redcompra (Purchase Network) which is currently used in at least 23,000 establishments throughout the country. Goods may be purchased using this system at most supermarkets, retail stores, pubs and restaurants in major urban centers. Chilean banks issue Maestro, Visa Electron and Visa Debit cards.
Colombia
Colombia has two systems, Redeban-Multicolor and Credibanco Visa, which are currently used in at least 23,000 establishments throughout the country. Goods may be purchased using these systems at most supermarkets, retail stores, pubs and restaurants in major urban centers. Colombian debit cards are Maestro (PIN), Visa Electron (PIN), Visa Debit (processed as credit) and MasterCard Debit (processed as credit).
Côte d'Ivoire
Denmark
The Danish debit card Dankort is ubiquitous in Denmark. It was introduced on 1 September 1983, and despite the initial transactions being paper-based, the Dankort quickly won widespread acceptance. By 1985 the first EFTPOS terminals were introduced, and 1985 was also the year when the number of Dankort transactions first exceeded 1 million. Today Dankort is primarily issued as a Multicard combining the national Dankort with the more internationally recognized Visa (denoted simply as a "Visa/Dankort" card). In September 2008, 4 million cards had been issued, of which three million cards were Visa/Dankort cards. It is also possible to get a Visa Electron debit card and MasterCard.
In 2007, PBS (now called Nets), the Danish operator of the Dankort system, processed a total of 737 million Dankort transactions. Of these, 4.5 million were processed on just a single day, 21 December. This remains the current record.
There were 3.9 million Dankort cards in existence.
More than 80,000 Danish shops had a Dankort terminal, and another 11,000 internet shops also accepted the Dankort.
Finland
Most daily customer transactions are carried out with debit cards or online giro/electronic bill payment, although credit cards and cash are accepted. Checks are no longer used. Prior to European standardization, Finland had a national standard (pankkikortti = "bank card"). Physically, a pankkikortti was the same as an international credit card, and the same card imprinters and slips were used for pankkikortti and credit cards, but the cards were not accepted abroad. This has now been replaced by the Visa and MasterCard debit card systems, and Finnish cards can be used elsewhere in the European Union and the world.
An electronic purse system, with a chipped card, was introduced, but did not gain much traction.
Signing for a payment offline entails incurring debt, so offline payment is not available to minors. However, online transactions are permitted, and since almost all stores have electronic terminals, minors can now also use debit cards. Previously, only cash withdrawal from ATMs was available to minors (automaattikortti (ATM card) or Visa Electron).
France
Carte Bancaire (CB), the national payment scheme, in 2008, had 57.5 million cards carrying its logo and 7.76 billion transactions (POS and ATM) were processed through the e-rsb network (135 transactions per card mostly debit or deferred debit). In 2019, Carte Bancaire had 71.1 million cards carrying its logo and 13.76 billion transactions (POS and ATM) were processed through its network. Most CB cards are debit cards, either debit or deferred debit. Less than 10% of CB cards were credit cards.
Banks in France usually charge annual fees for debit cards (despite card payments being very cost efficient for the banks), yet they do not charge personal customers for chequebooks or processing checks (despite cheques being very costly for the banks). This imbalance dates from the unilateral introduction in France of Chip and PIN debit cards in the early 1990s, when the cost of this technology was much higher than it is now. Credit cards of the type found in the United Kingdom and United States are unusual in France and the closest equivalent is the deferred debit card, which operates like a normal debit card, except that all purchase transactions are postponed until the end of the month, thereby giving the customer between 1 and 31 days of "interest-free" credit. Banks can charge more for a deferred debit card.
Most French debit cards are branded with the CB logo, which assures acceptance throughout France. Most banks now issue Visa or MasterCard co-branded cards, so that the card is accepted on both the CB and the Visa or Mastercard networks.
In France payment cards are commonly called Carte Bleue ("blue card") regardless of their actual brand. Carte Bleue was a card brand acquired in 2010 by Visa which is not used anymore. Until its purchase the main characteristic of Carte Bleue was to benefit from its alliance with Visa which allowed the use of the cards on both networks.
Many smaller merchants in France refuse to accept debit cards for transactions under a certain amount because of the minimum fee charged by merchants' banks per transaction, although more and more merchants accept debit cards for small amounts due to their increased use. Merchants in France do not differentiate between debit and credit cards, so both have equal acceptance. It is legal in France to set a minimum amount for card transactions, but merchants must display it clearly.
In January 2016, 57.2% of all the debit cards in France also had a contactless payment chip. The maximum amount per transaction was originally set to €20, and the maximum amount of all contactless payments per day is between €50 and €100 depending on the bank. The per-transaction limit increased to €30 in October 2017. Due to the COVID-19 pandemic, the per-transaction limit increased to €50 in May 2020 to comply with demands from the French government and the European Banking Authority.
Liability and e-cards
According to French law, banks are liable for any transaction made with a copy of the original card and for any transaction made without a card (on the phone or on the Internet), so banks have to refund any fraudulent transaction to the card holder if these criteria are met. Banks therefore have a strong incentive to fight card fraud. As a consequence, French banks' websites usually offer an "e-card" service ("electronic (bank) card"), where a new virtual card is created and linked to a physical card. Such a virtual card can be used only once and for a maximum amount set by the card holder. If the virtual card number is intercepted or used to attempt a higher amount than expected, the transaction is blocked.
Germany
Germany has a dedicated debit card payment system called girocard which is usually co-branded with V Pay or Maestro depending on the issuing bank. In recent years both Visa Debit and Mastercard Debit cards are increasingly more common as well.
Historically, facilities already existed before EFTPOS became popular with the Eurocheque card, an authorization system initially developed for paper checks where, in addition to signing the actual check, customers also needed to show the card alongside the check as a security measure. Those cards could also be used at ATMs and for card-based electronic funds transfer with PIN entry. These are now the only functions of such cards: the Eurocheque system (along with the brand) was abandoned in 2002 during the transition from the Deutsche Mark to the euro. As of 2005, most stores and petrol outlets have EFTPOS facilities. Processing fees are paid by the businesses, which leads to some business owners refusing debit card payments for sales totalling less than a certain amount, usually 5 or 10 euro.
To avoid the processing fees, many businesses resorted to using direct debit, which is then called electronic direct debit (German: Elektronisches Lastschriftverfahren, abbr. ELV). The point-of-sale terminal reads the bank sort code and account number from the card, but instead of handling the transaction through the Girocard network it simply prints a form, which the customer signs to authorise the debit note. However, this method also avoids any verification or payment guarantee provided by the network. Further, customers can return debit notes by notifying their bank without giving a reason. This means that the beneficiary bears the risk of fraud and illiquidity. Some businesses mitigate the risk by consulting a proprietary blacklist or by switching to Girocard for higher transaction amounts.
Around 2000, an Electronic Purse Card was introduced, dubbed Geldkarte ("money card"). It makes use of the smart card chip on the front of the standard issue debit card. This chip can be charged with up to 200 euro, and is advertised as a means of making medium to very small payments, even down to several euros or cent payments. The key factor here is that no processing fees are deducted by banks. It did not gain the popularity its inventors had hoped for. As of 2020, several partners pulled out of accepting the Geldkarte which is no longer issued and set to be retired altogether in the near future.
Guinea-Bissau
See "UEMOA".
Greece
Debit card usage surged in Greece after the introduction of Capital Controls in 2015.
Hong Kong
Most bank cards in Hong Kong for saving / current accounts are equipped with EPS and UnionPay, which function as a debit card and can be used at merchants for purchases, where funds are withdrawn from the associated account immediately.
EPS is a Hong Kong only system and is widely accepted in merchants and government departments. However, as UnionPay cards are accepted more widely overseas, consumers can use the UnionPay functionality of the bank card to make purchases directly from the bank account.
Visa debit cards are uncommon in Hong Kong. The British banking firm HSBC's subsidiary Hang Seng Bank's Enjoy card and American firm Citibank's ATM Visa are two of the Visa debit cards available in Hong Kong.
Debit card usage in Hong Kong is relatively low, as the credit card penetration rate is high. In Q1 2017, there were nearly 20 million credit cards in circulation, about 3 times the adult population. In that quarter there were about 145.8 million transactions made by credit cards but only about 34.0 million transactions made by debit cards.
Hungary
In Hungary debit cards are far more common and popular than credit cards. Many Hungarians even refer to their debit card ("betéti kártya") mistakenly using the word for credit card ("hitelkártya"). The most commonly used phrase, however, is simply bank card ("bankkártya").
India
After the demonetisation of December 2016, there was a surge in cashless transactions, and cards are now accepted in most places. Previously, debit cards were mostly used for ATM transactions. The RBI has announced that processing fees on such transactions are not justified, so debit card transactions carry no processing fees. Almost half of Indian debit and credit card users use RuPay cards. Some Indian banks issue Visa debit cards, though some banks (like SBI and Citibank India) also issue Maestro cards. Debit card transactions are routed through the RuPay (mostly), Visa or MasterCard networks in India and overseas rather than directly via the issuing bank.
The National Payments Corporation of India (NPCI) launched a new card processing platform called RuPay, similar to Singapore's NETS and Mainland China's UnionPay, as an alternative to Visa and MasterCard. It is widely accepted, but with the popularisation of the Unified Payments Interface (UPI), most people connect their bank accounts directly to a UPI provider to access mobile payments, cutting out the need for a debit card.
As COVID-19 cases in India surged, banking institutions shifted their focus to contactless payment options such as contactless debit, credit and prepaid cards. Payment habits changed drastically in India because of social distancing norms and lockdowns, with people relying more on digital transactions than on cash.
Indonesia
Foreign-owned brands issuing Indonesian debit cards include Visa, Maestro, MasterCard, and MEPS. Domestically owned debit card networks operating in Indonesia include Debit BCA (and its Prima network's counterpart, Prima Debit) and Mandiri Debit.
Iraq
Iraq's two biggest state-owned banks, Rafidain Bank and Rasheed Bank, together with the Iraqi Electronic Payment System (IEPS) have established a company called International Smart Card, which has developed a national credit card called 'Qi Card', which they have issued since 2008. According to the company's website: 'after less than two years of the initial launch of the Qi card solution, we have hit 1.6 million cardholder with the potential to issue 2 million cards by the end of 2010, issuing about 100,000 card monthly is a testament to the huge success of the Qi card solution. Parallel to this will be the expansion into retail stores through a network of points of sales of about 30,000 units by 2015'.
Ireland
Current system (as of December 2022)
In Ireland, all debit cards are exclusively Chip and PIN. The market is dominated by Visa Debit cards - the "Top 3" banks in Ireland (Allied Irish Banks, Bank of Ireland and Permanent TSB) all use Visa Debit, as does the exiting bank Ulster Bank. Other financial institutions that maintain a minority stake, such as EBS, An Post Money and some credit unions, use Mastercard Debit cards, as does the exiting bank KBC. Revolut, with over 2 million customers in Ireland, varies between Mastercard and Visa Debit cards.
Irish debit cards are normally multi-functional and combine ATM card facilities. Some banks will provide ATM cards to vulnerable or elderly customers, but only on request. The practice is rare and it is on a case-by-case basis.
For online purchases, the cards are used together with the bank's mobile app for Strong Customer Authentication as required by the EU's Payment Services Directive (PSD2).
Most Irish debit cards are also enabled for contactless payment for purchases €50 or below, and display the contactless symbol. The limit was previously €30, but was increased to €50 as a result of the COVID-19 pandemic to increase card usage in order to minimize the handling of cash. Some banks, such as AIB, do not provide contactless cards to certain account holders, such as those under 18. After 3-5 contactless transactions, the bank will ask the card user to enter their PIN through a Chip and PIN transaction for authentication.
Apple Pay and Google Pay are also embraced as contactless payment methods with many retailers as they use the same contactless technology. However, due to the device's authentication of the user, there is no limit on the purchase amount. In some cases, there are limits of a large amount such as €500, however this may be imposed by the retailer due to technical constraints rather than for security purposes.
The cards are usually processed online, but some cards can also be processed offline depending on the rules applied by the card issuer.
A number of card issuers also provide prepaid debit card accounts primarily for use as gift cards / vouchers or for added security and anonymity online, e.g. CleverCards. These may be disposable or reloadable and are predominately MasterCard branded. One4All vouchers, a popular voucher given particularly to employees by companies at Christmas time, are another type of a prepaid debit card used. However, it is limited to retailers that specifically opt-in to using One4All cards as a payment method and are neither Visa nor Mastercard branded.
Previous system (defunct since 28 February 2014)
Laser was launched by the Irish banks in 1996 as an extension of the existing ATM and Cheque guarantee card systems that had existed for many years. When the service was added, it became possible to make payments with a multifunctional card that combined ATM, cheque and debit card and international ATM facilities through MasterCard Cirrus or Visa Plus and sometimes the British Link ATM system. Their functionality was similar to the British Switch card.
The system first launched as a swipe & sign card and could be used in Ireland in much the same way as a credit card; it was compatible with standard card terminals (online or offline, although transactions were usually processed online). The cards could also be used in cardholder-not-present transactions over the phone, by mail or on the internet, or for processing recurring payments. Laser also offered 'cash back' facilities where customers could ask retailers (where offered) for an amount of cash along with their transaction. This service allowed retailers to reduce volumes of cash in tills and allowed consumers to avoid having to use ATMs. Laser adopted EMV 'Chip and PIN' security in 2002 in common with other credit and debit cards right across Europe. In 2005, some banks issued customers with Laser cards that were co-branded with Maestro. This allowed them to be used in POS terminals overseas; internet transactions were usually restricted to sites that specifically accepted Laser.
Since 2006, Irish banks have progressively replaced Laser with international schemes, primarily Visa Debit and by 28 February 2014 the Laser Card system had been withdrawn entirely and is no longer accepted by retailers.
Israel
The Israel bank card system is somewhat confusing to newcomers, comprising a blend of features taken from different types of cards. What may be referred to as a credit card, is most likely to be a deferred debit card on an associated bank current account, the most common type of card in Israel, somewhat like the situation in France, though the term "debit card" is not in common usage. Cards are nearly universally called cartis ashrai (כרטיס אשראי), literally, "credit card", a term which may belie the card's characteristics. Its main feature may be a direct link to a connected bank account (through which they are mostly issued), with the total value of the transactions made on the card being debited from the bank account in full on a regular date once a month, without the option to carry the balance over; indeed certain types of transactions (such as online and/or foreign currency) may be debited directly from the connected bank account at the time of the transaction. Any such limited credit enjoyed is a result of the customer's assets and credibility with the bank, and not granted by the credit card company. The card usually enables immediate ATM cash withdrawals & balance inquiries (as debit cards do), installment & deferred charge interest free transactions offered by merchants (also applicable in Brazil), interest bearing installment plans/deferred charge/revolving credit which is transaction specific at the point of sale (though granted by the issuer, hence the interest), and a variety of automated/upon request types of credit schemes including loans, some of which revolve or resemble the extended payment options sometimes offered by charge cards.
Thus the "true" debit card is not so common in Israel, though it has existed since 1994. It is offered by two credit companies. One is ICC, short for "Israeli Credit Cards" (referred to as "CAL", an acronym formed from its abbreviation in Hebrew), which issues it in the form of a Visa Electron card valid only in Israel. It is offered mainly through the Israel Post (post office) bank (which is not allowed, by regulation, to offer any type of credit) or through Israel Discount Bank, its main owner (where it is branded as the "Discount Money Key" card). The Israel Discount Bank-branded debit card is also offered as a card valid worldwide, either as a Visa Electron or a MasterCard Debit card. The second and more common debit card is offered by the Isracard consortium to its affiliate banks and is branded "Direct". It is valid only in Israel, under its local private label brand, as "Isracard Direct" (which was known as "Electro Cheque" until 2002, while the local brand Isracard is often viewed as a MasterCard for local use only). Since 2006, Isracard has also offered an international version, branded "MasterCard Direct", which is less common. These two debit card brands operate offline in Israel (meaning the transaction is processed through the credit card systems and is officially debited from the cardholder's account only a few days later, though it is reflected on the current account immediately). In 2014 the Isracard Direct card (the version valid only in Israel) was relaunched as Isracash, though the former sub-brand is still being marketed, and it replaced ICC Visa Electron as the Israel Post bank debit card.
Overall, banks routinely offer deferred debit cards to their new customers, with "true" debit cards usually offered only to those who cannot obtain credit. These latter cards are not attractive to the average customer since they attract both a monthly fee from the credit company and a bank account fee for each day's debits. Isracard Direct is by far more common than the ICC Visa Electron debit card. Banks who issue mainly Visa cards will rather offer electronic use, mandate authorized transaction only, unembossed version of Visa Electron deferred debit cards (branded as "Visa Basic" or "Visa Classic") to its customers—sometimes even in the form of revolving credit card.
Credit/debit card transactions in Israel are not PIN based (other than at ATMs) and it is only in recent years that EMV chip smart cards have begun to be issued, with the Bank of Israel in 2013 ordering the banks and credit card companies to switch customers to credit cards with the EMV security standard within 3.5 years.
Italy
Debit cards are quite popular in Italy. There are both classic and prepaid cards. There are two Italian interbank networks, Bancomat and PagoBancomat: Bancomat is the commercial brand for the cash withdrawal circuit, while PagoBancomat is used for POS transactions. Nowadays many debit cards use Visa or Mastercard circuit, often in co-badging with Bancomat/PagoBancomat.
There is another national circuit, Postamat, that is used by the debit and prepaid cards offered by the national post service, Poste Italiane, mainly for the cash withdrawal in the post-office ATM.
Japan
In Japan people usually use their cash cards, originally intended only for use with cash machines, as debit cards. The debit functionality of these cards is usually referred to as J-Debit, and only cash cards from certain banks can be used. A cash card has the same size as a Visa/MasterCard. As identification, the user will have to enter their four-digit PIN when paying. J-Debit was started in Japan on 6 March 2000. However, J-Debit has not been that popular since then.
Suruga Bank began offering Japan's first Visa Debit service in 2006. Rakuten Bank, formerly known as eBank, offers a Visa debit card.
Resona Bank and The Bank of Tokyo-Mitsubishi UFJ bank also offer a Visa branded debit card.
Kuwait
In Kuwait, all banks provide a debit card to their account holders. This card is branded as KNET, which is the central switch in Kuwait. KNET card transactions are free for both customer and the merchant and therefore KNET debit cards are used for low valued transactions as well. KNET cards are mostly co-branded as Maestro or Visa Electron which makes it possible to use the same card outside Kuwait on any terminal supporting these payment schemes.
Malaysia
In Malaysia, the local debit card network is operated by the Malaysian Electronic Clearing Corporation (MyClear), which had taken over the scheme from MEPS in 2008. The new name for the local debit card in Malaysia is MyDebit, which was previously known as either bankcard or e-debit. Debit cards in Malaysia are now issued on a combo basis where the card has both the local debit card payment application as well as having that of an International scheme (Visa or MasterCard). All newly issued MyDebit combo cards with Visa or MasterCard have the contactless payment feature. The same card also acts as the ATM card for cash withdrawals.
Mali
See "UEMOA".
Mexico
In Mexico, many companies use a type of debit card called a payroll card (tarjeta de nómina), in which they deposit their employee's payrolls, instead of paying them in cash or through checks. This method is preferred in many places because it is a much safer and secure alternative compared to the more traditional forms of payment.
Netherlands
In the Netherlands using EFTPOS is known as pinnen (pinning), a term derived from the use of a personal identification number (PIN). PINs are also used for ATM transactions, and the term is used interchangeably by many people, although it was introduced as a marketing brand for EFTPOS. The system was launched in 1987, and in 2010 there were 258,585 terminals throughout the country, including mobile terminals used by delivery services and on markets. All banks offer a debit card suitable for EFTPOS with current accounts.
PIN transactions are usually free to the customer, but the retailer is charged per-transaction and monthly fees. Equens, an association with all major banks as its members, runs the system, and until August 2005 also charged for it. Responding to allegations of monopoly abuse, it has handed over contractual responsibilities to its member banks, which now offer competing contracts. The system is organised through a special banking association, Currence, set up specifically to coordinate access to payment systems in the Netherlands. Interpay, a legal predecessor of Equens, was fined €47,000,000 in 2004, but the fine was later dropped, and a related fine for banks was lowered from €17 million to €14 million. Per-transaction fees are between 5 and 10 cents, depending on volume.
Credit card use in the Netherlands is very low, and most credit cards cannot be used with EFTPOS, or charge very high fees to the customer. Debit cards can often, though not always, be used in the entire EU for EFTPOS. Most debit cards are Mastercard Maestro cards. Visa's V Pay cards are also accepted at most locations.
In 2011, spending using debit cards rose to €83 billion, whilst cash spending dropped to €51 billion and credit card spending grew to €5 billion.
Electronic Purse Cards (called Chipknip) were introduced in 1996, but have never become very popular. The system was abolished at the end of 2014.
New Zealand
EFTPOS (electronic fund transfer at point of sale) in New Zealand was highly popular until other forms of payment began to take over in the 2010s. In 2006, 70 percent of all retail transactions were made by EFTPOS, with an average of 306 EFTPOS transactions being made per person. By 2023, this had declined to a little over 20%.
The system involves the merchant swiping (or inserting) the customer's card and entering the purchase amount. Point of sale systems with integrated EFTPOS often send the purchase total to the terminal and the customer swipes their own card. The customer then selects the account they wish to use: Current/Cheque (CHQ), Savings (SAV), or Credit Card (CRD), before entering in their PIN. After a short processing time in which the terminal contacts the EFTPOS network and the bank, the transaction is approved (or declined) and a receipt is printed. The EFTPOS system is used for credit cards as well, with a customer selecting Credit Card and entering their PIN.
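The sequence described above can be summarised as a short, stubbed sketch. This is only an illustration of the flow as described in this section, not a real EFTPOS interface: the function names, the ACCOUNT_TYPES set and the authorise callback are hypothetical stand-ins for the proprietary messages an actual terminal exchanges with the network.

ACCOUNT_TYPES = {"CHQ", "SAV", "CRD"}  # the account choices offered at the terminal

def eftpos_purchase(amount_cents, account, pin, authorise):
    """Run one purchase: validate input, ask the network, return a receipt line."""
    if account not in ACCOUNT_TYPES:
        raise ValueError(f"unknown account type: {account}")
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    # The terminal sends the amount, account selection and (encrypted) PIN
    # to the network switch; `authorise` stands in for that round trip.
    approved = authorise(amount_cents, account, pin)
    status = "ACCEPTED" if approved else "DECLINED"
    return f"{status}  {account}  ${amount_cents / 100:.2f}"

# Example: a stubbed authoriser that approves anything under $200.
print(eftpos_purchase(4250, "CHQ", "1234",
                      lambda amt, acct, pin: amt < 20000))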
Nearly all retail outlets have EFTPOS facilities, to the point that retailers without EFTPOS normally advertise 'cash only'. The main exceptions are small traders at farmers markets and other occasional outlets. Most mobile operators such as taxis, stall holders and pizza deliverers have mobile EFTPOS systems. The system is made up of two primary networks: EFTPOS NZ, which is owned by VeriFone and Worldline NZ, which is owned by ANZ Bank New Zealand, ASB Bank, Westpac and the Bank of New Zealand. The two networks are intertwined, highly sophisticated and secure, able to handle huge volumes of transactions during busy periods such as the lead-up to Christmas. Network failures are rare, but when they occur they cause massive disruption, major delays and loss of income for businesses. The CrowdStrike failure in July 2024 was one such incident.
Merchants and customers are not charged a fee for using EFTPOS - merchants only have to pay for the equipment rental.
One of the disadvantages of New Zealand's well-established EFTPOS system is that it is incompatible with overseas systems and non-face-to-face purchases. In response to this, many banks since 2005 have introduced international debit cards such as Maestro and Visa Debit which work online and overseas as well as on the New Zealand EFTPOS system.
Nigeria
Many Nigerians regard debit cards as ATM cards because they can be used to withdraw money directly from ATMs.
According to the Central Bank of Nigeria, debit cards can be issued to customers holding savings or current accounts. There are three major debit card schemes in Nigeria: MasterCard, Verve, and Visa. These companies offer various packages in Nigeria, such as Naira MasterCard Platinum, Visa Debit (dual currency), GTCrea8 Card, SKS Teen Card, etc. The packages available depend on the issuing bank.
Philippines
In the Philippines, all three national ATM network consortia offer proprietary PIN debit. This was first offered by Express Payment System in 1987, followed by Megalink with Paylink in 1993 then BancNet with the Point-of-Sale in 1994.
Express Payment System or EPS was the pioneer provider, having launched the service in 1987 on behalf of the Bank of the Philippine Islands. The EPS service has subsequently been extended in late 2005 to include the other Expressnet members: Banco de Oro and Land Bank of the Philippines. They currently operate 10,000 terminals for their cardholders.
Megalink launched Paylink EFTPOS system in 1993. Terminal services are provided by Equitable Card Network on behalf of the consortium. Service is available in 2,000 terminals, mostly in Metro Manila.
BancNet introduced their point of sale system in 1994 as the first consortium-operated EFTPOS service in the country. The service is available in over 1,400 locations throughout the Philippines, including second and third-class municipalities. In 2005, BancNet signed a Memorandum of Agreement to serve as the local gateway for China UnionPay, the sole ATM switch in China. This will allow the estimated 1.0 billion Chinese ATM cardholders to use the BancNet ATMs and the EFTPOS in all participating merchants.
Visa debit cards are issued by Union Bank of the Philippines (e-Wallet & eon), Chinatrust, Equicom Savings Bank (Key Card & Cash Card), Banco de Oro, HSBC, HSBC Savings Bank, Sterling Bank of Asia (Visa ShopNPay prepaid and debit cards) and EastWest Bank. The Union Bank of the Philippines, EastWest Visa Debit, Equicom Savings Bank and Sterling Bank of Asia EMV cards can also be used for internet purchases. Sterling Bank of Asia has released its first line of prepaid and debit Visa cards with EMV chip.
MasterCard debit cards are issued by Banco de Oro, Security Bank (Cashlink & Cash Card) and Smart Communications (Smart Money) tied up with Banco de Oro. MasterCard Electronic cards are issued by BPI (Express Cash) and Security Bank (CashLink Plus).
Originally, all Visa and MasterCard based debit cards in the Philippines were non-embossed and marked either "For Electronic Use Only" (Visa/MasterCard) or "Valid only where MasterCard Electronic is Accepted" (MasterCard Electronic). However, EastWest Bank started to offer embossed Visa debit cards without the "For Electronic Use Only" mark. PayPass Debit MasterCards from other banks also have embossed labels without the "For Electronic Use Only" mark. Unlike credit cards issued by some banks, these Visa and MasterCard-branded debit cards did not feature EMV chips, hence they could only be read by the machines through swiping.
On 21 March 2016, BDO started issuing Debit MasterCards with an EMV chip, becoming the first Philippine bank to do so. This was a response to the BSP's monitoring of the EMV shift progress in the country. By 2017, all debit cards in the country were required to have an EMV chip.
Poland
In Poland, the first system of electronic payments was operated by Orbis, which later was changed to PolCard in 1991 (which also issued its own cards) and then that system was bought by First Data Poland Holding SA. In the mid-1990s international brands such as Visa, MasterCard, and the unembossed Visa Electron or Maestro were introduced.
Visa Electron and Maestro work as standard debit cards: the transactions are debited instantly, although on some occasions a transaction may be processed with some delay (hours, up to one day). These cards do not have the options that credit cards have.
In the late 2000s, contactless cards started to be introduced. The first technology to be used was MasterCard PayPass, later joined by Visa's payWave. This payment method is now universal and accepted almost everywhere. In everyday use this payment method is commonly called PayPass.
Almost all businesses in Poland accept debit and credit cards.
In the mid-2010s, Polish banks started to replace unembossed cards with embossed electronic cards such as Debit MasterCard and Visa Debit, allowing the customers to own a card that has all qualities of a credit card (given that credit cards are not popular in Poland).
There are also some banks that do not possess an identification system to allow customers to order debit cards online.
Portugal
In Portugal, debit cards are accepted almost everywhere: at ATMs, in stores, and so on. The most commonly accepted are Visa and MasterCard, or the unembossed Visa Electron or Maestro. For Internet payments, debit cards cannot be used directly for transfers because of security concerns, so banks recommend the use of 'MBnet', a pre-registered secure system that creates a virtual card with a pre-selected credit limit. The whole card system is regulated by SIBS, the institution created by the Portuguese banks to manage all the regulations and communication processes; SIBS' shareholders are the 27 banks operating in Portugal.
Russia
In addition to Visa, MasterCard and American Express, there are some local payment systems based in general on smart card technology.
Sbercard. This payment system was created by Sberbank around 1995–1996. It uses BGS Smartcard Systems AG smart card technology, namely DUET. Sberbank was the single retail bank in the Soviet Union before 1990. De facto, this is Sberbank's own payment system.
Zolotaya Korona. This card brand was created in 1994. Zolotaya Korona is based on CFT technology.
STB Card. This card uses the classic magnetic stripe technology. It almost fully collapsed after 1998 (the GKO crisis) with the failure of STB bank.
Union Card. The card also uses the classic magnetic stripe technology. This card brand is on the decline. These accounts are being reissued as Visa or MasterCard accounts.
Nearly every transaction, regardless of brand or system, is processed as an immediate debit transaction. Non-debit transactions within these systems have spending limits that are strictly limited when compared with typical Visa or MasterCard accounts.
Saudi Arabia
In Saudi Arabia, all debit card transactions are routed through Saudi Payments Network (mada), the only electronic payment system in the Kingdom and all banks are required by the Saudi Central Bank (SAMA) to issue cards fully compatible with the network. It connects all point of sale (POS) terminals throughout the country to a central payment switch which in turn re-routes the financial transactions to the card issuer, local bank, Visa, Amex or MasterCard.
As well as its use for debit cards, the network is also used for ATM and credit card transactions.
Senegal
Serbia
All Serbian banks issue debit cards. Since August 2018, all owners of transactional accounts in Serbian dinars are automatically issued a debit card of the national brand DinaCard. Other brands (VISA, MasterCard and Maestro) are more popular, better accepted and more secure, but must be requested specifically as additional cards. Debit cards are used for cash withdrawal at ATMs as well as store transactions.
Singapore
Singapore's debit service is managed by the Network for Electronic Transfers (NETS), founded by Singapore's leading banks and shareholders namely DBS, Keppel Bank, OCBC and its associates, OUB, IBS, POSB, Tat Lee Bank and UOB in 1985 as a result of a need for a centralised e-Payment operator.
However, following banking restructuring and mergers, the remaining local banks were UOB, OCBC and DBS-POSB as the shareholders of NETS, with Standard Chartered Bank also offering NETS to its customers. DBS and POSB customers, however, use their own ATM network, which is not shared with UOB, OCBC or SCB (StanChart). The major failure of the POSB-DBS ATM network on 5 July 2010 (affecting about 97,000 machines) led the government to rethink the shared ATM system, as it affected the NETS system too.
In 2010, in line with the mandatory EMV system, local Singapore banks started to reissue their Visa/MasterCard-branded debit cards with EMV chip compliant ones to replace the magnetic stripe system. The banks involved included the NETS members POSB-DBS and UOB-OCBC-SCB, along with the SharedATM alliance (non-NETS) of HSBC, Citibank, State Bank of India, and Maybank. Standard Chartered Bank (SCB) is also a SharedATM alliance member. Non-branded local ATM cards of POSB and Maybank remain without a chip but carry a Plus or Maestro sign, which can be used to withdraw cash locally or overseas.
Maybank Debit MasterCards can be used in Malaysia just like a normal ATM or Debit MEPS card.
Singapore also uses the e-purse systems of NETS CASHCARD and the CEPAS wave system by EZ-Link and NETS.
South Korea
There are two kinds of debit cards in South Korea: 'debit cards' issued by banks, and 'check cards' issued by card companies. Debit cards are only accepted in debit networks such as Shinsegae and E-mart, while check cards are accepted at every store that accepts credit cards. Korean debit cards do not support offline (credit-style) transactions domestically, so every transaction must be made in real time.
Spain
Debit cards are accepted in a relatively large number of stores, both large and small, in Spain. Banks often offer debit cards for small fees in connection with a checking account. These cards are used more often than credit cards at ATMs because they are a cheaper alternative.
Sweden
Debit cards are common in Sweden, as they are traditionally issued by the account holder's bank, which in turn normally cooperates with Visa Debit, Visa Electron, Debit MasterCard or Mastercard Maestro. Thus, ATMs and stores in Sweden accept these debit cards wherever card payments are accepted, with only rare exceptions.
Taiwan
Most banks issue major-brand debit cards that can be used internationally such as Visa, MasterCard and JCB, often with contactless functionality. Payments at brick-and-mortar stores generally require a signature except for contactless payments.
A separate, local debit system, known as Smart Pay, can be used by the majority of debit and ATM cards, even major-brand cards. This system is available only in Taiwan and a few locations in Japan as of 2016. Non-contactless payments require a PIN instead of a signature. Cards from a few banks support contactless payment with Smart Pay.
Togo
Turkey
UAE
Debit cards from various issuers are widely accepted, including those of Network International, the local subsidiary of Emirates Bank.
United Kingdom
In the UK debit cards (an integrated EFTPOS system) are an established part of the retail market and are widely accepted by both physical and internet stores. The term EFTPOS is not widely used by the public; "debit card" is the generic term used. Debit cards issued are predominantly Visa Debit, with Debit Mastercard becoming increasingly common. Maestro, Visa Electron and UnionPay are also in circulation. Banks do not charge customers for EFTPOS transactions in the UK, but some retailers used to make small charges, particularly for small transaction amounts. However, the UK Government introduced legislation on 13 January 2018 banning all surcharges for card payments, including those made online and through services such as PayPal. The UK has converted all debit cards in circulation to Chip and PIN (except for Chip and Signature cards issued to people with certain disabilities and non-reloadable prepaid cards), based on the EMV standard, to increase transaction security; however, PINs are not required for Internet transactions (though some banks employ additional security measures for online transactions such as Verified by Visa and MasterCard Secure Code), nor for most contactless transactions.
In the United Kingdom, banks started to issue debit cards in the mid-1980s to reduce the number of cheques being used at the point of sale, which are costly for the banks to process; the first bank to do so was Barclays with the Barclays Connect card. As in most countries, fees paid by merchants in the UK to accept credit cards are a percentage of the transaction amount, which funds cardholders' interest-free credit periods as well as incentive schemes such as points or cashback. For consumer credit cards issued within the EEA, the interchange fee is capped at 0.3%, with a cap of 0.2% for debit cards, although the merchant acquirers may charge the merchant a higher fee. Most debit cards in the UK lack the advantages offered to holders of UK-issued credit cards, such as free incentives (points, cashback etc.; the Tesco Bank debit card was one exception), interest-free credit and protection against defaulting merchants under Section 75 of the Consumer Credit Act 1974. Almost all establishments in the UK that accept credit cards also accept debit cards. Some merchants, for cost reasons, accept debit cards but not credit cards, and some smaller retailers only accept card payments for purchases above a certain value, typically £5 or £10.
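To make the capped interchange rates concrete, the following sketch simply applies the 0.2% (debit) and 0.3% (credit) caps mentioned above to a purchase amount. The CAPS table and max_interchange function are illustrative names only, and any additional acquirer charges passed on to the merchant are ignored.

CAPS = {"debit": 0.002, "credit": 0.003}  # EEA interchange caps described above

def max_interchange(amount_pence, card_type):
    """Maximum interchange fee in pence for a consumer card of the given type."""
    return round(amount_pence * CAPS[card_type])

# On a 100.00 GBP purchase: 20p for a debit card, 30p for a credit card.
print(max_interchange(10000, "debit"), max_interchange(10000, "credit"))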
The 21st century has seen an increase in Challenger banks in the United Kingdom, with benefits including fee-free overseas spending. Notable challenger banks include Monzo, Revolut and Starling Bank.
UEMOA
UEMOA is the West African Economic and Monetary Union, federating eight countries: Benin, Burkina Faso, Ivory Coast, Guinea-Bissau, Mali, Niger, Senegal and Togo.
GIM-UEMOA is the regional switch federating more than 120 members (banks, microfinance institutions, electronic money issuers, etc.). All interbank card transactions between banks in the same country, or between banks in two different countries of the UEMOA zone, are routed and cleared by GIM-UEMOA. Settlement is done on the Central Bank's RTGS system.
GIM-UEMOA also provides some processing products and services to more than 50 banks in UEMOA zone and out of UEMOA zone.
United States
In the U.S., EFTPOS is universally referred to simply as debit. The largest pre-paid debit card company is Green Dot Corporation, by market capitalization. The same interbank networks that operate the ATM network also operate the POS network. Most interbank networks, such as Pulse, NYCE, MAC, Tyme, SHAZAM, STAR, and so on, are regional and do not overlap, however, most ATM/POS networks have agreements to accept each other's cards. This means that cards issued by one network will typically work anywhere they accept ATM/POS cards for payment. For example, a NYCE card will work at a Pulse POS terminal or ATM, and vice versa. Debit cards in the United States are usually issued with a Visa, MasterCard, Discover or American Express logo allowing use of their signature-based networks. In 2018, there were 5.836 billion debit cards in circulation in the U.S., and 71.7% were prepaid cards.
U.S. federal law caps the liability of a U.S. debit card user in case of loss or theft at US$50 if the loss or theft is reported to the issuing bank within two business days after the customer notices the loss. Most banks will, however, set this limit to $0 for debit cards issued to their customers which are linked to their checking or savings account. Unlike credit cards, if the loss or theft is reported more than two business days after being discovered, liability is capped at $500 (versus $50 for credit cards), and if it is reported more than 60 calendar days after the statement is sent, all the money in the account may be lost.
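The tiered liability rules described above can be illustrated with a small sketch. This is a simplified reading of the paragraph, not an exact statement of the underlying regulation: the max_liability function and its arguments are hypothetical, and real determinations depend on business days, statement dates and any zero-liability policy of the issuer.

def max_liability(business_days_after_discovery, days_after_statement, account_balance):
    """Rough cap on cardholder liability under the tiers described above."""
    if business_days_after_discovery <= 2:
        return min(50, account_balance)
    if days_after_statement <= 60:
        return min(500, account_balance)
    return account_balance  # reported too late: all funds in the account may be lost

print(max_liability(1, 10, 3000))   # 50
print(max_liability(5, 30, 3000))   # 500
print(max_liability(5, 90, 3000))   # 3000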
The fees charged to merchants for offline debit purchases vis-à-vis the lack of fees charged to merchants for processing online debit purchases and paper checks have prompted some major merchants in the U.S. to file lawsuits against debit-card transaction processors, such as Visa and MasterCard. In 2003, Visa and MasterCard agreed to settle the largest of these lawsuits for $2 billion and $1 billion, respectively.
Some consumers prefer "credit" transactions because of the lack of a fee charged to the consumer/purchaser. A few debit cards in the U.S. offer rewards for using "credit". However, since "credit" transactions cost more for merchants, many terminals at PIN-accepting merchant locations now make the "credit" function more difficult to access.
As a result of the Dodd–Frank Wall Street Reform and Consumer Protection Act, U.S. merchants can now set a minimum purchase amount for credit card transactions, as long as it does not exceed $10.
FSA, HRA, and HSA debit cards
In the United States, an FSA debit card only allows eligible medical expenses. It is used by some banks for withdrawals from their healthcare flexible spending accounts (FSAs), medical savings accounts (MSAs), and health savings accounts (HSAs) as well. These cards have Visa or MasterCard logos, but cannot be used as "debit cards", only as "credit cards". Furthermore, they are not accepted by all merchants that accept debit and credit cards, but only by those that specifically accept FSA debit cards. Merchant codes and product codes are used at the point of sale (required by law of certain merchants in certain U.S. states) to restrict sales if they do not qualify. Because of the extra checking and documenting that goes on, the statement can later be used to substantiate these purchases for tax deductions. In the occasional instance that a qualifying purchase is rejected, another form of payment must be used (a check or payment from another account and a claim for reimbursement later). In the more likely case that non-qualifying items are accepted, the consumer is technically still responsible, and the discrepancy could be revealed during an audit. A small but growing segment of the debit card business in the U.S. involves access to tax-favored spending accounts such as FSAs, HRAs, and HSAs. Most of these debit cards are for medical expenses, though a few are also issued for dependent care and transportation expenses.
Traditionally, FSAs (the oldest of these accounts) were accessed only through claims for reimbursement after incurring, and often paying, an out-of-pocket expense; this often happens after the funds have already been deducted from the employee's paycheck. (FSAs are usually funded by payroll deduction.) The only method permitted by the Internal Revenue Service (IRS) to avoid this "double-dipping" for medical FSAs and HRAs is through accurate and auditable reporting on the tax return. Statements on the debit card that say "for medical uses only" are invalid for several reasons: (1) The merchant and issuing banks have no way of quickly determining whether the entire purchase qualifies for the customer's type of tax benefit; (2) the customer also has no quick way of knowing; often has mixed purchases by necessity or convenience; and can easily make mistakes; (3) extra contractual clauses between the customer and issuing bank would cross-over into the payment processing standards, creating additional confusion (for example if a customer was penalized for accidentally purchasing a non-qualifying item, it would undercut the potential savings advantages of the account). Therefore, using the card exclusively for qualifying purchases may be convenient for the customer, but it has nothing to do with how the card can actually be used. If the bank rejects a transaction, for instance, because it is not at a recognized drug store, then it would be causing harm and confusion to the cardholder. In the United States, not all medical service or supply stores are capable of providing the correct information so an FSA debit card issuer can honor every transaction-if rejected or documentation is not deemed enough to satisfy regulations, cardholders may have to send in forms manually.
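As an illustration of the point-of-sale restriction described above, the sketch below splits a basket into amounts that an FSA debit card could cover and amounts requiring another form of payment. The product codes and the ELIGIBLE_CODES set are invented for this example; real systems rely on an Inventory Information Approval System (IIAS) with industry-maintained lists of qualifying items.

ELIGIBLE_CODES = {"RX-0001", "OTC-BANDAGE", "OTC-THERMOMETER"}  # hypothetical product codes

def split_basket(items):
    """Split (code, price) pairs into an FSA-payable total and an other-tender total."""
    fsa_total = sum(price for code, price in items if code in ELIGIBLE_CODES)
    other_total = sum(price for code, price in items if code not in ELIGIBLE_CODES)
    return fsa_total, other_total

basket = [("RX-0001", 12.50), ("SNACK-CHOC", 2.00), ("OTC-BANDAGE", 4.75)]
print(split_basket(basket))  # (17.25, 2.0): only eligible items can go on the FSA card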
One difference between FSAs and HSAs concerns year-end handling and rollovers: FSAs were originally tied strictly to the calendar year, although limited rollovers were introduced by 2013.
Uruguay
Debit cards are accepted in a relatively large number of stores, both large and small, in Uruguay, but their use has so far remained low compared with that of credit cards. Since August 2014, with the Financial Inclusion Law coming into force, end consumers obtain a 4% VAT deduction for using debit cards in their purchases.
Venezuela
There has been a lack of cash due to the Venezuelan economic crisis and thus the demand for and use of debit cards has increased greatly in recent years. One reason why a noticeable percentage of businesses have closed is a lack of payment terminals. The most used brands are Maestro (debit card) and Visa Electron.
Vietnam
In Vietnam, debit cards are issued by banks in collaboration with the National Payment Corporation of Vietnam, abbreviated as NAPAS. Most banks issue this type of card. Customers can simply go to the nearest branch to register or open a debit card online. VISA Debit and Mastercard Debit are the most widely issued cards in Vietnam.
As of June 2023, there are over 94 million debit cards in circulation in Vietnam. The number of cards is growing at an average rate of 18% per year. The transaction value reached over 1,200 trillion VND per year. More than 80% of transactions are made at ATMs.
See also
Card (disambiguation)
ATM card
Cantaloupe, Inc.
Charge card
Credit card
Debit card cashback
Electronic funds transfer
Electronic Payment Services
EPAS
Interac
Inventory information approval system, a point-of-sale technology used with FSA debit cards
Payment card
Payments Council
Payoneer
Point-of-sale (POS)
References
American inventions
Banking terms
Embedded systems
20th-century inventions | Debit card | [
"Technology",
"Engineering"
] | 16,265 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
9,014 | https://en.wikipedia.org/wiki/Developmental%20psychology | Developmental psychology is the scientific study of how and why humans grow, change, and adapt across the course of their lives. Originally concerned with infants and children, the field has expanded to include adolescence, adult development, aging, and the entire lifespan. Developmental psychologists aim to explain how thinking, feeling, and behaviors change throughout life. This field examines change across three major dimensions, which are physical development, cognitive development, and social emotional development. Within these three dimensions are a broad range of topics including motor skills, executive functions, moral understanding, language acquisition, social change, personality, emotional development, self-concept, and identity formation.
Developmental psychology examines the influences of nature and nurture on the process of human development, as well as processes of change in context across time. Many researchers are interested in the interactions among personal characteristics, the individual's behavior, and environmental factors, including the social context and the built environment. Ongoing debates in regards to developmental psychology include biological essentialism vs. neuroplasticity and stages of development vs. dynamic systems of development. Research in developmental psychology has some limitations but at the moment researchers are working to understand how transitioning through stages of life and biological factors may impact our behaviors and development.
Developmental psychology involves a range of fields, such as educational psychology, child psychopathology, forensic developmental psychology, child development, cognitive psychology, ecological psychology, and cultural psychology. Influential developmental psychologists from the 20th century include Urie Bronfenbrenner, Erik Erikson, Sigmund Freud, Anna Freud, Jean Piaget, Barbara Rogoff, Esther Thelen, and Lev Vygotsky.
Historical antecedents
Jean-Jacques Rousseau and John B. Watson are typically cited as providing the foundation for modern developmental psychology. In the mid-18th century, Jean-Jacques Rousseau described three stages of development: infans (infancy), puer (childhood) and adolescence in Emile: Or, On Education. Rousseau's ideas were adopted and supported by educators at the time.
Developmental psychology generally focuses on how and why certain changes (cognitive, social, intellectual, personality) occur over time in the course of a human life. Many theorists have made a profound contribution to this area of psychology. One of them is the psychologist Erik Erikson, who created a model of eight phases of psychosocial development. According to his theory, people go through different phases in their lives, each of which has its own developmental crisis that shapes a person's personality and behavior.
In the late 19th century, psychologists familiar with the evolutionary theory of Darwin began seeking an evolutionary description of psychological development; prominent here was the pioneering psychologist G. Stanley Hall, who attempted to correlate ages of childhood with previous ages of humanity. James Mark Baldwin, who wrote essays on topics that included Imitation: A Chapter in the Natural History of Consciousness and Mental Development in the Child and the Race: Methods and Processes, was significantly involved in the theory of developmental psychology. Sigmund Freud, whose concepts were developmental, significantly affected public perceptions.
Theories
Psychosexual development
Sigmund Freud developed a theory that suggested that humans behave as they do because they are constantly seeking pleasure. The way this pleasure is sought changes through stages as the person matures. Each period of seeking pleasure that a person experiences is represented by a stage of psychosexual development. These stages symbolize the process of maturing into an adult.
The first is the oral stage, which begins at birth and ends around a year and a half of age. During the oral stage, the child finds pleasure in behaviors like sucking or other behaviors involving the mouth. The second is the anal stage, from about a year or a year and a half to three years of age. During the anal stage, the child derives pleasure from bowel movements and is often fascinated with them. This period of development often coincides with toilet training, and the child becomes interested in feces and urine. Children begin to see themselves as independent from their parents and to desire assertiveness and autonomy.
The third is the phallic stage, which occurs from three to five years of age (most of a person's personality forms by this age). During the phallic stage, the child becomes aware of its sexual organs. Pleasure comes from finding acceptance and love from the opposite sex. The fourth is the latency stage, which occurs from age five until puberty. During the latency stage, the child's sexual interests are repressed.
Stage five is the genital stage, which takes place from puberty until adulthood. During the genital stage, puberty begins to occur. Children have now matured, and begin to think about other people instead of just themselves. Pleasure comes from feelings of affection from other people.
Freud believed there is tension between the conscious and unconscious because the conscious tries to hold back what the unconscious tries to express. To explain this, he developed three personality structures: id, ego, and superego. The id, the most primitive of the three, functions according to the pleasure principle: seek pleasure and avoid pain. The superego plays the critical and moralizing role, while the ego is the organized, realistic part that mediates between the desires of the id and the superego.
Theories of cognitive development
Jean Piaget, a Swiss theorist, posited that children learn by actively constructing knowledge through their interactions with their physical and social environments. He suggested that the adult's role in helping the child learn was to provide appropriate materials. In the interview techniques with children that formed an empirical basis for his theories, he used something similar to Socratic questioning to get children to reveal their thinking. He argued that a principal source of development was the child's inevitable generation of contradictions through interactions with their physical and social worlds. The child's resolution of these contradictions led to more integrated and advanced forms of interaction, a developmental process that he called "equilibration".
Piaget argued that intellectual development takes place through a series of stages generated by the equilibration process. Each stage consists of steps the child must master before moving to the next. He believed that these stages are not separate from one another, but rather that each stage builds on the previous one in a continuous learning process. He proposed four stages: sensorimotor, pre-operational, concrete operational, and formal operational. Although he did not tie these stages to fixed ages, many studies have estimated when these cognitive abilities typically emerge.
Stages of moral development
Piaget claimed that logic and morality develop through constructive stages. Expanding on Piaget's work, Lawrence Kohlberg determined that the process of moral development was principally concerned with justice, and that it continued throughout the individual's lifetime.
He suggested three levels of moral reasoning: pre-conventional moral reasoning, conventional moral reasoning, and post-conventional moral reasoning. Pre-conventional moral reasoning is typical of children and is characterized by reasoning based on the rewards and punishments associated with different courses of action. Conventional moral reasoning occurs during late childhood and early adolescence and is characterized by reasoning based on the rules and conventions of society. Lastly, post-conventional moral reasoning is a stage during which the individual sees society's rules and conventions as relative and subjective, rather than as authoritative.
Kohlberg illustrated his stages of moral development with the Heinz dilemma, in which Heinz's wife is dying of cancer and Heinz must decide whether to steal a drug to save her. Responses to the dilemma can be classified as pre-conventional, conventional, or post-conventional moral reasoning.
Stages of psychosocial development
German-American psychologist Erik Erikson and his collaborator and wife, Joan Erikson, posit eight stages of individual human development influenced by biological, psychological, and social factors throughout the lifespan. At each stage the person must resolve a challenge, or existential dilemma. Successful resolution of the dilemma results in the person ingraining a positive virtue; failure to resolve the fundamental challenge of that stage reinforces negative perceptions of the person or the world around them, and the person's personal development is unable to progress.
The first stage, "Trust vs. Mistrust", takes place in infancy. The positive virtue for the first stage is hope, in the infant learning whom to trust and having hope for a supportive group of people to be there for him/her. The second stage is "Autonomy vs. Shame and Doubt" with the positive virtue being will. This takes place in early childhood when the child learns to become more independent by discovering what they are capable of whereas if the child is overly controlled, feelings of inadequacy are reinforced, which can lead to low self-esteem and doubt.
The third stage is "Initiative vs. Guilt". The virtue of being gained is a sense of purpose. This takes place primarily via play. This is the stage where the child will be curious and have many interactions with other kids. They will ask many questions as their curiosity grows. If too much guilt is present, the child may have a slower and harder time interacting with their world and other children in it.
The fourth stage is "Industry (competence) vs. Inferiority". The virtue for this stage is competency and is the result of the child's early experiences in school. This stage is when the child will try to win the approval of others and understand the value of their accomplishments.
The fifth stage is "Identity vs. Role Confusion". The virtue gained is fidelity and it takes place in adolescence. This is when the child ideally starts to identify their place in society, particularly in terms of their gender role.
The sixth stage is "Intimacy vs. Isolation", which happens in young adults and the virtue gained is love. This is when the person starts to share his/her life with someone else intimately and emotionally. Not doing so can reinforce feelings of isolation.
The seventh stage is "Generativity vs. Stagnation". This happens in adulthood and the virtue gained is care. A person becomes stable and starts to give back by raising a family and becoming involved in the community.
The eighth stage is "Ego Integrity vs. Despair". When one grows old, they look back on their life and contemplate their successes and failures. If they resolve this positively, the virtue of wisdom is gained. This is also the stage when one can gain a sense of closure and accept death without regret or fear.
Stages based on the model of hierarchical complexity
Michael Commons enhanced and simplified Bärbel Inhelder and Piaget's developmental theory and offers a standard method of examining the universal pattern of development. The Model of Hierarchical Complexity (MHC) is not based on the assessment of domain-specific information; it separates the order of hierarchical complexity of the tasks to be addressed from the stage of performance on those tasks. A stage is the order of hierarchical complexity of the tasks the participant successfully addresses. Commons expanded Piaget's original eight stages (counting the half stages) to seventeen stages. The stages are:
Calculatory
Automatic
Sensory & Motor
Circular sensory-motor
Sensory-motor
Nominal
Sentential
Preoperational
Primary
Concrete
Abstract
Formal
Systematic
Metasystematic
Paradigmatic
Cross-paradigmatic
Meta-Cross-paradigmatic
The order of hierarchical complexity of tasks predicts how difficult performance on those tasks is, with correlations (R) ranging from 0.9 to 0.98.
In the MHC, there are three main axioms that a task at a given order must meet in order to coordinate tasks at the next lower order. Axioms are rules that are followed to determine how the MHC orders actions to form a hierarchy. The higher-order task action: (a) is defined in terms of task actions at the next lower order of hierarchical complexity; (b) organizes two or more of those less complex actions, that is, the more complex action specifies the way in which the less complex actions combine; and (c) requires that the lower-order task actions be carried out non-arbitrarily.
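As a minimal illustrative sketch (not part of Commons's formal specification), the coordination requirement in axioms (a) and (b) can be expressed in code: each task carries an order of hierarchical complexity, and a higher-order task must organize two or more tasks defined at the next lower order. The task names and the check below are hypothetical and serve only as an illustration.

from dataclasses import dataclass, field
from typing import List

# The seventeen orders listed above, from lowest to highest.
ORDERS = [
    "Calculatory", "Automatic", "Sensory & Motor", "Circular sensory-motor",
    "Sensory-motor", "Nominal", "Sentential", "Preoperational", "Primary",
    "Concrete", "Abstract", "Formal", "Systematic", "Metasystematic",
    "Paradigmatic", "Cross-paradigmatic", "Meta-Cross-paradigmatic",
]

@dataclass
class Task:
    name: str
    order: int                                      # index into ORDERS
    subtasks: List["Task"] = field(default_factory=list)

def coordinates_next_lower_order(task: Task) -> bool:
    # Axiom (b): the task organizes two or more less complex actions.
    # Axiom (a): those actions are defined at the next lower order.
    # Axiom (c), that the sub-actions combine non-arbitrarily, is assumed here.
    return (len(task.subtasks) >= 2
            and all(t.order == task.order - 1 for t in task.subtasks))

# Hypothetical example: a Sentential-order action built from two Nominal-order actions.
sentence = Task("say a two-word sentence", ORDERS.index("Sentential"),
                [Task("name the object", ORDERS.index("Nominal")),
                 Task("name the action", ORDERS.index("Nominal"))])
print(coordinates_next_lower_order(sentence))       # True

In this sketch, the Sentential-order task passes the check because it coordinates two Nominal-order actions; axiom (c) is only noted in a comment, since it concerns how the sub-actions must be carried out rather than how many there are.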
Ecological systems theory
Ecological systems theory, originally formulated by Urie Bronfenbrenner, specifies four types of nested environmental systems, with bi-directional influences within and between the systems; a fifth, the chronosystem, was added later. The four original systems are the microsystem, mesosystem, exosystem, and macrosystem. Each system contains roles, norms, and rules that can powerfully shape development. The microsystem is the direct environment in our lives, such as home and school. The mesosystem is how relationships connect to the microsystem. The exosystem is a larger social system in which the child plays no direct role. The macrosystem refers to the cultural values, customs, and laws of society.
The microsystem is the immediate environment surrounding and influencing the individual (example: school or the home setting). The mesosystem is the combination of two microsystems and how they influence each other (example: sibling relationships at home vs. peer relationships at school). The exosystem is the interaction among two or more settings that are indirectly linked (example: a father's job requiring more overtime ends up influencing his daughter's performance in school because he can no longer help with her homework). The macrosystem is broader taking into account social economic status, culture, beliefs, customs and morals (example: a child from a wealthier family sees a peer from a less wealthy family as inferior for that reason). Lastly, the chronosystem refers to the chronological nature of life events and how they interact and change the individual and their circumstances through transition (example: a mother losing her own mother to illness and no longer having that support in her life).
Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development, has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of this conceptualization of development, these environments—from the family to economic and political structures—have come to be viewed as part of the life course from childhood through to adulthood.
Zone of proximal development
Lev Vygotsky was a Russian theorist from the Soviet era, who posited that children learn through hands-on experience and social interactions with members of their culture. Vygotsky believed that a child's development should be examined during problem-solving activities. Unlike Piaget, he claimed that timely and sensitive intervention by adults when a child is on the edge of learning a new task (called the "zone of proximal development") could help children learn new tasks. The zone of proximal development is a tool used to explain children's learning through collaborative problem-solving activities with an adult or peer. This adult role is often referred to as the skilled "master", whereas the child is considered the learning apprentice, through an educational process often termed "cognitive apprenticeship". Martin Hill stated that "The world of reality does not apply to the mind of a child." This technique is called "scaffolding", because it builds upon knowledge children already have with new knowledge that adults can help the child learn. Vygotsky was strongly focused on the role of culture in determining the child's pattern of development, arguing that development moves from the social level to the individual level. In other words, Vygotsky claimed that psychology should focus on the progress of human consciousness through the relationship of an individual and their environment. He felt that if scholars continued to disregard this connection, then this disregard would inhibit the full comprehension of the human consciousness.
Constructivism
Constructivism is a paradigm in psychology that characterizes learning as a process of actively constructing knowledge. Individuals create meaning for themselves or make sense of new information by selecting, organizing, and integrating information with other knowledge, often in the context of social interactions. Constructivism can occur in two ways: individual and social. Individual constructivism is when a person constructs knowledge through cognitive processes of their own experiences rather than by memorizing facts provided by others. Social constructivism is when individuals construct knowledge through an interaction between the knowledge they bring to a situation and social or cultural exchanges within that context. A foundational concept of constructivism is that the purpose of cognition is to organize one's experiential world, rather than the ontological world around them.
Jean Piaget, a Swiss developmental psychologist, proposed that learning is an active process in which children learn through experience, making mistakes and solving problems. He also proposed that learning should be holistic, helping students understand that meaning is constructed.
Evolutionary developmental psychology
Evolutionary developmental psychology is a research paradigm that applies the basic principles of Darwinian evolution, particularly natural selection, to understand the development of human behavior and cognition. It involves the study of both the genetic and environmental mechanisms that underlie the development of social and cognitive competencies, as well as the epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions.
EDP considers both the reliably developing, species-typical features of ontogeny (developmental adaptations), as well as individual differences in behavior, from an evolutionary perspective. While evolutionary views tend to regard most individual differences as the result of either random genetic noise (evolutionary byproducts) and/or idiosyncrasies (for example, peer groups, education, neighborhoods, and chance encounters) rather than products of natural selection, EDP asserts that natural selection can favor the emergence of individual differences via "adaptive developmental plasticity". From this perspective, human development follows alternative life-history strategies in response to environmental variability, rather than following one species-typical pattern of development.
EDP is closely linked to the theoretical framework of evolutionary psychology (EP), but is also distinct from EP in several domains, including research emphasis (EDP focuses on adaptations of ontogeny, as opposed to adaptations of adulthood) and consideration of proximate ontogenetic and environmental factors (i.e., how development happens) in addition to more ultimate factors (i.e., why development happens), which are the focus of mainstream evolutionary psychology.
Attachment theory
Attachment theory, originally developed by John Bowlby, focuses on the importance of open, intimate, emotionally meaningful relationships. Attachment is described as a biological system or powerful survival impulse that evolved to ensure the survival of the infant. A threatened or stressed child will move toward caregivers who create a sense of physical, emotional, and psychological safety for the individual. Attachment feeds on body contact and familiarity. Later, Mary Ainsworth developed the Strange Situation protocol and the concept of the secure base. Tools such as the Strange Situation Test and the Adult Attachment Interview help researchers understand attachment and determine the factors that contribute to particular attachment styles. The Strange Situation Test helps identify "disturbances in attachment" and whether certain attributes contribute to a particular attachment issue, while the Adult Attachment Interview is a similar tool focused on attachment issues in adults. Both tests have helped researchers gain more information on the risks and how to identify them.
Theorists have proposed four types of attachment styles: secure, anxious-avoidant, anxious-resistant, and disorganized. Secure attachment is a healthy attachment between the infant and the caregiver. It is characterized by trust. Anxious-avoidant is an insecure attachment between an infant and a caregiver. This is characterized by the infant's indifference toward the caregiver. Anxious-resistant is an insecure attachment between the infant and the caregiver characterized by distress from the infant when separated and anger when reunited. Disorganized is an attachment style without a consistent pattern of responses upon return of the parent.
A child's innate propensity to develop bonds can be thwarted. Some infants are kept in isolation or subjected to severe neglect or abuse, or they are raised without the stimulation and care of a regular caregiver. This deprivation may cause short-term consequences such as separation distress, rage, despair, and a brief lag in cerebral growth. Increased aggression, clinging behavior, alienation, psychosomatic illnesses, and an elevated risk of adult depression are among the long-term consequences.
According to attachment theory, which is a psychological concept, people's capacity to develop healthy social and emotional ties later in life is greatly impacted by their early relationships with their primary caregivers, especially during infancy. This suggests that humans have an inbuilt need to develop strong bonds with caregivers in order to survive and be healthy. Childhood attachment styles can have an impact on how people behave in adult social situations, including romantic partnerships.
Nature vs nurture
A significant debate in developmental psychology is the relationship between innateness and environmental influence in regard to any particular aspect of development. This is often referred to as "nature and nurture", or nativism versus empiricism. A nativist account of development would argue that the processes in question are innate, that is, specified by the organism's genes. The question of what makes a person who they are, their genetics or their environment, is the core of the nature versus nurture debate.
According to an empiricist viewpoint, those processes are learned through interaction with the environment. Today developmental psychologists rarely take such polarized positions with regard to most aspects of development; rather they investigate, among many other things, the relationship between innate and environmental influences. One of the ways this relationship has been explored in recent years is through the emerging field of evolutionary developmental psychology.
The dispute over innateness has been well represented in the field of language acquisition studies. A major question in this area is whether or not certain properties of human language are specified genetically or can be acquired through learning. The empiricist position on the issue of language acquisition suggests that the language input provides the necessary information required for learning the structure of language and that infants acquire language through a process of statistical learning. From this perspective, language can be acquired via general learning methods that also apply to other aspects of development, such as perceptual learning.
The nativist position argues that the input from language is too impoverished for infants and children to acquire the structure of language. Linguist Noam Chomsky asserts that, evidenced by the lack of sufficient information in the language input, there is a universal grammar that applies to all human languages and is pre-specified. This has led to the idea that there is a special cognitive module suited for learning language, often called the language acquisition device. Chomsky's critique of the behaviorist model of language acquisition is regarded by many as a key turning point in the decline in the prominence of the theory of behaviorism generally. But Skinner's conception of "Verbal Behavior" has not died, perhaps in part because it has generated successful practical applications.
A third possibility is that development involves "strong interactions of both nature and nurture".
Continuity vs discontinuity
One of the major discussions in developmental psychology includes whether development is discontinuous or continuous.
Continuous development is quantifiable and quantitative, whereas discontinuous development is qualitative. Quantitative assessments of development include measuring a child's height or measuring their memory or attention span. "Particularly dramatic examples of qualitative changes are metamorphoses, such as the emergence of a caterpillar into a butterfly."
Psychologists who support the continuous view of development propose that development involves gradual and ongoing change throughout the lifespan, with behavior in earlier stages of development providing the basis for the skills and abilities required in later stages. "To many, the concept of continuous, quantifiable measurement seems to be the essence of science".
Not all psychologists, however, agree that development is a continuous process. Some see development as a discontinuous process, believing that it involves distinct and separate stages, with different kinds of behavior occurring in each stage. This suggests that the development of certain capacities in each stage, such as specific emotions or ways of thinking, has a definite beginning and end point. However, there is no exact time at which an ability suddenly appears or disappears. Although some ways of thinking, feeling, or behaving may seem to appear suddenly, it is more likely that they have been developing gradually for some time.
Stage theories of development rest on the assumption that development is a discontinuous process involving distinct stages that are characterized by qualitative differences in behavior. They also assume that the structure of the stages does not vary from person to person, although the timing of each stage may vary individually. Stage theories can be contrasted with continuous theories, which posit that development is an incremental process.
Stability vs change
This issue concerns the degree to which people become older renditions of their early experience or develop into something different from who they were at an earlier point in development. It considers the extent to which early experiences (especially in infancy) or later experiences are the key determinants of a person's development. Stability is defined as the consistent ordering of individual differences with respect to some attribute; change refers to the alteration of those attributes or their ordering over time.
Most lifespan developmentalists recognize that extreme positions are unwise. A comprehensive understanding of development at any stage therefore requires consideration of the interaction of multiple factors, not just one.
Theory of mind
Theory of mind is the ability to attribute mental states to ourselves and others. It is a complex but vital process through which children begin to understand the emotions, motives, and feelings of both themselves and others. Theory of mind allows people to understand that others have beliefs and desires that differ from their own, which enables them to engage in everyday social interaction by interpreting the mental states around them. If a child does not adequately develop theory of mind during the first five years of life, they can suffer from communication barriers that follow them into adolescence and adulthood. Exposure to other people, and the availability of stimuli that encourage social-cognitive growth, depend heavily on the family.
Mathematical models
Developmental psychology is concerned not only with describing the characteristics of psychological change over time but also seeks to explain the principles and internal workings underlying these changes. Psychologists have attempted to better understand these factors by using models. A model must simply account for the means by which a process takes place. This is sometimes done in reference to changes in the brain that may correspond to changes in behavior over the course of the development.
Mathematical modeling is useful in developmental psychology for implementing theory in a precise and easy-to-study manner, allowing generation, explanation, integration, and prediction of diverse phenomena. Several modeling techniques are applied to development: symbolic, connectionist (neural network), or dynamical systems models.
Dynamic systems models illustrate how many different features of a complex system may interact to yield emergent behaviors and abilities. Nonlinear dynamics has been applied to human systems specifically to address issues that require attention to temporality, such as life transitions, human development, and behavioral or emotional change over time. Nonlinear dynamic systems approaches are currently being explored as a way to explain discrete phenomena of human development such as affect, second language acquisition, and locomotion.
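As a hedged, minimal sketch (not a reproduction of any published model, and with arbitrary parameter values), a dynamic systems account can be written as coupled difference equations in which two developmental "growers" each follow logistic growth toward a ceiling and mildly support one another:

# Illustrative only: two coupled logistic "growers" (e.g., two emerging skills).
# Growth rates, the ceiling k, and the coupling strength c are arbitrary choices.
def step(a, b, r_a=0.30, r_b=0.25, k=1.0, c=0.05):
    a_next = a + r_a * a * (1 - a / k) + c * a * b   # logistic growth plus support from b
    b_next = b + r_b * b * (1 - b / k) + c * a * b   # logistic growth plus support from a
    return min(a_next, k), min(b_next, k)            # cap both growers at the ceiling

a, b = 0.01, 0.01              # small initial skill levels
history = [(a, b)]
for _ in range(60):            # simulate 60 time steps
    a, b = step(a, b)
    history.append((a, b))

print(history[-1])             # both skills approach the ceiling k

S-shaped curves and sudden-looking spurts can emerge from such simple nonlinear rules, which is one reason dynamic systems models appeal to researchers studying change over time.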
Research areas
Neural development
One critical aspect of developmental psychology is the study of neural development, which investigates how the brain changes and develops during different stages of life. Studies have shown that the human brain undergoes rapid changes during the prenatal and early postnatal periods. These changes include the formation of neurons, the development of neural networks, and the establishment of synaptic connections. The formation of neurons and the establishment of basic neural circuits in the developing brain are crucial for laying the foundation of the brain's structure and function, and disruptions during this period can have long-term effects on cognitive and emotional development.
Experiences and environmental factors play a crucial role in shaping neural development. Early sensory experiences, such as exposure to language and visual stimuli, can influence the development of neural pathways related to perception and language processing.
Genetic factors also play a major role in neural development. They can influence the timing and pattern of neural development, as well as the susceptibility to certain developmental disorders, such as autism spectrum disorder and attention-deficit/hyperactivity disorder.
Research finds that the adolescent brain undergoes significant changes in neural connectivity and plasticity. During this period, there is a pruning process where certain neural connections are strengthened while others are eliminated, resulting in more efficient neural networks and increased cognitive abilities, such as decision-making and impulse control.
The study of neural development provides crucial insights into the complex interplay between genetics, environment, and experiences in shaping the developing brain. By understanding the neural processes underlying developmental changes, researchers gain a better understanding of cognitive, emotional, and social development in humans.
Cognitive development
Cognitive development is primarily concerned with how infants and children acquire, develop, and use internal mental capabilities such as: problem-solving, memory, and language. Major topics in cognitive development are the study of language acquisition and the development of perceptual and motor skills. Piaget was one of the influential early psychologists to study the development of cognitive abilities. His theory suggests that development proceeds through a set of stages from infancy to adulthood and that there is an end point or goal.
Other accounts, such as that of Lev Vygotsky, have suggested that development does not progress through stages, but rather that the developmental process that begins at birth and continues until death is too complex for such structure and finality. Rather, from this viewpoint, developmental processes proceed more continuously. Thus, development should be analyzed, instead of treated as a product to obtain.
K. Warner Schaie has expanded the study of cognitive development into adulthood. Rather than being stable from adolescence, Schaie sees adults as progressing in the application of their cognitive abilities.
Modern cognitive development has integrated the considerations of cognitive psychology and the psychology of individual differences into the interpretation and modeling of development. Specifically, the neo-Piagetian theories of cognitive development showed that the successive levels or stages of cognitive development are associated with increasing processing efficiency and working memory capacity. These increases explain differences between stages, progression to higher stages, and individual differences of children who are the same-age and of the same grade-level. However, other theories have moved away from Piagetian stage theories, and are influenced by accounts of domain-specific information processing, which posit that development is guided by innate evolutionarily-specified and content-specific information processing mechanisms.
Social and emotional development
Developmental psychologists who are interested in social development examine how individuals develop social and emotional competencies. For example, they study how children form friendships, how they understand and deal with emotions, and how identity develops. Research in this area may involve study of the relationship between cognition or cognitive development and social behavior.
Emotional regulation or ER refers to an individual's ability to modulate emotional responses across a variety of contexts. In young children, this modulation is in part controlled externally, by parents and other authority figures. As children develop, they take on more and more responsibility for their internal state. Studies have shown that the development of ER is affected by the emotional regulation children observe in parents and caretakers, the emotional climate in the home, and the reaction of parents and caretakers to the child's emotions.
Music also has an influence on stimulating and enhancing the senses of a child through self-expression.
A child's social and emotional development can be disrupted by motor coordination problems, evidenced by the environmental stress hypothesis. The environmental hypothesis explains how children with coordination problems and developmental coordination disorder are exposed to several psychosocial consequences which act as secondary stressors, leading to an increase in internalizing symptoms such as depression and anxiety. Motor coordination problems affect fine and gross motor movement as well as perceptual-motor skills. Secondary stressors commonly identified include the tendency for children with poor motor skills to be less likely to participate in organized play with other children and more likely to feel socially isolated.
Social and emotional development focuses on five key areas: Self-Awareness, Self-Management, Social Awareness, Relationship Skills, and Responsible Decision-Making.
Physical development
Physical development concerns the physical maturation of an individual's body until it reaches adult stature. Although physical growth is a highly regular process, all children differ tremendously in the timing of their growth spurts. Studies are being done to analyze how differences in these timings affect and are related to other variables of developmental psychology such as information processing speed. Traditional measures of physical maturity using x-rays are less common in practice nowadays than simple measurements of body parts such as height, weight, head circumference, and arm span.
Other research topics in physical developmental psychology include the phonological abilities of mature 5- to 11-year-olds and the controversial hypothesis that left-handers are maturationally delayed compared to right-handers. A study by Eaton, Chipperfield, Ritchot, and Kostiuk in 1996 found, in three different samples, no difference between right- and left-handers.
Memory development
Researchers interested in memory development look at the way our memory develops from childhood and onward. According to fuzzy-trace theory, a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd, people have two separate memory processes: verbatim and gist. These two traces begin to develop at different times as well as at a different pace. Children as young as four years old have verbatim memory, memory for surface information, which increases up to early adulthood, at which point it begins to decline. On the other hand, our capacity for gist memory, memory for semantic information, increases up to early adulthood, at which point it is consistent through old age. Furthermore, one's reliance on gist memory traces increases as one ages.
Research methods and designs
Main research methods
Developmental psychology employs many of the research methods used in other areas of psychology. However, infants and children cannot be tested in the same ways as adults, so different methods are often used to study their development.
Developmental psychologists have a number of methods to study changes in individuals over time. Common research methods include systematic observation, including naturalistic observation or structured observation; self-reports, which could be clinical interviews or structured interviews; the clinical or case study method; and ethnography or participant observation. These methods differ in the extent of control researchers impose on study conditions, and in how they construct ideas about which variables to study. Every developmental investigation can be characterized in terms of whether its underlying strategy involves the experimental, correlational, or case study approach. The experimental method involves "actual manipulation of various treatments, circumstances, or events to which the participant or subject is exposed"; the experimental design points to cause-and-effect relationships. This method allows for strong inferences to be made about causal relationships between the manipulation of one or more independent variables and subsequent behavior, as measured by the dependent variable. The advantage of this method is that it permits determination of cause-and-effect relationships among variables; the limitation is that data obtained in an artificial environment may lack generalizability. The correlational method explores the relationship between two or more events by gathering information about these variables without researcher intervention. The advantage of a correlational design is that it estimates the strength and direction of relationships among variables in the natural environment; the limitation is that it does not permit determination of cause-and-effect relationships among variables. The case study approach allows investigators to obtain an in-depth understanding of an individual participant by collecting data based on interviews, structured questionnaires, observations, and test scores. Each of these methods has its strengths and weaknesses, but the experimental method, when appropriate, is preferred by developmental scientists because it provides a controlled situation and allows conclusions to be drawn about cause-and-effect relationships.
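As a minimal sketch with invented numbers (the variables and values below are hypothetical and only illustrate the contrast drawn above), a correlational analysis estimates the strength and direction of an association, while an experimental analysis compares an outcome across manipulated conditions:

from statistics import mean, stdev

# Hypothetical correlational data: no manipulation, just two measured variables.
hours_read_to = [1, 3, 2, 5, 4, 6, 2, 7]
vocab_score   = [10, 14, 12, 20, 17, 24, 11, 26]

def pearson_r(x, y):
    # Sample covariance divided by the product of sample standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(round(pearson_r(hours_read_to, vocab_score), 2))   # strength and direction only

# Hypothetical experimental data: outcome under a manipulated treatment vs. control.
treatment = [15, 18, 17, 20]
control   = [12, 13, 11, 14]
print(mean(treatment) - mean(control))   # causal inference also requires random assignment

The correlation says nothing about which variable, if either, causes the other; only the manipulated comparison, together with random assignment, supports a cause-and-effect conclusion.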
Research designs
Most developmental studies, regardless of whether they employ the experimental, correlational, or case study method, can also be constructed using research designs. Research designs are logical frameworks used to make key comparisons within research studies such as:
cross-sectional design
longitudinal design
sequential design
microgenetic design
In a longitudinal study, a researcher observes many individuals born at or around the same time (a cohort) and carries out new observations as members of the cohort age. This method can be used to draw conclusions about which types of development are universal (or normative) and occur in most members of a cohort. As an example, a longitudinal study of early literacy development examined in detail the early literacy experiences of one child in each of 30 families.
Researchers may also observe ways that development varies between individuals, and hypothesize about the causes of variation in their data. Longitudinal studies often require large amounts of time and funding, making them unfeasible in some situations. Also, because members of a cohort all experience historical events unique to their generation, apparently normative developmental trends may, in fact, be universal only to their cohort.
In a cross-sectional study, a researcher observes differences between individuals of different ages at the same time. This generally requires fewer resources than the longitudinal method, and because the individuals come from different cohorts, shared historical events are not so much of a confounding factor. By the same token, however, cross-sectional research may not be the most effective way to study differences between participants, as these differences may result not from their different ages but from their exposure to different historical events.
A third study design, the sequential design, combines both methodologies. Here, a researcher observes members of different birth cohorts at the same time, and then tracks all participants over time, charting changes in the groups. While much more resource-intensive, this format aids in a clearer distinction between changes that can be attributed to the individual or to the historical environment and those that are truly universal.
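The difference between the three designs can be made concrete with a small sketch; the cohorts and measurement years below are hypothetical and chosen only to show which observations each design collects:

# Hypothetical birth cohorts and measurement occasions.
cohorts = [2000, 2005, 2010]
years   = [2015, 2020, 2025]

cross_sectional = [(c, years[0]) for c in cohorts]          # every cohort, one occasion
longitudinal    = [(cohorts[0], y) for y in years]          # one cohort, every occasion
sequential      = [(c, y) for c in cohorts for y in years]  # every cohort, every occasion

for name, design in [("cross-sectional", cross_sectional),
                     ("longitudinal", longitudinal),
                     ("sequential", sequential)]:
    ages = [year - cohort for cohort, year in design]
    print(name, ages)   # ages observed under each design

Because the sequential design observes the same ages in more than one cohort, age effects can be separated from cohort effects in a way the other two designs cannot manage alone.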
Because every method has some weaknesses, developmental psychologists rarely rely on one study or even one method to reach conclusions; instead, they look for consistent evidence from as many converging sources as possible.
Life stages of psychological development
Prenatal development
Prenatal development is of interest to psychologists investigating the context of early psychological development. Prenatal development involves three main stages: the germinal stage, from conception to about two weeks; the embryonic stage, from two to eight weeks; and the fetal stage, from nine weeks until birth. The senses develop in the womb itself: a fetus can both see and hear by the second trimester (13 to 24 weeks of age). The sense of touch develops in the embryonic stage (5 to 8 weeks). Most of the brain's billions of neurons are also developed by the second trimester. Babies are hence born with some odor, taste, and sound preferences, largely related to the mother's environment.
Some primitive reflexes too arise before birth and are still present in newborns. One hypothesis is that these reflexes are vestigial and have limited use in early human life. Piaget's theory of cognitive development suggested that some early reflexes are building blocks for infant sensorimotor development. For example, the tonic neck reflex may help development by bringing objects into the infant's field of view.
Other reflexes, such as the walking reflex, appear to be replaced by more sophisticated voluntary control later in infancy. This may be because the infant gains too much weight after birth to be strong enough to use the reflex, or because the reflex and subsequent development are functionally different. It has also been suggested that some reflexes (for example the moro and walking reflexes) are predominantly adaptations to life in the womb with little connection to early infant development. Primitive reflexes reappear in adults under certain conditions, such as neurological conditions like dementia or traumatic lesions.
Ultrasounds have shown that infants are capable of a range of movements in the womb, many of which appear to be more than simple reflexes. By the time they are born, infants can recognize and have a preference for their mother's voice suggesting some prenatal development of auditory perception. Prenatal development and birth complications may also be connected to neurodevelopmental disorders, for example in schizophrenia. With the advent of cognitive neuroscience, embryology and the neuroscience of prenatal development is of increasing interest to developmental psychology research.
Several environmental agents, known as teratogens, can cause damage during the prenatal period. These include prescription and nonprescription drugs, illegal drugs, tobacco, alcohol, environmental pollutants, infectious disease agents such as the rubella virus and the toxoplasmosis parasite, maternal malnutrition, maternal emotional stress, and Rh factor blood incompatibility between mother and child. Many statistics illustrate the effects of these substances; for example, at least 100,000 "cocaine babies" were reported born in the United States annually in the late 1980s. Children exposed to cocaine prenatally have been found to experience severe and lasting difficulties that persist throughout infancy and childhood, and the drug is also associated with behavioral problems and defects of various vital organs.
Infancy
From birth until the first year, children are referred to as infants. As they grow, children respond to their environment in unique ways. Developmental psychologists vary widely in their assessment of infant psychology, and the influence the outside world has upon it.
The majority of a newborn infant's time is spent sleeping. At first, their sleep cycles are spread evenly throughout the day and night, but after a couple of months, infants generally become diurnal. In both human and rodent infants, a diurnal cortisol rhythm can be observed, sometimes entrained by maternal factors. The circadian rhythm then takes shape, and a 24-hour rhythm is observed within a few months after birth.
Infants can be seen to have six states, grouped into pairs:
quiet sleep and active sleep (dreaming, when REM sleep occurs). There are various proposed explanations for why infants dream. Some argue that dreaming is simply a normal process of the brain, a form of processing and consolidating information obtained during the day. Freud argued that dreams are a way of representing unconscious desires.
quiet waking, and active waking
fussing and crying. In a normal set up, infants have different reasons as to why they cry. Mostly, infants cry due to physical discomfort, hunger, or to receive attention or stimulation from their caregiver.
Infant perception
Infant perception is what a newborn can see, hear, smell, taste, and touch. These are commonly considered the "five senses". Because of these different senses, infants respond to stimuli differently.
Vision is significantly worse in infants than in older children. Infant sight tends to be blurry in early stages but improves over time. Color perception, similar to that seen in adults, has been demonstrated in infants as young as four months using habituation methods. Infants attain adult-like vision at about six months.
Hearing is well-developed prior to birth. Newborns prefer complex sounds to pure tones, human speech to other sounds, their mother's voice to other voices, and their native language to other languages. Scientists believe these preferences are probably learned in the womb. Infants are fairly good at detecting the direction a sound comes from, and by 18 months their hearing ability is approximately equal to an adult's.
Smell and taste are present, with infants showing different expressions of disgust or pleasure when presented with pleasant odors (honey, milk, etc.) or unpleasant odors (rotten egg) and tastes (e.g. sour taste). Newborns are born with odor and taste preferences acquired in the womb from the smell and taste of amniotic fluid, in turn influenced by what the mother eats. Both breast- and bottle-fed babies around three days old prefer the smell of human milk to that of formula, indicating an innate preference. Older infants also prefer the smell of their mother to that of others.
Touch and feel is one of the better-developed senses at birth as it is one of the first senses to develop inside the womb. This is evidenced by the primitive reflexes described above, and the relatively advanced development of the somatosensory cortex.
Pain: Infants feel pain similarly to, if not more strongly than, older children, but pain relief in infants has not received as much attention as an area of research. Glucose is known to relieve pain in newborns.
Language
Babies are born with the ability to discriminate virtually all sounds of all human languages. Infants of around six months can differentiate between phonemes in their own language, but not between similar phonemes in another language. Notably, infants can differentiate between various durations and sound levels and can distinguish among the languages they have encountered, which makes it easier for an infant than for an adult to acquire a given language.
At this stage, infants also start to babble, producing vowel-consonant sounds as they attempt to grasp the meaning of language and to copy what they hear in their surroundings, producing their own phonemes.
In various cultures, a distinct form of speech called "babytalk" is used when communicating with newborns and young children. This register consists of simplified terms for common topics such as family members, food, hygiene, and familiar animals. It also exhibits specific phonological patterns, such as substituting alveolar sounds with initial velar sounds, especially in languages like English. Furthermore, babytalk often involves morphological simplifications, such as regularizing verb conjugations (for instance, saying "corned" instead of "cornered" or "goed" instead of "went"). This language is typically taught to children and is perceived as their natural way of communication. Interestingly, in mythology and popular culture, certain characters, such as the "Hausa trickster" or the Warner Bros cartoon character "Tweety Pie", are portrayed as speaking in a babytalk-like manner.
Infant cognition: the Piagetian era
Piaget suggested that an infant's perception and understanding of the world depended on their motor development, which was required for the infant to link visual, tactile, and motor representations of objects. The concept of object permanence refers to the knowledge that an object continues to exist even when it is not directly perceived or visible. This is a crucial developmental milestone for infants, who learn that something is not necessarily lost forever just because it is hidden. When a child displays object permanence, they will look for a toy that is hidden, showing that they are aware the item is still there even when it is covered by a blanket. Most babies start to show signs of object permanence around the age of eight months. According to this theory, infants develop object permanence through touching and handling objects.
Piaget's sensorimotor stage comprised six sub-stages (see sensorimotor stages for more detail). In the early stages, development arises out of movements caused by primitive reflexes. Discovery of new behaviors results from classical and operant conditioning, and the formation of habits. From eight months the infant is able to uncover a hidden object but will perseverate when the object is moved.
Piaget concluded that infants lacked object permanence before 18 months because infants below this age failed to look for an object where it had last been seen. Instead, infants continued to look for an object where it was first seen, committing the "A-not-B error". Some researchers have suggested that before the age of 8–9 months, infants' inability to understand object permanence extends to people, which explains why infants at this age do not cry when their mothers are gone ("out of sight, out of mind").
Recent findings in infant cognition
In the 1980s and 1990s, researchers developed new methods of assessing infants' understanding of the world with far more precision and subtlety than Piaget was able to do in his time. Since then, many studies based on these methods suggest that young infants understand far more about the world than first thought.
Based on recent findings, some researchers (such as Elizabeth Spelke and Renee Baillargeon) have proposed that an understanding of object permanence is not learned at all, but rather comprises part of the innate cognitive capacities of our species.
According to Jean Piaget's developmental psychology, object permanence, or the awareness that objects exist even when they are no longer visible, was thought to emerge gradually between the ages of 8 and 12 months. However, experts such as Elizabeth Spelke and Renee Baillargeon have questioned this notion. They studied infants' comprehension of object permanence at a young age using novel experimental approaches such as violation-of-expectation paradigms. These findings imply that children as young as 3 to 4 months old may have an innate awareness of object permanence. Baillargeon's "drawbridge" experiment, for example, showed that infants were surprised when they saw occurrences that contradicted object permanence expectations. This proposition has important consequences for our understanding of infant cognition, implying that infants may be born with core cognitive abilities rather than developing them via experience and learning.
Other research has suggested that young infants in their first six months of life may possess an understanding of numerous aspects of the world around them, including:
an early numerical cognition, that is, an ability to represent number and even compute the outcomes of addition and subtraction operations;
an ability to infer the goals of people in their environment;
an ability to engage in simple causal reasoning.
Critical periods of development
There are critical periods in infancy and childhood during which development of certain perceptual, sensorimotor, social, and language systems depends crucially on environmental stimulation. Feral children such as Genie, deprived of adequate stimulation, fail to acquire important skills and are unable to learn in later childhood. Genie, for example, was socially neglected and abused as a young child, with little human contact and no one to care for her, and as a result showed abnormal development, including severe problems with language. The concept of critical periods is also well-established in neurophysiology, from the work of Hubel and Wiesel among others. Infant neurophysiology provides correlations between neurophysiological findings and clinical features, and yields vital information on both rare and common neurological disorders that affect infants.
Developmental delays
Studies have compared children who have developmental delays (DD) with children showing typical development (TD). Normally, when the two groups are compared, mental age (MA) is not taken into consideration, yet differences may still exist between DD and TD children in behavioral, emotional, and other mental disorders. Because developmental delays can lower mental age, comparing DD children with TD children of the same chronological age may be misleading; pairing DD children with TD children of similar MA can be more accurate. Certain levels of behavioral difference are considered normal at certain ages, so when evaluating DD children against an MA-matched group, the question is whether those with DDs show a larger amount of behavior that is atypical for that MA group. Developmental delays also tend to contribute to other disorders or difficulties more often than in their TD counterparts.
Toddlerhood
Between the ages of one and two, infants shift to a developmental stage known as toddlerhood. In this stage, the transition into toddlerhood is highlighted through self-awareness, developing maturity in language use, and the presence of memory and imagination.
During toddlerhood, babies begin learning how to walk, talk, and make decisions for themselves. An important characteristic of this age period is the development of language, where children are learning how to communicate and express their emotions and desires through the use of vocal sounds, babbling, and eventually words. Self-control also begins to develop. At this age, children take the initiative to explore, experiment, and learn from making mistakes. Caretakers who encourage toddlers to try new things and test their limits help the child become autonomous, self-reliant, and confident. If the caretaker is overprotective or disapproving of independent actions, the toddler may begin to doubt their abilities and feel ashamed of the desire for independence. The child's development of autonomy is inhibited, leaving them less prepared to deal with the world in the future. Toddlers also begin to identify themselves in gender roles, acting according to their perception of what a man or woman should do.
Socially, the period of toddler-hood is commonly called the "terrible twos". Toddlers often use their new-found language abilities to voice their desires, but are often misunderstood by parents due to their language skills just beginning to develop. A person at this stage testing their independence is another reason behind the stage's infamous label. Tantrums in a fit of frustration are also common.
Childhood
Erik Erikson divides childhood into four stages, each with its distinct social crisis:
Stage 1: Infancy (0 to 1½) in which the psychosocial crisis is Trust vs. Mistrust
Stage 2: Early childhood (1½ to 3) in which the psychosocial crisis is Autonomy vs. Shame and doubt
Stage 3: Play age (3 to 5) in which the psychosocial crisis is Initiative vs. Guilt. (This stage is also called the "pre-school age", "exploratory age" and "toy age".)
Stage 4: School age (5 to 12) in which the psychosocial crisis is Industry vs. Inferiority
Infancy
As stated, the psychosocial crisis for Erikson is Trust versus Mistrust. Needs are the foundation for gaining or losing trust in the infant. If the needs are met, trust in the guardian and the world forms. If the needs are not met, or the infant is neglected, mistrust forms alongside feelings of anxiety and fear.
Early Childhood
Autonomy versus shame follows trust in infancy. The child begins to explore their world in this stage and discovers preferences in what they like. If autonomy is allowed, the child grows in independence and their abilities. If freedom of exploration is hindered, it leads to feelings of shame and low self-esteem.
Play (or preschool) ages 3–5.
In the earliest years, children are "completely dependent on the care of others". Therefore, they develop a "social relationship" with their care givers and, later, with family members. During their preschool years (3–5), they "enlarge their social horizons" to include people outside the family.
Preoperational and then operational thinking develop; with operational thought, actions become reversible and egocentric thought diminishes.
The motor skills of preschoolers increase so they can do more things for themselves. They become more independent. No longer completely dependent on the care of others, the world of this age group expands. More people have a role in shaping their individual personalities. Preschoolers explore and question their world. For Jean Piaget, the child is "a little scientist exploring and reflecting on these explorations to increase competence" and this is done in "a very independent way".
Play is a major activity for ages 3–5. For Piaget, through play "a child reaches higher levels of cognitive development."
In their expanded world, children in the 3–5 age group attempt to find their own way. If this is done in a socially acceptable way, the child develops initiative. If not, the child develops guilt. Children who develop "guilt" rather than "initiative" have failed Erikson's psychosocial crisis for the 3–5 age group.
Middle and Late childhood ages 6–12.
For Erik Erikson, the psychosocial crisis during middle childhood is Industry vs. Inferiority which, if successfully met, instills a sense of Competency in the child.
In all cultures, middle childhood is a time for developing "skills that will be needed in their society." School offers an arena in which children can gain a view of themselves as "industrious (and worthy)". They are "graded for their school work and often for their industry". They can also develop industry outside of school in sports, games, and doing volunteer work. Children who achieve "success in school or games might develop a feeling of competence."
The "peril during this period is that feelings of inadequacy and inferiority will develop. Parents and teachers can "undermine" a child's development by failing to recognize accomplishments or being overly critical of a child's efforts.
Children who are "encouraged and praised" develop a belief in their competence. Lack of encouragement or inability to excel leads to "feelings of inadequacy and inferiority".
The Centers for Disease Control (CDC) divides Middle Childhood into two stages, 6–8 years and 9–11 years, and gives "developmental milestones for each stage".
Middle Childhood (6–8).
Entering elementary school, children in this age group begin to think about the future and their "place in the world". Working with other students and wanting their friendship and acceptance become more important. This leads to "more independence from parents and family". As students, they develop the mental and verbal skills "to describe experiences and talk about thoughts and feelings". They become less self-centered and show "more concern for others".
Late Childhood (9–11).
For children ages 9–11 "friendships and peer relationships" increase in strength, complexity, and importance. This results in greater "peer pressure". They grow even less dependent on their families and they are challenged academically. To meet this challenge, they increase their attention span and learn to see other points of view.
Adolescence
Adolescence is the period of life between the onset of puberty and the full commitment to an adult social role, such as worker, parent, and/or citizen. It is the period known for the formation of personal and social identity (see Erik Erikson) and the discovery of moral purpose (see William Damon). Intelligence is demonstrated through the logical use of symbols related to abstract concepts and formal reasoning. A return to egocentric thought often occurs early in the period. Only 35% develop the capacity to reason formally during adolescence or adulthood. (Huitt, W. and Hummel, J. January 1998)
Erik Erikson labels this stage identity versus role confusion. Erikson emphasizes the importance of developing a sense of identity in adolescence because it affects the individual throughout their life. Identity formation is a lifelong process and is related to curiosity and active engagement. Role confusion is often considered the individual's current state of identity. Identity exploration is the process of moving from role confusion to resolution.
During Erik Erikson's identity versus role confusion stage, which occurs in adolescence, people struggle to form a cohesive sense of self while exploring many social roles and prospective life routes. This time is characterized by deep introspection, self-examination, and the pursuit of self-understanding. Adolescents are confronted with questions regarding their identity, beliefs, and future goals. The major problem is building a strong sense of identity in the face of societal standards, peer pressure, and personal preferences. Adolescents participate in identity exploration, commitment, and synthesis, actively seeking out new experiences, embracing ideals and aspirations, and merging their changing sense of self into a coherent identity. Successfully navigating this stage builds the groundwork for good psychological development in adulthood, allowing people to pursue meaningful relationships, make positive contributions to society, and handle life's adversities with perseverance and purpose.
It is divided into three parts, namely:
Early Adolescence: 9 to 13 years
Mid Adolescence: 13 to 15 years and
Late Adolescence: 15 to 18 years
The adolescent unconsciously explores questions such as "Who am I? Who do I want to be?" Like toddlers, adolescents must explore, test limits, become autonomous, and commit to an identity, or sense of self. Different roles, behaviors and ideologies must be tried out to select an identity. Role confusion and inability to choose vocation can result from a failure to achieve a sense of identity through, for example, friends.
Early adulthood
Early adulthood generally refers to the period between ages 18 and 39, and according to theorists such as Erik Erikson, is a stage where development is mainly focused on maintaining relationships. Erikson shows the importance of relationships by labeling this stage intimacy vs isolation. Intimacy suggests a process of becoming part of something larger than oneself by making sacrifices in romantic relationships and working for both life and career goals. Other examples include creating bonds of intimacy, sustaining friendships, and starting a family. Some theorists state that development of intimacy skills relies on the resolution of previous developmental stages. A sense of identity gained in the previous stages is also necessary for intimacy to develop. If this skill is not learned, the alternative is alienation, isolation, a fear of commitment, and the inability to depend on others.
Isolation, on the other hand, suggests something different than most might expect. Erikson defined it as a delay of commitment in order to maintain freedom. Yet, this decision does not come without consequences. Erikson explained that choosing isolation may affect one's chances of getting married, progressing in a career, and overall development.
A related framework for studying this part of the lifespan is that of emerging adulthood. Scholars of emerging adulthood, such as Jeffrey Arnett, are not necessarily interested in relationship development. Instead, this concept suggests that people transition after their teenage years into a period, not characterized as relationship building and an overall sense of constancy with life, but with years of living with parents, phases of self-discovery, and experimentation.
Middle adulthood
Middle adulthood generally refers to the period between ages 40 to 64. During this period, middle-aged adults experience a conflict between generativity and stagnation. Generativity is the sense of contributing to society, the next generation, or their immediate community. On the other hand, stagnation results in a lack of purpose. The adult's identity continues to develop in middle adulthood. Middle-aged adults often adopt opposite gender characteristics. The adult realizes they are halfway through their life and often reevaluates vocational and social roles. Life circumstances can also cause a reexamination of identity.
Physically, the middle-aged experience a decline in muscular strength, reaction time, sensory keenness, and cardiac output. Also, women experience menopause at an average age of 48.8 and a sharp drop in the hormone estrogen. Men experience an endocrine system event roughly equivalent to menopause. Andropause in males is a hormone fluctuation with physical and psychological effects that can be similar to those seen in menopausal females. As men age, lowered testosterone levels can contribute to mood swings and a decline in sperm count. Sexual responsiveness can also be affected, including delays in erection and longer periods of penile stimulation required to achieve ejaculation.
The important influence of the biological and social changes experienced by women and men in middle adulthood is reflected in the finding that, worldwide, depression rates peak around age 48.5.
Old age
The World Health Organization finds "no general agreement on the age at which a person becomes old." Most "developed countries" set the age as 65 or 70. However, in developing countries inability to make "active contribution" to society, not chronological age, marks the beginning of old age. According to Erikson's stages of psychosocial development, old age is the stage in which individuals assess the quality of their lives.
Erikson labels this stage as integrity versus despair. For integrated persons, there is a sense of fulfillment in life. They have become self-aware and optimistic due to life's commitments and connection to others. While reflecting on life, people in this stage develop feelings of contentment with their experiences. If a person falls into despair, they are often disappointed about failures or missed chances in life. They may feel that the time left in life is an insufficient amount to turn things around.
Physically, older people experience a decline in muscular strength, reaction time, stamina, hearing, distance perception, and the sense of smell. They also are more susceptible to diseases such as cancer and pneumonia due to a weakened immune system. Programs aimed at balance, muscle strength, and mobility have been shown to reduce disability among mildly (but not more severely) disabled elderly.
Sexual expression depends in large part upon the emotional and physical health of the individual. Many older adults continue to be sexually active and satisfied with their sexual activity.
Mental disintegration may also occur, leading to dementia or ailments such as Alzheimer's disease. The average age of onset for dementia is 78.8 in males and 81.9 in females. It is generally believed that crystallized intelligence increases up to old age, while fluid intelligence decreases with age. Whether or not normal intelligence increases or decreases with age depends on the measure and study. Longitudinal studies show that perceptual speed, inductive reasoning, and spatial orientation decline. An article on adult cognitive development reports that cross-sectional studies show that "some abilities remained stable into early old age".
Parenting
Parenting variables alone have typically accounted for 20 to 50 percent of the variance in child outcomes.
All parents have their own parenting styles. Parenting styles, according to Kimberly Kopko, are "based upon two aspects of parenting behavior; control and warmth. Parental control refers to the degree to which parents manage their children's behavior. Parental warmth refers to the degree to which parents are accepting and responsive to their children's behavior."
Parenting styles
The following parenting styles have been described in the child development literature:
Authoritative parenting is characterized as parents who have high parental warmth, responsiveness, and demandingness, but rate low in negativity and conflict. These parents are assertive but not intrusive or overly restrictive. This method of parenting is associated with more positive social and academic outcomes. The beneficial outcomes of authoritative parenting are not necessarily universal. Among African American adolescents, authoritative parenting is not associated with academic achievement without peer support for achievement. Children who are raised by authoritative parents are "more likely to become independent, self-reliant, socially accepted, academically successful, and well-behaved. They are less likely to report depression and anxiety, and less likely to engage in antisocial behavior like delinquency and drug use."
Authoritarian parenting is characterized by low levels of warmth and responsiveness with high levels of demandingness and firm control. These parents focus on obedience and monitor their children regularly. In general, this style of parenting is associated with maladaptive outcomes. The outcomes are more harmful for middle-class boys than for girls, for preschool white girls than for preschool black girls, and for white boys than for Hispanic boys.
Permissive parenting is characterized by high levels of responsiveness combined with low levels of demandingness. These parents are lenient and do not necessarily require mature behavior. They allow for a high degree of self-regulation and typically avoid confrontation. Compared to children raised using the authoritative style, preschool girls raised in permissive families are less assertive. Additionally, preschool children of both sexes are less cognitively competent than those children raised under authoritative parenting styles.
Rejecting or neglectful parenting is characterized by low levels of demandingness and responsiveness. These parents are usually unsupportive, unstructured, and disinterested in their children's lives. Low degrees of reactivity and demandingness are characteristics of this parenting style. Children in this category are typically the least competent of all the categories.
Mother and father factors
Parenting roles in child development have typically focused on the role of the mother. Recent literature, however, has looked toward the father as having an important role in child development. Affirming a role for fathers, studies have shown that children as young as 15 months benefit significantly from substantial engagement with their father. In particular, a study in the U.S. and New Zealand found the presence of the natural father was the most significant factor in reducing rates of early sexual activity and rates of teenage pregnancy in girls. Furthermore, another argument is that neither a mother nor a father is actually essential in successful parenting, and that single parents as well as homosexual couples can support positive child outcomes. According to this set of research, children need at least one consistently responsible adult with whom the child can have a positive emotional connection. Having more than one of these figures contributes to a higher likelihood of positive child outcomes.
Divorce
Another parental factor often debated in terms of its effects on child development is divorce. Divorce in itself is not a determining factor of negative child outcomes. In fact, the majority of children from divorcing families fall into the normal range on measures of psychological and cognitive functioning. A number of mediating factors play a role in determining the effects divorce has on a child, for example, divorcing families with young children often face harsher consequences in terms of demographic, social, and economic changes than do families with older children. Positive coparenting after divorce is part of a pattern associated with positive child coping, while hostile parenting behaviors lead to a destructive pattern leaving children at risk. Additionally, direct parental relationship with the child also affects the development of a child after a divorce. Overall, protective factors facilitating positive child development after a divorce are maternal warmth, positive father-child relationship, and cooperation between parents.
Cross-cultural
One way to improve developmental psychology is better representation of cross-cultural studies. The psychology field in general assumes that "basic" human developments are represented in any population, yet relies for a majority of its studies on Western, Educated, Industrialized, Rich and Democratic (W.E.I.R.D.) subjects. Previous research has generalized findings from W.E.I.R.D. samples because many in the psychological field assume certain aspects of development are exempt from, or unaffected by, life experiences. However, many of these assumptions have been proven incorrect or are not supported by empirical research. For example, according to Kohlberg, moral reasoning is dependent on cognitive abilities. While both analytical and holistic cognitive systems have the potential to develop in any adult, the West still lies at the extreme end of analytical thinking, while non-Western populations tend to use holistic processes. Furthermore, moral reasoning in the West only considers aspects that support autonomy and the individual, whereas non-Western adults emphasize moral behaviors supporting the community and maintaining an image of holiness or divinity. Not all aspects of human development are universal, and much can be learned from observing different regions and populations.
Indian model of human development
An example of a non-Western model of developmental stages is the Indian model, which focuses much of its psychological research on morality and interpersonal progress. The developmental stages in Indian models are grounded in Hinduism, which primarily teaches stages of life in the process of a person discovering their fate or Dharma. This cross-cultural model can add another perspective to psychological development, emphasizing kinship, ethnicity, and religion in ways that Western behavioral sciences have not.
Indian psychologists study the relevance of attentive families during the early stages of life. The early life stages reflect a parenting style different from the West's, because it does not try to rush children out of dependency. The family is meant to help the child grow into the next developmental stage at a particular age. This way, when children finally integrate into society, they are interconnected with those around them and reach renunciation when they are older. Children are raised in joint families so that in early childhood (ages 6 months to 2 years) the other family members help gradually wean the child from its mother. During ages 2 to 5, the parents do not rush toilet training; rather than being trained to perform this behavior, the child learns to do it at their own pace as they mature.
This model of early human development encourages dependency, unlike Western models that value autonomy and independence. By being attentive and not forcing the child to become independent, they are confident and have a sense of belonging by late childhood and adolescence. This stage in life (5–15 years) is also when children start education and increase their knowledge of Dharma. It is within early and middle adulthood that we see moral development progress. Early, middle, and late adulthood are all concerned with caring for others and fulfilling Dharma. The main distinction between early adulthood to middle or late adulthood is how far their influence reaches. Early adulthood emphasizes the importance of fulfilling the immediate family needs, until later adulthood when they broaden their responsibilities to the general public. The old-age life stage development reaches renunciation or a complete understanding of Dharma.
Current mainstream views in the psychological field are against the Indian model of human development. The criticism of such models is that the parenting style is overly protective and encourages too much dependency, and that it focuses on interpersonal rather than individual goals. There are some overlaps and similarities between Erikson's stages of human development and the Indian model, but the two still have major differences. The West prefers Erikson's ideas over the Indian model because they are supported by scientific studies; the life cycles based on Hinduism are less favored because they are not supported by research and focus on an ideal of human development.
See also
Journals
Autism Research
Child Development
Development and Psychopathology
Developmental Neuropsychology
Developmental Psychology
Developmental Review
Developmental Science
Human Development (journal)
Journal of Abnormal Child Psychology
Journal of Adolescent Health
Journal of Autism and Developmental Disorders
Journal of Child Psychology and Psychiatry
Journal of Clinical Child and Adolescent Psychology
Journal of Pediatric Psychology
Journal of Research on Adolescence
Journal of Youth and Adolescence
Journal of the American Academy of Child and Adolescent Psychiatry
Psychology and Aging
Research in Autism Spectrum Disorders
References
Further reading
External links
The Society for Research in Child Development
The British Psychological Society, Developmental Psychology Section
Developmental Psychology: lessons for teaching and learning developmental psychology
GMU's On-Line Resources for Developmental Psychology: a web directory of developmental psychology organizations
Home Economics Archive: Research, Tradition, History (HEARTH): an e-book collection of over 1,000 books spanning 1850 to 1950, created by Cornell University's Mann Library. Includes several hundred works on human development, child raising, and family studies itemized in a specific bibliography.
Developmental psychology Subject Area page at PLOS
Behavioural sciences | Developmental psychology | [
"Biology"
] | 15,264 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
9,015 | https://en.wikipedia.org/wiki/DNA%20replication | In molecular biology, DNA replication is the biological process of producing two identical replicas of DNA from one original DNA molecule. DNA replication occurs in all living organisms acting as the most essential part of biological inheritance. This is essential for cell division during growth and repair of damaged tissues, while it also ensures that each of the new cells receives its own copy of the DNA. The cell possesses the distinctive property of division, which makes replication of DNA essential.
DNA is made up of a double helix of two complementary strands. The double helix describes the appearance of a double-stranded DNA which is thus composed of two linear strands that run opposite to each other and twist together. During replication, these strands are separated. Each strand of the original DNA molecule then serves as a template for the production of its counterpart, a process referred to as semiconservative replication. As a result of semi-conservative replication, the new helix will be composed of an original DNA strand as well as a newly synthesized strand. Cellular proofreading and error-checking mechanisms ensure near perfect fidelity for DNA replication.
In a cell, DNA replication begins at specific locations, or origins of replication, in the genome which contains the genetic material of an organism. Unwinding of DNA at the origin and synthesis of new strands, accommodated by an enzyme known as helicase, results in replication forks growing bi-directionally from the origin. A number of proteins are associated with the replication fork to help in the initiation and continuation of DNA synthesis. Most prominently, DNA polymerase synthesizes the new strands by adding nucleotides that complement each (template) strand. DNA replication occurs during the S-stage of interphase.
DNA replication (DNA amplification) can also be performed in vitro (artificially, outside a cell). DNA polymerases isolated from cells and artificial DNA primers can be used to start DNA synthesis at known sequences in a template DNA molecule. Polymerase chain reaction (PCR), ligase chain reaction (LCR), and transcription-mediated amplification (TMA) are examples. In March 2021, researchers reported evidence suggesting that a preliminary form of transfer RNA, a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code, could have been a replicator molecule itself in the very early development of life, or abiogenesis.
DNA structure
DNA exists as a double-stranded structure, with both strands coiled together to form the characteristic double helix. Each single strand of DNA is a chain of four types of nucleotides. Nucleotides in DNA contain a deoxyribose sugar, a phosphate, and a nucleobase. The four types of nucleotide correspond to the four nucleobases adenine, cytosine, guanine, and thymine, commonly abbreviated as A, C, G, and T. Adenine and guanine are purine bases, while cytosine and thymine are pyrimidines. These nucleotides form phosphodiester bonds, creating the phosphate-deoxyribose backbone of the DNA double helix with the nucleobases pointing inward (i.e., toward the opposing strand). Nucleobases are matched between strands through hydrogen bonds to form base pairs. Adenine pairs with thymine (two hydrogen bonds), and guanine pairs with cytosine (three hydrogen bonds).
DNA strands have a directionality, and the different ends of a single strand are called the "3′ (three-prime) end" and the "5′ (five-prime) end". By convention, if the base sequence of a single strand of DNA is given, the left end of the sequence is the 5′ end, while the right end of the sequence is the 3′ end. The strands of the double helix are anti-parallel, with one being 5′ to 3′, and the opposite strand 3′ to 5′. These terms refer to the carbon atom in deoxyribose to which the next phosphate in the chain attaches. Directionality has consequences in DNA synthesis, because DNA polymerase can synthesize DNA in only one direction by adding nucleotides to the 3′ end of a DNA strand.
The pairing of complementary bases in DNA (through hydrogen bonding) means that the information contained within each strand is redundant. Phosphodiester (intra-strand) bonds are stronger than hydrogen (inter-strand) bonds. The phosphodiester bonds connect the 5′ carbon atom of one nucleotide to the 3′ carbon atom of the next, while the hydrogen bonds stabilize the DNA double helix across the helix axis but not in the direction of the axis. This makes it possible to separate the strands from one another. The nucleotides on a single strand can therefore be used to reconstruct nucleotides on a newly synthesized partner strand.
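Because each base uniquely determines its partner, either strand can be reconstructed from the other. A minimal Python sketch of that idea (the base-pairing map and the anti-parallel orientation come from the text above; the function name and interface are illustrative only):

```python
# Complementary base pairs: A-T and G-C (hydrogen bonding).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, written 5' to 3'.

    Because the two strands of the double helix are anti-parallel,
    the partner of a 5'->3' sequence is read back in reverse order.
    """
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

# Example: the template 5'-ATGC-3' pairs with 3'-TACG-5',
# which written 5' to 3' is "GCAT".
assert reverse_complement("ATGC") == "GCAT"
```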
DNA polymerase
DNA polymerases are a family of enzymes that carry out all forms of DNA replication. DNA polymerases in general cannot initiate synthesis of new strands but can only extend an existing DNA or RNA strand paired with a template strand. To begin synthesis, a short fragment of RNA, called a primer, must be created and paired with the template DNA strand.
DNA polymerase adds a new strand of DNA by extending the 3′ end of an existing nucleotide chain, adding new nucleotides matched to the template strand, one at a time, via the creation of phosphodiester bonds. The energy for this process of DNA polymerization comes from hydrolysis of the high-energy phosphate (phosphoanhydride) bonds between the three phosphates attached to each unincorporated base. Free bases with their attached phosphate groups are called nucleotides; in particular, bases with three attached phosphate groups are called nucleoside triphosphates. When a nucleotide is being added to a growing DNA strand, the formation of a phosphodiester bond between the proximal phosphate of the nucleotide to the growing chain is accompanied by hydrolysis of a high-energy phosphate bond with release of the two distal phosphate groups as a pyrophosphate. Enzymatic hydrolysis of the resulting pyrophosphate into inorganic phosphate consumes a second high-energy phosphate bond and renders the reaction effectively irreversible.
In general, DNA polymerases are highly accurate, with an intrinsic error rate of less than one mistake for every 10^7 nucleotides added. Some DNA polymerases can also delete nucleotides from the end of a developing strand in order to fix mismatched bases. This is known as proofreading. Finally, post-replication mismatch repair mechanisms monitor the DNA for errors, being capable of distinguishing mismatches in the newly synthesized DNA strand from the original strand sequence. Together, these three discrimination steps enable replication fidelity of less than one mistake for every 10^9 nucleotides added.
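The overall fidelity can be thought of as the product of the error probabilities surviving each discrimination step. A rough back-of-the-envelope sketch in Python, using purely illustrative per-step factors (the text only states the combined bounds of roughly one error in 10^7 intrinsically and one in 10^9 overall):

```python
# Illustrative (assumed) per-nucleotide error probabilities at each step.
base_selection_error = 1e-5   # polymerase inserts the wrong base (assumed)
proofreading_miss    = 1e-2   # proofreading fails to excise it (assumed)
mismatch_repair_miss = 1e-2   # post-replication mismatch repair misses it (assumed)

# An error survives only if every discrimination step fails.
overall_error = base_selection_error * proofreading_miss * mismatch_repair_miss
print(f"~1 error per {1 / overall_error:.0e} nucleotides")  # ~1 per 1e+09
```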
The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 per 10^8.
Replication process
DNA replication, like all biological polymerization processes, proceeds in three enzymatically catalyzed and coordinated steps: initiation, elongation and termination.
Initiation
For a cell to divide, it must first replicate its DNA. DNA replication is an all-or-none process; once replication begins, it proceeds to completion. Once replication is complete, it does not occur again in the same cell cycle. This is made possible by the division of initiation of the pre-replication complex.
Pre-replication complex
In late mitosis and early G1 phase, a large complex of initiator proteins assembles into the pre-replication complex at particular points in the DNA, known as "origins". In E. coli the primary initiator protein is DnaA; in yeast, this is the origin recognition complex. Sequences used by initiator proteins tend to be "AT-rich" (rich in adenine and thymine bases), because A-T base pairs have two hydrogen bonds (rather than the three formed in a C-G pair) and thus are easier to strand-separate. In eukaryotes, the origin recognition complex catalyzes the assembly of initiator proteins into the pre-replication complex. In addition, a recent report suggests that budding yeast ORC dimerizes in a cell cycle-dependent manner to control licensing. In turn, the process of ORC dimerization is mediated by a cell cycle-dependent Noc3p dimerization cycle in vivo, and this role of Noc3p is separable from its role in ribosome biogenesis. An essential Noc3p dimerization cycle mediates ORC double-hexamer formation in replication licensing. ORC and Noc3p are continuously bound to the chromatin throughout the cell cycle. Cdc6 and Cdt1 then associate with the bound origin recognition complex at the origin in order to form a larger complex necessary to load the Mcm complex onto the DNA. In eukaryotes, the Mcm complex is the helicase that will split the DNA helix at the replication forks and origins. The Mcm complex is recruited at late G1 phase and loaded by the ORC-Cdc6-Cdt1 complex onto the DNA via ATP-dependent protein remodeling. The loading of the MCM complex onto the origin DNA marks the completion of pre-replication complex formation.
If environmental conditions are right in late G1 phase, the G1 and G1/S cyclin-Cdk complexes are activated, which stimulate expression of genes that encode components of the DNA synthetic machinery. G1/S-Cdk activation also promotes the expression and activation of S-Cdk complexes, which may play a role in activating replication origins depending on species and cell type. Control of these Cdks vary depending on cell type and stage of development. This regulation is best understood in budding yeast, where the S cyclins Clb5 and Clb6 are primarily responsible for DNA replication. Clb5,6-Cdk1 complexes directly trigger the activation of replication origins and are therefore required throughout S phase to directly activate each origin.
In a similar manner, Cdc7 is also required through S phase to activate replication origins. Cdc7 is not active throughout the cell cycle, and its activation is strictly timed to avoid premature initiation of DNA replication. In late G1, Cdc7 activity rises abruptly as a result of association with the regulatory subunit DBF4, which binds Cdc7 directly and promotes its protein kinase activity. Cdc7 has been found to be a rate-limiting regulator of origin activity. Together, the G1/S-Cdks and/or S-Cdks and Cdc7 collaborate to directly activate the replication origins, leading to initiation of DNA synthesis.
Preinitiation complex
In early S phase, S-Cdk and Cdc7 activation lead to the assembly of the preinitiation complex, a massive protein complex formed at the origin. Formation of the preinitiation complex displaces Cdc6 and Cdt1 from the origin replication complex, inactivating and disassembling the pre-replication complex. Loading the preinitiation complex onto the origin activates the Mcm helicase, causing unwinding of the DNA helix. The preinitiation complex also loads α-primase and other DNA polymerases onto the DNA.
After α-primase synthesizes the first primers, the primer-template junctions interact with the clamp loader, which loads the sliding clamp onto the DNA to begin DNA synthesis. The components of the preinitiation complex remain associated with replication forks as they move out from the origin.
Elongation
DNA polymerase has 5′–3′ activity.
All known DNA replication systems require a free 3′ hydroxyl group before synthesis can be initiated (note: the DNA template is read in 3′ to 5′ direction whereas a new strand is synthesized in the 5′ to 3′ direction—this is often confused). Four distinct mechanisms for DNA synthesis are recognized:
All cellular life forms and many DNA viruses, phages and plasmids use a primase to synthesize a short RNA primer with a free 3′ OH group which is subsequently elongated by a DNA polymerase.
The retroelements (including retroviruses) employ a transfer RNA that primes DNA replication by providing a free 3′ OH that is used for elongation by the reverse transcriptase.
In the adenoviruses and the φ29 family of bacteriophages, the 3′ OH group is provided by the side chain of an amino acid of the genome attached protein (the terminal protein) to which nucleotides are added by the DNA polymerase to form a new strand.
In the single stranded DNA viruses—a group that includes the circoviruses, the geminiviruses, the parvoviruses and others—and also the many phages and plasmids that use the rolling circle replication (RCR) mechanism, the RCR endonuclease creates a nick in the genome strand (single stranded viruses) or one of the DNA strands (plasmids). The 5′ end of the nicked strand is transferred to a tyrosine residue on the nuclease and the free 3′ OH group is then used by the DNA polymerase to synthesize the new strand.
Cellular organisms use the first of these pathways, which is also the best characterized. In this mechanism, once the two strands are separated, primase adds RNA primers to the template strands. The leading strand receives one RNA primer while the lagging strand receives several. The leading strand is continuously extended from the primer by a DNA polymerase with high processivity, while the lagging strand is extended discontinuously from each primer forming Okazaki fragments. RNase removes the primer RNA fragments, and a low processivity DNA polymerase distinct from the replicative polymerase enters to fill the gaps. When this is complete, a single nick on the leading strand and several nicks on the lagging strand can be found. Ligase works to fill these nicks in, thus completing the newly replicated DNA molecule.
The primase used in this process differs significantly between bacteria and archaea/eukaryotes. Bacteria use a primase belonging to the DnaG protein superfamily which contains a catalytic domain of the TOPRIM fold type. The TOPRIM fold contains an α/β core with four conserved strands in a Rossmann-like topology. This structure is also found in the catalytic domains of topoisomerase Ia, topoisomerase II, the OLD-family nucleases and DNA repair proteins related to the RecR protein.
The primase used by archaea and eukaryotes, in contrast, contains a highly derived version of the RNA recognition motif (RRM). This primase is structurally similar to many viral RNA-dependent RNA polymerases, reverse transcriptases, cyclic nucleotide generating cyclases and DNA polymerases of the A/B/Y families that are involved in DNA replication and repair. In eukaryotic replication, the primase forms a complex with Pol α.
Multiple DNA polymerases take on different roles in the DNA replication process. In E. coli, DNA Pol III is the polymerase enzyme primarily responsible for DNA replication. It assembles into a replication complex at the replication fork that exhibits extremely high processivity, remaining intact for the entire replication cycle. In contrast, DNA Pol I is the enzyme responsible for replacing RNA primers with DNA. DNA Pol I has a 5′ to 3′ exonuclease activity in addition to its polymerase activity, and uses its exonuclease activity to degrade the RNA primers ahead of it as it extends the DNA strand behind it, in a process called nick translation. Pol I is much less processive than Pol III because its primary function in DNA replication is to create many short DNA regions rather than a few very long regions.
In eukaryotes, the low-processivity enzyme, Pol α, helps to initiate replication because it forms a complex with primase. In eukaryotes, leading strand synthesis is thought to be conducted by Pol ε; however, this view has recently been challenged, suggesting a role for Pol δ. Primer removal is completed by Pol δ, while repair of DNA during replication is completed by Pol ε.
As DNA synthesis continues, the original DNA strands continue to unwind on each side of the bubble, forming a replication fork with two prongs. In bacteria, which have a single origin of replication on their circular chromosome, this process creates a "theta structure" (resembling the Greek letter theta: θ). In contrast, eukaryotes have longer linear chromosomes and initiate replication at multiple origins within these.
Replication fork
The replication fork is a structure that forms within the long helical DNA during DNA replication. It is produced by enzymes called helicases that break the hydrogen bonds that hold the DNA strands together in a helix. The resulting structure has two branching "prongs", each one made up of a single strand of DNA. These two strands serve as the template for the leading and lagging strands, which will be created as DNA polymerase matches complementary nucleotides to the templates; the templates may be properly referred to as the leading strand template and the lagging strand template.
DNA is read by DNA polymerase in the 3′ to 5′ direction, meaning the new strand is synthesized in the 5' to 3' direction. Since the leading and lagging strand templates are oriented in opposite directions at the replication fork, a major issue is how to achieve synthesis of new lagging strand DNA, whose direction of synthesis is opposite to the direction of the growing replication fork.
Leading strand
The leading strand is the strand of new DNA which is synthesized in the same direction as the growing replication fork. This sort of DNA replication is continuous.
Lagging strand
The lagging strand is the strand of new DNA whose direction of synthesis is opposite to the direction of the growing replication fork. Because of its orientation, replication of the lagging strand is more complicated as compared to that of the leading strand. As a consequence, the DNA polymerase on this strand is seen to "lag behind" the other strand.
The lagging strand is synthesized in short, separated segments. On the lagging strand template, a primase "reads" the template DNA and initiates synthesis of a short complementary RNA primer. A DNA polymerase extends the primed segments, forming Okazaki fragments. The RNA primers are then removed and replaced with DNA, and the fragments of DNA are joined by DNA ligase.
Dynamics at the replication fork
In all cases the helicase is composed of six polypeptides that wrap around only one strand of the DNA being replicated. The two polymerases are bound to the helicase hexamer. In eukaryotes the helicase wraps around the leading strand, and in prokaryotes it wraps around the lagging strand.
As helicase unwinds DNA at the replication fork, the DNA ahead is forced to rotate. This process results in a build-up of twists in the DNA ahead. This build-up creates a torsional load that would eventually stop the replication fork. Topoisomerases are enzymes that temporarily break the strands of DNA, relieving the tension caused by unwinding the two strands of the DNA helix; topoisomerases (including DNA gyrase) achieve this by adding negative supercoils to the DNA helix.
Bare single-stranded DNA tends to fold back on itself forming secondary structures; these structures can interfere with the movement of DNA polymerase. To prevent this, single-strand binding proteins bind to the DNA until a second strand is synthesized, preventing secondary structure formation.
Double-stranded DNA is coiled around histones that play an important role in regulating gene expression so the replicated DNA must be coiled around histones at the same places as the original DNA. To ensure this, histone chaperones disassemble the chromatin before it is replicated and replace the histones in the correct place. Some steps in this reassembly are somewhat speculative.
Clamp proteins act as a sliding clamp on DNA, allowing the DNA polymerase to bind to its template and aid in processivity. The inner face of the clamp enables DNA to be threaded through it. Once the polymerase reaches the end of the template or detects double-stranded DNA, the sliding clamp undergoes a conformational change that releases the DNA polymerase. Clamp-loading proteins are used to initially load the clamp, recognizing the junction between template and RNA primers.
DNA replication proteins
At the replication fork, many replication enzymes assemble on the DNA into a complex molecular machine called the replisome. The following is a list of major DNA replication enzymes that participate in the replisome:
In vitro single-molecule experiments (using optical tweezers and magnetic tweezers) have found synergetic interactions between the replisome enzymes (helicase, polymerase, and Single-strand DNA-binding protein) and with the DNA replication fork enhancing DNA-unwinding and DNA-replication. These results lead to the development of kinetic models accounting for the synergetic interactions and their stability.
Replication machinery
Replication machineries consist of factors involved in DNA replication and appearing on template ssDNAs. Replication machineries include primosomes and replication enzymes (DNA polymerase, DNA helicases, DNA clamps and DNA topoisomerases), as well as replication proteins such as single-stranded DNA binding proteins (SSB). In the replication machineries these components coordinate. In most bacteria, all of the factors involved in DNA replication are located on replication forks and the complexes stay on the forks during DNA replication. Replication machineries are also referred to as replisomes, or DNA replication systems. These terms are generic terms for proteins located on replication forks. In eukaryotic and some bacterial cells the replisomes are not formed.
In an alternative model, DNA factories are similar to projectors and DNAs are like cinematic films passing constantly into the projectors. In the replication factory model, after both DNA helicases for leading strands and lagging strands are loaded on the template DNAs, the helicases run along the DNAs into each other. The helicases remain associated for the remainder of the replication process. Peter Meister et al. observed replication sites directly in budding yeast by monitoring green fluorescent protein (GFP)-tagged DNA polymerase α. They detected DNA replication of pairs of the tagged loci spaced apart symmetrically from a replication origin and found that the distance between the pairs decreased markedly over time. This finding suggests that the mechanism of DNA replication goes with DNA factories: couples of replication factories are loaded on replication origins and the factories associate with each other, while template DNAs move into the factories, which extrude the template ssDNAs and the new DNAs. Meister's finding is the first direct evidence of the replication factory model. Subsequent research has shown that DNA helicases form dimers in many eukaryotic cells and that bacterial replication machineries stay in a single intranuclear location during DNA synthesis.
Replication factories also disentangle sister chromatids. The disentanglement is essential for distributing the chromatids into daughter cells after DNA replication. Because sister chromatids are held together by cohesin rings after DNA replication, replication itself offers the only chance for their disentanglement. Fixing replication machineries as replication factories can improve the success rate of DNA replication. If replication forks were to move freely in chromosomes, catenation of nuclei would be aggravated and impede mitotic segregation.
Termination
Eukaryotes initiate DNA replication at multiple points in the chromosome, so replication forks meet and terminate at many points in the chromosome. Because eukaryotes have linear chromosomes, DNA replication is unable to reach the very end of the chromosomes. Due to this problem, DNA is lost in each replication cycle from the end of the chromosome. Telomeres are regions of repetitive DNA close to the ends and help prevent loss of genes due to this shortening. Shortening of the telomeres is a normal process in somatic cells. This shortens the telomeres of the daughter DNA chromosome. As a result, cells can only divide a certain number of times before the DNA loss prevents further division. (This is known as the Hayflick limit.) Within the germ cell line, which passes DNA to the next generation, telomerase extends the repetitive sequences of the telomere region to prevent degradation. Telomerase can become mistakenly active in somatic cells, sometimes leading to cancer formation. Increased telomerase activity is one of the hallmarks of cancer.
Termination requires that the progress of the DNA replication fork must stop or be blocked. Termination at a specific locus, when it occurs, involves the interaction between two components: (1) a termination site sequence in the DNA, and (2) a protein which binds to this sequence to physically stop DNA replication. In various bacterial species, this is named the DNA replication terminus site-binding protein, or Ter protein.
Because bacteria have circular chromosomes, termination of replication occurs when the two replication forks meet each other on the opposite end of the parental chromosome. E. coli regulates this process through the use of termination sequences that, when bound by the Tus protein, enable only one direction of replication fork to pass through. As a result, the replication forks are constrained to always meet within the termination region of the chromosome.
Regulation
Eukaryotes
Within eukaryotes, DNA replication is controlled within the context of the cell cycle. As the cell grows and divides, it progresses through stages in the cell cycle; DNA replication takes place during the S phase (synthesis phase). The progress of the eukaryotic cell through the cycle is controlled by cell cycle checkpoints. Progression through checkpoints is controlled through complex interactions between various proteins, including cyclins and cyclin-dependent kinases. Unlike bacteria, eukaryotic DNA replicates in the confines of the nucleus.
The G1/S checkpoint (restriction checkpoint) regulates whether eukaryotic cells enter the process of DNA replication and subsequent division. Cells that do not proceed through this checkpoint remain in the G0 stage and do not replicate their DNA.
Once the DNA has gone through the "G1/S" test, it can only be copied once in every cell cycle. When the Mcm complex moves away from the origin, the pre-replication complex is dismantled. Because a new Mcm complex cannot be loaded at an origin until the pre-replication subunits are reactivated, one origin of replication can not be used twice in the same cell cycle.
Activation of S-Cdks in early S phase promotes the destruction or inhibition of individual pre-replication complex components, preventing immediate reassembly. S and M-Cdks continue to block pre-replication complex assembly even after S phase is complete, ensuring that assembly cannot occur again until all Cdk activity is reduced in late mitosis.
In budding yeast, inhibition of assembly is caused by Cdk-dependent phosphorylation of pre-replication complex components. At the onset of S phase, phosphorylation of Cdc6 by Cdk1 causes the binding of Cdc6 to the SCF ubiquitin protein ligase, which causes proteolytic destruction of Cdc6. Cdk-dependent phosphorylation of Mcm proteins promotes their export out of the nucleus along with Cdt1 during S phase, preventing the loading of new Mcm complexes at origins during a single cell cycle. Cdk phosphorylation of the origin replication complex also inhibits pre-replication complex assembly. The individual presence of any of these three mechanisms is sufficient to inhibit pre-replication complex assembly. However, mutations of all three proteins in the same cell do trigger reinitiation at many origins of replication within one cell cycle.
In animal cells, the protein geminin is a key inhibitor of pre-replication complex assembly. Geminin binds Cdt1, preventing its binding to the origin recognition complex. In G1, levels of geminin are kept low by the APC, which ubiquitinates geminin to target it for degradation. When geminin is destroyed, Cdt1 is released, allowing it to function in pre-replication complex assembly. At the end of G1, the APC is inactivated, allowing geminin to accumulate and bind Cdt1.
Replication of chloroplast and mitochondrial genomes occurs independently of the cell cycle, through the process of D-loop replication.
Replication focus
In vertebrate cells, replication sites concentrate into positions called replication foci. Replication sites can be detected by immunostaining daughter strands and replication enzymes and monitoring GFP-tagged replication factors. By these methods it is found that replication foci of varying size and positions appear in S phase of cell division and their number per nucleus is far smaller than the number of genomic replication forks.
P. Heun et al. (2001) tracked GFP-tagged replication foci in budding yeast cells and revealed that replication origins move constantly in G1 and S phase and that this mobility decreases significantly in S phase. Traditionally, replication sites were thought to be fixed on the spatial structure of chromosomes by the nuclear matrix or lamins. Heun's results contradicted these traditional concepts (budding yeasts do not have lamins) and support the view that replication origins self-assemble and form replication foci.
By firing of replication origins, controlled spatially and temporally, the formation of replication foci is regulated. D. A. Jackson et al. (1998) revealed that neighboring origins fire simultaneously in mammalian cells. Spatial juxtaposition of replication sites brings clustering of replication forks. The clustering enables rescue of stalled replication forks and favors normal progress of replication forks. Progress of replication forks is inhibited by many factors: collision with proteins or with complexes binding strongly to DNA, deficiency of dNTPs, nicks on template DNAs, and so on. If replication forks become stuck and the remaining sequences beyond the stuck forks are not copied, the daughter strands are left with nicked, un-replicated sites. The un-replicated sites on one parent's strand hold the other strand together but not the daughter strands. Therefore, the resulting sister chromatids cannot separate from each other and cannot divide into two daughter cells. When neighboring origins fire and a fork from one origin is stalled, a fork from the other origin approaches from the opposite direction and duplicates the un-replicated sites. Another rescue mechanism is the use of dormant replication origins: excess origins that do not fire during normal DNA replication.
Bacteria
Most bacteria do not go through a well-defined cell cycle but instead continuously copy their DNA; during rapid growth, this can result in the concurrent occurrence of multiple rounds of replication. In E. coli, the best-characterized bacteria, DNA replication is regulated through several mechanisms, including: the hemimethylation and sequestering of the origin sequence, the ratio of adenosine triphosphate (ATP) to adenosine diphosphate (ADP), and the levels of protein DnaA. All these control the binding of initiator proteins to the origin sequences.
Because E. coli methylates GATC DNA sequences, DNA synthesis results in hemimethylated sequences. This hemimethylated DNA is recognized by the protein SeqA, which binds and sequesters the origin sequence; in addition, DnaA (required for initiation of replication) binds less well to hemimethylated DNA. As a result, newly replicated origins are prevented from immediately initiating another round of DNA replication.
ATP builds up when the cell is in a rich medium, triggering DNA replication once the cell has reached a specific size. ATP competes with ADP to bind to DnaA, and the DnaA-ATP complex is able to initiate replication. A certain number of DnaA proteins are also required for DNA replication — each time the origin is copied, the number of binding sites for DnaA doubles, requiring the synthesis of more DnaA to enable another initiation of replication.
In fast-growing bacteria, such as E. coli, chromosome replication takes more time than dividing the cell. The bacteria solve this by initiating a new round of replication before the previous one has been terminated. The new round of replication will form the chromosome of the cell that is born two generations after the dividing cell. This mechanism creates overlapping replication cycles.
Problems with DNA replication
There are many events that contribute to replication stress, including:
Misincorporation of ribonucleotides
Unusual DNA structures
Conflicts between replication and transcription
Insufficiency of essential replication factors
Common fragile sites
Overexpression or constitutive activation of oncogenes
Chromatin inaccessibility
Polymerase chain reaction
Researchers commonly replicate DNA in vitro using the polymerase chain reaction (PCR). PCR uses a pair of primers to span a target region in template DNA, and then polymerizes partner strands in each direction from these primers using a thermostable DNA polymerase. Repeating this process through multiple cycles amplifies the targeted DNA region. At the start of each cycle, the mixture of template and primers is heated, separating the newly synthesized molecule and template. Then, as the mixture cools, both of these become templates for annealing of new primers, and the polymerase extends from these. As a result, the number of copies of the target region doubles each round, increasing exponentially.
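Since the number of copies roughly doubles each cycle, the amplification is exponential. A small illustrative Python sketch of that arithmetic (the per-cycle efficiency parameter is an assumption added for realism, not something stated in the text):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate number of amplicons after a given number of PCR cycles.

    efficiency = 1.0 corresponds to ideal doubling every cycle;
    real reactions fall somewhat below this (assumed parameter).
    """
    return initial_copies * (1 + efficiency) ** cycles

# Ideal doubling: a single template molecule after 30 cycles.
print(f"{pcr_copies(1, 30):.2e}")   # ~1.07e+09 copies
```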
See also
Autopoiesis
Cell (biology)
Cell division
Chromosome segregation
Data storage device
Gene
Gene expression
Epigenetics
Genome
Hachimoji DNA
Life
Replication (computing)
Self-replication
Notes
References
DNA replication
Senescence
Cellular processes
Molecular biology
Copying | DNA replication | [
"Chemistry",
"Biology"
] | 7,118 | [
"Genetics techniques",
"Senescence",
"DNA replication",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Metabolism"
] |
9,023 | https://en.wikipedia.org/wiki/Discounted%20cash%20flow | The discounted cash flow (DCF) analysis, in financial analysis, is a method used to value a security, project, company, or asset, that incorporates the time value of money.
Discounted cash flow analysis is widely used in investment finance, real estate development, corporate financial management, and patent valuation. Used in industry as early as the 1700s or 1800s, it was widely discussed in financial economics in the 1960s, and U.S. courts began employing the concept in the 1980s and 1990s.
Application
In discounted cash flow analysis, all future cash flows are estimated and discounted by using the cost of capital to give their present values (PVs). The sum of all future cash flows, both incoming and outgoing, is the net present value (NPV), which is taken as the value of the cash flows in question. For the mechanics, see valuation using discounted cash flows, which includes modifications typical for startups, private equity and venture capital, corporate finance "projects", and mergers and acquisitions.
Using DCF analysis to compute the NPV takes as input cash flows and a discount rate and gives as output a present value. The opposite process takes cash flows and a price (present value) as inputs, and provides as output the discount rate; this is used in bond markets to obtain the yield.
History
Discounted cash flow calculations have been used in some form since money was first lent at interest in ancient times. Studies of ancient Egyptian and Babylonian mathematics suggest that they used techniques similar to discounting future cash flows. Modern discounted cash flow analysis has been used since at least the early 1700s in the UK coal industry.
Discounted cash flow valuation is differentiated from the accounting book value, which is based on the amount paid for the asset. Following the stock market crash of 1929, discounted cash flow analysis gained popularity as a valuation method for stocks. Irving Fisher in his 1930 book The Theory of Interest and John Burr Williams's 1938 text The Theory of Investment Value first formally expressed the DCF method in modern economic terms.
Mathematics
Discounted cash flows
The discounted cash flow formula is derived from the present value formula for calculating the time value of money and compounding returns:

FV = DPV \cdot (1 + r)^n .

Thus the discounted present value (for one cash flow in one future period) is expressed as:

DPV = \frac{FV}{(1 + r)^n}

where
DPV is the discounted present value of the future cash flow (FV), or FV adjusted for the delay in receipt;
FV is the nominal value of a cash flow amount in a future period (see Mid-year adjustment);
r is the interest rate or discount rate, which reflects the cost of tying up capital and may also allow for the risk that the payment may not be received in full;
n is the time in years before the future cash flow occurs.
Where multiple cash flows in multiple time periods are discounted, it is necessary to sum them as follows:

DPV = \sum_{t=0}^{N} \frac{FV_t}{(1 + r)^t}
for each future cash flow (FV) at any time period (t) in years from the present time, summed over all time periods. The sum can then be used as a net present value figure. If the amount to be paid at time 0 (now) for all the future cash flows is known, then that amount can be substituted for DPV and the equation can be solved for r, that is the internal rate of return.
All the above assumes that the interest rate remains constant throughout the whole period.
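As a hedged numerical sketch of the two expressions above, the function below discounts a hypothetical stream of cash flows and then inverts the calculation by bisection to recover the internal rate of return, the "opposite process" described earlier. All cash-flow figures and rates are invented for illustration.

# Sketch: discounting a stream of cash flows and recovering the internal rate
# of return by bisection. All figures below are hypothetical illustrations.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[t] is received t years from now (t = 0 is today)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = -0.99, hi: float = 10.0) -> float:
    """Discount rate at which the NPV is zero, found by bisection (assumes a sign change on [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

flows = [-1000.0, 300.0, 420.0, 680.0]   # outlay today followed by three yearly inflows
print(round(npv(0.08, flows), 2))        # present value of the stream at an 8% rate
print(round(irr(flows), 4))              # the rate that makes the NPV zero (~0.163)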
If the cash flow stream is assumed to continue indefinitely, the finite forecast is usually combined with the assumption of constant cash flow growth beyond the discrete projection period. The total value of such a cash flow stream is the sum of the finite discounted cash flow forecast and the terminal value; a sketch of that step follows.
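A minimal sketch of the terminal-value step, assuming a growing-perpetuity (Gordon growth) form for the cash flows beyond the projection period; the cash flow, discount rate, growth rate, and horizon below are hypothetical.

# Sketch: terminal value as a growing perpetuity, discounted back to today.
# TV at the end of year N = CF_N * (1 + g) / (r - g), valid only when r > g.
# The cash flow, rates, and horizon are hypothetical illustrations.

def terminal_value(last_cash_flow: float, rate: float, growth: float) -> float:
    """Value, at the end of the forecast horizon, of all cash flows beyond it."""
    if rate <= growth:
        raise ValueError("the discount rate must exceed the perpetual growth rate")
    return last_cash_flow * (1.0 + growth) / (rate - growth)

def discounted_terminal_value(last_cash_flow: float, rate: float,
                              growth: float, horizon_years: int) -> float:
    """The terminal value expressed in today's money."""
    return terminal_value(last_cash_flow, rate, growth) / (1.0 + rate) ** horizon_years

# last forecast cash flow of 100, 8% discount rate, 2% perpetual growth, 5-year horizon
print(round(discounted_terminal_value(100.0, 0.08, 0.02, 5), 2))  # ~1157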
Continuous cash flows
For continuous cash flows, the summation in the above formula is replaced by an integration:

DPV = \int_0^T FV(t) \, e^{-\lambda t} \, dt

where FV(t) is now the rate of cash flow, and \lambda = \ln(1 + r).
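A hedged numerical sketch of the continuous case, approximating the integral with a midpoint rule; the constant flow rate, interest rate, and horizon are hypothetical.

# Sketch: DPV = integral from 0 to T of FV(t) * exp(-lambda * t) dt, with
# lambda = ln(1 + r), approximated by a midpoint rule. Inputs are hypothetical.
import math

def continuous_dpv(flow_rate, annual_rate: float, years: float, steps: int = 100_000) -> float:
    """Numerically integrate the discounted continuous cash-flow rate."""
    lam = math.log(1.0 + annual_rate)
    dt = years / steps
    return sum(flow_rate((i + 0.5) * dt) * math.exp(-lam * (i + 0.5) * dt) * dt
               for i in range(steps))

# a flow of 100 per year for 10 years, discounted at 5% per year -> about 791
print(round(continuous_dpv(lambda t: 100.0, 0.05, 10.0), 2))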
Discount rate
The act of discounting future cash flows asks "how much money would have to be invested currently, at a given rate of return, to yield the forecast cash flow, at its future date?" In other words, discounting returns the present value of future cash flows, where the rate used is the cost of capital that appropriately reflects the risk, and timing, of the cash flows.
This "required return" thus incorporates:
Time value of money (risk-free rate) – according to the theory of time preference, investors would rather have cash immediately than wait, and must therefore be compensated for the delay.
Risk premium – reflects the extra return investors demand because they want to be compensated for the risk that the cash flow might not materialize after all.
For the latter, various models have been developed, where the premium is (typically) calculated as a function of the asset's performance with reference to some macroeconomic variable - for example, the CAPM compares the asset's historical returns to the "overall market's".
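As an illustration of how such a premium is layered on top of the risk-free rate, the sketch below applies the standard CAPM relation, r = r_f + beta * (r_m - r_f); all input figures are hypothetical.

# Sketch: required return under the CAPM, r = r_f + beta * (r_m - r_f).
# All inputs are hypothetical illustrations.

def capm_required_return(risk_free: float, beta: float, market_return: float) -> float:
    """Risk-free rate plus a premium proportional to the asset's market beta."""
    return risk_free + beta * (market_return - risk_free)

# 3% risk-free rate, beta of 1.2, 8% expected market return -> 9% required return
print(round(capm_required_return(0.03, 1.2, 0.08), 4))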
An alternate, although less common approach, is to apply a "fundamental valuation" method, such as the "T-model", which instead relies on accounting information.
Other methods of discounting, such as hyperbolic discounting, are studied in academia and said to reflect intuitive decision-making, but are not generally used in industry. In this context the above is referred to as "exponential discounting".
The terminology "expected return", although formally the mathematical expected value, is often used interchangeably with the above, where "expected" means "required" or "demanded" by investors.
The method may also be modified by industry, for example various formulae have been proposed when choosing a discount rate in a healthcare setting;
similarly in a mining setting, where risk-characteristics can differ (dramatically) by property.
Methods of appraisal of a company or project
For these valuation purposes, a number of different DCF methods are distinguished today, some of which are outlined below. The details are likely to vary depending on the capital structure of the company. However the assumptions used in the appraisal (especially the equity discount rate and the projection of the cash flows to be achieved) are likely to be at least as important as the precise model used. Both the income stream selected and the associated cost of capital model determine the valuation result obtained with each method. (This is one reason these valuation methods are formally referred to as the Discounted Future Economic Income methods.)
The below is offered as a high-level treatment.
Equity-approach
Flows to equity approach (FTE)
Discount the cash flows available to the holders of equity capital, after allowing for cost of servicing debt capital
Advantages: Makes explicit allowance for the cost of debt capital
Disadvantages: Requires judgement on choice of discount rate
Entity-approach
Adjusted present value approach (APV)
Discount the cash flows before allowing for the debt capital (but allowing for the tax relief obtained on the debt capital)
Advantages: Simpler to apply if a specific project is being valued which does not have earmarked debt capital finance
Disadvantages: Requires judgement on choice of discount rate; no explicit allowance for cost of debt capital, which may be much higher than a risk-free rate
Weighted average cost of capital approach (WACC)
Derive a weighted cost of the capital obtained from the various sources and use that discount rate to discount the unlevered free cash flows from the project
Advantages: Overcomes the requirement for debt capital finance to be earmarked to particular projects
Disadvantages: Care must be exercised in the selection of the appropriate income stream. The net cash flow to total invested capital is the generally accepted choice.
Total cash flow approach (TCF)
This distinction illustrates that the Discounted Cash Flow method can be used to determine the value of various business ownership interests. These can include equity or debt holders.
Alternatively, the method can be used to value the company based on the value of total invested capital. In each case, the differences lie in the choice of the income stream and discount rate. For example, the net cash flow to total invested capital and WACC are appropriate when valuing a company based on the market value of all invested capital.
Shortcomings
The following difficulties are identified with the application of DCF in valuation:
Forecast reliability: Traditional DCF models assume we can accurately forecast revenue and earnings 3–5 years into the future. But studies have shown that growth is neither predictable nor persistent. (See Stock valuation#Growth rate and Sustainable growth rate#From a financial perspective.) In other terms, using DCF models is problematic due to the problem of induction, i.e. presupposing that a sequence of events in the future will occur as it always has in the past. Colloquially, in the world of finance, the problem of induction is often simplified with the common phrase: past returns are not indicative of future results. In fact, the SEC demands that all mutual funds use this sentence to warn their investors. This observation has led some to conclude that DCF models should only be used to value companies with steady cash flows. For example, DCF models are widely used to value mature companies in stable industry sectors, such as utilities. For industries that are especially unpredictable and thus harder to forecast, DCF models can prove especially challenging. Industry Examples:
Real Estate: Investors use DCF models to value commercial real estate development projects. This practice has two main shortcomings. First, the discount rate assumption relies on the market for competing investments at the time of the analysis, which may not persist into the future. Second, assumptions about ten-year income increases are usually based on historic increases in the market rent. Yet the cyclical nature of most real estate markets is not factored in. Most real estate loans are made during boom real estate markets and these markets usually last fewer than ten years. In this case, due to the problem of induction, using a DCF model to value commercial real estate during any but the early years of a boom market can lead to overvaluation.
Early-stage Technology Companies: In valuing startups, the DCF method can be applied a number of times, with differing assumptions, to assess a range of possible future outcomes, such as the best, worst and most likely case scenarios. Even so, the lack of historical company data and uncertainty about factors that can affect the company's development make DCF models especially difficult for valuing startups. There is a lack of credibility regarding future cash flows, future cost of capital, and the company's growth rate. By forecasting limited data into an unpredictable future, the problem of induction is especially pronounced.
Discount rate estimation: Traditionally, DCF models assume that the capital asset pricing model can be used to assess the riskiness of an investment and set an appropriate discount rate. Some economists, however, suggest that the capital asset pricing model has been empirically invalidated. Various other models are proposed (see asset pricing), although all are subject to some theoretical or empirical criticism.
Input-output problem: DCF is merely a mechanical valuation tool, which makes it subject to the principle "garbage in, garbage out." Small changes in inputs can result in large changes in the value of a company. This is especially the case with terminal values, which make up a large proportion of the Discounted Cash Flow's final value.
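A hedged sketch of this sensitivity: holding a hypothetical five-year forecast fixed and moving the discount-rate and growth assumptions by one percentage point each shows how strongly the result, dominated by the terminal value, can swing. All figures are invented for illustration.

# Sketch: how a DCF value swings when only the discount-rate and growth
# assumptions move by one percentage point. All figures are hypothetical.

def dcf_value(cash_flows: list[float], rate: float, growth: float) -> float:
    """PV of a finite forecast plus a growing-perpetuity terminal value."""
    pv_forecast = sum(cf / (1.0 + rate) ** t
                      for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1.0 + growth) / (rate - growth)
    pv_terminal = terminal / (1.0 + rate) ** len(cash_flows)
    return pv_forecast + pv_terminal

forecast = [100.0, 110.0, 120.0, 130.0, 140.0]   # five-year forecast
for rate, growth in [(0.09, 0.02), (0.08, 0.03)]:
    print(rate, growth, round(dcf_value(forecast, rate, growth), 1))
# the valuation rises by roughly a third, driven mostly by the terminal value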
Missing variables: Traditional DCF calculations only consider the financial costs and benefits of a decision. They do not include the environmental, social and governance performance of an organization. This criticism, true for all valuation techniques, is addressed through an approach called "IntFV" discussed below.
Integrated future value
To address the traditional DCF calculation's lack of integration of the short- and long-term importance, value and risks associated with natural and social capital, companies are valuing their environmental, social and governance (ESG) performance through an Integrated Management approach to reporting that expands DCF or Net Present Value to Integrated Future Value (IntFV).
This allows companies to value their investments not just for their financial return but also the long term environmental and social return of their investments. By highlighting environmental, social and governance performance in reporting, decision makers have the opportunity to identify new areas for value creation that are not revealed through traditional financial reporting.
As an example, the social cost of carbon is one value that can be incorporated into Integrated Future Value calculations to encompass the damage to society from greenhouse gas emissions that result from an investment.
This is an integrated approach to reporting that supports Integrated Bottom Line (IBL) decision making, which takes triple bottom line (TBL) a step further and combines financial, environmental and social performance reporting into one balance sheet. This approach provides decision makers with the insight to identify opportunities for value creation that promote growth and change within an organization.
See also
Adjusted present value
Capital asset pricing model
Capital budgeting
Cost of capital
Debt ratio
Economic value added
Enterprise value
Financial reporting
Flows to equity
Forecast period (finance)
Free cash flow
Internal rate of return
Market value added
Net present value
Owner earnings
Patent valuation
Present value of growth opportunities
Residual income valuation
Terminal value (finance)
Time value of money
Valuation using discounted cash flows
Weighted average cost of capital
References
Further reading
External links
Calculating Intrinsic Value Using the DCF Model, wealthyeducation.com
Calculating Terminal Value Using the DCF Model, wealthyeducation.com
Continuous compounding/cash flows, ocw.mit.edu
Getting Started With Discounted Cash Flows. The Street.
Cash flow
Engineering economics
Corporate finance
Valuation (finance) | Discounted cash flow | [
"Engineering"
] | 2,749 | [
"Engineering economics"
] |
9,032 | https://en.wikipedia.org/wiki/Drosophila | Drosophila () is a genus of fly, belonging to the family Drosophilidae, whose members are often called "small fruit flies" or pomace flies, vinegar flies, or wine flies, a reference to the characteristic of many species to linger around overripe or rotting fruit. They should not be confused with the Tephritidae, a related family, which are also called fruit flies (sometimes referred to as "true fruit flies"); tephritids feed primarily on unripe or ripe fruit, with many species being regarded as destructive agricultural pests, especially the Mediterranean fruit fly.
One species of Drosophila in particular, Drosophila melanogaster, has been heavily used in research in genetics and is a common model organism in developmental biology. The terms "fruit fly" and "Drosophila" are often used synonymously with D. melanogaster in modern biological literature. The entire genus, however, contains more than 1,500 species and is very diverse in appearance, behavior, and breeding habitat.
Etymology
The term "Drosophila", meaning "dew-loving", is a modern scientific Latin adaptation from Greek words , , "dew", and , , "lover".
Morphology
Drosophila species are small flies, typically pale yellow to reddish brown to black, with red eyes. When the eyes (essentially a film of lenses) are removed, the brain is revealed. Drosophila brain structure and function develop and age significantly from larval to adult stage. Developing brain structures make these flies a prime candidate for neuro-genetic research. A study published in Nature in October 2024, in which scientists examined the brain of an adult female Drosophila, identified the shape and location of each of its 130,000 neurons and 50 million synapses; it is the most detailed analysis ever conducted on the brain of an adult animal. Many species, including the noted Hawaiian picture-wings, have distinct black patterns on the wings. The plumose (feathery) arista, bristling of the head and thorax, and wing venation are characters used to diagnose the family. Most are small, about long, but some, especially many of the Hawaiian species, are larger than a house fly.
Evolution
Detoxification mechanisms
Environmental challenge by natural toxins helped to prepare Drosophila to detoxify DDT, by shaping the glutathione S-transferase mechanism that metabolizes both.
Selection
The Drosophila genome is subject to a high degree of selection, especially unusually widespread negative selection compared to other taxa. A majority of the genome is under selection of some sort, and a supermajority of this is occurring in non-coding DNA.
Effective population size has been credibly suggested to positively correlate with the effect size of both negative and positive selection. Recombination is likely to be a significant source of diversity. There is evidence that crossover is positively correlated with polymorphism in D. populations.
Biology
Habitat
Drosophila species are found all around the world, with more species in the tropical regions. Drosophila made their way to the Hawaiian Islands and radiated into over 800 species. They can be found in deserts, tropical rainforest, cities, swamps, and alpine zones. Some northern species hibernate. The northern species D. montana is the best cold-adapted, and is primarily found at high altitudes. Most species breed in various kinds of decaying plant and fungal material, including fruit, bark, slime fluxes, flowers, and mushrooms. Drosophila species that are fruit-breeding are attracted to various products of fermentation, especially ethanol and methanol. Fruits exploited by Drosophila species include those with a high pectin concentration, which is an indicator of how much alcohol will be produced during fermentation. Citrus, morinda, apples, pears, plums, and apricots belong in this category.
The larvae of at least one species, D. suzukii, can also feed in fresh fruit and can sometimes be a pest. A few species have switched to being parasites or predators. Many species can be attracted to baits of fermented bananas or mushrooms, but others are not attracted to any kind of baits. Males may congregate at patches of suitable breeding substrate to compete for the females, or form leks, conducting courtship in an area separate from breeding sites.
Several Drosophila species, including Drosophila melanogaster, D. immigrans, and D. simulans, are closely associated with humans, and are often referred to as domestic species. These and other species (D. subobscura, and from a related genus Zaprionus indianus) have been accidentally introduced around the world by human activities such as fruit transports.
Reproduction
Males of this genus are known to have the longest sperm cells of any studied organism on Earth, including one species, Drosophila bifurca, that has sperm cells that are long. The cells mostly consist of a long, thread-like tail, and are delivered to the females in tangled coils. The other members of the genus Drosophila also make relatively few giant sperm cells, with that of D. bifurca being the longest. D. melanogaster sperm cells are a more modest 1.8 mm long, although this is still about 35 times longer than a human sperm. Several species in the D. melanogaster species group are known to mate by traumatic insemination.
Drosophila species vary widely in their reproductive capacity. Those such as D. melanogaster that breed in large, relatively rare resources have ovaries that mature 10–20 eggs at a time, so that they can be laid together on one site. Others that breed in more-abundant but less nutritious substrates, such as leaves, may only lay one egg per day. The eggs have one or more respiratory filaments near the anterior end; the tips of these extend above the surface and allow oxygen to reach the embryo. Larvae feed not on the vegetable matter itself, but on the yeasts and microorganisms present on the decaying breeding substrate. Development time varies widely between species (between 7 and more than 60 days) and depends on the environmental factors such as temperature, breeding substrate, and crowding.
Fruit flies lay eggs in response to environmental cycles. Eggs laid at a time (e.g., night) during which likelihood of survival is greater than in eggs laid at other times (e.g., day) yield more larvae than eggs that were laid at those times. Ceteris paribus, the habit of laying eggs at this 'advantageous' time would yield more surviving offspring, and more grandchildren, than the habit of laying eggs during other times. This differential reproductive success would cause D. melanogaster to adapt to environmental cycles, because this behavior has a major reproductive advantage.
Their median lifespan is 35–45 days.
Aging
DNA damage accumulates in Drosophila intestinal stem cells with age. Deficiencies in the Drosophila DNA damage response, including deficiencies in expression of genes involved in DNA damage repair, accelerates intestinal stem cell (enterocyte) aging. Sharpless and Depinho reviewed evidence that stem cells undergo intrinsic aging and speculated that stem cells grow old, in part, as a result of DNA damage.
Mating systems
Courtship behavior
The following section is based on the following Drosophila species: Drosophila simulans and Drosophila melanogaster.
Courtship behavior of male Drosophila is a behaviour used to attract females. Females respond via their perception of the behavior portrayed by the male. Male and female Drosophila use a variety of sensory cues to initiate and assess the courtship readiness of a potential mate. The cues include the following behaviours: positioning, pheromone secretion, following females, making tapping sounds with the legs, singing, wing spreading, creating wing vibrations, genitalia licking, bending the stomach, attempting to copulate, and the copulatory act itself. The songs of Drosophila melanogaster and Drosophila simulans have been studied extensively. These luring songs are sinusoidal in nature and vary within and between species.
The courtship behavior of Drosophila melanogaster has also been assessed for sex-related genes, which have been implicated in courtship behavior in both the male and female. Recent experiments explore the role of fruitless (fru) and doublesex (dsx), a group of sex-behaviour linked genes.
The fruitless (fru) gene in Drosophila helps regulate the network for male courtship behavior; when a mutation to this gene occurs, altered same-sex sexual behavior is observed in males. Male Drosophila with the fru mutation direct their courtship towards other males, as opposed to typical courtship, which would be directed towards females. Loss of the fru mutation restores typical courtship behavior.
Pheromones
A novel class of pheromones was found to be conserved across the subgenus Drosophila in 11 desert dwelling species. These pheromones are triacylglycerides that are secreted exclusively by males from their ejaculatory bulb and transferred to females during mating. The function of the pheromones is to make the females unattractive to subsequent suitors and thus inhibit courtship by other males.
Polyandry
The following section is based on the following Drosophila species: Drosophila serrata, Drosophila pseudoobscura, Drosophila melanogaster, and Drosophila neotestacea. Polyandry is a prominent mating system among Drosophila. Mating with multiple partners has been a beneficial strategy for female Drosophila. The benefits include both pre-copulatory and post-copulatory advantages. Pre-copulatory strategies are the behaviours associated with mate choice, together with the genetic contributions, such as the production of gametes, that both male and female Drosophila bring to mate choice. Post-copulatory strategies include sperm competition, mating frequency, and sex-ratio meiotic drive.
These lists are not exhaustive. Polyandrous Drosophila pseudoobscura in North America vary in their number of mating partners. There is a connection between the number of times females choose to mate and chromosomal variants of the third chromosome. It is believed that the presence of the inverted polymorphism is why re-mating by females occurs. The stability of these polymorphisms may be related to the sex-ratio meiotic drive.
However, for Drosophila subobscura, the main mating system is monandry, not normally seen in Drosophila.
Sperm competition
The following section is based on the following Drosophila species: Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana. Sperm competition is a process that polyandrous Drosophila females use to increase the fitness of their offspring. The female Drosophila has two sperm storage organs, the spermathecae and seminal receptacle, that allow her to choose the sperm that will be used to inseminate her eggs. However, some species of Drosophila have evolved to only use one or the other. Females have little control when it comes to cryptic female choice. Through cryptic choice, one of several post-copulatory mechanisms, female Drosophila can detect and expel sperm, which reduces the possibility of inbreeding. Manier et al. 2013 have categorized the post-copulatory sexual selection of Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana into the following three stages: insemination, sperm storage, and fertilizable sperm. Among the preceding species there are variations at each stage that play a role in the natural selection process. This sperm competition has been found to be a driving force in the establishment of reproductive isolation during speciation.
Parthenogenesis and gynogenesis
Parthenogenesis does not occur in D. melanogaster, but in the gyn-f9 mutant, gynogenesis occurs at low frequency. The natural populations of D. mangebeirai are entirely female, making it the only obligate parthenogenetic species of Drosophila. Parthenogenesis is facultative in parthenogenetica and mercatorum.
Laboratory-cultured animals
D. melanogaster is a popular experimental animal because it is easily cultured en masse out of the wild, has a short generation time, and mutant animals are readily obtainable. In 1906, Thomas Hunt Morgan began his work on D. melanogaster and reported his first finding of a white eyed mutant in 1910 to the academic community. He was in search of a model organism to study genetic heredity and required a species that could randomly acquire genetic mutation that would visibly manifest as morphological changes in the adult animal. His work on Drosophila earned him the 1933 Nobel Prize in Medicine for identifying chromosomes as the vector of inheritance for genes. This and other Drosophila species are widely used in studies of genetics, embryogenesis, chronobiology, speciation, neurobiology, and other areas.
However, some species of Drosophila are difficult to culture in the laboratory, often because they breed on a single specific host in the wild. For some, it can be done with particular recipes for rearing media, or by introducing chemicals such as sterols that are found in the natural host; for others, it is (so far) impossible. In some cases, the larvae can develop on normal Drosophila lab medium, but the female will not lay eggs; for these it is often simply a matter of putting in a small piece of the natural host to receive the eggs.
The Drosophila Species Stock Center located at Cornell University in Ithaca, New York, maintains cultures of hundreds of species for researchers.
Use in genetic research
Drosophila is considered one of the most valuable genetic model organisms; both adults and embryos are experimental models. Drosophila is a prime candidate for genetic research because the relationship between human and fruit fly genes is very close. Human and fruit fly genes are so similar that disease-producing genes in humans can be linked to those in flies. The fly has approximately 15,500 genes on its four chromosomes, whereas humans have about 22,000 genes among their 23 chromosomes. Thus the density of genes per chromosome in Drosophila is higher than in the human genome. The low and manageable number of chromosomes makes Drosophila species easier to study. These flies also carry genetic information and pass down traits throughout generations, much like their human counterparts. The traits can then be studied through different Drosophila lineages and the findings can be applied to deduce genetic trends in humans. Research conducted on Drosophila helps determine the ground rules for transmission of genes in many organisms. Drosophila is a useful in vivo tool to analyze Alzheimer's disease. Rhomboid proteases were first detected in Drosophila but then found to be highly conserved across eukaryotes, mitochondria, and bacteria. Melanin's ability to protect DNA against ionizing radiation has been most extensively demonstrated in Drosophila, including in the formative study by Hopwood et al. 1985.
Microbiome
Like other animals, Drosophila is associated with various bacteria in its gut. The fly gut microbiota or microbiome seems to have a central influence on Drosophila fitness and life history characteristics. The microbiota in the gut of Drosophila represents an active current research field.
Drosophila species also harbour vertically transmitted endosymbionts, such as Wolbachia and Spiroplasma. These endosymbionts can act as reproductive manipulators, such as cytoplasmic incompatibility induced by Wolbachia or male-killing induced by the D. melanogaster Spiroplasma poulsonii (named MSRO). The male-killing factor of the D. melanogaster MSRO strain was discovered in 2018, solving a decades-old mystery of the cause of male-killing. This represents the first bacterial factor that affects eukaryotic cells in a sex-specific fashion, and is the first mechanism identified for male-killing phenotypes. Alternatively, endosymbionts may protect their hosts from infection. Drosophila Wolbachia can reduce viral loads upon infection, and is explored as a mechanism of controlling viral diseases (e.g. Dengue fever) by transferring these Wolbachia to disease-vector mosquitoes. The S. poulsonii strain of Drosophila neotestacea protects its host from parasitic wasps and nematodes using toxins that preferentially attack the parasites instead of the host.
Since Drosophila is one of the most used model organisms, it has been used extensively in genetics. However, the effect that abiotic factors, such as temperature, have on the microbiome of Drosophila species has recently been of great interest. Certain variations in temperature have an impact on the microbiome. It was observed that higher temperatures (31 °C) lead to an increase of Acetobacter populations in the gut microbiome of Drosophila melanogaster as compared to lower temperatures (13 °C). At low temperatures (13 °C), the flies were more cold resistant and also had the highest concentration of Wolbachia.
The gut microbiome can also be transplanted among organisms. It was found that Drosophila melanogaster became more cold-tolerant when transplanted with the gut microbiota of Drosophila melanogaster that were reared at low temperatures. This showed that the gut microbiome is correlated with physiological processes.
Moreover, the microbiome plays a role in aggression, immunity, egg-laying preferences, locomotion and metabolism. As for aggression, it plays a role to a certain degree during courtship. It was observed that germ-free flies were not as competitive as wild-type males. The microbiome of Drosophila species is also known to promote aggression through octopamine (OA) signalling. The microbiome has been shown to impact these fruit flies' social interactions, specifically the aggressive behaviour that is seen during courtship and mating.
Predators
Drosophila species are prey for many generalist predators, such as robber flies. In Hawaii, the introduction of yellowjackets from mainland United States has led to the decline of many of the larger species. The larvae are preyed on by other fly larvae, staphylinid beetles, and ants.
Neurochemistry
Fruit flies use several fast-acting neurotransmitters, similar to those found in humans, which allow neurons to communicate and coordinate behavior. Acetylcholine, glutamate, gamma-aminobutyric acid (GABA), dopamine, serotonin, and histamine are all neurotransmitters that can be found in humans, but Drosophila also have another neurotransmitter, octopamine, the analog of norepinephrine. Acetylcholine is the primary excitatory neurotransmitter and GABA is the primary inhibitory neurotransmitter utilized in the drosophila central nervous system. In Drosophila, the effects of many neurotransmitters can vary depending on the receptors and signaling pathways involved, allowing them to act as excitatory or inhibitory signals under different contexts. This versatility enables complex neural processing and behavioral flexibility.
Glutamate can serve as an excitatory neurotransmitter, specifically at the neuromuscular junction in fruit flies. This differs from vertebrates, where acetylcholine is used at these junctions.
In Drosophila, histamine primarily functions as a neurotransmitter in the visual system. It is released by photoreceptor cells to transmit visual information from the eye to the brain, making it essential for vision.
As with many Eukaryotes, this genus is known to express SNAREs, and as with several others the components of the SNARE complex are known to be somewhat substitutable: Although the loss of SNAP-25 - a component of neuronal SNAREs - is lethal, SNAP-24 can fully replace it. For another example, an R-SNARE not normally found in synapses can substitute for synaptobrevin.
Immunity
The Spätzle protein is a ligand of Toll. In addition to melanin's more commonly known roles in the endoskeleton and in neurochemistry, melanization is one step in the immune responses to some pathogens. Dudzic et al. 2019 additionally find a large number of shared serine protease messengers between Spätzle/Toll and melanization and a large amount of crosstalk between these pathways.
Systematics
The genus Drosophila as currently defined is paraphyletic (see below) and contains 1,450 described species, while the total number of species is estimated at thousands. The majority of the species are members of two subgenera: Drosophila (about 1,100 species) and Sophophora (including D. (S.) melanogaster; around 330 species).
The Hawaiian species of Drosophila (estimated to be more than 500, with roughly 380 species described) are sometimes recognized as a separate genus or subgenus, Idiomyia, but this is not widely accepted. About 250 species are part of the genus Scaptomyza, which arose from the Hawaiian Drosophila and later recolonized continental areas.
Evidence from phylogenetic studies suggests these genera arose from within the genus Drosophila:
Liodrosophila Duda, 1922
Mycodrosophila Oldenburg, 1914
Samoaia Malloch, 1934
Scaptomyza Hardy, 1849
Zaprionus Coquillett, 1901
Zygothrica Wiedemann, 1830
Hirtodrosophila Duda, 1923 (position uncertain)
Several of the subgeneric and generic names are based on anagrams of Drosophila, including Dorsilopha, Lordiphosa, Siphlodora, Phloridosa, and Psilodorha.
Genetics
Drosophila species are extensively used as model organisms in genetics (including population genetics), cell biology, biochemistry, and especially developmental biology. Therefore, extensive efforts are made to sequence drosophilid genomes. The genomes of these species have been fully sequenced:
Drosophila (Sophophora) melanogaster
Drosophila (Sophophora) simulans
Drosophila (Sophophora) sechellia
Drosophila (Sophophora) yakuba
Drosophila (Sophophora) erecta
Drosophila (Sophophora) ananassae
Drosophila (Sophophora) pseudoobscura
Drosophila (Sophophora) persimilis
Drosophila (Sophophora) willistoni
Drosophila (Drosophila) mojavensis
Drosophila (Drosophila) virilis
Drosophila (Drosophila) grimshawi
The data have been used for many purposes, including evolutionary genome comparisons. D. simulans and D. sechellia are sister species, and provide viable offspring when crossed, while D. melanogaster and D. simulans produce infertile hybrid offspring. The Drosophila genome is often compared with the genomes of more distantly related species such as the honeybee Apis mellifera or the mosquito Anopheles gambiae.
The Drosophila modEncode project conducted extensive work to annotate Drosophila genomes, profile transcripts, histone modifications, transcription factors, regulatory networks, and other aspects of Drosophila genetics, and make predictions about gene expression among others.
FlyBase serves as a centralized database of curated genomic data on Drosophila.
The Drosophila 12 Genomes Consortium has presented ten new genomes and combines those with previously released genomes for D. melanogaster and D. pseudoobscura to analyse the evolutionary history and common genomic structure of the genus. This includes the discovery of transposable elements (TEs) and illumination of their evolutionary history. Bartolomé et al. 2009 find that at least some of the TEs in D. melanogaster, D. simulans and D. yakuba have been acquired by horizontal transfer. They find an average rate of 0.035 horizontal transfer events per TE family per million years. Bartolomé also finds that horizontal transfer of TEs follows other relatedness metrics, with transfer events between D. melanogaster and D. simulans being twice as common as either of them with D. yakuba.
See also
Drosophila hybrid sterility
Laboratory experiments of speciation
List of Drosophila species
Caenorhabditis 'Drosophilae' species supergroup, a group of species generally found on rotten fruits and transported by Drosophila flies
References
External links
FlyBase is a comprehensive database for information on the genetics and molecular biology of Drosophila. It includes data from the Drosophila Genome Projects and data curated from the literature.
is an integrated database of genomic, expression and protein data for Drosophila
University of California, Santa Cruz
breeds hundreds of species and supplies them to researchers
Lawrence Berkeley National Laboratory
is library of Drosophila on the web
– In India microinjection service for the generation of transgenic lines, Screening Platforms, Drosophila strain development
Drosophilidae genera
Taxa named by Carl Fredrik Fallén
Animal models | Drosophila | [
"Biology"
] | 5,354 | [
"Model organisms",
"Animal models"
] |
9,061 | https://en.wikipedia.org/wiki/Dolphin | A dolphin is an aquatic mammal in the clade Odontoceti (toothed whale). Dolphins belong to the families Delphinidae (the oceanic dolphins), Platanistidae (the Indian river dolphins), Iniidae (the New World river dolphins), Pontoporiidae (the brackish dolphins), and possibly extinct Lipotidae (baiji or Chinese river dolphin). There are 40 extant species named as dolphins.
Dolphins range in size from the and Maui's dolphin to the and orca. Various species of dolphins exhibit sexual dimorphism where the males are larger than females. They have streamlined bodies and two limbs that are modified into flippers. Though not quite as flexible as seals, they are faster; some dolphins can briefly travel at speeds of or leap about . Dolphins use their conical teeth to capture fast-moving prey. They have well-developed hearing which is adapted for both air and water; it is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water.
Dolphins are widespread. Most species prefer the warm waters of the tropic zones, but some, such as the right whale dolphin, prefer colder climates. Dolphins feed largely on fish and squid, but a few, such as the orca, feed on large mammals such as seals. Male dolphins typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time.
Dolphins produce a variety of vocalizations, usually in the form of clicks and whistles.
Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they also face threats from bycatch, habitat loss, and marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins are sometimes kept in captivity and trained to perform tricks. The most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 orcas in captivity.
Etymology
The name is originally from Greek (delphís), "dolphin", which was related to the Greek (delphus), "womb". The animal's name can therefore be interpreted as meaning "a 'fish' with a womb". The name was transmitted via the Latin delphinus (the romanization of the later Greek δελφῖνος – delphinos), which in Medieval Latin became and in Old French daulphin, which reintroduced the ph into the word dolphin. The term mereswine ("sea pig") is also used.
The term dolphin can be used to refer to most species in the family Delphinidae (oceanic dolphins) and the river dolphin families of Iniidae (South American river dolphins), Pontoporiidae (La Plata dolphin), Lipotidae (Yangtze river dolphin) and Platanistidae (Ganges river dolphin and Indus river dolphin). Meanwhile, the mahi-mahi fish is called the dolphinfish. In common usage, the term whale is used only for the larger cetacean species, while the smaller ones with a beaked or longer nose are considered dolphins. The name dolphin is used casually as a synonym for bottlenose dolphin, the most common and familiar species of dolphin. There are six species of dolphins commonly thought of as whales, collectively known as blackfish: the orca, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whales, all of which are classified under the family Delphinidae and qualify as dolphins. Although the terms dolphin and porpoise are sometimes used interchangeably, porpoise usually refers to the Phocoenidae family, which have a shorter beak and spade-shaped teeth and differ in their behavior.
A group of dolphins is called a school or a pod. Male dolphins are called bulls, females are called cows and young dolphins are called calves.
Hybridization
In 1933, three hybrid dolphins beached off the Irish coast; they were hybrids between Risso's and bottlenose dolphins. This mating was later repeated in captivity, producing a hybrid calf. In captivity, a bottlenose and a rough-toothed dolphin produced hybrid offspring. A common-bottlenose hybrid lives at SeaWorld California. Other dolphin hybrids live in captivity around the world or have been reported in the wild, such as a bottlenose-Atlantic spotted hybrid. The best known hybrid is the wholphin, a false killer whale-bottlenose dolphin hybrid. The wholphin is a fertile hybrid. Two wholphins currently live at the Sea Life Park in Hawaii; the first was born in 1985 from a male false killer whale and a female bottlenose. Wholphins have also been observed in the wild.
Evolution
Dolphins are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to the Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago.
The primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic by 5–10 million years later.
Archaeoceti is a parvorder comprising ancient whales. These ancient whales are the predecessors of modern whales, stretching back to their first ancestor that spent their lives near (rarely in) the water. Likewise, the archaeocetes can be anywhere from near fully terrestrial, to semi-aquatic to fully aquatic, but what defines an archaeocete is the presence of visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. Major anatomical changes include the hearing set-up that channeled vibrations from the jaw to the earbone which occurred with Ambulocetus 49 million years ago, a streamlining of the body and the growth of flukes on the tail which occurred around 43 million years ago with Protocetus, the migration of the nasal openings toward the top of the cranium and the modification of the forelimbs into flippers which occurred with Basilosaurus 35 million years ago, and the shrinking and eventual disappearance of the hind limbs which took place with the first odontocetes and mysticetes 34 million years ago. The modern dolphin skeleton has two small, rod-shaped pelvic bones thought to be vestigial hind limbs. In October 2006, an unusual bottlenose dolphin was captured in Japan; it had small fins on each side of its genital slit, which scientists believe to be an unusually pronounced development of these vestigial hind limbs.
Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 million years ago. Around 40 million years ago, a common ancestor between the two branched off into cetacea and anthracotheres; anthracotheres became extinct at the end of the Pleistocene two-and-a-half million years ago, eventually leaving only one surviving lineage: the two species of hippo.
Anatomy
Dolphins have torpedo-shaped bodies with generally non-flexible necks, limbs modified into flippers, a tail fin, and bulbous heads. Dolphin skulls have small eye orbits, long snouts, and eyes placed on the sides of its head; they lack external ear flaps. Dolphins range in size from the long and Maui's dolphin to the and orca. Overall, they tend to be dwarfed by other Cetartiodactyls. Several species have female-biased sexual dimorphism, with the females being larger than the males.
Dolphins have conical teeth, as opposed to porpoises' spade-shaped teeth. These conical teeth are used to catch swift prey such as fish, squid or large mammals, such as seals.
Breathing involves expelling stale air from their blowhole, in an upward blast, which may be visible in cold air, followed by inhaling fresh air into the lungs. Dolphins have rather small, unidentifiable spouts.
All dolphins have a thick layer of blubber, thickness varying on climate. This blubber can help with buoyancy, protection to some extent as predators would have a hard time getting through a thick layer of fat, and energy for leaner times; the primary usage for blubber is insulation from the harsh climate. Calves, generally, are born with a thin layer of blubber, which develops at different paces depending on the habitat.
Dolphins have a two-chambered stomach that is similar in structure to terrestrial carnivores. They have fundic and pyloric chambers.
Dolphins' reproductive organs are located inside the body, with genital slits on the ventral (belly) side. Males have two slits, one concealing the penis and one further behind for the anus. Females have one genital slit, housing the vagina and the anus, with a mammary slit on either side.
Integumentary system
The integumentary system is an organ system mostly consisting of skin, hair, nails and endocrine glands. The skin of dolphins is specialized to satisfy specific requirements, including protection, fat storage, heat regulation, and sensory perception. The skin of a dolphin is made up of two parts: the epidermis and the blubber, which consists of two layers including the dermis and subcutis.
The dolphin's skin is known to have a smooth rubber texture and is without hair and glands, except mammary glands. At birth, a newborn dolphin has hairs lined up in a single band on both sides of the rostrum, which is their jaw, and usually has a total length of 16–17 cm . The epidermis is characterized by the lack of keratin and by a prominent intertwine of epidermal rete pegs and long dermal papillae. The epidermal rete pegs are the epithelial extensions that project into the underlying connective tissue in both skin and mucous membranes. The dermal papillae are finger-like projections that help adhesion between the epidermal and dermal layers, as well as providing a larger surface area to nourish the epidermal layer. The thickness of a dolphin's epidermis varies, depending on species and age.
Blubber
Blubber is found within the dermis and subcutis layer. The dermis blends gradually with the adipose layer, which is known as fat, because the fat may extend up to the epidermis border and collagen fiber bundles extend throughout the whole subcutaneous blubber which is fat found under the skin. The thickness of the subcutaneous blubber or fat depends on the dolphin's health, development, location, reproductive state, and how well it feeds. This fat is thickest on the dolphin's back and belly. Most of the dolphin's body fat is accumulated in a thick layer of blubber. Blubber differs from fat in that, in addition to fat cells, it contains a fibrous network of connective tissue.
The blubber functions to streamline the body and to form specialized locomotor structures such as the dorsal fin, propulsive fluke blades and caudal keels. There are many nerve endings that resemble small, onion-like configurations that are present in the superficial portion of the dermis. Mechanoreceptors are found within the interlocks of the epidermis with dermal ridges. There are nerve fibers in the dermis that extend to the epidermis. These nerve endings are known to be highly proprioceptive, which explains sensory perception. Proprioception, which is also known as kinesthesia, is the body's ability to sense its location, movements and actions. Dolphins are sensitive to vibrations and small pressure changes. Blood vessels and nerve endings can be found within the dermis. There is a plexus of parallel running arteries and veins in the dorsal fin, fluke, and flippers. The blubber manipulates the blood vessels to help the dolphin stay warm. When the temperature drops, the blubber constricts the blood vessels to reduce blood flow in the dolphin. This allows the dolphin to spend less energy heating its own body, ultimately keeping the animal warmer without burning energy as quickly. In order to release heat, the heat must pass the blubber layer. There are thermal windows that lack blubber, are not fully insulated and are somewhat thin and highly vascularized, including the dorsal fin, flukes, and flippers. These thermal windows are a good way for dolphins to get rid of excess heat if overheating. Additionally, in order to conserve heat, dolphins use countercurrent heat exchange. Blood flows in different directions in order for heat to transfer across membranes. Heat from warm blood leaving the heart will heat up the cold blood that is headed back to the heart from the extremities, meaning that the heart always has warm blood and it decreases the heat lost to the water in those thermal windows.
Locomotion
Dolphins have two pectoral flippers, each containing four digits, a boneless dorsal fin for stability, and a fluke for propulsion. Although dolphins do not possess external hind limbs, some possess discrete rudimentary appendages, which may contain feet and digits. Orcas are fast swimmers compared to seals, which typically cruise at ; the orca can travel at speeds up to . A study of a Pacific white-sided dolphin in an aquarium found fast burst acceleration, with the individual being able, with 5 strokes (2.5 fluke beats), to go from 5.0 m/s to 8.7 m/s in 0.7 seconds.
The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, which means most dolphins are unable to turn their heads. River dolphins have non-fused neck vertebrae and can turn their heads up to 90°. Dolphins swim by moving their fluke and rear body vertically, while their flippers are mainly used for steering. Some species porpoise out of the water, which allows them to travel faster. Their skeletal anatomy allows them to be fast swimmers. All species have a dorsal fin to prevent themselves from involuntarily spinning in the water.
Some dolphins are adapted for diving to great depths. In addition to their streamlined bodies, some can selectively slow their heart rate to conserve oxygen. Some can also re-route blood from tissue tolerant of water pressure to the heart, brain and other organs. Their hemoglobin and myoglobin store oxygen in body tissues, and they have twice as much myoglobin as hemoglobin.
Senses
A dolphin ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In dolphins, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, dolphins receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater.
Dolphins generate sounds independently of respiration using recycled air that passes through air sacs and phonic (alternatively monkey) lips. Integral to the lips are oil-filled organs called dorsal bursae that have been suggested to be homologous to the sperm whale's spermaceti organ. High-frequency clicks pass through the sound-modifying organs of the extramandibular fat body, intramandibular fat body and the melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. This allows dolphins to use echolocation for orientation. Though most dolphins do not have hair, they do have hair follicles that may perform some sensory function. Beyond locating an object, echolocation also provides the animal with an idea on an object's shape and size, though how exactly this works is not yet understood. The small hairs on the rostrum of the boto (river dolphins of South America) are believed to function as a tactile sense, possibly to compensate for the boto's poor eyesight.
A dolphin eye is relatively small for its size, yet they do retain a good degree of eyesight. As well as this, the eyes of a dolphin are placed on the sides of its head, so their vision consists of two fields, rather than a binocular view like humans have. When dolphins surface, their lens and cornea correct the nearsightedness that results from the water's refraction of light. Their eyes contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than they do cone cells. They lack short wavelength sensitive visual pigments in their cone cells, indicating a more limited capacity for color vision than most mammals. Most dolphins have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum (eye tissue behind the retina); these adaptations allow for large amounts of light to pass through the eye and, therefore, a very clear image of the surrounding area. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea.
The olfactory lobes and nerve are absent in dolphins, suggesting that they have no sense of smell.
Dolphins are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. Some have preferences for different kinds of fish, indicating some ability to taste.
Intelligence
Dolphins are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgment, and theory of mind. Cetacean spindle neurons are found in areas of the brain that are analogous to where they are found in humans, suggesting that they perform a similar function.
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalization quotient that can be used as another indication of animal intelligence. Orcas have the second largest brain mass of any animal on earth, next to the sperm whale. The brain to body mass ratio in some is second only to humans.
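For illustration, the encephalization quotient mentioned above can be sketched as observed brain mass divided by the brain mass expected from an allometric power law. The scaling constant and exponent below follow the commonly cited Jerison convention, and the example masses are rough assumptions rather than figures from this article.

# Sketch: encephalization quotient (EQ) = observed brain mass / expected brain mass,
# where the expected mass follows an allometric power law k * body_mass ** exponent.
# Constants follow the commonly cited Jerison convention (k = 0.12, exponent = 2/3);
# the example masses are rough assumptions, not figures from this article.

def encephalization_quotient(brain_g: float, body_g: float,
                             k: float = 0.12, exponent: float = 2.0 / 3.0) -> float:
    """Ratio of the actual brain mass to the allometrically expected brain mass (grams)."""
    expected = k * body_g ** exponent
    return brain_g / expected

# roughly a 200 kg dolphin with a 1.6 kg brain -> EQ of about 4
print(round(encephalization_quotient(1600.0, 200_000.0), 2))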
Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well-defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness.
The most widely used test for self-awareness in animals is the mirror test in which a mirror is introduced to an animal, and the animal is then marked with a temporary dye. If the animal then goes to the mirror in order to view the mark, it has exhibited strong evidence of self-awareness.
Some disagree with these findings, arguing that the results of these tests are open to human interpretation and susceptible to the Clever Hans effect. This test is much less definitive than when used for primates, because primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors that are said to identify self-awareness resemble existing social behaviors, and so researchers could be misinterpreting self-awareness for social responses to another individual. The researchers counter-argue that the behaviors shown are evidence of self-awareness, as they are very different from normal responses to another individual. Whereas apes can merely touch the mark on themselves with their fingers, cetaceans show less definitive behavior of self-awareness; they can only twist and turn themselves to observe the mark.
In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time video of themselves, video of another dolphin and recorded footage. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. Some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.
Behavior
Socialization
Dolphins are highly social animals, often living in pods of up to a dozen individuals, though pod sizes and structures vary greatly between species and locations. In places with a high abundance of food, pods can merge temporarily, forming a superpod; such groupings may exceed 1,000 dolphins. Membership in pods is not rigid; interchange is common. They establish strong social bonds, and will stay with injured or ill members, helping them to breathe by bringing them to the surface if needed. This altruism does not appear to be limited to their own species. The dolphin Moko in New Zealand has been observed guiding a female pygmy sperm whale together with her calf out of shallow water where they had stranded several times. They have also been seen protecting swimmers from sharks by swimming circles around the swimmers or charging the sharks to make them go away.
Dolphins communicate using a variety of clicks, whistle-like sounds and other vocalizations. Dolphins also use nonverbal communication by means of touch and posturing.
Dolphins also display culture, something long believed to be unique to humans (and possibly other primate species). In May 2005, a discovery in Australia found Indo-Pacific bottlenose dolphins (Tursiops aduncus) teaching their young to use tools. They cover their snouts with sponges to protect them while foraging. This knowledge is mostly transferred by mothers to daughters, unlike simian primates, where knowledge is generally passed on to both sexes. Using sponges as mouth protection is a learned behavior. Another learned behavior was discovered among river dolphins in Brazil, where some male dolphins use weeds and sticks as part of a sexual display.
Forms of care-giving between fellows and even for members of different species (see Moko (dolphin)) are recorded in various species – such as trying to save weakened fellows or female pilot whales holding up dead calves for long periods.
Dolphins engage in acts of aggression towards each other. The older a male dolphin is, the more likely his body is to be covered with bite scars. Male dolphins can get into disputes over companions and females. Acts of aggression can become so intense that targeted dolphins sometimes go into exile after losing a fight.
Male bottlenose dolphins have been known to engage in infanticide. Dolphins have also been known to kill porpoises (porpicide) for reasons which are not fully understood, as porpoises generally do not share the same diet as dolphins and are therefore not competitors for food supplies. The Cornwall Wildlife Trust records about one such death a year. Possible explanations include misdirected infanticide, misdirected sexual aggression or play behaviour.
Reproduction and sexuality
Dolphin copulation happens belly to belly; though many species engage in lengthy foreplay, the actual act is usually brief, but may be repeated several times within a short timespan. The gestation period varies with species; for the small tucuxi dolphin, this period is around 11 to 12 months, while for the orca, the gestation period is around 17 months. Typically dolphins give birth to a single calf, which is, unlike most other mammals, born tail first in most cases. They usually become sexually active at a young age, even before reaching sexual maturity. The age of sexual maturity varies by species and sex.
Dolphins are known to display non-reproductive sexual behavior, engaging in masturbation, stimulation of the genital area of other individuals using the rostrum or flippers, and homosexual contact.
Various species of dolphin have been known to engage in sexual behavior including copulation with dolphins of other species, and occasionally exhibit sexual behavior towards other animals, including humans. Sexual encounters may be violent, with male bottlenose dolphins sometimes showing aggressive behavior towards both females and other males. Male dolphins may also work together and attempt to herd females in estrus, keeping the females by their side by means of both physical aggression and intimidation, to increase their chances of reproductive success.
Sleeping
Generally, dolphins sleep with only one brain hemisphere in slow-wave sleep at a time, thus maintaining enough consciousness to breathe and to watch for possible predators and other threats. Sleep stages earlier in sleep can occur simultaneously in both hemispheres.
In captivity, dolphins seemingly enter a fully asleep state where both eyes are closed and there is no response to mild external stimuli. In this case, respiration is automatic; a tail kick reflex keeps the blowhole above the water if necessary. Anesthetized dolphins initially show a tail kick reflex. Though a similar state has been observed with wild sperm whales, it is not known if dolphins in the wild reach this state. The Indus river dolphin has a sleep method that is different from that of other dolphin species. Living in water with strong currents and potentially dangerous floating debris, it must swim continuously to avoid injury. As a result, this species sleeps in very short bursts which last between 4 and 60 seconds.
Feeding
There are various feeding methods among and within species, some apparently exclusive to a single population. Fish and squid are the main food, but the false killer whale and the orca also feed on other marine mammals. Orcas on occasion also hunt whale species larger than themselves. Dolphin species vary widely in the number of teeth they possess. The orca usually carries 40–56 teeth, while the popular bottlenose dolphin has anywhere from 72 to 116 conical teeth and its smaller cousin the common dolphin has 188–268 teeth; the number of teeth an individual carries also varies widely within a single species. Hybrids between common and bottlenose dolphins bred in captivity had a number of teeth intermediate between those of their parents.
One common feeding method is herding, where a pod squeezes a school of fish into a small volume, known as a bait ball. Individual members then take turns plowing through the ball, feeding on the stunned fish. Corralling is a method where dolphins chase fish into shallow water to catch them more easily. Orcas and bottlenose dolphins have also been known to drive their prey onto a beach to feed on it, a behaviour known as beach or strand feeding. Some species also whack fish with their flukes, stunning them and sometimes knocking them out of the water.
Reports of cooperative human-dolphin fishing date back to the ancient Roman author and natural philosopher Pliny the Elder. A modern human-dolphin partnership currently operates in Laguna, Santa Catarina, Brazil. Here, dolphins drive fish towards fishermen waiting along the shore and signal the men to cast their nets. The dolphins' reward is the fish that escape the nets.
In Shark Bay, Australia, dolphins catch fish by trapping them in huge conch shells. In "shelling", a dolphin brings the shell to the surface and shakes it, so that fish sheltering within fall into the dolphin's mouth. From 2007 to 2018, in 5,278 encounters with dolphins, researchers observed 19 dolphins shelling 42 times. The behavior spreads mainly within generations, rather than being passed from mother to offspring.
Vocalization
Dolphins are capable of making a broad range of sounds using nasal airsacs located just below the blowhole. Roughly three categories of sounds can be identified: frequency modulated whistles, burst-pulsed sounds, and clicks. Dolphins communicate with whistle-like sounds produced by vibrating connective tissue, similar to the way human vocal cords function, and through burst-pulsed sounds, though the nature and extent of that ability is not known. The clicks are directional and are for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest. Dolphin echolocation clicks are amongst the loudest sounds made by marine animals.
Bottlenose dolphins have been found to have signature whistles, a whistle that is unique to a specific individual. These whistles are used in order for dolphins to communicate with one another by identifying an individual. It can be seen as the dolphin equivalent of a name for humans. These signature whistles are developed during a dolphin's first year; it continues to maintain the same sound throughout its lifetime. In order to obtain each individual whistle sound, dolphins undergo vocal production learning. This consists of an experience with other dolphins that modifies the signal structure of an existing whistle sound. An auditory experience influences the whistle development of each dolphin. Dolphins are able to communicate to one another by addressing another dolphin through mimicking their whistle. The signature whistle of a male bottlenose dolphin tends to be similar to that of his mother, while the signature whistle of a female bottlenose dolphin tends to be more distinguishing. Bottlenose dolphins have a strong memory when it comes to these signature whistles, as they are able to relate to a signature whistle of an individual they have not encountered for over twenty years. Research done on signature whistle usage by other dolphin species is relatively limited. The research on other species done so far has yielded varied outcomes and inconclusive results.
Because dolphins generally live in groups, communication is necessary. Signal masking occurs when other, similar sounds (conspecific sounds) interfere with the original acoustic signal; in larger groups, individual whistle sounds are less prominent. Dolphins tend to travel in pods that range from a few individuals to many, and although they travel together, pod members do not necessarily swim right next to each other but rather within the same general vicinity. To avoid losing spread-out pod members and to keep the group traveling together, dolphins increase their whistle rates.
Jumping and playing
Dolphins frequently leap above the water surface, this being done for various reasons. When travelling, jumping can save the dolphin energy as there is less friction while in the air. This type of travel is known as porpoising. Other reasons include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites.
Dolphins show various types of playful behavior, often including objects, self-made bubble rings, other dolphins or other animals. When playing with objects or small animals, common behavior includes carrying the object or animal along using various parts of the body, passing it along to other members of the group or taking it from another member, or throwing it out of the water. Dolphins have also been observed harassing animals in other ways, for example by dragging birds underwater without showing any intent to eat them. Playful behaviour that involves another animal species with active participation of the other animal has also been observed. Playful dolphin interactions with humans are the most obvious examples, followed by those with humpback whales and dogs.
Juvenile dolphins off the coast of Western Australia have been observed chasing, capturing, and chewing on blowfish. While some reports state that the dolphins are becoming intoxicated on the tetrodotoxin in the fishes' skin, other reports have characterized this behavior as the normal curiosity and exploration of their environment in which dolphins engage.
Tail-walking
Although this behaviour is highly unusual in wild dolphins, several Indo-Pacific bottlenose dolphins (Tursiops aduncus) of the Port River, north of Adelaide, South Australia, have been observed to exhibit "tail-walking". This activity mimics a standing posture, using the tail to run backwards along the water. To perform this movement, the dolphin "forces the majority of its body vertically out of the water and maintains the position by vigorously pumping its tail".
This started in 1988 when a female named Billie, who had previously been observed swimming and frolicking with racehorses exercising in the Port River in the 1980s, became trapped in a polluted marina in a reedy estuary further down the coast. She was rescued and spent two weeks recuperating with several captive dolphins at a marine park, where she observed them performing tail-walking. After being returned to the Port River, she continued to perform this trick, and another dolphin, Wave, copied her. Wave, a very active tail-walker, passed on the skill to her daughters, Ripple and Tallula.
After Billie's premature death, Wave started tail-walking much more frequently, and other dolphins in the group were observed also performing the behaviour. In 2011, up to 12 dolphins were observed tail-walking, but only females appeared to learn the skill. In October 2021, a dolphin was observed tail-walking over a number of hours.
Scientists have found the spread of this behaviour, through up to two generations, surprising, as it brings no apparent advantage, and is very energy-consuming. A 2018 study by Mike Rossley et al. suggested:
Threats
Dolphins have few marine enemies. Some species or specific populations have none, making them apex predators. For most of the smaller species of dolphins, only a few of the larger sharks, such as the bull shark, dusky shark, tiger shark and great white shark, are a potential risk, especially for calves. Some of the larger dolphin species, especially orcas, may also prey on smaller dolphins, but this seems rare. Dolphins also suffer from a wide variety of diseases and parasites. The Cetacean morbillivirus in particular has been known to cause regional epizootics often leaving hundreds of animals of various species dead. Symptoms of infection are often a severe combination of pneumonia, encephalitis and damage to the immune system, which greatly impair the cetacean's ability to swim and stay afloat unassisted. A study at the U.S. National Marine Mammal Foundation revealed that dolphins, like humans, develop a natural form of type 2 diabetes which may lead to a better understanding of the disease and new treatments for both humans and dolphins.
Dolphins can tolerate and recover from extreme injuries such as shark bites although the exact methods used to achieve this are not known. The healing process is rapid and even very deep wounds do not cause dolphins to hemorrhage to death. Furthermore, even gaping wounds restore in such a way that the animal's body shape is restored, and infection of such large wounds seems rare.
A study published in the journal Marine Mammal Science suggests that at least some dolphins survive shark attacks using everything from sophisticated combat moves to teaming up against the shark.
Humans
Some dolphin species are at risk of extinction, especially some river dolphin species such as the Amazon river dolphin, and the Ganges and Yangtze river dolphin, which are critically or seriously endangered. A 2006 survey found no individuals of the Yangtze river dolphin. The species now appears to be functionally extinct.
Pesticides, heavy metals, plastics, and other industrial and agricultural pollutants that do not disintegrate rapidly in the environment concentrate in predators such as dolphins. Injuries or deaths due to collisions with boats, especially their propellers, are also common.
Various fishing methods, most notably purse seine fishing for tuna and the use of drift and gill nets, unintentionally kill many dolphins. Accidental by-catch in gill nets and incidental captures in antipredator nets that protect marine fish farms are common and pose a risk for mainly local dolphin populations. In some parts of the world, such as Taiji in Japan and the Faroe Islands, dolphins are traditionally considered food and are killed in harpoon or drive hunts. Dolphin meat is high in mercury and may thus pose a health danger to humans when consumed.
Queensland's shark culling program, which has killed roughly 50,000 sharks since 1962, has also killed thousands of dolphins as bycatch. "Shark control" programs in both Queensland and New South Wales use shark nets and drum lines, which entangle and kill dolphins. Queensland's "shark control" program has killed more than 1,000 dolphins in recent years, and at least 32 dolphins have been killed in Queensland since 2014. A shark culling program in KwaZulu-Natal has killed at least 2,310 dolphins.
Dolphin safe labels attempt to reassure consumers that fish and other marine products have been caught in a dolphin-friendly way. The earliest campaigns with "dolphin safe" labels were initiated in the 1980s as a result of cooperation between marine activists and the major tuna companies, and involved decreasing incidental dolphin kills by up to 50% by changing the type of nets used to catch tuna. The dolphins are netted only while fishermen are in pursuit of smaller tuna. Albacore are not netted this way, making albacore the only truly dolphin-safe tuna.
Loud underwater noises, such as those resulting from naval sonar use, live firing exercises, and certain offshore construction projects such as wind farms, may be harmful to dolphins, increasing stress, damaging hearing, and causing decompression sickness by forcing them to surface too quickly to escape the noise.
Dolphins and other smaller cetaceans are also hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats and usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru, and Japan, the most well-known practitioner of this method. By numbers, dolphins are mostly hunted for their meat, though some end up in dolphinariums. Despite the controversial nature of the hunt resulting in international criticism, and the possible health risk that the often polluted meat causes, thousands of dolphins are caught in drive hunts each year.
Impacts of climate change
Dolphins are marine mammals with broad geographic extent, making them susceptible to climate change in various ways. The most common effect of climate change on dolphins is the increasing water temperatures across the globe. This has caused a large variety of dolphin species to experience range shifts, in which the species move from their typical geographic region to cooler waters. Another side effect of increasing water temperatures is the increase in harmful algae blooms, which has caused a mass die-off of bottlenose dolphins.
In California, the 1982–83 El Niño warming event caused the near-bottom spawning market squid to leave southern California, which caused their predator, the pilot whale, to also leave. As the market squid returned six years later, Risso's dolphins came to feed on the squid. Bottlenose dolphins expanded their range from southern to central California, and stayed even after the warming event subsided. The Pacific white-sided dolphin has had a decline in population in the southwest Gulf of California, the southern boundary of their distribution. In the 1980s they were abundant with group sizes up to 200 across the entire cool season. Then, in the 2000s, only two groups were recorded with sizes of 20 and 30, and only across the central cool season. This decline was not related to a decline of other marine mammals or prey, so it was concluded to have been caused by climate change as it occurred during a period of warming. Additionally, the Pacific white-sided dolphin had an increase in occurrence on the west coast of Canada from 1984 to 1998.
In the Mediterranean, sea surface temperatures have increased, as well as salinity, upwelling intensity, and sea levels. Because of this, prey resources have been reduced causing a steep decline in the short-beaked common dolphin Mediterranean subpopulation, which was deemed endangered in 2003. This species now only exists in the Alboran Sea, due to its high productivity, distinct ecosystem, and differing conditions from the rest of the Mediterranean.
In northwest Europe, many dolphin species have experienced range shifts from the region's typically colder waters. Warm water dolphins, like the short-beaked common dolphin and striped dolphin, have expanded north of western Britain and into the northern North Sea, even in the winter, which may displace the white-beaked and Atlantic white-sided dolphin that are in that region. The white-beaked dolphin has shown an increase in the southern North Sea since the 1960s because of this. The rough-toothed dolphin and Atlantic spotted dolphin may move to northwest Europe. In northwest Scotland, white-beaked dolphins (local to the colder waters of the North Atlantic) have decreased while common dolphins (local to warmer waters) have increased from 1992 to 2003. Additionally, Fraser's dolphin, found in tropical waters, was recorded in the UK for the first time in 1996.
River dolphins are highly affected by climate change as high evaporation rates, increased water temperatures, decreased precipitation, and increased acidification occur. River dolphins typically occur at higher densities where rivers have a low index of freshwater degradation and better water quality. For the Ganges river dolphin specifically, high evaporation rates and increased flooding on the plains may lead to more human river regulation, decreasing the dolphin population.
Warmer waters reduce dolphin prey, and this decline in prey drives further decreases in dolphin populations. In the case of bottlenose dolphins, mullet populations decrease due to increasing water temperatures, which reduces the dolphins' health and thus their population. At the Shark Bay World Heritage Area in Western Australia, the local Indo-Pacific bottlenose dolphin population had a significant decline after a marine heatwave in 2011. This heatwave caused a decrease in prey, which led to a decline in dolphin reproductive rates as female dolphins could not get enough nutrients to sustain a calf. The resultant decrease in fish populations due to warming waters has also led humans to see dolphins as fishing competitors or even bait. Dusky dolphins are used as bait or killed off because they consume the same fish that humans eat and sell for profit. In the central Brazilian Amazon alone, approximately 600 pink river dolphins are killed each year to be used as bait.
Relationships with humans
In history and religion
Dolphins have long played a role in human culture.
In Greek myths, dolphins were seen invariably as helpers of humankind. Dolphins also seem to have been important to the Minoans, judging by artistic evidence from the ruined palace at Knossos. During the 2009 excavations of a major Mycenaean city at Iklaina, a striking fragment of a wall painting came to light, depicting a ship with three human figures and dolphins. Dolphins are common in Greek mythology, and many coins from ancient Greece have been found which feature a man, a boy or a deity riding on the back of a dolphin. The Ancient Greeks welcomed dolphins; spotting dolphins riding in a ship's wake was considered a good omen. In both ancient and later art, Cupid is often shown riding a dolphin. A dolphin rescued the poet Arion from drowning and carried him safe to land, at Cape Matapan, a promontory forming the southernmost point of the Peloponnesus. There was a temple to Poseidon and a statue of Arion riding the dolphin.
The Greeks reimagined the Phoenician god Melqart as Melikertês (Melicertes) and made him the son of Athamas and Ino. He drowned but was transfigured as the marine deity Palaemon, while his mother became Leucothea. (cf Ino.) At Corinth, he was so closely connected with the cult of Poseidon that the Isthmian Games, originally instituted in Poseidon's honor, came to be looked upon as the funeral games of Melicertes. Phalanthus was another legendary character brought safely to shore (in Italy) on the back of a dolphin, according to Pausanias.
Dionysus was once captured by Etruscan pirates who mistook him for a wealthy prince they could ransom. After the ship set sail Dionysus invoked his divine powers, causing vines to overgrow the ship where the mast and sails had been. He turned the oars into serpents, so terrifying the sailors that they jumped overboard, but Dionysus took pity on them and transformed them into dolphins so that they would spend their lives providing help for those in need. Dolphins were also the messengers of Poseidon and sometimes did errands for him as well. Dolphins were sacred to both Aphrodite and Apollo.
"Dolfin" was the name of an aristocratic family in the maritime Republic of Venice, whose most prominent member was the 13th-century Doge Giovanni Dolfin.
In Hindu mythology the Ganges river dolphin is associated with Ganga, the deity of the Ganges river. The dolphin is said to be among the creatures which heralded the goddess' descent from the heavens and her mount, the Makara, is sometimes depicted as a dolphin.
The Boto, a species of river dolphin that resides in the Amazon River, are believed to be shapeshifters, or encantados, who are capable of having children with human women.
There are comparatively few surviving myths of dolphins in Polynesian cultures, in spite of their maritime traditions and reverence of other marine animals such as sharks and seabirds; unlike these, they are more often perceived as food than as totemic symbols. Dolphins are most clearly represented in Rapa Nui Rongorongo, and in the traditions of the Caroline Islands they are depicted similarly to the Boto, being sexually active shapeshifters.
Heraldry
Dolphins are also used as symbols, for instance in heraldry. When heraldry developed in the Middle Ages, little was known about the biology of the dolphin and it was often depicted as a sort of fish. The stylised heraldic dolphin still conventionally follows this tradition, sometimes showing the dolphin skin covered with fish scales.
A well-known historical example was the coat of arms of the former province of the Dauphiné in southern France, from which were derived the arms and the title of the Dauphin of France, the heir to the former throne of France (the title literally meaning "The Dolphin of France").
Dolphins are present in the coat of arms of Anguilla and the coat of arms of Romania, and the coat of arms of Barbados has a dolphin supporter.
The coat of arms of the town of Poole, Dorset, England, first recorded in 1563, includes a dolphin, which was historically depicted in stylised heraldic form, but which since 1976 has been depicted naturalistically.
In captivity
Species
The renewed popularity of dolphins in the 1960s resulted in the appearance of many dolphinaria around the world, making dolphins accessible to the public. Criticism and animal welfare laws forced many to close, although hundreds still exist around the world. In the United States, the best known are the SeaWorld marine mammal parks.
In the Middle East the best known are Dolphin Bay at Atlantis, The Palm and the Dubai Dolphinarium.
Various species of dolphins are kept in captivity. These small cetaceans are more often than not kept in theme parks, such as SeaWorld, commonly known as a dolphinarium. Bottlenose dolphins are the most common species of dolphin kept in dolphinariums as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Hundreds if not thousands of bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Other species kept in captivity are spotted dolphins, false killer whales and common dolphins, Commerson's dolphins, as well as rough-toothed dolphins, but all in much lower numbers than the bottlenose dolphin. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity. An unusual and very rare hybrid dolphin, known as a wolphin, is kept at the Sea Life Park in Hawaii, which is a cross between a bottlenose dolphin and a false killer whale.
The number of orcas kept in captivity is very small, especially when compared to the number of bottlenose dolphins, with 60 captive orcas being held in aquaria. The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born.
Organizations such as the Mote Marine Laboratory rescue and rehabilitate sick, wounded, stranded or orphaned dolphins while others, such as the Whale and Dolphin Conservation and Hong Kong Dolphin Conservation Society, work on dolphin conservation and welfare. India has declared the dolphin as its national aquatic animal in an attempt to protect the endangered Ganges river dolphin. The Vikramshila Gangetic Dolphin Sanctuary has been created in the Ganges river for the protection of the animals.
Controversy
There is debate over the welfare of cetaceans in captivity, and often welfare can vary greatly dependent on the levels of care being provided at a particular facility. In the United States, facilities are regularly inspected by federal agencies to ensure that a high standard of welfare is maintained. Additionally, facilities can apply to become accredited by the Association of Zoos and Aquariums (AZA), which (for accreditation) requires "the highest standards of animal care and welfare in the world" to be achieved. Facilities such as SeaWorld and the Georgia Aquarium are accredited by the AZA. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of male orca. Captives have vastly reduced life expectancies, on average only living into their 20s, although there are examples of orcas living longer, including several over 30 years old, and two captive orcas, Corky II and Lolita, are in their mid-40s. In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behavior. Wild orcas may travel up to in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress.
Although dolphins generally interact well with humans, some attacks have occurred, most of them resulting in small injuries. Orcas, the largest species of dolphin, have been involved in fatal attacks on humans in captivity. The record-holder of documented orca fatal attacks is a male named Tilikum, who lived at SeaWorld from 1992 until his death in 2017. Tilikum has played a role in the death of three people in three different incidents (1991, 1999 and 2010). Tilikum's behaviour sparked the production of the documentary Blackfish, which focuses on the consequences of keeping orcas in captivity. There are documented incidents in the wild, too, but none of them fatal.
Fatal attacks from other species are less common, but there is a registered occurrence off the coast of Brazil in 1994, when a man died after being attacked by a bottlenose dolphin named Tião. Tião had suffered harassment by human visitors, including attempts to stick ice cream sticks down his blowhole. Non-fatal incidents occur more frequently, both in the wild and in captivity.
While dolphin attacks occur far less frequently than attacks by other sea animals, such as sharks, some scientists are worried about the careless programs of human-dolphin interaction. Dr. Andrew J. Read, a biologist at the Duke University Marine Laboratory who studies dolphin attacks, points out that dolphins are large and wild predators, so people should be more careful when they interact with them.
Several scientists who have researched dolphin behaviour have proposed that dolphins' unusually high intelligence in comparison to other animals means that dolphins should be seen as non-human persons who should have their own specific rights and that it is morally unacceptable to keep them captive for entertainment purposes or to kill them either intentionally for consumption or unintentionally as by-catch. Four countries – Chile, Costa Rica, Hungary, and India – have declared dolphins to be "non-human persons" and have banned the capture and import of live dolphins for entertainment.
Military
A number of militaries have employed dolphins for various purposes from finding mines to rescuing lost or trapped humans. The military use of dolphins drew scrutiny during the Vietnam War, when rumors circulated that the United States Navy was training dolphins to kill Vietnamese divers. The United States Navy denies that at any point dolphins were trained for combat. Dolphins are still being trained by the United States Navy for other tasks as part of the U.S. Navy Marine Mammal Program. The Russian military is believed to have closed its marine mammal program in the early 1990s. In 2000 the press reported that dolphins trained to kill by the Soviet Navy had been sold to Iran.
The military is also interested in disguising underwater communications as artificial dolphin clicks.
Therapy
Dolphins are an increasingly popular choice of animal-assisted therapy for psychological problems and developmental disabilities. For example, a 2005 study found dolphins an effective treatment for mild to moderate depression. This study was criticized on several grounds, including a lack of knowledge on whether dolphins are more effective than common pets. Reviews of this and other published dolphin-assisted therapy (DAT) studies have found important methodological flaws and have concluded that there is no compelling scientific evidence that DAT is a legitimate therapy or that it affords more than fleeting mood improvement.
Consumption
Cuisine
In some parts of the world, such as Taiji, Japan and the Faroe Islands, dolphins are traditionally considered as food, and are killed in harpoon or drive hunts.
Dolphin meat is consumed in a small number of countries worldwide, which include Japan and Peru (where it is referred to as chancho marino, or "sea pork"). While Japan may be the best-known and most controversial example, only a very small minority of the population has ever sampled it.
Dolphin meat is dense and such a dark shade of red as to appear black. Fat is located in a layer of blubber between the meat and the skin. When dolphin meat is eaten in Japan, it is often cut into thin strips and eaten raw as sashimi, garnished with onion and either horseradish or grated garlic, much as with sashimi of whale or horse meat (basashi). When cooked, dolphin meat is cut into bite-size cubes and then batter-fried or simmered in a miso sauce with vegetables. Cooked dolphin meat has a flavor very similar to beef liver.
Health concerns
There have been human health concerns associated with the consumption of dolphin meat in Japan after tests showed that dolphin meat contained high levels of mercury. There are no known cases of mercury poisoning as a result of consuming dolphin meat, though the government continues to monitor people in areas where dolphin meat consumption is high. The Japanese government recommends that children and pregnant women avoid eating dolphin meat on a regular basis.
Similar concerns exist with the consumption of dolphin meat in the Faroe Islands, where prenatal exposure to methylmercury and PCBs primarily from the consumption of pilot whale meat has resulted in neuropsychological deficits amongst children.
See also
List of individual cetaceans
References
Further reading
Carwardine, M., Whales, Dolphins and Porpoises, Dorling Kindersley, 2000. .
Williams, Heathcote, Whale Nation, New York, Harmony Books, 1988. .
External links
Conservation, research and news:
De Rohan, Anuschka. "Why dolphins are deep thinkers", The Guardian, July 3, 2003.
The Dolphin Institute
The Oceania Project, Caring for Whales and Dolphins
Tursiops.org: Current Cetacean-related news
Photos:
PBS NOVA: Dolphins: Close Encounters
Animals that use echolocation
Extant Tortonian first appearances
Paraphyletic groups
National symbols of Anguilla
National symbols of Barbados
National symbols of Greece
National symbols of Malta
Mammal common names | Dolphin | [
"Biology"
] | 11,748 | [
"Phylogenetics",
"Paraphyletic groups"
] |
9,067 | https://en.wikipedia.org/wiki/Division%20ring | In algebra, a division ring, also called a skew field (or, occasionally, a sfield), is a nontrivial ring in which division by nonzero elements is defined. Specifically, it is a nontrivial ring in which every nonzero element a has a multiplicative inverse, that is, an element usually denoted a⁻¹, such that a a⁻¹ = a⁻¹ a = 1. So, (right) division may be defined as a / b = a b⁻¹, but this notation is avoided, as one may have a b⁻¹ ≠ b⁻¹ a.
A commutative division ring is a field. Wedderburn's little theorem asserts that all finite division rings are commutative and therefore finite fields.
Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". In some languages, such as French, the word equivalent to "field" ("corps") is used for both commutative and noncommutative cases, and the distinction between the two cases is made by adding qualificatives such as "corps commutatif" (commutative field) or "corps gauche" (skew field).
All division rings are simple. That is, they have no two-sided ideal besides the zero ideal and itself.
Relation to fields and linear algebra
All fields are division rings, and every non-field division ring is noncommutative. The best known example is the ring of quaternions. If one allows only rational instead of real coefficients in the constructions of the quaternions, one obtains another division ring. In general, if is a ring and is a simple module over , then, by Schur's lemma, the endomorphism ring of is a division ring; every division ring arises in this fashion from some simple module.
Much of linear algebra may be formulated, and remains correct, for modules over a division ring instead of vector spaces over a field. Doing so, one must specify whether one is considering right or left modules, and some care is needed in properly distinguishing left and right in formulas. In particular, every module has a basis, and Gaussian elimination can be used. So, everything that can be defined with these tools works on division algebras. Matrices and their products are defined similarly. However, a matrix that is left invertible need not be right invertible, and if it is, its right inverse can differ from its left inverse.
Determinants are not defined over noncommutative division algebras, and everything that requires this concept cannot be generalized to noncommutative division algebras.
Working in coordinates, elements of a finite-dimensional right module can be represented by column vectors, which can be multiplied on the right by scalars, and on the left by matrices (representing linear maps); for elements of a finite-dimensional left module, row vectors must be used, which can be multiplied on the left by scalars, and on the right by matrices. The dual of a right module is a left module, and vice versa. The transpose of a matrix must be viewed as a matrix over the opposite division ring in order for the rule to remain valid.
Every module over a division ring is free; that is, it has a basis, and all bases of a module have the same number of elements. Linear maps between finite-dimensional modules over a division ring can be described by matrices; the fact that linear maps by definition commute with scalar multiplication is most conveniently represented in notation by writing them on the opposite side of vectors as scalars are. The Gaussian elimination algorithm remains applicable. The column rank of a matrix is the dimension of the right module generated by the columns, and the row rank is the dimension of the left module generated by the rows; the same proof as for the vector space case can be used to show that these ranks are the same and define the rank of a matrix.
Division rings are the only rings over which every module is free: a ring is a division ring if and only if every -module is free.
The center of a division ring is commutative and therefore a field. Every division ring is therefore a division algebra over its center. Division rings can be roughly classified according to whether or not they are finite dimensional or infinite dimensional over their centers. The former are called centrally finite and the latter centrally infinite. Every field is one dimensional over its center. The ring of Hamiltonian quaternions forms a four-dimensional algebra over its center, which is isomorphic to the real numbers.
Examples
As noted above, all fields are division rings.
The quaternions form a noncommutative division ring.
The subset of the quaternions a + bi + cj + dk, such that a, b, c, and d belong to a fixed subfield of the real numbers, is a noncommutative division ring. When this subfield is the field of rational numbers, this is the division ring of rational quaternions. (A small computational sketch of quaternion arithmetic follows this list of examples.)
Let σ be an automorphism of the field C of complex numbers. Let C((z, σ)) denote the ring of formal Laurent series with complex coefficients, wherein multiplication is defined as follows: instead of simply allowing coefficients to commute directly with the indeterminate z, for α in C, define zⁱ α := σⁱ(α) zⁱ for each index i. If σ is a non-trivial automorphism of complex numbers (such as the conjugation), then the resulting ring of Laurent series is a noncommutative division ring known as a skew Laurent series ring; if σ is the identity, then it features the standard multiplication of formal series. This concept can be generalized to the ring of Laurent series over any fixed field, given a nontrivial automorphism of that field. (A sketch of this twisted multiplication rule also follows below.)
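The quaternion example above can be made concrete with a short computational sketch. The following Python snippet (an illustration, not part of the source) represents a quaternion a + bi + cj + dk as a tuple and checks the two defining features of a noncommutative division ring: multiplication does not commute, yet every nonzero element has a multiplicative inverse.

```python
# Minimal sketch of the quaternion division ring H: multiplication is
# noncommutative, yet every nonzero element has a multiplicative inverse.
# A quaternion a + bi + cj + dk is represented as the tuple (a, b, c, d).

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d           # squared norm; nonzero unless q = 0
    return (a/n, -b/n, -c/n, -d/n)      # conjugate divided by the squared norm

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j), qmul(j, i))           # (0,0,0,1) vs (0,0,0,-1): ij = -ji
q = (1, 2, 3, 4)
print(qmul(q, qinv(q)))                 # (1.0, 0.0, 0.0, 0.0): q * q^-1 = 1
```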
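The twisted multiplication rule zⁱ α = σⁱ(α) zⁱ can likewise be illustrated with a small sketch. The Python snippet below (an assumption-laden illustration, not from the source) takes σ to be complex conjugation and represents elements as dictionaries mapping exponents to coefficients, so it really handles skew Laurent polynomials with finitely many terms rather than full series.

```python
# Sketch of the twisted multiplication z^i * a = sigma^i(a) * z^i, with sigma
# taken to be complex conjugation. Elements are dicts {exponent: coefficient}
# with finitely many terms (skew Laurent polynomials rather than full series).

def sigma(a, i):
    # conjugation has order 2, so sigma^i is conjugation exactly when i is odd
    return a.conjugate() if i % 2 else a

def skew_mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            # (a z^i)(b z^j) = a * sigma^i(b) * z^(i+j)
            out[i + j] = out.get(i + j, 0) + a * sigma(b, i)
    return out

p = {1: 1}             # the element z
q = {0: 1j}            # the constant i
print(skew_mul(p, q))  # {1: -1j} : z * i = conj(i) * z = -i z
print(skew_mul(q, p))  # {1: 1j}  : i * z =  i z  -> multiplication is noncommutative
```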
Main theorems
Wedderburn's little theorem: All finite division rings are commutative and therefore finite fields. (Ernst Witt gave a simple proof.)
Frobenius theorem: The only finite-dimensional associative division algebras over the reals are the reals themselves, the complex numbers, and the quaternions.
Related notions
Division rings used to be called "fields" in an older usage. In many languages, a word meaning "body" is used for division rings, in some languages designating either commutative or noncommutative division rings, while in others specifically designating commutative division rings (what we now call fields in English). A more complete comparison is found in the article on fields.
The name "skew field" has an interesting semantic feature: a modifier (here "skew") widens the scope of the base term (here "field"). Thus a field is a particular type of skew field, and not all skew fields are fields.
While division rings and algebras as discussed here are assumed to have associative multiplication, nonassociative division algebras such as the octonions are also of interest.
A near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws.
See also
Hua's identity
Notes
References
Further reading
External links
Proof of Wedderburn's Theorem at Planet Math
Grillet's Abstract Algebra, section VIII.5's characterization of division rings via their free modules.
Ring theory | Division ring | [
"Mathematics"
] | 1,460 | [
"Fields of abstract algebra",
"Ring theory"
] |
9,087 | https://en.wikipedia.org/wiki/Dynamical%20system | In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized.
The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
Overview
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
History
Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.
Formal definition
In the most general sense,
a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function
Φ: U ⊆ (T × X) → X
with
proj₂(U) = X (where proj₂ is the 2nd projection map)
and for any x in X:
Φ(0, x) = x
Φ(t₂, Φ(t₁, x)) = Φ(t₂ + t₁, x)
for t₁, t₂ + t₁ ∈ I(x) and t₂ ∈ I(Φ(t₁, x)), where we have defined the set I(x) := {t ∈ T : (t, x) ∈ U} for any x in X.
In particular, in the case that U = T × X we have for every x in X that I(x) = T and thus that Φ defines a monoid action of T on X.
The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.
We often write
Φ_x(t) := Φ(t, x) and Φ^t(x) := Φ(t, x)
if we take one of the variables as constant. The function
Φ_x: I(x) → X
is called the flow through x and its graph is called the trajectory through x. The set
γ_x := {Φ(t, x) : t ∈ I(x)}
is called the orbit through x.
The orbit through x is the image of the flow through x.
A subset S of the state space X is called Φ-invariant if for all x in S and all t in T
Φ(t, x) ∈ S.
Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S.
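As a concrete illustration of the evolution-function axioms above, the following minimal Python sketch (not part of the source) checks them numerically for the global flow Φ(t, x) = x·e^(at) on X = R with T = R, which is the flow of the equation x' = ax; the parameter a and the test values are arbitrary placeholders.

```python
import math

# Illustrative check of the evolution-function axioms for the global flow
# Phi(t, x) = x * exp(a*t) on X = R with T = R (the flow of x' = a*x).
a = 0.5

def phi(t, x):
    return x * math.exp(a * t)

x0, t1, t2 = 2.0, 0.7, 1.3
print(phi(0.0, x0))                              # axiom 1: Phi(0, x) = x
print(phi(t2, phi(t1, x0)), phi(t1 + t2, x0))    # axiom 2: Phi(t2, Phi(t1, x)) = Phi(t1 + t2, x)
```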
More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.
Geometrical definition
In the geometrical definition, a dynamical system is the tuple ⟨T, M, f⟩. T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.
Real dynamical system
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
Discrete dynamical system
A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.
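A classic semi-cascade is the repeated application of a single map. The Python sketch below (an illustration, not from the source) iterates the logistic map f(x) = r·x·(1−x) on M = [0, 1], with T the non-negative integers, so that Φ(n, x) is the n-th iterate of f; the parameter r and the starting point are arbitrary.

```python
# A semi-cascade: iterating the logistic map f(x) = r*x*(1-x) on M = [0, 1]
# with T the non-negative integers. Phi(n, x) is the n-th iterate of f.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def phi(n, x, f=logistic):
    for _ in range(n):
        x = f(x)
    return x

print([round(phi(n, 0.2), 4) for n in range(6)])   # an orbit segment through x = 0.2
```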
Cellular automaton
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
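An elementary one-dimensional cellular automaton makes this concrete. The following Python sketch (an illustration, not from the source) evolves Rule 110 on a finite ring of cells, where the row of cells is the "space" lattice and the step count plays the role of the "time" lattice; the rule number, lattice size, and initial state are arbitrary choices.

```python
# Sketch of an elementary cellular automaton (Rule 110) as a dynamical system:
# the "space" lattice is a ring of cells, the "time" lattice is the step count.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 30 + [1] + [0] * 30          # a single live cell
for _ in range(5):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```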
Multidimensional generalization
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
Compactification of a dynamical system
Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).
In compact dynamical systems the limit set of any orbit is non-empty, compact and connected.
Measure theoretical definition
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ⁻¹(σ) ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ⁻¹(σ)) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φⁿ = Φ ∘ Φ ∘ ⋯ ∘ Φ (n times) for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
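As a concrete check of the measure-preserving condition (an illustration added here, not from the original text), one can verify that the doubling map on the unit interval preserves Lebesgue measure:

```latex
% The doubling map T(x) = 2x \bmod 1 on X = [0,1) with Lebesgue measure \mu.
% For an interval \sigma = [a,b] \subset [0,1), the preimage consists of two intervals:
\[
  T^{-1}([a,b]) = \left[\tfrac{a}{2}, \tfrac{b}{2}\right] \cup
                  \left[\tfrac{a+1}{2}, \tfrac{b+1}{2}\right],
\]
% so
\[
  \mu\!\left(T^{-1}([a,b])\right) = \tfrac{b-a}{2} + \tfrac{b-a}{2} = b - a = \mu([a,b]),
\]
% and T is a measure-preserving transformation of ([0,1), \mathcal{B}, \mu).
```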
Relation to geometric definition
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
Construction of dynamical systems
The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
ẋ = v(t, x),  x(t0) = x0,
where
ẋ represents the velocity of the material point x
M is a finite dimensional manifold
v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called
autonomous, when v(t, x) = v(x)
homogeneous when v(t, 0) = 0 for all t
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above: Φ(t, x0) = x(t).
The dynamical system is then (T, M, Φ).
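In practice the evolution function Φ is rarely available in closed form and is approximated numerically. The C sketch below (an illustration; the vector field, step size, and initial condition are arbitrary choices) approximates Φ(t, x0) for the scalar autonomous field v(x) = −x using a classical fourth-order Runge–Kutta step, for which the exact answer x0·exp(−t) is available for comparison.

```c
#include <stdio.h>
#include <math.h>

/* Vector field v(t, x) of the initial value problem; here autonomous: v = -x. */
static double v(double t, double x) {
    (void)t;
    return -x;
}

/* One classical RK4 step of size h applied to dx/dt = v(t, x). */
static double rk4_step(double t, double x, double h) {
    double k1 = v(t, x);
    double k2 = v(t + 0.5 * h, x + 0.5 * h * k1);
    double k3 = v(t + 0.5 * h, x + 0.5 * h * k2);
    double k4 = v(t + h, x + h * k3);
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
}

int main(void) {
    double x0 = 1.0, x = x0, t = 0.0;
    const double h = 0.01, t_end = 2.0;

    while (t < t_end - 1e-12) {      /* approximate the flow Phi(t_end, x0) */
        x = rk4_step(t, x, h);
        t += h;
    }
    printf("numerical Phi(%.2f, %.2f) = %.8f\n", t_end, x0, x);
    printf("exact      x0*exp(-t)    = %.8f\n", x0 * exp(-t_end));
    return 0;
}
```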
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy
where 𝒢 is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.
Examples
Arnold's cat map
Baker's map is an example of a chaotic piecewise linear map
Billiards and outer billiards
Bouncing ball dynamics
Circle map
Complex quadratic polynomial
Double pendulum
Dyadic transformation
Hénon map
Irrational rotation
Kaplan–Yorke map
List of chaotic maps
Lorenz system
Quadratic map simulation system
Rössler map
Swinging Atwood's machine
Tent map
Linear dynamical systems
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
Flows
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,
v(x) = Ax + b,
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
Φ t(x0) = x0 + b t.
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,
Φ t(x0) = exp(tA) x0.
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
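A small numerical illustration of such a linear flow is sketched below (added here as an example; the matrix, times, and truncation order are arbitrary choices): it evaluates Φ t(x0) = exp(tA) x0 for a 2 × 2 matrix by summing a truncated Taylor series of the matrix exponential.

```c
#include <stdio.h>

/* Multiply two 2x2 matrices: C = A * B (a local buffer makes C = A safe). */
static void matmul(const double A[2][2], const double B[2][2], double C[2][2]) {
    double tmp[2][2];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            tmp[i][j] = A[i][0] * B[0][j] + A[i][1] * B[1][j];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            C[i][j] = tmp[i][j];
}

/* exp(tA) by a truncated Taylor series: sum_{k=0}^{30} (tA)^k / k!. */
static void expm(const double A[2][2], double t, double E[2][2]) {
    double tA[2][2] = {{t * A[0][0], t * A[0][1]}, {t * A[1][0], t * A[1][1]}};
    double term[2][2] = {{1, 0}, {0, 1}};            /* current series term, starts at I */
    E[0][0] = 1; E[0][1] = 0; E[1][0] = 0; E[1][1] = 1;
    for (int k = 1; k <= 30; k++) {                  /* 30 terms: ample for small tA */
        matmul(term, tA, term);
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++) {
                term[i][j] /= k;
                E[i][j] += term[i][j];
            }
    }
}

int main(void) {
    /* A stable spiral: eigenvalues -0.5 +/- i, so every orbit converges to the origin. */
    const double A[2][2] = {{-0.5, -1.0}, {1.0, -0.5}};
    double x[2] = {1.0, 0.0};                        /* initial point x_0 */
    for (double t = 0.0; t <= 5.0; t += 1.0) {
        double E[2][2];
        expm(A, t, E);
        double y0 = E[0][0] * x[0] + E[0][1] * x[1]; /* Phi^t(x_0) = exp(tA) x_0 */
        double y1 = E[1][0] * x[0] + E[1][1] * x[1];
        printf("t = %.0f  x(t) = (% .6f, % .6f)\n", t, y0, y1);
    }
    return 0;
}
```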
Maps
A discrete-time, affine dynamical system has the form of a matrix difference equation:
xn+1 = A xn + b,
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system An x0.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
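The C sketch below (an illustration with arbitrarily chosen A and b, not part of the original text) iterates such an affine map in the plane; because both eigenvalues of A lie inside the unit circle, the iterates approach the fixed point of the map.

```c
#include <stdio.h>

int main(void) {
    /* Affine map x_{n+1} = A x_n + b; the eigenvalues of A (0.5 and 0.8) lie inside the unit circle. */
    const double A[2][2] = {{0.5, 0.0}, {0.0, 0.8}};
    const double b[2] = {1.0, 1.0};
    /* The fixed point solves x* = A x* + b, i.e. x* = (I - A)^{-1} b = (2, 5). */
    double x[2] = {10.0, -3.0};          /* arbitrary initial condition */

    for (int n = 0; n <= 25; n += 5) {
        printf("n = %2d  x = (%.6f, %.6f)\n", n, x[0], x[1]);
        for (int k = 0; k < 5; k++) {    /* advance five iterations between printouts */
            double y0 = A[0][0] * x[0] + A[0][1] * x[1] + b[0];
            double y1 = A[1][0] * x[0] + A[1][1] * x[1] + b[1];
            x[0] = y0;
            x[1] = y1;
        }
    }
    return 0;
}
```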
There are also many other discrete dynamical systems.
Local dynamics
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
Rectification
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
Near periodic orbits
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x²), so a change of coordinates h can only be expected to simplify F to its linear part
h−1 ∘ F ∘ h(x) = J · x.
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
Conjugation results
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not on the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
Bifurcation theory
When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
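The Feigenbaum period-doubling cascade can be observed directly in the logistic map. The C sketch below (an illustration; the parameter values, initial condition, and iteration counts are arbitrary choices) sweeps the parameter, discards a transient, and prints successive long-run values, which cycle through 1, 2, 4, and 8 distinct points as the parameter crosses successive bifurcation values.

```c
#include <stdio.h>

int main(void) {
    /* Logistic map x -> r x (1 - x); sample a few parameter values around
       the first period-doubling bifurcations (r = 3 and r ~ 3.449). */
    const double params[] = {2.8, 3.2, 3.5, 3.56};
    const int nparams = sizeof(params) / sizeof(params[0]);

    for (int p = 0; p < nparams; p++) {
        double r = params[p];
        double x = 0.4;                      /* arbitrary initial condition */
        for (int n = 0; n < 2000; n++)       /* discard the transient */
            x = r * x * (1.0 - x);

        printf("r = %.2f :", r);
        for (int n = 0; n < 8; n++) {        /* print 8 successive long-run values */
            printf(" %.4f", x);
            x = r * x * (1.0 - x);
        }
        printf("\n");
    }
    return 0;
}
```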
Ergodic systems
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ t(A) and invariance of the phase space means that
vol(A) = vol(Φ t(A)).
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
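A numerical illustration of this equality of time and space averages (added here as an example, not part of the original text) uses the logistic map at parameter 4, whose invariant density 1/(π√(x(1 − x))) is symmetric about 1/2, so the space average of the observable a(x) = x equals 1/2; the C sketch below compares it with the time average along a single trajectory.

```c
#include <stdio.h>

int main(void) {
    /* Logistic map at r = 4 (chaotic); time-average the observable a(x) = x. */
    double x = 0.613;                 /* arbitrary "typical" initial condition */
    const long N = 10000000;          /* number of iterates in the time average */
    double sum = 0.0;

    for (long n = 0; n < N; n++) {
        sum += x;                     /* accumulate the observable along the orbit */
        x = 4.0 * x * (1.0 - x);
    }

    printf("time average of x  : %.6f\n", sum / N);
    printf("space average of x : %.6f\n", 0.5);   /* integral of x against the invariant density */
    return 0;
}
```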
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator U t, the transfer operator,
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U.
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
Nonlinear dynamical systems and chaos
Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
That complex systems exhibit chaotic behavior is not what is surprising: meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
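Sensitive dependence on initial conditions can be demonstrated in a few lines. The C sketch below (an illustration; the initial separation and parameters are arbitrary choices) follows two nearby initial conditions under the logistic map at parameter 4 and prints their separation, which grows roughly exponentially until it saturates at the size of the attractor.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Two initial conditions separated by 1e-12 under the chaotic logistic map x -> 4x(1-x). */
    double x = 0.3, y = 0.3 + 1e-12;

    for (int n = 0; n <= 50; n++) {
        if (n % 5 == 0)
            printf("n = %2d  |x - y| = %.3e\n", n, fabs(x - y));
        x = 4.0 * x * (1.0 - x);
        y = 4.0 * y * (1.0 - y);
    }
    return 0;
}
```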
Solutions of finite duration
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics, so solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for differential equations with a Lipschitz continuous right-hand side, by the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation
y′ = −sgn(y) √|y|,  y(0) = 1,
admits the finite-duration solution
y(t) = (1/4) (1 − t/2 + |1 − t/2|)²,
which is zero for t ≥ 2 and is not Lipschitz continuous at its ending time t = 2.
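A short check that the displayed function really solves the equation (a verification added here, using only the formulas above):

```latex
% For 0 <= t < 2:  1 - t/2 > 0, so
\[
  y(t) = \tfrac{1}{4}\bigl(1 - \tfrac{t}{2} + \bigl|1 - \tfrac{t}{2}\bigr|\bigr)^2
       = \bigl(1 - \tfrac{t}{2}\bigr)^2 > 0,
  \qquad
  y'(t) = -\bigl(1 - \tfrac{t}{2}\bigr) = -\operatorname{sgn}(y)\sqrt{|y|}.
\]
% For t >= 2:  y(t) = 0 and y'(t) = 0 = -\operatorname{sgn}(0)\sqrt{0}, so the solution
% stays at zero forever after the ending time t = 2, where y fails to be Lipschitz continuous.
```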
See also
Behavioral modeling
Cognitive modeling
Complex dynamics
Dynamic approach to second language development
Feedback passivation
Infinite compositions of analytic functions
List of dynamical system topics
Oscillation
People in systems and control
Sharkovskii's theorem
Conley's fundamental theorem of dynamical systems
System dynamics
Systems theory
Principle of maximum caliber
References
Further reading
Works providing a broad coverage:
Encyclopaedia of Mathematical Sciences () has a sub-series on dynamical systems with reviews of current research.
Introductory texts with a unique perspective:
Textbooks
Popularizations:
External links
Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems.
Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science
Online books or lecture notes
Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level.
Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems.
Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view.
Learning Dynamical Systems. Tutorial on learning dynamical systems.
Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl
Research groups
Dynamical Systems Group Groningen, IWI, University of Groningen.
Chaos @ UMD. Concentrates on the applications of dynamical systems.
, SUNY Stony Brook. Lists of conferences, researchers, and some open problems.
Center for Dynamics and Geometry, Penn State.
Control and Dynamical Systems, Caltech.
Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL).
Center for Dynamical Systems, University of Bremen
Systems Analysis, Modelling and Prediction Group, University of Oxford
Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon
Dynamical Systems , IMPA, Instituto Nacional de Matemática Pura e Applicada.
Nonlinear Dynamics Workgroup , Institute of Computer Science, Czech Academy of Sciences.
UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia.
Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara.
Systems theory
Mathematical and quantitative methods (economics) | Dynamical system | [
"Physics",
"Mathematics"
] | 7,304 | [
"Mechanics",
"Dynamical systems"
] |
9,101 | https://en.wikipedia.org/wiki/Device%20driver | In the context of an operating system, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.
A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device (drives it). Once the device sends data back to the driver, the driver may invoke routines in the original calling program.
Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.
Purpose
The main purpose of device drivers is to provide abstraction by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write higher-level application code independently of whatever specific hardware the end-user is using.
For example, a high-level application for interacting with a serial port may simply have two functions for "send data" and "receive data". At a lower level, a device driver implementing these functions would communicate to the particular serial port controller installed on a user's computer. The commands needed to control a 16550 UART are much different from the commands needed to control an FTDI serial port converter, but each hardware-specific device driver abstracts these details into the same (or similar) software interface.
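One common way to express this abstraction in C is a table of function pointers that every hardware-specific driver fills in, so application code only ever calls the generic entry points. The sketch below is purely illustrative: the interface name, the backend functions, and the log messages are invented for this example and do not correspond to any real driver API.

```c
#include <stdio.h>
#include <stddef.h>

/* Generic serial-port interface seen by applications (hypothetical). */
struct serial_driver {
    const char *name;
    int (*send)(const unsigned char *buf, size_t len);
    int (*receive)(unsigned char *buf, size_t len);
};

/* Hardware-specific backend #1: pretend 16550 UART (illustrative stub). */
static int uart16550_send(const unsigned char *buf, size_t len) {
    (void)buf;
    printf("[16550] writing %zu bytes to the UART transmit FIFO\n", len);
    return 0;
}
static int uart16550_receive(unsigned char *buf, size_t len) {
    (void)buf;
    printf("[16550] reading %zu bytes from the UART receive FIFO\n", len);
    return 0;
}

/* Hardware-specific backend #2: pretend FTDI USB-serial converter (illustrative stub). */
static int ftdi_send(const unsigned char *buf, size_t len) {
    (void)buf;
    printf("[FTDI] submitting %zu bytes as a USB bulk-out transfer\n", len);
    return 0;
}
static int ftdi_receive(unsigned char *buf, size_t len) {
    (void)buf;
    printf("[FTDI] requesting %zu bytes via a USB bulk-in transfer\n", len);
    return 0;
}

static const struct serial_driver drivers[] = {
    {"16550 UART", uart16550_send, uart16550_receive},
    {"FTDI converter", ftdi_send, ftdi_receive},
};

int main(void) {
    unsigned char msg[] = "hello";
    unsigned char in[8];
    /* Application code uses the same two calls regardless of the hardware behind them. */
    for (size_t i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++) {
        printf("using driver: %s\n", drivers[i].name);
        drivers[i].send(msg, sizeof(msg) - 1);
        drivers[i].receive(in, sizeof(in));
    }
    return 0;
}
```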
Development
Writing a device driver requires an in-depth understanding of how the hardware and the software of a given platform function. Because drivers require low-level access to hardware functions in order to operate, drivers typically operate in a highly privileged environment and can cause system operational issues if something goes wrong. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system. Even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it more difficult and dangerous to diagnose problems.
The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies. This is because they have better information than most outsiders about the design of their hardware. Moreover, it was traditionally considered in the hardware manufacturer's interest to guarantee that their clients can use their hardware in an optimal way. Typically, the Logical Device Driver (LDD) is written by the operating system vendor, while the Physical Device Driver (PDD) is implemented by the device vendor. However, in recent years, non-vendors have written numerous device drivers for proprietary devices, mainly for use with free and open source operating systems. In such cases, it is important that the hardware manufacturer provide information on how the device communicates. Although this information can instead be learned by reverse engineering, this is much more difficult with hardware than it is with software.
Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, called Windows Driver Frameworks (WDF). This includes User-Mode Driver Framework (UMDF) that encourages development of certain types of drivers—primarily those that implement a message-based protocol for communicating with their devices—as user-mode drivers. If such drivers malfunction, they do not cause system instability. The Kernel-Mode Driver Framework (KMDF) model continues to allow development of kernel-mode device drivers but attempts to provide standard implementations of functions that are known to cause problems, including cancellation of I/O operations, power management, and plug-and-play device support.
Apple has an open-source framework for developing drivers on macOS, called I/O Kit.
In Linux environments, programmers can build device drivers as parts of the kernel, separately as loadable modules, or as user-mode drivers (for certain types of devices where kernel interfaces exist, such as for USB devices). Makedev includes a list of the devices in Linux, including ttyS (terminal), lp (parallel port), hd (disk), loop, and sound (these include mixer, sequencer, dsp, and audio).
Microsoft Windows .sys files and Linux .ko files can contain loadable device drivers. The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory.
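A minimal loadable Linux kernel module, shown below as a sketch, illustrates the load/unload life cycle of such a .ko file. The module name and messages are invented for the example; it does nothing but log, and building it requires the kernel headers and a kernel-module Makefile, which are not shown here.

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

/* Called when the module is loaded (e.g. via insmod or modprobe). */
static int __init hello_driver_init(void)
{
    pr_info("hello_driver: module loaded\n");
    return 0;                       /* 0 means successful initialization */
}

/* Called when the module is unloaded (e.g. via rmmod), freeing its kernel memory. */
static void __exit hello_driver_exit(void)
{
    pr_info("hello_driver: module unloaded\n");
}

module_init(hello_driver_init);
module_exit(hello_driver_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Example");
MODULE_DESCRIPTION("Minimal loadable module skeleton (illustrative only)");
```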
Privilege levels
Depending on the operating system, device drivers may be permitted to run at various different privilege levels. The choice of which level of privilege the drivers are in is largely decided by the type of kernel an operating system uses. An operating system that uses a monolithic kernel, such as the Linux kernel, will typically run device drivers with the same privilege as all other kernel objects. By contrast, a system designed around microkernel, such as Minix, will place drivers as processes independent from the kernel but that use it for essential input-output functionalities and to pass messages between user programs and each other.
On Windows NT, a system with a hybrid kernel, it is common for device drivers to run in either kernel-mode or user-mode.
The most common mechanism for segregating memory into various privilege levels is via protection rings. On many systems, such as those with x86 and ARM processors, switching between rings imposes a performance penalty, a factor that operating system developers and embedded software engineers consider when creating drivers for devices which are preferred to be run with low latency, such as network interface cards. The primary benefit of running a driver in user mode is improved stability since a poorly written user-mode device driver cannot crash the system by overwriting kernel memory.
Applications
Because of the diversity of hardware and operating systems, drivers operate in many different environments. Drivers may interface with:
Printers
Video adapters
Network cards
Sound cards
PC chipsets
Power and battery management
Local buses of various sorts—in particular, for bus mastering on modern systems
Low-bandwidth I/O buses of various sorts (for pointing devices such as mice, keyboards, etc.)
Computer storage devices such as hard disk, CD-ROM, and floppy disk buses (ATA, SATA, SCSI, SAS)
Implementing support for different file systems
Image scanners
Digital cameras
Digital terrestrial television tuners
Radio frequency communication transceiver adapters for wireless personal area networks, as used for short-distance, low-rate wireless communication in home automation (such as Bluetooth Low Energy (BLE), Thread, Zigbee, and Z-Wave).
IrDA adapters
Common levels of abstraction for device drivers include:
For hardware:
Interfacing directly
Writing to or reading from a device control register
Using some higher-level interface (e.g. Video BIOS)
Using another lower-level device driver (e.g. file system drivers using disk drivers)
Simulating work with hardware, while doing something entirely different
For software:
Allowing the operating system direct access to hardware resources
Implementing only primitives
Implementing an interface for non-driver software (e.g. TWAIN)
Implementing a language, sometimes quite high-level (e.g. PostScript)
So choosing and installing the correct device drivers for given hardware is often a key component of computer system configuration.
Virtual device drivers
Virtual device drivers represent a particular variant of device drivers. They are used to emulate a hardware device, particularly in virtualization environments, for example when a DOS program is run on a Microsoft Windows computer or when a guest operating system is run on, for example, a Xen host. Instead of enabling the guest operating system to communicate with hardware, virtual device drivers take the opposite role and emulate a piece of hardware, so that the guest operating system and its drivers running inside a virtual machine can have the illusion of accessing real hardware. Attempts by the guest operating system to access the hardware are routed to the virtual device driver in the host operating system as e.g., function calls. The virtual device driver can also send simulated processor-level events like interrupts into the virtual machine.
Virtual devices may also operate in a non-virtualized environment. For example, a virtual network adapter is used with a virtual private network, while a virtual disk device is used with iSCSI. A well-known example of a virtual device driver is Daemon Tools.
There are several variants of virtual device drivers, such as VxDs, VLMs, and VDDs.
Open source drivers
Graphics device driver
Printers: CUPS
RAIDs: CCISS (Compaq Command Interface for SCSI-3 Support)
Scanners: SANE
Video: Vidix, Direct Rendering Infrastructure
Solaris descriptions of commonly used device drivers:
fas: Fast/wide SCSI controller
hme: Fast (10/100 Mbit/s) Ethernet
isp: Differential SCSI controllers and the SunSwift card
glm: (Gigabaud Link Module) UltraSCSI controllers
scsi: Small Computer System Interface (SCSI) devices
sf: soc+ or socal Fibre Channel Arbitrated Loop (FCAL)
soc: SPARC Storage Array (SSA) controllers and the control device
socal: Serial optical controllers for FCAL (soc+)
APIs
Windows Display Driver Model (WDDM) – the graphic display driver architecture for Windows Vista and later.
Unified Audio Model (UAM)
Windows Driver Foundation (WDF)
Declarative Componentized Hardware (DCH) - Universal Windows Platform driver
Windows Driver Model (WDM)
Network Driver Interface Specification (NDIS) – a standard network card driver API
Advanced Linux Sound Architecture (ALSA) – the standard Linux sound-driver interface
Scanner Access Now Easy (SANE) – a public-domain interface to raster-image scanner-hardware
Installable File System (IFS) – a filesystem API for IBM OS/2 and Microsoft Windows NT
Open Data-Link Interface (ODI) – network card API similar to NDIS
Uniform Driver Interface (UDI) – a cross-platform driver interface project
Dynax Driver Framework (dxd) – C++ open source cross-platform driver framework for KMDF and IOKit
Identifiers
A device on the PCI bus or USB is identified by two IDs which consist of two bytes each. The vendor ID identifies the vendor of the device. The device ID identifies a specific device from that manufacturer/vendor.
A PCI device often has an ID pair for the main chip of the device, and also a subsystem ID pair that identifies the vendor of the card or board, which may be different from the chip manufacturer.
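On Linux these identifiers are exposed to user space through sysfs, so they can be read without writing any kernel code. The C sketch below (an illustration; the PCI bus address passed on the command line is an assumption about the local machine) prints the vendor and device ID of one PCI device from /sys/bus/pci/devices/.

```c
#include <stdio.h>

/* Read a one-line hex attribute such as "0x8086" from sysfs and print it. */
static int print_attr(const char *addr, const char *attr)
{
    char path[256], value[32];
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", addr, attr);

    FILE *f = fopen(path, "r");
    if (!f || !fgets(value, sizeof(value), f)) {
        if (f)
            fclose(f);
        fprintf(stderr, "could not read %s\n", path);
        return -1;
    }
    fclose(f);
    printf("%-6s = %s", attr, value);   /* the sysfs value already ends with a newline */
    return 0;
}

int main(int argc, char **argv)
{
    /* Usage: ./pciid 0000:00:1f.3   (a PCI bus address present on the local system) */
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pci-bus-address>\n", argv[0]);
        return 1;
    }
    print_attr(argv[1], "vendor");
    print_attr(argv[1], "device");
    return 0;
}
```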
Security
Computers often have many diverse and customized device drivers running in their operating system (OS) kernel which often contain various bugs and vulnerabilities, making them a target for exploits. Bring Your Own Vulnerable Driver (BYOVD) uses signed, old drivers that contain flaws that allow hackers to insert malicious code into the kernel.
Drivers that may be vulnerable include those for WiFi and Bluetooth, gaming/graphics drivers, and drivers for printers.
There is a lack of effective kernel vulnerability detection tools, especially for closed-source OSes such as Microsoft Windows where the source code of the device drivers is mostly not public (open source) and drivers often have many privileges.
A group of security researchers considers the lack of isolation as one of the main factors undermining kernel security, and published an isolation framework to protect operating system kernels, primarily the monolithic Linux kernel whose drivers they say get ~80,000 commits per year.
See also
Driver (software)
Class driver
Device driver synthesis and verification
Driver wrapper
Free software
Firmware
Loadable kernel module
Makedev
Microcontroller
Open-source hardware
Printer driver
Replicant (operating system)
udev (userspace /dev)
References
External links
Windows Hardware Dev Center
Linux Hardware Compatibility Lists and Linux Drivers
Understanding Modern Device Drivers(Linux)
BinaryDriverHowto, Ubuntu.
Linux Drivers Source
Linux drivers
Computing terminology
Windows NT kernel | Device driver | [
"Technology"
] | 2,443 | [
"Computing terminology"
] |