| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
4,734 | https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality | In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of $1 + x$. It is often employed in real analysis. It has several useful variants:
Integer exponent
Case 1: $(1+x)^r \geq 1 + rx$ for every integer $r \geq 1$ and real number $x \geq -1$. The inequality is strict if $x \neq 0$ and $r \geq 2$.
Case 2: $(1+x)^r \geq 1 + rx$ for every integer $r \geq 0$ and every real number $x \geq -2$.
Case 3: $(1+x)^r \geq 1 + rx$ for every even integer $r \geq 0$ and every real number $x$.
Real exponent
$(1+x)^r \geq 1 + rx$ for every real number $r \geq 1$ and $x \geq -1$. The inequality is strict if $x \neq 0$ and $r \neq 1$.
$(1+x)^r \leq 1 + rx$ for every real number $0 \leq r \leq 1$ and $x \geq -1$.
History
Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often.
According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis".
Proof for integer exponent
The first case has a simple inductive proof:
Suppose the statement is true for $r = k$: $(1+x)^k \geq 1 + kx$.
Then it follows that $(1+x)^{k+1} = (1+x)(1+x)^k \geq (1+x)(1+kx) = 1 + (k+1)x + kx^2 \geq 1 + (k+1)x$, using $1 + x \geq 0$.
Bernoulli's inequality can be proved for case 2, in which $r$ is a non-negative integer and $x \geq -2$, using mathematical induction in the following form:
we prove the inequality for $r \in \{0, 1\}$,
from validity for some r we deduce validity for $r + 2$.
For $r = 0$,
$(1+x)^0 \geq 1 + 0 \cdot x$ is equivalent to $1 \geq 1$, which is true.
Similarly, for $r = 1$ we have $(1+x)^1 = 1 + x \geq 1 + x$.
Now suppose the statement is true for $r = k$: $(1+x)^k \geq 1 + kx$.
Then it follows that $(1+x)^{k+2} = (1+x)^k (1+x)^2 \geq (1 + kx)(1+x)^2 = 1 + (k+2)x + kx^2(x+2) + x^2 \geq 1 + (k+2)x,$
since $x^2 \geq 0$ as well as $x + 2 \geq 0$. By the modified induction we conclude the statement is true for every non-negative integer $r$.
By noting that if $x < -2$, then $1 + rx$ is negative for every positive even integer $r$ (while $(1+x)^r \geq 0$), combining this with case 2 gives case 3.
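As a quick numerical sanity check, the three integer-exponent cases can be spot-checked with the following minimal Python sketch; it samples an arbitrary grid of values and is an illustration, not a proof.

```python
# Numerical spot-check of Bernoulli's inequality (integer-exponent cases).
# This only samples points on a grid; it is a sanity check, not a proof.

def bernoulli_holds(x: float, r: int) -> bool:
    """Return True if (1 + x)**r >= 1 + r*x (up to a tiny float tolerance)."""
    return (1 + x) ** r >= 1 + r * x - 1e-12

def check_cases() -> None:
    xs = [i / 10 for i in range(-40, 41)]          # sample x in [-4, 4]
    # Case 1: integer r >= 1 and x >= -1
    assert all(bernoulli_holds(x, r) for r in range(1, 12) for x in xs if x >= -1)
    # Case 2: integer r >= 0 and x >= -2
    assert all(bernoulli_holds(x, r) for r in range(0, 12) for x in xs if x >= -2)
    # Case 3: even integer r >= 0 and any real x
    assert all(bernoulli_holds(x, r) for r in range(0, 12, 2) for x in xs)
    print("All sampled cases satisfy (1 + x)**r >= 1 + r*x")

if __name__ == "__main__":
    check_cases()
```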
Generalizations
Generalization of exponent
The exponent $r$ can be generalized to an arbitrary real number as follows: if $x > -1$, then
$(1+x)^r \geq 1 + rx$ for $r \leq 0$ or $r \geq 1$, and
$(1+x)^r \leq 1 + rx$ for $0 \leq r \leq 1$.
This generalization can be proved by comparing derivatives. The strict versions of these inequalities require $x \neq 0$ and $r \neq 0, 1$.
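A compact sketch of that derivative comparison, for a real exponent $r$ and $x > -1$ (the function name $f$ and the case split below are chosen here for exposition):

```latex
% Sketch of the derivative comparison for f(x) = (1+x)^r - 1 - rx, with x > -1.
\begin{aligned}
f(x) &= (1+x)^{r} - 1 - rx, \qquad f(0) = 0,\\
f'(x) &= r\left((1+x)^{r-1} - 1\right).\\[4pt]
&\text{If } r \ge 1 \text{ or } r \le 0:\quad f'(x) \le 0 \text{ on } (-1,0),\; f'(x) \ge 0 \text{ on } (0,\infty)\\
&\qquad\Rightarrow\ f \text{ attains its minimum at } x = 0
 \ \Rightarrow\ (1+x)^{r} \ge 1 + rx.\\[4pt]
&\text{If } 0 \le r \le 1:\quad f'(x) \ge 0 \text{ on } (-1,0),\; f'(x) \le 0 \text{ on } (0,\infty)\\
&\qquad\Rightarrow\ f \text{ attains its maximum at } x = 0
 \ \Rightarrow\ (1+x)^{r} \le 1 + rx.
\end{aligned}
```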
Generalization of base
Instead of $(1+x)^n$ the inequality holds also in the form $(1+x_1)(1+x_2)\cdots(1+x_r) \geq 1 + x_1 + x_2 + \cdots + x_r$, where $x_1, x_2, \dots, x_r$ are real numbers, all greater than $-1$, all with the same sign. Bernoulli's inequality is a special case when $x_1 = x_2 = \cdots = x_r = x$. This generalized inequality can be proved by mathematical induction.
In the first step we take $r = 1$. In this case the inequality $1 + x_1 \geq 1 + x_1$ is obviously true.
In the second step we assume validity of the inequality for $r$ numbers and deduce validity for $r + 1$ numbers.
We assume that $(1+x_1)(1+x_2)\cdots(1+x_r) \geq 1 + x_1 + x_2 + \cdots + x_r$ is valid. After multiplying both sides with the positive number $(1 + x_{r+1})$ we get: $(1+x_1)(1+x_2)\cdots(1+x_r)(1+x_{r+1}) \geq (1 + x_1 + x_2 + \cdots + x_r)(1 + x_{r+1}) = 1 + x_1 + \cdots + x_r + x_{r+1} + x_{r+1}(x_1 + x_2 + \cdots + x_r).$
As $x_1, x_2, \dots, x_{r+1}$ all have the same sign, the products $x_{r+1}x_1, x_{r+1}x_2, \dots, x_{r+1}x_r$ are all non-negative numbers. So the quantity on the right-hand side can be bounded as follows: $(1 + x_1 + x_2 + \cdots + x_r)(1 + x_{r+1}) \geq 1 + x_1 + x_2 + \cdots + x_r + x_{r+1},$ which was to be shown.
Strengthened version
The following theorem presents a strengthened version of the Bernoulli inequality, incorporating additional terms to refine the estimate under specific conditions. Let the exponent $r$ be a nonnegative integer and let $x$ be a real number, with $x \geq -2$ if $r$ is odd and greater than 1. Then
with equality if and only if or .
Related inequalities
The following inequality estimates the $r$-th power of $1 + x$ from the other side. For any real numbers $x$ and $r$ with $r > 0$, one has
$(1+x)^r \leq e^{rx}$, where $e = 2.718...$ is the base of the natural logarithm. This may be proved using the inequality $1 + t \leq e^t$, which holds for every real $t$ (a derivation is sketched below).
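A short sketch of one such derivation, assuming $x > -1$ so that the real power $(1+x)^r$ is defined:

```latex
% Sketch: (1+x)^r <= e^{rx} for r > 0, assuming x > -1.
\begin{aligned}
1 + t &\le e^{t} \quad\text{for every real } t
      \quad\text{(tangent-line bound for the convex function } e^{t}\text{)},\\
\text{so}\quad 1 + x &\le e^{x},\\
\text{hence}\quad (1+x)^{r} &\le \left(e^{x}\right)^{r} = e^{rx}
      \quad\text{for every } r > 0.
\end{aligned}
```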
Alternative form
An alternative form of Bernoulli's inequality for $t \geq 1$ and $0 \leq x \leq 1$ is: $(1-x)^t \geq 1 - xt.$
This can be proved (for any integer $t \geq 1$) by using the formula for geometric series: (using $y = 1 - x$) $t = 1 + 1 + \dots + 1 \geq 1 + y + y^2 + \dots + y^{t-1} = \frac{1 - y^t}{1 - y} = \frac{1 - (1-x)^t}{x},$
or equivalently $xt \geq 1 - (1-x)^t$, which is the alternative form.
Alternative proofs
Arithmetic and geometric means
An elementary proof for $0 \leq r \leq 1$ and $x \geq -1$ can be given using weighted AM-GM.
Let $\lambda_1, \lambda_2$ be two non-negative real constants. By weighted AM-GM on $1, 1+x$ with weights $\lambda_1, \lambda_2$ respectively, we get $\frac{\lambda_1 \cdot 1 + \lambda_2 \cdot (1+x)}{\lambda_1 + \lambda_2} \geq \sqrt[\lambda_1 + \lambda_2]{1^{\lambda_1} (1+x)^{\lambda_2}}.$
Note that $\frac{\lambda_1 \cdot 1 + \lambda_2 \cdot (1+x)}{\lambda_1 + \lambda_2} = \frac{\lambda_1 + \lambda_2 + \lambda_2 x}{\lambda_1 + \lambda_2} = 1 + \frac{\lambda_2}{\lambda_1 + \lambda_2} x$
and $\sqrt[\lambda_1 + \lambda_2]{1^{\lambda_1} (1+x)^{\lambda_2}} = (1+x)^{\frac{\lambda_2}{\lambda_1 + \lambda_2}},$
so our inequality is equivalent to $1 + \frac{\lambda_2}{\lambda_1 + \lambda_2} x \geq (1+x)^{\frac{\lambda_2}{\lambda_1 + \lambda_2}}.$
After substituting $r = \frac{\lambda_2}{\lambda_1 + \lambda_2}$ (bearing in mind that this implies $0 \leq r \leq 1$) our inequality turns into $1 + rx \geq (1+x)^r,$
which is Bernoulli's inequality.
Geometric series
Bernoulli's inequality $(1+x)^r \geq 1 + rx \qquad (1)$
is equivalent to $(1+x)^r - 1 - rx \geq 0, \qquad (2)$
and by the formula for geometric series (using y = 1 + x) we get $y^r - 1 = (y - 1)\left(1 + y + y^2 + \dots + y^{r-1}\right) = x \sum_{k=0}^{r-1} (1+x)^k,$
which leads to $(1+x)^r - 1 - rx = x \left( \sum_{k=0}^{r-1} (1+x)^k - r \right) = x \sum_{k=0}^{r-1} \left( (1+x)^k - 1 \right). \qquad (3)$
Now if $x \geq 0$ then by monotony of the powers each summand $(1+x)^k - 1 \geq 0$, therefore their sum is non-negative, and hence so is its product with $x$, which proves (2).
If $-2 \leq x \leq 0$ then $|1+x| \leq 1$, so by the same arguments $(1+x)^k \leq 1$, and thus
all addends $(1+x)^k - 1$ are non-positive and hence so is their sum. Since the product of two non-positive numbers is non-negative, we get again
$x \sum_{k=0}^{r-1} \left( (1+x)^k - 1 \right) \geq 0$, that is, (2).
Binomial theorem
One can prove Bernoulli's inequality for x ≥ 0 using the binomial theorem. It is true trivially for r = 0, so suppose r is a positive integer. Then $(1+x)^r = 1 + rx + \binom{r}{2}x^2 + \dots + \binom{r}{r}x^r$. Clearly $\binom{r}{2}x^2 + \dots + \binom{r}{r}x^r \geq 0$ for $x \geq 0$, and hence $(1+x)^r \geq 1 + rx$ as required.
Using convexity
For the function is strictly convex. Therefore, for holds
and the reversed inequality is valid for and .
Another way of using convexity is to re-cast the desired inequality to $\log(1 + rx) \geq r \log(1+x)$ for real $x > -1$ and real $0 < r < 1$. This inequality can be proved using the fact that the $\log$ function is concave, and then using Jensen's inequality in the form $\log(r a + (1-r) b) \geq r \log a + (1-r) \log b$ to give: $\log(1 + rx) = \log\left(r(1+x) + (1-r) \cdot 1\right) \geq r \log(1+x) + (1-r)\log 1 = r \log(1+x),$
which, after exponentiating, is the desired inequality $1 + rx \geq (1+x)^r$.
Notes
References
External links
Bernoulli Inequality by Chris Boucher, Wolfram Demonstrations Project.
Inequalities | Bernoulli's inequality | [
"Mathematics"
] | 1,015 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
4,736 | https://en.wikipedia.org/wiki/Bastard%20Operator%20From%20Hell | The Bastard Operator From Hell (BOFH) is a fictional rogue computer operator created by Simon Travaglia, who takes out his anger on users (who are "lusers" to him) and others who pester him with their computer problems, uses his expertise against his enemies and manipulates his employer.
Several people have written stories about BOFHs, but only those by Simon Travaglia are considered canonical.
The BOFH stories were originally posted in 1992 to Usenet by Travaglia, with some being reprinted in Datamation. Since 2000 they have been published regularly in The Register (UK). Several collections of the stories have been published as books.
By extension, the term is also used to refer to any system administrator who displays the qualities of the original.
The early accounts of the BOFH took place in a university; later the scenes were set in an office workplace. In 2000 (BOFH 2k), the BOFH and his pimply-faced youth (PFY) assistant moved to a new company.
Other characters
The PFY (Pimply-Faced Youth), the assistant to the BOFH, whose real name is Stephen. He possesses a temperament similar to the BOFH's, and often either teams up with or plots against him.
The Boss (often portrayed as having no IT knowledge but believing otherwise; identity changes as successive bosses are sacked, leave, are committed, or have nasty "accidents")
CEO of the company – The PFY's uncle Brian from 1996 until 2000, when the BOFH and PFY moved to a new company.
The help desk operators, referred to as the "Helldesk" and often scolded for giving out the BOFH's personal number.
The Boss's secretary, Sharon.
The security department
George, the cleaner (an invaluable source of information to the BOFH and PFY)
Books
Influence
The protagonist in Charles Stross's The Laundry Files series of novels named himself Bob Oliver Francis Howard in reference to the BOFH. As Bob Howard is a self-chosen pseudonym, and Bob is a network manager when not working as a computational demonologist, the name is all too appropriate. In the novella Pimpf, he acquires a pimply-faced young assistant by the name of Peter-Fred Young.
BOFH is a text adventure game written by Howard A. Sherman, which took part in the 2002 Interactive Fiction Competition and was placed 26th out of 38.
Simon Travaglia
Simon Travaglia (born 1964) graduated from the University of Waikato, New Zealand in 1985. He worked as a computer operator (1985–1992) and later as the IT infrastructure manager (2004–2008) at the University of Waikato, and has been the infrastructure manager at the Waikato Innovation Park, Hamilton, New Zealand since 2008. Since 1999 he has been a freelance writer for The Register. He lives in Hautapu, New Zealand.
References
Further reading
External links
Computer humour
Internet culture
Internet slang
System administration
Fictional characters introduced in 1992
Fictional people in information technology | Bastard Operator From Hell | [
"Technology"
] | 630 | [
"Information systems",
"System administration"
] |
4,746 | https://en.wikipedia.org/wiki/Plague%20%28disease%29 | Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die.
The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum.
Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%.
Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe.
Signs and symptoms
There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph nodes if they have bubonic plague. For those with pneumonic plague, symptoms may (or may not) include a cough, pain in the chest, and haemoptysis.
Bubonic plague
When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. Y. pestis can reproduce inside cells, so even if phagocytosed, they can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade.
Y. pestis spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic.
If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing secondary septicemic plague and if the lungs are seeded, it can cause secondary pneumonic plague.
Septicemic plague
Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), causing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation/perfusion to that tissue) from the clots. DIC results in depletion of the body's clotting resources so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause red and/or black patchy rash and hemoptysis/hematemesis (coughing up/ vomiting of blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes white in the centre. Untreated, the septicemic plague is usually fatal. Early treatment with antibiotics reduces the mortality rate to between 4 and 15 per cent.
Pneumonic plague
The pneumonic form of plague arises from infection of the lungs. It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from several other respiratory illnesses; they include headache, weakness, and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days; in untreated cases, mortality is nearly 100%.
Cause
Transmission of Y. pestis to an uninfected individual is possible by any of the following means:
droplet contact – coughing or sneezing on another person
direct physical contact – touching an infected person, including sexual contact
indirect contact – usually by touching soil contamination or a contaminated surface
airborne transmission – if the microorganism can remain in the air for long periods
fecal-oral transmission – usually from contaminated food or water sources
vector borne transmission – carried by insects or other animals.
Yersinia pestis circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55° N and 40° S.
Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease in the fleas (Xenopsylla cheopis) that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that itself has been infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently, the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents or a rise in the rodent population.
A 21st-century study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales – which isolated itself during the outbreak, facilitating modern study – found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much larger proportion than previously thought.
Diagnosis
Symptoms of plague are usually non-specific, so laboratory testing is required to definitively diagnose plague. Y. pestis can be identified both under a microscope and by culturing a sample, and this is used as the reference standard to confirm that a person has a case of plague. The sample can be obtained from blood, mucus (sputum), or aspirate extracted from inflamed lymph nodes (buboes). If a person is given antibiotics before a sample is taken, if there is a delay in transporting the sample to a laboratory, or if the sample is poorly stored, there is a possibility of false negative results.
Polymerase chain reaction (PCR) may also be used to diagnose plague, by detecting the presence of bacterial genes such as the pla gene (plasminogen activator) and the caf1 gene (F1 capsule antigen). PCR testing requires a very small sample and is effective for both live and dead bacteria. For this reason, if a person receives antibiotics before a sample is collected for laboratory testing, they may have a false negative culture and a positive PCR result.
Blood tests to detect antibodies against Y. pestis can also be used to diagnose plague, however, this requires taking blood samples at different periods to detect differences between the acute and convalescent phases of F1 antibody titres.
In 2020, a study was released on rapid diagnostic tests that detect the F1 capsule antigen (F1RDT) in samples of sputum or bubo aspirate. Results show that the F1RDT rapid diagnostic test can be used for people with suspected pneumonic or bubonic plague but cannot be used in asymptomatic people. F1RDT may be useful in providing a fast result for prompt treatment and a fast public health response, as studies suggest that it is highly sensitive for both pneumonic and bubonic plague. However, both positive and negative rapid-test results need to be confirmed to establish or reject the diagnosis of a confirmed case of plague, and the result needs to be interpreted within the epidemiological context: study findings indicate that although 40 out of 40 people who had plague in a population of 1,000 were correctly diagnosed, 317 people were falsely diagnosed as positive.
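To make those figures concrete, the following small Python calculation works out the implied test characteristics, assuming the quoted numbers mean 40 true cases (all detected) and 317 false positives among 1,000 people tested; that reading of the study's denominators is an assumption.

```python
# Back-of-the-envelope calculation based on the F1RDT figures quoted above.
# Assumption: 40 true plague cases among 1,000 people tested, all 40 detected,
# plus 317 false-positive results among the 960 people without plague.

cases_detected = 40        # true positives
cases_total = 40           # all true cases in the tested population
false_positives = 317
population = 1000
non_cases = population - cases_total                          # 960

sensitivity = cases_detected / cases_total                    # 1.00
specificity = (non_cases - false_positives) / non_cases       # ~0.67
ppv = cases_detected / (cases_detected + false_positives)     # ~0.11

print(f"sensitivity ~ {sensitivity:.0%}, specificity ~ {specificity:.0%}, "
      f"positive predictive value ~ {ppv:.0%}")
# A positive predictive value of roughly 11% is one reason positive rapid-test
# results still need confirmation by culture or PCR.
```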
Prevention
Vaccination
Bacteriologist Waldemar Haffkine developed the first plague vaccine in 1897. He conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay between 1897 and 1925, reducing the plague mortality by 50–85%.
Since human plague is rare in most parts of the world as of 2023, routine vaccination is not needed other than for those at particularly high risk of exposure; it is not needed even for people living in areas with enzootic plague, meaning it occurs at regular, predictable rates in populations and specific areas, such as the western United States. It is not even indicated for most travellers to countries with known recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The United States CDC thus only recommends vaccination for (1) all laboratory and field personnel who are working with Y. pestis organisms resistant to antimicrobials; (2) people engaged in aerosol experiments with Y. pestis; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine.
Early diagnosis
Diagnosing plague early leads to a decrease in transmission or spread of the disease.
Prophylaxis
Pre-exposure prophylaxis for first responders and health care providers who will care for patients with pneumonic plague is not considered necessary as long as standard and droplet precautions can be maintained. In cases of surgical mask shortages, patient overcrowding, poor ventilation in hospital wards, or other crises, pre-exposure prophylaxis might be warranted if sufficient supplies of antimicrobials are available.
Postexposure prophylaxis should be considered for people who had close (<6 feet), sustained contact with a patient with pneumonic plague and were not wearing adequate personal protective equipment. Antimicrobial postexposure prophylaxis also can be considered for laboratory workers accidentally exposed to infectious materials and people who had close (<6 feet) or direct contact with infected animals, such as veterinary staff, pet owners, and hunters.
Specific recommendations on pre- and post-exposure prophylaxis are available in the clinical guidelines on treatment and prophylaxis of plague published in 2021.
Treatments
If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. Guidelines on treatment and prophylaxis of plague were published by the Centers for Disease Control and Prevention in 2021.
The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995. Further outbreaks in Madagascar were reported in November 2014 and October 2017.
Epidemiology
Globally about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century which resulted in more than 50 million dead. In recent years, cases have been distributed between small seasonal outbreaks which occur primarily in Madagascar, and sporadic outbreaks or isolated cases in endemic areas.
In 2022 the possible origin of all modern strains of Yersinia pestis was found in DNA from human remains in three graves located in Kyrgyzstan, dated to 1338 and 1339. The siege of Caffa in Crimea in 1346 is known to have been the first outbreak of the strains that followed, which later spread over Europe. Sequencing this DNA and comparing it to other ancient and modern strains paints a family tree of the bacterium. Bacteria affecting marmots in Kyrgyzstan today are closest to the strain found in the graves, suggesting this is also the location where plague transferred from animals to humans.
Biological weapon
The plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe detail the use of infected animal carcasses, such as cows or horses, and human carcasses, by the Xiongnu/Huns, Mongols, Turks and other groups, to contaminate enemy water supplies. Han dynasty general Huo Qubing is recorded to have died of such contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege.
In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Jani Beg. After a protracted siege during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the Black Death via their ships into the south of Europe, possibly explaining its rapid spread.
During World War II, the Japanese Army developed weaponized plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied: some by dissection, others by vivisection while still conscious. Members of the unit such as Shiro Ishii were exonerated from the Tokyo tribunal by Douglas MacArthur, but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949, during which some admitted having spread bubonic plague within a radius around the city of Changde.
Ishii innovated bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes, overcoming the problem of the explosive killing the infected animal and insect by the use of a ceramic, rather than metal, casing for the warhead. While no records survive of the actual usage of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII.
After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat.
The plague can be easily treated with antibiotics. Some countries, such as the United States, have large supplies on hand if such an attack should occur, making the threat less severe.
See also
Timeline of plague
References
Further reading
External links
WHO Health topic
CDC Plague map world distribution, publications, information on bioterrorism preparedness and response regarding plague
Symptoms, causes, pictures of bubonic plague
Airborne diseases
Bacterium-related cutaneous conditions
Biological agents
Epidemics
Insect-borne diseases
Rodent-carried diseases
Zoonoses
Zoonotic bacterial diseases
Cat diseases
Wikipedia medicine articles ready to translate | Plague (disease) | [
"Biology",
"Environmental_science"
] | 3,518 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
4,748 | https://en.wikipedia.org/wiki/Baudot%20code | The Baudot code () is an early character encoding for telegraphy invented by Émile Baudot in the 1870s. It was the predecessor to the International Telegraph Alphabet No. 2 (ITA2), the most common teleprinter code in use before ASCII. Each character in the alphabet is represented by a series of five bits, sent over a communication channel such as a telegraph wire or a radio signal by asynchronous serial communication. The symbol rate measurement is known as baud, and is derived from the same name.
History
Baudot code (ITA1)
In the table below, Columns I, II, III, IV, and V show the code; the Let. and Fig. columns show the letters and numbers for the Continental and UK versions; and the sort keys present the table in three orders: alphabetical, Gray code, and UK.
Baudot developed his first multiplexed telegraph in 1872 and patented it in 1874. In 1876, he changed from a six-bit code to a five-bit code, as suggested by Carl Friedrich Gauss and Wilhelm Weber in 1834, with equal on and off intervals, which allowed for transmission of the Roman alphabet, and included punctuation and control signals. The code itself was not patented (only the machine) because French patent law does not allow concepts to be patented.
Baudot's 5-bit code was adapted to be sent from a manual keyboard, and no teleprinter equipment was ever constructed that used it in its original form. The code was entered on a keyboard which had just five piano-type keys and was operated using two fingers of the left hand and three fingers of the right hand. Once the keys had been pressed, they were locked down until mechanical contacts in a distributor unit passed over the sector connected to that particular keyboard, at which time the keyboard was unlocked ready for the next character to be entered, with an audible click (known as the "cadence signal") to warn the operator. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute.
The table "shows the allocation of the Baudot code which was employed in the British Post Office for continental and inland services. A number of characters in the continental code are replaced by fractionals in the inland code. Code elements 1, 2 and 3 are transmitted by keys 1, 2 and 3, and these are operated by the first three fingers of the right hand. Code elements 4 and 5 are transmitted by keys 4 and 5, and these are operated by the first two fingers of the left hand."
Baudot's code became known as the International Telegraph Alphabet No. 1 (ITA1). It is no longer used.
Murray code
In 1901, Baudot's code was modified by Donald Murray (1865–1945), prompted by his development of a typewriter-like keyboard. The Murray system employed an intermediate step: an operator used a keyboard perforator to punch a paper tape and then a transmitter to send the message from the punched tape. At the receiving end of the line, a printing mechanism would print on a paper tape, and/or a reperforator would make a perforated copy of the message.
Because there was no longer a connection between the operator's hand movement and the bits transmitted, there was no concern about arranging the code to minimize operator fatigue. Instead, Murray designed the code to minimize wear on the machinery by assigning the code combinations with the fewest punched holes to the most frequently used characters. For example, the one-hole letters are E and T. The ten two-hole letters are AOINSHRDLZ, very similar to the "Etaoin shrdlu" order used in Linotype machines. Ten more letters, BCGFJMPUWY, have three holes each, and the four-hole letters are VXKQ.
The Murray code also introduced what became known as "format affectors" or "control characters": the CR (Carriage Return) and LF (Line Feed) codes. A few of Baudot's codes moved to the positions where they have stayed ever since: the NULL or BLANK and the DEL code. NULL/BLANK was used as an idle code for when no messages were being sent, but the same code was used to encode the space separation between words. Sequences of DEL codes (fully punched columns) were used at the start or end of messages or between them, which made it easier to separate distinct messages. (BELL codes could be inserted in those sequences to signal to the remote operator that a new message was coming or that transmission of a message was terminated).
Early British Creed machines also used the Murray system.
Western Union
Murray's code was adopted by Western Union which used it until the 1950s, with a few changes that consisted of omitting some characters and adding more control codes. An explicit SPC (space) character was introduced, in place of the BLANK/NULL, and a new BEL code rang a bell or otherwise produced an audible signal at the receiver. Additionally, the WRU or "Who aRe yoU?" code was introduced, which caused a receiving machine to send an identification stream back to the sender.
ITA2
In 1932, the CCITT introduced the International Telegraph Alphabet No. 2 (ITA2) code as an international standard, which was based on the Western Union code with some minor changes. The US standardized on a version of ITA2 called the American Teletypewriter code (US TTY) which was the basis for 5-bit teletypewriter codes until the debut of 7-bit ASCII in 1963.
Some code points (marked blue in the table) were reserved for national-specific usage.
The code position assigned to Null was in fact used only for the idle state of teleprinters. During long periods of idle time, the impulse rate was not synchronized between both devices (which could even be powered off or not permanently interconnected on commuted phone lines). To start a message it was first necessary to calibrate the impulse rate, a sequence of regularly timed "mark" pulses (1), by a group of five pulses, which could also be detected by simple passive electronic devices to turn on the teleprinter. This sequence of pulses generated a series of Erasure/Delete characters while also initializing the state of the receiver to the Letters shift mode. However, the first pulse could be lost, so this power on procedure could then be terminated by a single Null immediately followed by an Erasure/Delete character. To preserve the synchronization between devices, the Null code could not be used arbitrarily in the middle of messages (this was an improvement to the initial Baudot system where spaces were not explicitly differentiated, so it was difficult to maintain the pulse counters for repeating spaces on teleprinters). But it was then possible to resynchronize devices at any time by sending a Null in the middle of a message (immediately followed by an Erasure/Delete/LS control if followed by a letter, or by a FS control if followed by a figure). Sending Null controls also did not cause the paper band to advance to the next row (as nothing was punched), so this saved precious lengths of punchable paper band. On the other hand, the Erasure/Delete/LS control code was always punched and always shifted to the (initial) letters mode. According to some sources, the Null code point was reserved for country-internal usage only.
The Shift to Letters code (LS) is also usable as a way to cancel/delete text from a punched tape after it has been read, allowing the safe destruction of a message before discarding the punched band. Functionally, it can also play the same filler role as the Delete code in ASCII (or other 7-bit and 8-bit encodings, including EBCDIC for punched cards). After codes in a fragment of text have been replaced by an arbitrary number of LS codes, what follows is still preserved and decodable. It can also be used as an initiator to make sure that the decoding of the first code will not give a digit or another symbol from the figures page (because the Null code can be arbitrarily inserted near the end or beginning of a punch band, and has to be ignored, whereas the Space code is significant in text).
The cells marked as reserved for extensions (which use the LS code again a second time—just after the first LS code—to shift from the figures page to the letters shift page) have been defined to shift into a new mode. In this new mode, the letters page contains only lowercase letters, but retains access to a third code page for uppercase letters, either by encoding for a single letter (by sending LS before that letter), or locking (with FS+LS) for an unlimited number of capital letters or digits before then unlocking (with a single LS) to return to lowercase mode. The cell marked as "Reserved" is also usable (using the FS code from the figures shift page) to switch the page of figures (which normally contains digits and national lowercase letters or symbols) to a fourth page (where national letters are uppercase and other symbols may be encoded).
ITA2 is still used in telecommunications devices for the deaf (TDD), Telex, and some amateur radio applications, such as radioteletype ("RTTY"). ITA2 is also used in Enhanced Broadcast Solution, an early 21st-century financial protocol specified by Deutsche Börse, to reduce the character encoding footprint.
Nomenclature
Nearly all 20th-century teleprinter equipment used Western Union's code, ITA2, or variants thereof. Radio amateurs casually (and incorrectly) call ITA2 and its variants "Baudot", and even the American Radio Relay League's Amateur Radio Handbook does so, though in more recent editions the tables of codes correctly identify it as ITA2.
Character set
The values shown in each cell are the Unicode codepoints, given for comparison.
Original Baudot variants
Original Baudot, domestic UK
Original Baudot, Continental European
Original Baudot, ITA 1
Baudot–Murray variants
Murray Code
ITA 2 and US-TTY
Weather code
Meteorologists used a variant of ITA2 in which the figures-case symbols, except for the ten digits, BEL and a few other characters, were replaced by weather symbols:
Details
Note: This table presumes the space called "1" by Baudot and Murray is rightmost, and least significant. The way the transmitted bits were packed into larger codes varied by manufacturer. The most common solution allocates the bits from the least significant bit towards the most significant bit (leaving the three most significant bits of a byte unused).
In ITA2, characters are expressed using five bits. ITA2 uses two code sub-sets, the "letter shift" (LTRS), and the "figure shift" (FIGS). The FIGS character (11011) signals that the following characters are to be interpreted as being in the FIGS set, until this is reset by the LTRS (11111) character. In use, the LTRS or FIGS shift key is pressed and released, transmitting the corresponding shift character to the other machine. The desired letters or figures characters are then typed. Unlike a typewriter or modern computer keyboard, the shift key isn't kept depressed whilst the corresponding characters are typed. "ENQuiry" will trigger the other machine's answerback. It means "Who are you?"
CR is carriage return, LF is line feed, BEL is the bell character which rang a small bell (often used to alert operators to an incoming message), SP is space, and NUL is the null character (blank tape).
Note: the binary conversions of the codepoints are often shown in reverse order, depending on (presumably) from which side one views the paper tape. Note further that the "control" characters were chosen so that they were either symmetric or in useful pairs so that inserting a tape "upside down" did not result in problems for the equipment and the resulting printout could be deciphered. Thus FIGS (11011), LTRS (11111) and space (00100) are invariant, while CR (00010) and LF (01000), generally used as a pair, are treated the same regardless of order by page printers. LTRS could also be used to overpunch characters to be deleted on a paper tape (much like DEL in 7-bit ASCII).
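To illustrate the letters/figures shift mechanism, here is a minimal Python sketch. It relies only on the code values quoted in this article (FIGS, LTRS, space, CR, LF, R, Y); the figures-shift characters '4' and '6' in the demo table are assumed mappings included purely to show the shifting logic, and a real encoder would need the complete ITA2 table.

```python
# Minimal sketch of ITA2-style encoding with LTRS/FIGS shift state.
# Code points for FIGS, LTRS, space, CR, LF, R and Y are those quoted in the
# article; the figures-shift characters '4' and '6' below are assumed mappings
# used only to demonstrate the shifting mechanism.

FIGS, LTRS = 0b11011, 0b11111
LETTERS = {'R': 0b01010, 'Y': 0b10101, ' ': 0b00100, '\r': 0b00010, '\n': 0b01000}
FIGURES = {'4': 0b01010, '6': 0b10101, ' ': 0b00100, '\r': 0b00010, '\n': 0b01000}

def encode(text: str) -> list[int]:
    """Encode text into 5-bit code units, inserting a shift code whenever the
    character set changes. Starts in letters mode, as a receiver would after
    power-on calibration."""
    out, in_figures = [], False
    for ch in text.upper():
        if ch in LETTERS and (ch not in FIGURES or not in_figures):
            if in_figures:
                out.append(LTRS)        # shift back to the letters page
                in_figures = False
            out.append(LETTERS[ch])
        elif ch in FIGURES:
            if not in_figures:
                out.append(FIGS)        # shift to the figures page
                in_figures = True
            out.append(FIGURES[ch])
        else:
            raise ValueError(f"character {ch!r} not in this demo table")
    return out

# Example: the classic test pattern RYRY followed by the assumed figure '4'.
print([f"{code:05b}" for code in encode("RYRY 4")])
```

Encoding the test pattern "RYRY 4" yields the R and Y letter codes, the invariant space code, and then a FIGS shift inserted before the figure.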
The sequence RYRYRY... is often used in test messages, and at the start of every transmission. Since R is 01010 and Y is 10101, the sequence exercises much of a teleprinter's mechanical components at maximum stress. Also, at one time, fine-tuning of the receiver was done using two coloured lights (one for each tone). 'RYRYRY...' produced 0101010101..., which made the lights glow with equal brightness when the tuning was correct. This tuning sequence is only useful when ITA2 is used with two-tone FSK modulation, such as is commonly seen in radioteletype (RTTY) usage.
US implementations of Baudot code may differ in the addition of a few characters, such as #, & on the FIGS layer.
The Russian version of Baudot code (MTK-2) used three shift modes; the Cyrillic letter mode was activated by the character (00000). Because of the larger number of characters in the Cyrillic alphabet, the characters !, &, £ were omitted and replaced by Cyrillics, and BEL has the same code as Cyrillic letter Ю. The Cyrillic letters Ъ and Ё are omitted, and Ч is merged with the numeral 4.
See also
Bacon's cipher – A 5-bit binary encoding of the English alphabet devised by Francis Bacon in 1605.
List of information system character sets
CCIR 476
Explanatory notes
References
Further reading
MTK-2 code table
Baudot, Murray, ITA2, ITA5, etc.
External links
Amateur radio
Character encoding
Character sets
Telegraphy
1870s introductions
French inventions | Baudot code | [
"Technology"
] | 2,964 | [
"Natural language and computing",
"Character encoding"
] |
4,757 | https://en.wikipedia.org/wiki/Bestiary | A bestiary () is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. Thus the bestiary is also a reference to the symbolic language of animals in Western Christian art and literature.
History
The bestiary — the medieval book of beasts — was among the most popular illuminated texts in northern Europe during the Middle Ages (about 500–1500). Medieval Christians understood every element of the world as a manifestation of God, and bestiaries largely focused on each animal's religious meaning. Much of what is in the bestiary came from the ancient Greeks and their philosophers. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the Physiologus, which itself summarized ancient knowledge and wisdom about animals in the writings of classical authors such as Aristotle's Historia Animalium and various works by Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists.
Following the Physiologus, Saint Isidore of Seville (Book XII of the Etymologiae) and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content without interest or access to much more detail regarding the factual content. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, such as the migration of birds, were discounted by the natural philosophers of later centuries, only to be rediscovered in the modern scientific era.
Medieval bestiaries are remarkably similar in sequence of the animals of which they treat. Bestiaries were particularly popular in England and France around the 12th century and were mainly compilations of earlier texts. The Aberdeen Bestiary is one of the best known of over 50 manuscript bestiaries surviving today.
The bestiary's influence extended through the Middle Ages and the Renaissance (generally said to have begun around the 14th century in Italy) and into modern times. Bestiaries influenced early heraldry in the Middle Ages, giving ideas for charges and also for the artistic form. Bestiaries continue to give inspiration to coats of arms created in our time.
Two illuminated Psalters, the Queen Mary Psalter (British Library Ms. Royal 2B, vii) and the Isabella Psalter (State Library, Munich), contain full Bestiary cycles. The bestiary in the Queen Mary Psalter is found in the "marginal" decorations that occupy about the bottom quarter of the page, and are unusually extensive and coherent in this work. In fact the bestiary has been expanded beyond the source in the Norman bestiary of Guillaume le Clerc to ninety animals. Some are placed in the text to make correspondences with the psalm they are illustrating.
Many authors compiled their own bestiaries, combining their own observations with knowledge from earlier ones. These observations could be recorded in text as well as in illustrations. The Italian artist Leonardo da Vinci also made his own bestiary.
A volucrary is a similar collection of the symbols of birds that is sometimes found in conjunction with bestiaries. The most widely known volucrary in the Renaissance was Johannes de Cuba's Gart der Gesundheit which describes 122 birds and which was printed in 1485.
Bestiary content
The contents of medieval bestiaries were often obtained and created from combining older textual sources and accounts of animals, such as the Physiologus.
Medieval bestiaries contained detailed descriptions and illustrations of species native to Western Europe, exotic animals and what in modern times are considered to be imaginary animals. Descriptions of the animals included the physical characteristics associated with the creature, although these were often physiologically incorrect, along with the Christian morals that the animal represented. The description was then often accompanied by an artistic illustration of the animal as described in the bestiary. For example, in one bestiary the eagle is depicted in an illustration and is said to be the “king of birds.”
Bestiaries were organized in different ways based upon the sources they drew upon. The descriptions could be organized by animal groupings, such as terrestrial and marine creatures, or presented in an alphabetical manner. However, the texts gave no distinction between existing and imaginary animals. Descriptions of creatures such as dragons, unicorns, basilisks, griffins and the caladrius were common in such works and found intermingled amongst accounts of bears, boars, deer, lions, and elephants. In one source, the author explains how fables and bestiaries are closely linked to one another, as "each chapter of a bestiary, each fable in a collection, has a text and has a meaning".
This lack of separation has often been associated with the assumption that people during this time believed in what the modern period classifies as nonexistent or "imaginary creatures". However, this assumption is currently under debate, with various explanations being offered. Some scholars, such as Pamela Gravestock, have written on the theory that medieval people did not actually think such creatures existed but instead focused on the belief in the importance of the Christian morals these creatures represented, and that the importance of the moral did not change regardless if the animal existed or not. The historian of science David C. Lindberg pointed out that medieval bestiaries were rich in symbolism and allegory, so as to teach moral lessons and entertain, rather than to convey knowledge of the natural world.
Religious significance
The association between animals and religion long predates the bestiaries. Many ancient civilizations gave animals meaning within their religions or mythologies: Egypt had gods with the faces of animals, and Greece assigned symbolic animals to its gods, an example being Zeus and the eagle. Since animals were already part of religious thought before bestiaries and their lessons appeared, the bestiaries were influenced by these older civilizations and their interpretations as well as by earlier observations of animal meaning.
As most of the readers of these bestiaries were monks and clerics, the books carried major religious significance. The bestiary was used to educate young men on the correct morals they should display: every animal presented shows some sort of lesson or meaning, and much of that symbolism is carried by the animals themselves. Much of what the bestiaries propose also touches on paganism, reflecting the religious concerns of the medieval period.
One of the main 'animals' mentioned in some of the bestiaries is the dragon, which holds much significance in terms of religion and meaning. The unnatural elements of the dragon's lore show how important the church was during this time, and much of what the bestiaries say about the dragon gives a glimpse of the religious significance carried by these tales.
These bestiaries held much content of religious significance. Almost every animal can be connected in some way to a lesson from the church or a familiar religious story. With animals having held significance since ancient times, bestiaries and their contents gave context to the animals they described, whether real or mythical, and to their meanings.
Modern bestiaries
In modern times, artists such as Henri de Toulouse-Lautrec and Saul Steinberg have produced their own bestiaries. Jorge Luis Borges wrote a contemporary bestiary of sorts, the Book of Imaginary Beings, which collects imaginary beasts from bestiaries and fiction. Nicholas Christopher wrote a literary novel called "The Bestiary" (Dial, 2007) that describes a lonely young man's efforts to track down the world's most complete bestiary. John Henry Fleming's Fearsome Creatures of Florida (Pocol Press, 2009) borrows from the medieval bestiary tradition to impart moral lessons about the environment. Caspar Henderson's The Book of Barely Imagined Beings (Granta 2012, University of Chicago Press 2013), subtitled "A 21st Century Bestiary", explores how humans imagine animals in a time of rapid environmental change. In July 2014, Jonathan Scott wrote The Blessed Book of Beasts, Eastern Christian Publications, featuring 101 animals from the various translations of the Bible, in keeping with the tradition of the bestiary found in the writings of the Saints, including Saint John Chrysostom. In today's world there is a discipline called cryptozoology, which is the study of unknown species. This discipline can be linked to medieval bestiaries because it often concerns the same kinds of unknown animals, which likewise carry meaning or significance.
The lists of monsters to be found in video games (such as NetHack, Dragon Quest, and Monster Hunter), as well as some tabletop role-playing games such as Pathfinder, are often termed bestiaries.
See also
Allegory in the Middle Ages
List of medieval bestiaries
Marine counterparts of land creatures
Animal representation in Western medieval art
References
“Animal Symbolism (Illustrated).” OpenSIUC, https://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=2505&context=ocj. Accessed 5 March 2022.
Morrison, Elizabeth, and Larisa Grollemond. “An Introduction to the Bestiary, Book of Beasts in the Medieval World (article).” Khan Academy, https://www.khanacademy.org/humanities/medieval-world/beginners-guide-to-medieval-europe/manuscripts/a/an-introduction-to-the-bestiary-book-of-beasts-in-the-medieval-world. Accessed 2 March 2022.
Morrison, Elizabeth. “Beastly tales from the medieval bestiary.” The British Library, https://www.bl.uk/medieval-english-french-manuscripts/articles/beastly-tales-from-the-medieval-bestiary . Accessed 2 March 2022.
“The Renaissance | Boundless World History.” Lumen Learning, LumenCandela, https://courses.lumenlearning.com/boundless-worldhistory/chapter/the-renaissance/. Accessed 5 March 2022.
"The Medieval Bestiary", by James Grout, part of the Encyclopædia Romana.
McCulloch, Florence. (1962) Medieval Latin and French Bestiaries.
Clark, Willene B. and Meradith T. McMunn. eds. (1989) Beasts and Birds of the Middle Ages. The Bestiary and its Legacy.
Payne, Ann. (1990) Mediaeval Beasts.
George, Wilma and Brunsdon Yapp. (1991) The Naming of the Beasts: Natural History in the Medieval Bestiary.
Benton, Janetta Rebold. (1992) The Medieval Menagerie: Animals in the Art of the Middle Ages.
Lindberg, David C. (1992) The Beginnings of Western Science. The European Tradition in Philosophhical, Religious and Institutional Context, 600 B. C. to A. D. 1450
Flores, Nona C. (1993) "The Mirror of Nature Distorted: The Medieval Artist's Dilemma in Depicting Animals".
Hassig, Debra (1995) Medieval Bestiaries: Text, Image, Ideology.
Gravestock, Pamela. (1999) "Did Imaginary Animals Exist?"
Hassig, Debra, ed. (1999) The Mark of the Beast: The Medieval Bestiary in Art, Life, and Literature.
Notes
External links
The Bestiary: The Book of Beasts, T.H. White's translation of a medieval bestiary in the Cambridge University library; digitized by the University of Wisconsin–Madison libraries.
The Medieval Bestiary online, edited by David Badke.
The Bestiaire of Philippe de Thaon at the National Library of Denmark.
The Bestiary of Anne Walshe at the National Library of Denmark.
The Aberdeen Bestiary at the University of Aberdeen.
Exhibition (in English, but French version is fuller) at the Bibliothèque nationale de France
Christian Symbology Animals and their meanings in Christian texts.
Bestiairy - Monsters & Fabulous Creatures of Greek Myth & Legend with pictures
Types of illuminated manuscript
Medieval European legendary creatures
Medieval literature
Zoology | Bestiary | [
"Biology"
] | 2,650 | [
"Zoology"
] |
4,770 | https://en.wikipedia.org/wiki/Business%20ethics | Business ethics (also known as corporate ethics) is a form of applied ethics or professional ethics, that examines ethical principles and moral or ethical problems that can arise in a business environment. It applies to all aspects of business conduct and is relevant to the conduct of individuals and entire organizations. These ethics originate from individuals, organizational statements or the legal system. These norms, values, ethical, and unethical practices are the principles that guide a business.
Business ethics refers to contemporary organizational standards, principles, sets of values and norms that govern the actions and behavior of an individual in the business organization. Business ethics have two dimensions, normative business ethics or descriptive business ethics. As a corporate practice and a career specialization, the field is primarily normative. Academics attempting to understand business behavior employ descriptive methods. The range and quantity of business ethical issues reflects the interaction of profit-maximizing behavior with non-economic concerns.
Interest in business ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and within academia. For example, most major corporations today promote their commitment to non-economic values under headings such as ethics codes and social responsibility charters.
Adam Smith said in 1776, "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." Governments use laws and regulations to point business behavior in what they perceive to be beneficial directions. Ethics implicitly regulates areas and details of behavior that lie beyond governmental control. The emergence of large corporations with limited relationships and sensitivity to the communities in which they operate accelerated the development of formal ethics regimes.
Maintaining an ethical status is the responsibility of the manager of the business. According to a 1990 article in the Journal of Business Ethics, "Managing ethical behavior is one of the most pervasive and complex problems facing business organizations today."
History
Business ethics reflect the norms of each historical period. As time passes, norms evolve, causing accepted behaviors to become objectionable. Business ethics and the resulting behavior evolved as well. Business was involved in slavery, colonialism, and the Cold War.
The term 'business ethics' came into common use in the United States in the early 1970s. By the mid-1980s at least 500 courses in business ethics reached 40,000 students, using some twenty textbooks and at least ten casebooks supported by professional societies, centers and journals of business ethics. The Society for Business Ethics was founded in 1980. European business schools adopted business ethics after 1987 commencing with the European Business Ethics Network. In 1982 the first single-authored books in the field appeared.
Firms began highlighting their ethical stature in the late 1980s and early 1990s, possibly in an attempt to distance themselves from the business scandals of the day, such as the savings and loan crisis. The concept of business ethics caught the attention of academics, media and business firms by the end of the Cold War. However, criticism of business practices was attacked for infringing the freedom of entrepreneurs and critics were accused of supporting communists. This scuttled the discourse of business ethics both in media and academia. The Defense Industry Initiative on Business Ethics and Conduct (DII) was created to support corporate ethical conduct. This era began the belief and support of self-regulation and free trade, which lifted tariffs and barriers and allowed businesses to merge and divest in an increasing global atmosphere.
Religious and philosophical origins
One of the earliest written treatments of business ethics is found in the Tirukkuṛaḷ, a Tamil book dated variously from 300 BCE to the 7th century CE and attributed to Thiruvalluvar. Many verses discuss business ethics: verse 113 in particular, adapting to a changing environment in verses 474, 426, and 140, and learning the intricacies of different tasks in verses 462 and 677.
Overview
Business ethics reflects the philosophy of business, of which one aim is to determine the fundamental purposes of a company. Business purpose expresses the company's reason for existing. Modern discussion on the purpose of business has been refreshed by views from thinkers such as Richard R. Ellesworth, Peter Drucker, and Nikos Mourkogiannis. Earlier views, such as Milton Friedman's, held that the purpose of a business organization is to make profit for shareholders. Nevertheless, the purpose of maximizing shareholders' wealth often "fails to energize employees". In practice, many non-shareholders also benefit from a firm's economic activity, among them employees through contractual compensation and its broader impact, consumers by the tangible or non-tangible value derived from their purchase choices, and society as a whole through taxation and/or the company's involvement in social action when it occurs. On the other hand, if a company's purpose is to maximize shareholder returns, then sacrificing profits for other concerns is a violation of its fiduciary responsibility. Corporate entities are legal persons, but this does not mean they are legally entitled to all of the rights and liabilities of natural persons.
Ethics are the rules or standards that govern our decisions on a daily basis. Many consider "ethics" with conscience or a simplistic sense of "right" and "wrong". Others would say that ethics is an internal code that governs an individual's conduct, ingrained into each person by family, faith, tradition, community, laws, and personal mores. Corporations and professional organizations, particularly licensing boards, generally will have a written code of ethics that governs standards of professional conduct expected of all in the field.
It is important to note that "law" and "ethics" are not synonymous, nor are the "legal" and "ethical" courses of action in a given situation necessarily the same. Statutes and regulations passed by legislative bodies and administrative boards set forth the "law". Slavery once was legal in the US, but one certainly would not say enslaving another was an "ethical" act.
Economist Milton Friedman wrote that corporate executives' "responsibility ... generally will be to make as much money as possible while conforming to their basic rules of the society, both those embodied in law and those embodied in ethical custom". Friedman also said, "the only entities who can have responsibilities are individuals ... A business cannot have responsibilities. So the question is, do corporate executives, provided they stay within the law, have responsibilities in their business activities other than to make as much money for their stockholders as possible? And my answer to that is, no, they do not." This view is known as the Friedman doctrine. A multi-country 2011 survey found support for this view among the "informed public" ranging from 30 to 80%. Ronald Duska and Jacques Cory have described Friedman's argument as consequentialist or utilitarian rather than pragmatic: Friedman's argument implies that unrestrained corporate freedom would benefit the most people in the long term. Duska argued that Friedman failed to differentiate two very different aspects of business: (1) the motive of individuals, who are generally motivated by profit to participate in business, and (2) the socially sanctioned purpose of business, or the reason why people allow businesses to exist, which is to provide goods and services to people. So Friedman was wrong that making a profit is the only concern of business, Duska argued.
Peter Drucker once said, "There is neither a separate ethics of business nor is one needed", implying that standards of personal ethics cover all business situations. However, Drucker in another instance said that the ultimate responsibility of company directors is not to harm—primum non nocere.
Philosopher and author Ayn Rand put forth her idea of rational egoism, which also applies to business ethics. She stresses the position of the entrepreneur, who is responsible for his own happiness; the business is a means to that happiness, the entrepreneur is not required to serve anyone else's interests, and no one is entitled to his or her work.
Another view of business is that it must exhibit corporate social responsibility (CSR): an umbrella term indicating that an ethical business must act as a responsible citizen of the communities in which it operates even at the cost of profits or other goals. In the US and most other nations, corporate entities are legally treated as persons in some respects. For example, they can hold title to property, sue and be sued and are subject to taxation, although their free speech rights are limited. This can be interpreted to imply that they have independent ethical responsibilities. Duska argued that stakeholders expect a business to be ethical and that violating that expectation must be counterproductive for the business.
Ethical issues include the rights and duties between a company and its employees, suppliers, customers and neighbors, and its fiduciary responsibility to its shareholders. Issues concerning relations between different companies include hostile take-overs and industrial espionage. Related issues include corporate governance; corporate social entrepreneurship; political contributions; legal issues such as the ethical debate over introducing a crime of corporate manslaughter; and the marketing of corporations' ethics policies.
According to research published by the Institute of Business Ethics and Ipsos MORI in late 2012, the three major areas of public concern regarding business ethics in Britain are executive pay, corporate tax avoidance and bribery and corruption.
Ethical standards of an entire organization can be damaged if a corporate psychopath is in charge. This affects not only the company and its performance but also the employees who work under such a person. Corporate psychopaths rise in a company through manipulation, scheming, and bullying, in ways that hide their true character and intentions.
Functional business areas
Finance
Fundamentally, finance is a social science discipline. The discipline borders behavioral economics, sociology, economics, accounting and management. It concerns technical issues such as the mix of debt and equity, dividend policy, the evaluation of alternative investment projects, options, futures, swaps, and other derivatives, portfolio diversification and many others. Finance is often mistakenly regarded as a discipline free from ethical burdens. The 2008 financial crisis caused critics to challenge the ethics of the executives in charge of U.S. and European financial institutions and financial regulatory bodies. Finance ethics is overlooked for another reason—issues in finance are often addressed as matters of law rather than ethics.
Finance paradigm
Aristotle said, "the end and purpose of the polis is the good life". Adam Smith characterized the good life in terms of material goods and intellectual and moral excellences of character. Smith in his The Wealth of Nations commented, "All for ourselves, and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind." However, a section of economists, influenced by the ideology of neoliberalism, interpreted the objective of economics to be maximization of economic growth through accelerated consumption and production of goods and services. Neoliberal ideology promoted finance from its position as a component of economics to its core. Proponents of the ideology hold that unrestricted financial flows, if redeemed from the shackles of "financial repressions", best help impoverished nations to grow. The theory holds that open financial systems accelerate economic growth by encouraging foreign capital inflows, thereby enabling higher levels of savings, investment, employment, productivity and "welfare", along with containing corruption. Neoliberals recommended that governments open their financial systems to the global market with minimal regulation over capital flows. The recommendations, however, met with criticism from various schools of ethical philosophy. Some pragmatic ethicists found these claims to be unfalsifiable and a priori, although neither of these makes the recommendations false or unethical per se. Raising economic growth to the highest value necessarily means that welfare is subordinate, although advocates dispute this, saying that economic growth provides more welfare than known alternatives. Since history shows that neither regulated nor unregulated firms always behave ethically, neither regime offers an ethical panacea.
Neoliberal recommendations to developing countries to unconditionally open up their economies to transnational finance corporations were fiercely contested by some ethicists. The claim that deregulation and the opening up of economies would reduce corruption was also contested.
Dobson observes, "a rational agent is simply one who pursues personal material advantage ad infinitum. In essence, to be rational in finance is to be individualistic, materialistic, and competitive. Business is a game played by individuals, as with all games the object is to win, and winning is measured in terms solely of material wealth. Within the discipline, this rationality concept is never questioned, and has indeed become the theory-of-the-firm's sine qua non". Financial ethics is in this view a mathematical function of shareholder wealth. Such simplifying assumptions were once necessary for the construction of mathematically robust models. However, signalling theory and agency theory extended the paradigm to greater realism.
Other issues
Fairness in trading practices, trading conditions, financial contracting, sales practices, consultancy services, tax payments, internal audit, external audit and executive compensation also fall under the umbrella of finance and accounting. Particular corporate ethical/legal abuses include: creative accounting, earnings management, misleading financial analysis, insider trading, securities fraud, bribery/kickbacks and facilitation payments. Outside of corporations, bucket shops and forex scams are criminal manipulations of financial markets. Cases include the accounting scandals at Enron, WorldCom and Satyam.
Human resource management
Human resource management occupies the sphere of activity of recruitment, selection, orientation, performance appraisal, training and development, industrial relations and health and safety issues. Business ethicists differ in their orientation towards labor ethics. Some assess human resource policies according to whether they support an egalitarian workplace and the dignity of labor.
Issues including employment itself, privacy, compensation in accord with comparable worth, collective bargaining (and/or its opposite) can be seen either as inalienable rights or as negotiable.
Discrimination issues include discrimination on the basis of age (preferring the young or the old), gender, race, religion, disability, weight and attractiveness, as well as sexual harassment. A common approach to remedying discrimination is affirmative action.
Once hired, employees have the right to occasional cost-of-living increases, as well as raises based on merit. Promotions, however, are not a right, and there are often fewer openings than qualified applicants. It may seem unfair if an employee who has been with a company longer is passed over for a promotion, but it is not unethical. It is only unethical if the employer did not give the employee proper consideration or used improper criteria for the promotion. Each employer should know the distinction between what is unethical and what is illegal: an illegal action breaks the law, while an unethical action violates moral standards. In the workplace, conduct can be unethical without being illegal, and employers should follow the guidelines put in place by OSHA (Occupational Safety and Health Administration), EEOC (Equal Employment Opportunity Commission), and other regulatory bodies.
Potential employees have ethical obligations to employers, involving intellectual property protection and whistle-blowing.
Employers must consider workplace safety, which may involve modifying the workplace, or providing appropriate training or hazard disclosure. Requirements differ with the location and type of work taking place, and employers may need to comply with standards that protect both employees and non-employees.
Larger economic issues such as immigration, trade policy, globalization and trade unionism affect workplaces and have an ethical dimension, but are often beyond the purview of individual companies.
Trade unions
Trade unions, for example, may push employers to establish due process for workers, but may also cause job loss by demanding unsustainable compensation and work rules.
Unionized workplaces may confront union busting and strike breaking and face the ethical implications of work rules that advantage some workers over others.
Management strategy
Among the many people management strategies that companies employ are a "soft" approach that regards employees as a source of creative energy and participants in workplace decision making, a "hard" version explicitly focused on control and Theory Z that emphasizes philosophy, culture and consensus. None ensure ethical behavior. Some studies claim that sustainable success requires a humanely treated and satisfied workforce.
Sales and marketing
Marketing ethics came of age only as late as the 1990s. Marketing ethics was approached from ethical perspectives of virtue or virtue ethics, deontology, consequentialism, pragmatism and relativism.
Ethics in marketing deals with the principles, values and/or ideas by which marketers (and marketing institutions) ought to act. Marketing ethics is also contested terrain, beyond the previously described issue of potential conflicts between profitability and other concerns. Ethical marketing issues include marketing redundant or dangerous products/services; transparency about environmental risks; transparency about product ingredients, such as genetically modified organisms, possible health risks, financial risks, security risks, etc.; respect for consumer privacy and autonomy; advertising truthfulness; and fairness in pricing and distribution.
According to Borgerson and Schroeder (2008), marketing can influence individuals' perceptions of and interactions with other people, implying an ethical responsibility to avoid distorting those perceptions and interactions.
Marketing ethics involves pricing practices, including illegal actions such as price fixing and legal actions including price discrimination and price skimming. Certain promotional activities have drawn fire, including greenwashing, bait and switch, shilling, viral marketing, spam (electronic), pyramid schemes and multi-level marketing. Advertising has raised objections about attack ads, subliminal messages, sex in advertising and marketing in schools.
Inter-organizational relationships
Scholars in business and management have paid much attention to the ethical issues in the different forms of relationships between organizations such as buyer-supplier relationships, networks, alliances, or joint ventures. Drawing in particular on Transaction Cost Theory and Agency Theory, they note the risk of opportunistic and unethical practices between partners through, for instance, shirking, poaching, and other deceitful behaviors. In turn, research on inter-organizational relationships has observed the role of formal and informal mechanisms to both prevent unethical practices and mitigate their consequences. It especially discusses the importance of formal contracts and relational norms between partners to manage ethical issues.
Emerging issues
Stakeholders, being the most important element of a business, are mainly concerned with determining whether or not the business is behaving ethically or unethically. The business's actions and decisions should be ethical in the first place, before they become an ethical or even legal issue. "In the case of the government, community, and society what was merely an ethical issue can become a legal debate and eventually law."
Some emerging ethical issues are:
Corporate environmental responsibility: Businesses' impacts on ecosystems can no longer be neglected, and ecosystems' impacts on business activities are becoming more imminent.
Fairness: The three aspects that motivate people to be fair are equality, optimization, and reciprocity. Fairness is the quality of being just, equitable, and impartial.
Misuse of company time and resources: This particular topic may not seem very common, but it is important, as it costs companies billions of dollars on a yearly basis. Such misuse includes late arrivals, leaving early, long lunch breaks, inappropriate sick days, etc. This has been observed as a major form of misconduct in businesses today. One of the most common ways employees misuse company time and resources is by using the company computer for personal use.
Consumer fraud: There are many different types of fraud, namely friendly fraud, return fraud, wardrobing, price arbitrage, and returning stolen goods. Fraud is a major unethical practice within businesses to which special attention should be paid. Consumer fraud occurs when consumers attempt to deceive businesses for their own benefit.
Abusive behavior: A common ethical issue among employees. Abusive behavior consists of inflicting intimidating acts on other employees. Such acts include harassing, using profanity, threatening someone physically and insulting them, and being annoying.
Production
This area of business ethics usually deals with the duties of a company to ensure that products and production processes do not needlessly cause harm. Since few goods and services can be produced and consumed with zero risk, determining the ethical course can be difficult. In some cases, consumers demand products that harm them, such as tobacco products. Production may have environmental impacts, including pollution, habitat destruction and urban sprawl. The downstream effects of technologies such as nuclear power, genetically modified food and mobile phones may not be well understood. While the precautionary principle may prohibit introducing new technology whose consequences are not fully understood, that principle would have prohibited most of the new technology introduced since the industrial revolution. Product testing protocols have been attacked for violating the rights of both humans and animals. There are sources that provide information on companies that are environmentally responsible or do not test on animals.
Property
The etymological root of property is the Latin proprius, which refers to 'nature', 'quality', 'one's own', 'special characteristic', 'proper', 'intrinsic', 'inherent', 'regular', 'normal', 'genuine', 'thorough, complete, perfect' etc. The word property is value-loaded, associated with the personal qualities of propriety and respectability, and also raises questions relating to ownership. A 'proper' person owns and is true to herself or himself, and is thus genuine, perfect and pure.
Modern history of property rights
Modern discourse on property emerged by the turn of the 17th century within theological discussions of that time. For instance, John Locke justified property rights saying that God had made "the earth, and all inferior creatures, [in] common to all men".
In 1802 utilitarian Jeremy Bentham stated, "property and law are born together and die together".
One argument for property ownership is that it enhances individual liberty by extending the line of non-interference by the state or others around the person. Seen from this perspective, property right is absolute and property has a special and distinctive character that precedes its legal protection. Blackstone conceptualized property as the "sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe".
Slaves as property
During the seventeenth and eighteenth centuries, slavery spread to European colonies including America, where colonial legislatures defined the legal status of slaves as a form of property.
Combined with theological justification, property was taken to be essentially natural, ordained by God. Property, which later gained meaning as ownership and appeared natural to Locke, Jefferson and many 18th- and 19th-century intellectuals as land, labor or ideas, and property rights over slaves, had the same theological and essentialized justification. It was even held that the property in slaves was a sacred right. Wiecek says, "Yet slavery was more clearly and explicitly established under the Constitution than it had been under the Articles". In an 1857 judgment, US Supreme Court Chief Justice Roger B. Taney said, "The right of property in a slave is distinctly and expressly affirmed in the Constitution."
Natural right vs social construct
Neoliberals hold that private property rights are a non-negotiable natural right. Davies counters with "property is no different from other legal categories in that it is simply a consequence of the significance attached by law to the relationships between legal persons." Singer claims, "Property is a form of power, and the distribution of power is a political problem of the highest order". Rose finds, "'Property' is only an effect, a construction, of relationships between people, meaning that its objective character is contestable. Persons and things are 'constituted' or 'fabricated' by legal and other normative techniques." Singer observes, "A private property regime is not, after all, a Hobbesian state of nature; it requires a working legal system that can define, allocate, and enforce property rights." Davis claims that common law theory generally favors the view that "property is not essentially a 'right to a thing', but rather a separable bundle of rights subsisting between persons which may vary according to the context and the object which is at stake".
In common parlance property rights involve a bundle of rights including occupancy, use and enjoyment, and the right to sell, devise, give, or lease all or part of these rights. Custodians of property have obligations as well as rights. Michelman writes, "A property regime thus depends on a great deal of cooperation, trustworthiness, and self-restraint among the people who enjoy it."
Menon claims that the autonomous individual, responsible for his/her own existence is a cultural construct moulded by Western culture rather than the truth about the human condition. Penner views property as an "illusion"—a "normative phantasm" without substance.
In the neoliberal literature, property is part of the private side of a public/private dichotomy and acts as a counterweight to state power. Davies counters that "any space may be subject to plural meanings or appropriations which do not necessarily come into conflict".
Private property has never been a universal doctrine, although since the end of the Cold War it has become nearly so. Some societies, e.g., Native American bands, held land, if not all property, in common. When groups came into conflict, the victor often appropriated the loser's property. The rights paradigm tended to stabilize the distribution of property holdings on the presumption that title had been lawfully acquired.
Property does not exist in isolation, and neither do property rights. Bryan claimed that property rights describe relations among people and not just relations between people and things. Singer holds that the idea that owners have no legal obligations to others wrongly supposes that property rights hardly ever conflict with other legally protected interests. Singer continues, implying that legal realists "did not take the character and structure of social relations as an important independent factor in choosing the rules that govern market life". Ethics of property rights begins with recognizing the vacuous nature of the notion of property.
Intellectual property
Intellectual property (IP) encompasses expressions of ideas, thoughts, codes, and information. "Intellectual property rights" (IPR) treat IP as a kind of real property, subject to analogous protections, rather than as a reproducible good or service. Boldrin and Levine argue that "government does not ordinarily enforce monopolies for producers of other goods. This is because it is widely recognized that monopoly creates many social costs. Intellectual monopoly is no different in this respect. The question we address is whether it also creates social benefits commensurate with these social costs."
International standards relating to Intellectual Property Rights are enforced through Agreement on Trade-Related Aspects of Intellectual Property Rights. In the US, IP other than copyrights is regulated by the United States Patent and Trademark Office.
The US Constitution included the power to protect intellectual property, empowering the Federal government "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries". Boldrin and Levine see no value in such state-enforced monopolies, stating, "we ordinarily think of innovative monopoly as an oxymoron." Further, they comment, 'intellectual property' "is not like ordinary property at all, but constitutes a government grant of a costly and dangerous private monopoly over ideas. We show through theory and example that intellectual monopoly is not necessary for innovation and as a practical matter is damaging to growth, prosperity, and liberty". Steelman defends patent monopolies, writing, "Consider prescription drugs, for instance. Such drugs have benefited millions of people, improving or extending their lives. Patent protection enables drug companies to recoup their development costs because for a specific period of time they have the sole right to manufacture and distribute the products they have invented." The court cases brought by 39 pharmaceutical companies against South Africa's 1997 Medicines and Related Substances Control Amendment Act, which intended to provide affordable HIV medicines, have been cited as a harmful effect of patents.
One attack on IPR is moral rather than utilitarian, claiming that inventions are mostly a collective, cumulative, path dependent, social creation and therefore, no one person or firm should be able to monopolize them even for a limited period. The opposing argument is that the benefits of innovation arrive sooner when patents encourage innovators and their investors to increase their commitments.
Roderick T. Long, a libertarian philosopher, argued:
Machlup concluded that patents do not have the intended effect of enhancing innovation. Self-declared anarchist Proudhon, in his 1847 seminal work noted, "Monopoly is the natural opposite of competition," and continued, "Competition is the vital force which animates the collective being: to destroy it, if such a supposition were possible, would be to kill society."
Mindeli and Pipiya argued that the knowledge economy is an economy of abundance because it relies on the "infinite potential" of knowledge and ideas rather than on the limited resources of natural resources, labor and capital. Allison envisioned an egalitarian distribution of knowledge. Kinsella claimed that IPR create artificial scarcity and reduce equality. Bouckaert wrote, "Natural scarcity is that which follows from the relationship between man and nature. Scarcity is natural when it is possible to conceive of it before any human, institutional, contractual arrangement. Artificial scarcity, on the other hand, is the outcome of such arrangements. Artificial scarcity can hardly serve as a justification for the legal framework that causes that scarcity. Such an argument would be completely circular. On the contrary, artificial scarcity itself needs a justification." Corporations fund much IP creation and can acquire IP they do not create, to which Menon and others have objected. Andersen claims that IPR have increasingly become an instrument in eroding the public domain.
Ethical and legal issues include patent infringement, copyright infringement, trademark infringement, patent and copyright misuse, submarine patents, biological patents, patent, copyright and trademark trolling, employee raiding and monopolizing talent, bioprospecting, biopiracy and industrial espionage, digital rights management.
Notable IP copyright cases include A&M Records, Inc. v. Napster, Inc., Eldred v. Ashcroft, and Disney's lawsuit against the Air Pirates.
International issues
While business ethics emerged as a field in the 1970s, international business ethics did not emerge until the late 1990s, looking back on the international developments of that decade. Many new practical issues arose out of the international context of business. Theoretical issues such as cultural relativity of ethical values receive more emphasis in this field. Other, older issues can be grouped here as well. Issues and subfields include:
The search for universal values as a basis for international commercial behavior
Comparison of business ethical traditions in different countries and on the basis of their respective GDP and corruption rankings
Comparison of business ethical traditions from various religious perspectives
Ethical issues arising out of international business transactions—e.g., bioprospecting and biopiracy in the pharmaceutical industry; the fair trade movement; transfer pricing.
Issues such as globalization and cultural imperialism
Varying global standards—e.g., the use of child labor
The way in which multinationals take advantage of international differences, such as outsourcing production (e.g. clothes) and services (e.g. call centers) to low-wage countries
The permissibility of international commerce with pariah states
Foreign countries often use dumping as a competitive threat, selling products at prices lower than their normal value. This can lead to problems in domestic markets, which find it difficult to compete with the pricing set by foreign markets. In 2009, the International Trade Commission was researching anti-dumping laws. Dumping is often seen as an ethical issue, as larger companies take advantage of other, less economically advanced companies.
Issues
Ethical issues often arise in business settings, whether through business transactions or the forming of new business relationships. Ethics is also a major focus in the auditing field, where the type of verification required can be directly dictated by ethical theory. An ethical issue in a business atmosphere may refer to any situation that requires business associates, as individuals or as a group (for example, a department or firm), to evaluate the morality of specific actions and subsequently make a decision among the choices. Some ethical issues of particular concern in today's evolving business market include honesty, integrity, professional behavior, environmental issues, harassment, and fraud, to name a few. A 2009 National Business Ethics Survey found that types of employee-observed ethical misconduct included abusive behavior (at a rate of 22 percent), discrimination (at a rate of 14 percent), improper hiring practices (at a rate of 10 percent), and company resource abuse (at a rate of percent).
The ethical issues associated with honesty are widespread and vary greatly in business, from the misuse of company time or resources to lying with malicious intent, engaging in bribery, or creating conflicts of interest within an organization. Honesty encompasses wholly the truthful speech and actions of an individual. Some cultures and belief systems even consider honesty to be an essential pillar of life, such as Confucianism and Buddhism (referred to as sacca, part of the Four Noble Truths). Many employees lie in order to reach goals, avoid assignments or negative issues; however, sacrificing honesty in order to gain status or reap rewards poses potential problems for the organization's overall ethical culture and jeopardizes organizational goals in the long run. Using company time or resources for personal use is also commonly viewed as unethical because it amounts to stealing from the company. The misuse of resources costs companies billions of dollars each year, averaging about 4.25 hours per week of stolen time alone, and employees' abuse of Internet services is another main concern. Bribery, on the other hand, is not only considered unethical in business practices, but is also illegal. In accordance with this, the Foreign Corrupt Practices Act was established in 1977 to deter international businesses from giving or receiving unwarranted payments and gifts that were intended to influence the decisions of executives and political officials. However, small payments known as facilitation payments are not considered unlawful under the Foreign Corrupt Practices Act if they are used towards regular public governance activities, such as permits or licenses.
Influential factors on business ethics
Many aspects of the work environment influence an individual's decision-making regarding ethics in the business world. When an individual is on the path of growing a company, many outside influences can pressure them to perform a certain way. The core of a person's performance in the workplace is rooted in their personal code of behavior. A person's personal code of ethics encompasses many different qualities such as integrity, honesty, communication, respect, compassion, and common goals. In addition, the ethical standards set forth by a person's superior(s) often translate into their own code of ethics. The company's policy is the 'umbrella' of ethics that plays a major role in the personal development and decision-making processes that people make with respect to ethical behavior.
The ethics of a company and its individuals are heavily influenced by the state of their country. If a country is heavily plagued with poverty, large corporations continuously grow, but smaller companies begin to wither and are then forced to adapt and scavenge for any method of survival. As a result, the leadership of the company is often tempted to participate in unethical methods to obtain new business opportunities. Additionally, social media is arguably the most influential factor in ethics. The immediate access to so much information and the opinions of millions highly influences people's behaviors. The desire to conform with what is portrayed as the norm often manipulates our idea of what is morally and ethically sound. Popular trends on social media and the instant gratification received from participating in them quickly distort people's ideas and decisions.
Economic systems
Political economy and political philosophy have ethical implications, particularly regarding the distribution of economic benefits. John Rawls and Robert Nozick are both notable contributors. For example, Rawls has been interpreted as offering a critique of offshore outsourcing on social contract grounds.
Law and regulation
Laws are the written statutes, codes, and opinions of government organizations by which citizens, businesses, and persons present within a jurisdiction are expected to govern themselves or face legal sanction. Sanctions for violating the law can include (a) civil penalties, such as fines, pecuniary damages, and loss of licenses, property, rights, or privileges; (b) criminal penalties, such as fines, probation, imprisonment, or a combination thereof; or (c) both civil and criminal penalties.
Very often it is held that business is not bound by any ethics other than abiding by the law. Milton Friedman is the pioneer of this view. He held that corporations have the obligation to make a profit within the framework of the legal system, nothing more. Friedman made it explicit that the duty of business leaders is "to make as much money as possible while conforming to the basic rules of the society, both those embodied in the law and those embodied in ethical custom". Ethics for Friedman is nothing more than abiding by customs and laws. The reduction of ethics to abidance by laws and customs, however, has drawn serious criticism.
Counter to Friedman's logic, it is observed that legal procedures are technocratic, bureaucratic, rigid and obligatory, whereas an ethical act is a conscientious, voluntary choice that goes beyond mere compliance. Law is reactive: crime precedes law, and a law against a crime can only be passed after such a crime has occurred. Laws are blind to crimes they do not define. Further, as per law, "conduct is not criminal unless forbidden by law which gives advance warning that such conduct is criminal". Also, the law presumes the accused is innocent until proven guilty and that the state must establish the guilt of the accused beyond reasonable doubt. As per liberal laws followed in most of the democracies, until the government prosecutor proves the firm guilty with the limited resources available to her, the accused is considered to be innocent. Though the liberal premises of law are necessary to protect individuals from being persecuted by government, they are not a sufficient mechanism to make firms morally accountable.
Implementation
Corporate policies
As part of more comprehensive compliance and ethics programs, many companies have formulated internal policies pertaining to the ethical conduct of employees. These policies can be simple exhortations in broad, highly generalized language (typically called a corporate ethics statement), or they can be more detailed policies, containing specific behavioral requirements (typically called corporate ethics codes). They are generally meant to identify the company's expectations of workers and to offer guidance on handling some of the more common ethical problems that might arise in the course of doing business. It is hoped that having such a policy will lead to greater ethical awareness, consistency in application, and the avoidance of ethical disasters.
An increasing number of companies also require employees to attend seminars regarding business conduct, which often include discussion of the company's policies, specific case studies, and legal requirements. Some companies even require their employees to sign agreements stating that they will abide by the company's rules of conduct.
Many companies are assessing the environmental factors that can lead employees to engage in unethical conduct. A competitive business environment may call for unethical behavior. Lying has become expected in fields such as trading. An example of this is the set of issues surrounding the unethical actions of the Salomon Brothers.
Not everyone supports corporate policies that govern ethical conduct. Some claim that ethical problems are better dealt with by depending upon employees to use their own judgment.
Others believe that corporate ethics policies are primarily rooted in utilitarian concerns and that they are mainly to limit the company's legal liability or to curry public favor by giving the appearance of being a good corporate citizen. Ideally, the company will avoid a lawsuit because its employees will follow the rules. Should a lawsuit occur, the company can claim that the problem would not have arisen if the employee had only followed the code properly.
Some corporations have tried to burnish their ethical image by creating whistle-blower protections, such as anonymity. Citi, for example, calls this the Ethics Hotline, though it is unclear whether firms such as Citi take offences reported to these hotlines seriously. Sometimes there is a disconnection between the company's code of ethics and the company's actual practices. Thus, whether or not such conduct is explicitly sanctioned by management, at worst this makes the policy duplicitous, and at best it is merely a marketing tool.
Jones and Parker wrote, "Most of what we read under the name business ethics is either sentimental common sense or a set of excuses for being unpleasant." Many manuals are procedural form-filling exercises unconcerned with the real ethical dilemmas. For instance, the US Department of Commerce ethics program treats business ethics as a set of instructions and procedures to be followed by 'ethics officers', and some others claim that being ethical is just for the sake of being ethical. Business ethicists may trivialize the subject, offering standard answers that do not reflect the situation's complexity.
Richard DeGeorge wrote in regard to the importance of maintaining a corporate code:
Ethics officers
Following a series of fraud, corruption, and abuse scandals that affected the United States defense industry in the mid-1980s, the Defense Industry Initiative (DII) was created to promote ethical business practices and ethics management in multiple industries. Subsequent to these scandals, many organizations began appointing ethics officers (also referred to as "compliance" officers). In 1991, the Ethics & Compliance Officer Association —originally the Ethics Officer Association (EOA)—was founded at the Center for Business Ethics at Bentley University as a professional association for ethics and compliance officers.
The passing of the Federal Sentencing Guidelines for Organizations in 1991 was another factor in many companies appointing ethics/compliance officers. These guidelines, intended to assist judges with sentencing, set standards organizations must follow to obtain a reduction in sentence if they should be convicted of a federal offense.
Following the high-profile corporate scandals of companies like Enron, WorldCom and Tyco between 2001 and 2004, and following the passage of the Sarbanes–Oxley Act, many small and mid-sized companies also began to appoint ethics officers.
Often reporting to the chief executive officer, ethics officers focus on uncovering or preventing unethical and illegal actions. This is accomplished by assessing the ethical implications of the company's activities, making recommendations on ethical policies, and disseminating information to employees.
The effectiveness of ethics officers is not clear. The establishment of an ethics officer position is likely to be insufficient in driving ethical business practices without a corporate culture that values ethical behavior. These values and behaviors should be consistently and systemically supported by those at the top of the organization. Strong community involvement, loyalty to employers, superiors or owners, smart work practices, and trust among team members all help to inculcate such a corporate culture.
Sustainability initiatives
Many corporate and business strategies now include sustainability. In addition to the traditional environmental 'green' sustainability concerns, business ethics practices have expanded to include social sustainability. Social sustainability focuses on issues related to human capital in the business supply chain, such as workers' rights, working conditions, child labor, and human trafficking. Incorporation of these considerations is increasing, as consumers and procurement officials demand documentation of a business's compliance with national and international initiatives, guidelines, and standards. Many industries have organizations dedicated to verifying ethical delivery of products from start to finish, such as the Kimberley Process, which aims to stop the flow of conflict diamonds into international markets, or the Fair Wear Foundation, dedicated to sustainability and fairness in the garment industry.
Initiatives in sustainability encompass "green" topics, as well as social sustainability. Tao et al. refer to a variety of "green" business practices including green strategy, green design, green production and green operation. There are however many different ways in which sustainability initiatives can be implemented by a company.
Improving operations
An organization can implement sustainability initiatives by improving its operations and manufacturing process so as to make them more aligned with environmental, social, and governance issues. Johnson & Johnson incorporates policies from the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, applying these principles not only to members of its supply chain but also to internal operations. Walmart has made commitments to doubling its truck fleet efficiency by 2015 by replacing two-thirds of its fleet with more fuel-efficient trucks, including hybrids. Dell has integrated alternative, recycled, and recyclable materials in its products and packaging design, improving energy efficiency and design for end-of-life and recyclability. Dell plans to reduce the energy intensity of its product portfolio by 80% by 2020.
Board leadership
The board of a company can decide to lower executive compensation by a given percentage, and give that percentage of compensation to a specific cause. This is an effort which can only be implemented from the top, as it will affect the compensation of all executives in the company. In Alcoa, an aluminum company based in the US, "1/5th of executive cash compensation is tied to safety, diversity, and environmental stewardship, which includes greenhouse gas emission reductions and energy efficiency" (Best Practices). This is not usually the case for most companies, where we see the board take a uniform step towards environmental, social, and governance issues. This is only the case for companies that are directly linked to utilities, energy, or material industries, something which Alcoa, as an aluminum company, falls in line with. Instead, formal committees focused on environmental, social, and governance issues are more usually seen in governance committees and audit committees, rather than the board of directors. "According to research analysis done by Pearl Meyer in support of the NACD 2017 Director Compensation Report shows that among 1,400 public companies reviewed, only slightly more than five percent of boards have a designated committee to address ESG issues." (How compensation can).
Management accountability
Similar to board leadership, steering committees and other types of committees specialized in sustainability are created, and senior executives are identified who are held accountable for meeting and constantly improving sustainability goals.
Executive compensation
Introducing bonus schemes that reward executives for meeting non-financial performance goals, including safety targets, greenhouse gas emission reduction targets, and goals engaging stakeholders to help shape the company's public policy positions. Companies such as Exelon have implemented policies like this.
Stakeholder engagement
Other companies will keep sustainability within their strategy and goals, presenting findings at shareholder meetings, and actively tracking metrics on sustainability. Companies such as PepsiCo, Heineken, and FIFCO take steps in this direction to implement sustainability initiatives (Best Practices). Companies such as Coca-Cola have actively tried to improve the efficiency of their water usage, hiring third-party auditors to evaluate their water management approach. FIFCO has also successfully led water-management initiatives.
Employee engagement
Implementation of sustainability projects through directly appealing to employees (typically through the human resource department) is another option for companies to implement sustainability. This involves integrating sustainability into the company culture, with hiring practices and employee training. General Electric is a company that is taking the lead in implementing initiatives in this manner. Bank of America directly engaged employees by implementing LEED (Leadership in Energy and Environmental Design) certified buildings, with a fifth of its buildings meeting these certifications.
Supply chain management
Establishing requirements for not only internal operations but also first-tier suppliers as well as second-tier suppliers to help drive environmental and social expectations further down the supply chain. Companies such as Starbucks, FIFCO and Ford Motor Company have implemented requirements that suppliers must meet to win their business. Starbucks has led efforts in engaging suppliers and local communities where they operate to accelerate investment in sustainable farming. Starbucks set a goal of ethically sourcing 100% of its coffee beans by 2015.
Transparency
By revealing decision-making data about how sustainability was reached, companies can give away insights that can help others across the industry and beyond make more sustainable decisions. Nike launched its "making app" in 2013 which released data about the sustainability in the materials it was using. This ultimately allows other companies to make more sustainable design decisions and create lower impact products.
Academic discipline
As an academic discipline, business ethics emerged in the 1970s. Since no academic business ethics journals or conferences existed, researchers published in general management journals and attended general conferences. Over time, specialized peer-reviewed journals appeared, and more researchers entered the field. Corporate scandals in the earlier 2000s increased the field's popularity. As of 2009, sixteen academic journals devoted to various business ethics issues existed, with Journal of Business Ethics and Business Ethics Quarterly considered the leaders. Journal of Business Ethics Education publishes articles specifically about education in business ethics.
The International Business Development Institute is a global non-profit organization that represents 217 nations and all 50 United States. It offers a Charter in Business Development that focuses on ethical business practices and standards. The Charter is directed by Harvard University, MIT, and Fulbright Scholars, and it includes graduate-level coursework in economics, politics, marketing, management, technology, and legal aspects of business development as it pertains to business ethics. IBDI also oversees the International Business Development Institute of Asia which provides individuals living in 20 Asian nations the opportunity to earn the Charter.
Religious views
Sharia law, followed by many Muslims, specifically prohibits charging interest on loans in banking. Traditional Confucian thought discourages profit-seeking. Christianity offers the Golden Rule command, "Therefore all things whatsoever ye would that men should do to you, do ye even so to them: for this is the law and the prophets."
According to the article "Theory of the real economy", there is a more narrow point of view from the Christianity faith towards the relationship between ethics and religious traditions. This article stresses how Christianity is capable of establishing reliable boundaries for financial institutions. One criticism comes from Pope Benedict by describing the "damaging effects of the real economy of badly managed and largely speculative financial dealing." It is mentioned that Christianity has the potential to transform the nature of finance and investment but only if theologians and ethicist provide more evidence of what is real in the economic life. Business ethics receives an extensive treatment in Jewish thought and Rabbinic literature, both from an ethical (Mussar) and a legal (Halakha) perspective; see article Jewish business ethics for further discussion.
According to the article "Indian Philosophy and Business Ethics: A Review", by Chandrani Chattopadyay, Hindus follow "Dharma" as Business Ethics and unethical business practices are termed "Adharma". Businessmen are supposed to maintain steady-mindedness, self-purification, non-violence, concentration, clarity and control over senses. Books like Bhagavat Gita and Arthashastra contribute a lot towards conduct of ethical business.
Related disciplines
Business ethics is related to philosophy of economics, the branch of philosophy that deals with the philosophical, political, and ethical underpinnings of business and economics. Business ethics operates on the premise, for example, that the ethical operation of a private business is possible—those who dispute that premise, such as libertarian socialists (who contend that "business ethics" is an oxymoron) do so by definition outside of the domain of business ethics proper.
The philosophy of economics also deals with questions such as what, if any, are the social responsibilities of a business; business management theory; theories of individualism vs. collectivism; free will among participants in the marketplace; the role of self interest; invisible hand theories; the requirements of social justice; and natural rights, especially property rights, in relation to the business enterprise.
Business ethics is also related to political economy, which is economic analysis from political and historical perspectives. Political economy deals with the distributive consequences of economic actions.
See also
B Corporation (certification)
Business culture
Business law
Corporate behaviour
Corporate crime
Corporate social responsibility
Eastern ethics in business
Ethical altruism / Ethical egoism
Ethical code
Ethical consumerism
Ethical implications in contracts
Ethical job
Ethicism
Evil corporation
Moral psychology
Optimism bias
Organizational ethics
Penny stock scam
Philosophy and economics
Political corruption
Strategic misrepresentation
Strategic planning
Work ethic
Protestant work ethic
Notes
References
General references
Further reading
External links
Applied ethics
Industrial and organizational psychology | Business ethics | [
"Biology"
] | 10,734 | [
"Behavior",
"Human behavior",
"Applied ethics"
] |
4,775 | https://en.wikipedia.org/wiki/British%20Standards | British Standards (BS) are the standards produced by the BSI Group which is incorporated under a royal charter and which is formally designated as the national standards body (NSB) for the UK. The BSI Group produces British Standards under the authority of the charter, which lays down as one of the BSI's objectives to:
Formally, as stated in a 2002 memorandum of understanding between the BSI and the United Kingdom Government, British Standards are defined as:
Products and services which BSI certifies as having met the requirements of specific standards within designated schemes are awarded the Kitemark.
History
BSI Group began in 1901 as the Engineering Standards Committee, led by James Mansergh, to standardize the number and type of steel sections, in order to make British manufacturers more efficient and competitive. Over time the standards developed to cover many aspects of tangible engineering, and then engineering methodologies including quality systems, safety and security.
Creation
The BSI Group as a whole does not produce British Standards, as standards work within the BSI is decentralized. The governing board of BSI establishes a Standards Board. The Standards Board does little apart from setting up sector boards (a sector in BSI parlance being a field of standardization such as ICT, quality, agriculture, manufacturing, or fire). Each sector board, in turn, constitutes several technical committees. It is the technical committees that, formally, approve a British Standard, which is then presented to the secretary of the supervisory sector board for endorsement of the fact that the technical committee has indeed completed a task for which it was constituted.
Standards
The standards produced are titled British Standard XXXX[-P]:YYYY where XXXX is the number of the standard, P is the number of the part of the standard (where the standard is split into multiple parts) and YYYY is the year in which the standard came into effect. BSI Group currently has over 27,000 active standards. Products are commonly specified as meeting a particular British Standard, and in general, this can be done without any certification or independent testing. The standard simply provides a shorthand way of claiming that certain specifications are met, while encouraging manufacturers to adhere to a common method for such a specification.
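The designation format described above lends itself to simple mechanical parsing. The following is a minimal illustrative sketch in Python, assuming only the pattern "British Standard XXXX[-P]:YYYY"; the function name is made up for the example, and the designations in the demo are examples of the pattern rather than statements about particular standards.

```python
import re

# Illustrative only: parse a BS designation of the form "BS XXXX[-P]:YYYY".
BS_PATTERN = re.compile(
    r"^BS\s+(?P<number>\d+)"   # standard number, e.g. 476
    r"(?:-(?P<part>\d+))?"     # optional part number
    r"(?::(?P<year>\d{4}))?$"  # optional year the standard came into effect
)

def parse_bs(designation: str) -> dict:
    """Split a BS designation into number, part and year (None if absent)."""
    match = BS_PATTERN.match(designation.strip())
    if not match:
        raise ValueError(f"not a recognised BS designation: {designation!r}")
    return {key: int(value) if value else None
            for key, value in match.groupdict().items()}

print(parse_bs("BS 476"))          # {'number': 476, 'part': None, 'year': None}
print(parse_bs("BS 1234-2:1999"))  # hypothetical example of the full pattern
```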
The Kitemark can be used to indicate certification by BSI, but only where a Kitemark scheme has been set up around a particular standard. It is mainly applicable to safety and quality management standards. There is a common misunderstanding that Kitemarks are necessary to prove compliance with any BS standard, but in general, it is neither desirable nor possible that every standard be 'policed' in this way.
Following the move towards harmonization of standards in Europe, some British Standards are gradually being superseded or replaced by the relevant European Standards (EN).
Status of standards
Standards are continuously reviewed and developed and are periodically allocated one or more of the following status keywords; a small illustrative code sketch of these keywords follows the list.
Confirmed - the standard has been reviewed and confirmed as being current.
Current - the document is the current, most recently published one available.
Draft for public comment/DPC - a national stage in the development of a standard, where wider consultation is sought within the UK.
Obsolescent - indicating by amendment that the standard is not recommended for use for new equipment, but needs to be retained to provide for the servicing of equipment that is expected to have a long working life, or due to legislative issues.
Partially replaced - the standard has been partially replaced by one or more other standards.
Proposed for confirmation - the standard is being reviewed and it has been proposed that it is confirmed as the current standard.
Proposed for obsolescence - the standard is being reviewed and it has been proposed that it is made obsolescent.
Proposed for withdrawal - the standard is being reviewed and it has been proposed that it is withdrawn.
Revised - the standard has been revised.
Superseded - the standard has been replaced by one or more other standards.
Under review - the standard is under review.
Withdrawn - the document is no longer current and has been withdrawn.
Work in hand - there is work being undertaken on the standard and there may be a related draft for public comment available.
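As a purely illustrative sketch (not part of BSI's own tooling), the status keywords above can be modelled as a small enumeration; since a standard may carry more than one keyword at a time, a catalogue record would hold a set of them.

```python
from enum import Enum

# Illustrative only: the status keywords above as a Python enumeration,
# e.g. for tagging records in a hypothetical standards catalogue.
class BSStatus(Enum):
    CONFIRMED = "Confirmed"
    CURRENT = "Current"
    DRAFT_FOR_PUBLIC_COMMENT = "Draft for public comment"
    OBSOLESCENT = "Obsolescent"
    PARTIALLY_REPLACED = "Partially replaced"
    PROPOSED_FOR_CONFIRMATION = "Proposed for confirmation"
    PROPOSED_FOR_OBSOLESCENCE = "Proposed for obsolescence"
    PROPOSED_FOR_WITHDRAWAL = "Proposed for withdrawal"
    REVISED = "Revised"
    SUPERSEDED = "Superseded"
    UNDER_REVIEW = "Under review"
    WITHDRAWN = "Withdrawn"
    WORK_IN_HAND = "Work in hand"

# A record may carry several keywords at once, so an entry holds a set:
record_statuses = {BSStatus.CURRENT, BSStatus.UNDER_REVIEW}
```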
Examples
BS 0 A standard for standards specifies development, structure and drafting of standards.
BS 1 Lists of rolled sections for structural purposes
BS 2 Specification and sections of tramway rails and fishplates
BS 3 Report on influence of gauge length and section of test bar on the percentage of elongation
BS 4 Specification for structural steel sections
BS 5 Report on locomotives for Indian railways
BS 7 Dimensions of copper conductors insulated annealed, for electric power and light
BS 9 Specifications for bullhead railway rails
BS 11 Specifications and sections of Flat Bottom railway rails
BS 12 Specification for Portland Cement
BS 15 Specification for structural steel for bridges, etc., and general building construction
BS 16 Specification for telegraph material (insulators, pole fittings, et cetera)
BS 17 Interim report on electrical machinery
BS 22 Report on effect of temperature on insulating materials
BS 24 Specifications for material used in the construction of standards for railway rolling stock
BS 26 Second report on locomotives for Indian Railways (Superseding No 5)
BS 27 Report on standard systems of limit gauges for running fits
BS 28 Report on nuts, bolt heads and spanners
BS 31 Specification for steel conduits for electrical wiring
BS 32 Specification for steel bars for use in automatic machines
BS 33 Carbon filament electric lamps
BS 34 Tables of BS Whitworth, BS Fine and BS Pipe Threads
BS 35 Specification for Copper Alloy Bars for use in Automatic Machines
BS 36 Report on British Standards for Electrical Machinery
BS 37 Specification for Electricity Meters
BS 38 Report on British Standards Systems for Limit Gauges for Screw Threads
BS 42 Report on reciprocating steam engines for electrical purposes
BS 43 Specification for charcoal iron lap-welded boiler tubes
BS 45 Report on Dimensions for Sparking Plugs (for Internal Combustion Engines)
BS 47 Steel Fishplates for Bullhead and Flat Bottom Railway Rails, Specification and Sections of
BS 49 Specification for Ammeters and Voltmeters
BS 50 Third Report on Locomotives for Indian Railways (Superseding No. 5 and 26)
BS 53 Specification for Cold Drawn Weldless Steel Boiler Tubes for Locomotive Boilers
BS 54 Report on Screw Threads, Nuts and Bolt Heads for use in Automobile Construction
BS 56 Definitions of Yield Point and Elastic Limit
BS 57 Report on heads for Small Screws
BS 70 Report on Pneumatic Tyre Rims for automobiles, motorcycles and bicycles
BS 72 British Standardisation Rules for Electrical Machinery
BS 73 Specification for Two-Pin Wall Plugs and Sockets (Five-, Fifteen- and Thirty-Ampere)
BS 76 Report of and Specifications for Tar and Pitch for Road Purposes
BS 77 Specification. Voltages for a.c. transmission and distribution systems
BS 80 Magnetos for automobile purposes
BS 81 Specification for Instrument Transformers
BS 82 Specification for Starters for Electric Motors
BS 84 Report on Screw Threads (British Standard Fine), and their Tolerances (Superseding parts of Reports Nos. 20 and 33)
BS 86 Report on Dimensions of Magnetos for Aircraft Purposes
BS 153 Specification for Steel Girder Bridges
BS 308 a now deleted standard for engineering drawing conventions, having been absorbed into BS 8888.
BS 317 for Hand-Shield and Side Entry Pattern Three-Pin Wall Plugs and Sockets (Two Pin and Earth Type)
BS 336 for fire hose couplings and ancillary equipment
BS 372 for Side-entry wall plugs and sockets for domestic purposes (Part 1 superseded BS 73 and Part 2 superseded BS 317)
BS 381 for colours used in identification, coding and other special purposes
BS 476 for fire resistance of building materials/elements
BS 499 Welding terms and symbols.
BS 546 for Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50–60 Hz) circuits up to 250V
BS 857 for safety glass for land transport
BS 970 Specification for wrought steels for mechanical and allied engineering purposes
BS 987C Camouflage Colours
BS 1011 Recommendation for welding of metallic materials
BS 1088 for marine plywood
BS 1192 for Construction Drawing Practice. Part 5 (BS1192-5:1998) concerns Guide for structuring and exchange of CAD data.
BS 1361 for cartridge fuses for a.c. circuits in domestic and similar premises
BS 1362 for cartridge fuses for BS 1363 power plugs
BS 1363 for mains power plugs and sockets
BS 1377 Methods of test for soils for civil engineering.
BS 1380 Speed and Exposure Index of Photographic Negative Materials.
BS 1572 Colours for Flat Finishes for Wall Decoration
BS 1881 Testing Concrete
BS 1852 Specification for marking codes for resistors and capacitors
BS 2979 Transliteration of Cyrillic and Greek characters
BS 3621 Thief resistant lock assembly. Key egress.
BS 3943 Specification for plastics waste traps
BS 4142 Methods for rating and assessing industrial and commercial sound
BS 4293 for residual current-operated circuit-breakers
BS 4343 for industrial electrical power connectors
BS 4573 Specification for 2-pin reversible plugs and shaver socket-outlets
BS 4960 for weighing instruments for domestic cookery
BS 5252 for colour-coordination in building construction
BS 5400 for steel, concrete and composite bridges.
BS 5499 for graphical symbols and signs in building construction; including shape, colour and layout
BS 5544 for anti-bandit glazing (glazing resistant to manual attack)
BS 5750 for quality management, the ancestor of ISO 9000
BS 5837 for protection of trees during construction work
BS 5839 for fire detection and alarm systems for buildings
BS 5930 for site investigations
BS 5950 for structural steel
BS 5993 for Cricket balls
BS 6008 for preparation of a liquor of tea for use in sensory tests
BS 6312 for telephone plugs and sockets
BS 6651 code of practice for protection of structures against lightning; replaced by BS EN 62305 (IEC 62305) series.
BS 6879 for British geocodes, a superset of ISO 3166-2:GB
BS 7430 code of practice for earthing
BS 7671 Requirements for Electrical Installations, The IEE Wiring Regulations, produced by the IET.
BS 7799 for information security, the ancestor of the ISO/IEC 27000 family of standards, including 27002 (formerly 17799)
BS 7901 for recovery vehicles and vehicle recovery equipment
BS 7909 Code of practice for temporary electrical systems for entertainment and related purposes
BS 7919 Electric cables. Flexible cables rated up to 450/750 V, for use with appliances and equipment intended for industrial and similar environments
BS 7910 guide to methods for assessing the acceptability of flaws in metallic structures
BS 7925 Software testing
BS 7971 Protective clothing and equipment for use in violent situations and in training
BS 8110 for structural concrete
BS 8233 Guidance on sound insulation and noise reduction in buildings
BS 8484 for the provision of lone worker device services
BS 8485 for the characterization and remediation from ground gas in affected developments
BS 8494 for detecting and measuring carbon dioxide in ambient air or extraction systems
BS 8546 Travel adaptors compatible with UK plug and socket system.
BS 8888 for engineering drawing and technical product specification
BS 9251 for safety guidelines on fire sprinkler systems in residential buildings
BS 15000 for IT Service Management, (ITIL), now ISO/IEC 20000
BS 3G 101 for general requirements for mechanical and electromechanical aircraft indicators
BS EN 12195 Load restraining on road vehicles.
BS EN 60204 Safety of machinery
BS EN ISO 4210 - Cycles. Safety Requirements for Bicycles
PAS documents
BSI also publishes a series of Publicly Available Specification (PAS) documents.
PAS documents are a flexible and rapid standards development model open to all organizations. A PAS is a sponsored piece of work allowing organizations flexibility in the rapid creation of a standard while also allowing for a greater degree of control over the document's development. A typical development time frame for a PAS is around six to nine months. Once published by BSI, a PAS has all the functionality of a British Standard for the purposes of creating schemes such as management systems and product benchmarks as well as codes of practice. A PAS is a living document and after two years the document will be reviewed and a decision made with the client as to whether or not this should be taken forward to become a formal standard. The term PAS was originally an abbreviation for "product approval specification", a name which was subsequently changed to "publicly available specification". However, according to BSI, not all PAS documents are structured as specifications and the term is now sufficiently well established not to require any further amplification.
Examples
PAS 78: Guide to good practice in commissioning accessible websites
PAS 440: Responsible Innovation – Guide
PAS 9017: Plastics – Biodegradation of polyolefins in an open-air terrestrial environment – Specification
PAS 1881: Assuring safety for automated vehicle trials and testing – Specification
PAS 1201: Guide for describing graphene material
PAS 4444: Hydrogen fired gas appliances – Guide
Availability
Copies of British Standards are sold at the BSI Online Shop or can be accessed via subscription to British Standards Online (BSOL). They can also be ordered via the publishing units of many other national standards bodies (ANSI, DIN, etc.) and from several specialized suppliers of technical specifications.
British Standards, including European and international adoptions, are available in many university and public libraries that subscribe to the BSOL platform. Librarians and lecturers at UK-based subscribing universities have full access rights to the collection, while students can copy/paste and print but not download a standard. Up to 10% of the content of a standard can be copy/pasted for personal or internal use, and up to 5% of the collection made available as a paper or electronic reference collection at the subscribing university. Because of their reference-material status, standards are not available for interlibrary loan. Public library users in the UK may have access to BSOL on a view-only basis if their library service subscribes to the BSOL platform. Users may also be able to access the collection remotely if they have a valid library card and the library offers secure access to its resources.
The BSI Knowledge Centre in Chiswick, London can be contacted directly about viewing standards in their Members' Reading Room.
See also
Institute for Reference Materials and Measurements (EU)
References
External links
1901 establishments in the United Kingdom
International Electrotechnical Commission
Certification marks
Organizations established in 1901 | British Standards | ["Mathematics", "Engineering"] | 2,948 | ["Electrical engineering organizations", "Symbols", "International Electrotechnical Commission", "Certification marks"] |
4,781 | https://en.wikipedia.org/wiki/Benzodiazepine | Benzodiazepines (BZD, BDZ, BZs), colloquially known as "benzos", are a class of depressant drugs whose core chemical structure is the fusion of a benzene ring and a diazepine ring. They are prescribed to treat conditions such as anxiety disorders, insomnia, and seizures. The first benzodiazepine, chlordiazepoxide (Librium), was discovered accidentally by Leo Sternbach in 1955, and was made available in 1960 by Hoffmann–La Roche, which followed with the development of diazepam (Valium) three years later, in 1963. By 1977, benzodiazepines were the most prescribed medications globally; the introduction of selective serotonin reuptake inhibitors (SSRIs), among other factors, decreased rates of prescription, but they remain frequently used worldwide.
Benzodiazepines are depressants that enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at the GABAA receptor, resulting in sedative, hypnotic (sleep-inducing), anxiolytic (anti-anxiety), anticonvulsant, and muscle relaxant properties. High doses of many shorter-acting benzodiazepines may also cause anterograde amnesia and dissociation. These properties make benzodiazepines useful in treating anxiety, panic disorder, insomnia, agitation, seizures, muscle spasms, alcohol withdrawal and as a premedication for medical or dental procedures. Benzodiazepines are categorized as short, intermediate, or long-acting. Short- and intermediate-acting benzodiazepines are preferred for the treatment of insomnia; longer-acting benzodiazepines are recommended for the treatment of anxiety.
Benzodiazepines are generally viewed as safe and effective for short-term use of two to four weeks, although cognitive impairment and paradoxical effects such as aggression or behavioral disinhibition can occur. According to the Government of Victoria's (Australia) Department of Health, long-term use can cause "impaired thinking or memory loss, anxiety and depression, irritability, paranoia, aggression, etc." A minority of people have paradoxical reactions after taking benzodiazepines such as worsened agitation or panic.
Benzodiazepines are associated with an increased risk of suicide due to aggression, impulsivity, and negative withdrawal effects. Long-term use is controversial because of concerns about decreasing effectiveness, physical dependence, benzodiazepine withdrawal syndrome, and an increased risk of dementia and cancer. The elderly are at an increased risk of both short- and long-term adverse effects, and as a result, all benzodiazepines are listed in the Beers List of inappropriate medications for older adults. There is controversy concerning the safety of benzodiazepines in pregnancy. While they are not major teratogens, uncertainty remains as to whether they cause cleft palate in a small number of babies and whether neurobehavioural effects occur as a result of prenatal exposure; they are known to cause withdrawal symptoms in the newborn.
In an overdose, benzodiazepines can cause dangerous deep unconsciousness, but are less toxic than their predecessors, the barbiturates, and death rarely results when a benzodiazepine is the only drug taken. Combined with other central nervous system (CNS) depressants such as alcohol and opioids, the potential for toxicity and fatal overdose increases significantly. Benzodiazepines are commonly used recreationally and also often taken in combination with other addictive substances, and are controlled in most countries.
Medical uses
Benzodiazepines possess psycholeptic, sedative, hypnotic, anxiolytic, anticonvulsant, muscle relaxant, and amnesic actions, which are useful in a variety of indications such as alcohol dependence, seizures, anxiety disorders, panic, agitation, and insomnia. Most are administered orally; however, they can also be given intravenously, intramuscularly, or rectally. In general, benzodiazepines are well tolerated and are safe and effective drugs in the short term for a wide range of conditions. Tolerance can develop to their effects and there is also a risk of dependence, and upon discontinuation a withdrawal syndrome may occur. These factors, combined with other possible secondary effects after prolonged use such as psychomotor, cognitive, or memory impairments, limit their long-term applicability. The effects of long-term use or misuse include the tendency to cause or worsen cognitive deficits, depression, and anxiety. The College of Physicians and Surgeons of British Columbia recommends discontinuing the usage of benzodiazepines in those on opioids and those who have used them long term. Benzodiazepines can have serious adverse health outcomes, and these findings support clinical and regulatory efforts to reduce usage, especially in combination with non-benzodiazepine receptor agonists.
Panic disorder
Because of their effectiveness, tolerability, and rapid onset of anxiolytic action, benzodiazepines are frequently used for the treatment of anxiety associated with panic disorder. However, there is disagreement among expert bodies regarding the long-term use of benzodiazepines for panic disorder. The views range from those holding that benzodiazepines are not effective long-term and should be reserved for treatment-resistant cases, to those holding that they are as effective in the long term as selective serotonin reuptake inhibitors (SSRIs).
American Psychiatric Association (APA) guidelines, published in January 2009, note that, in general, benzodiazepines are well tolerated, and their use for the initial treatment for panic disorder is strongly supported by numerous controlled trials. APA states that there is insufficient evidence to recommend any of the established panic disorder treatments over another. The choice of treatment between benzodiazepines, SSRIs, serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants, and psychotherapy should be based on the patient's history, preference, and other individual characteristics. Selective serotonin reuptake inhibitors are likely to be the best choice of pharmacotherapy for many patients with panic disorder, but benzodiazepines are also often used, and some studies suggest that these medications are still used with greater frequency than the SSRIs. One advantage of benzodiazepines is that they alleviate the anxiety symptoms much faster than antidepressants, and therefore may be preferred in patients for whom rapid symptom control is critical. However, this advantage is offset by the possibility of developing benzodiazepine dependence. APA does not recommend benzodiazepines for persons with depressive symptoms or a recent history of substance use disorder. APA guidelines state that, in general, pharmacotherapy of panic disorder should be continued for at least a year, and that clinical experience supports continuing benzodiazepine treatment to prevent recurrence. Although major concerns about benzodiazepine tolerance and withdrawal have been raised, there is no evidence for significant dose escalation in patients using benzodiazepines long-term. For many such patients, stable doses of benzodiazepines retain their efficacy over several years.
The UK-based National Institute for Health and Clinical Excellence (NICE) carried out a systematic review using different methodology and came to a different conclusion. It questioned the accuracy of studies that were not placebo-controlled and, based on the findings of placebo-controlled studies, does not recommend use of benzodiazepines beyond two to four weeks, as tolerance and physical dependence develop rapidly, with withdrawal symptoms including rebound anxiety occurring after six weeks or more of use. Nevertheless, benzodiazepines are still prescribed for long-term treatment of anxiety disorders, although specific antidepressants and psychological therapies are recommended as the first-line treatment options, with the anticonvulsant drug pregabalin indicated as a second- or third-line treatment and suitable for long-term use. NICE stated that long-term use of benzodiazepines for panic disorder with or without agoraphobia is an unlicensed indication, does not have long-term efficacy, and is, therefore, not recommended by clinical guidelines. Psychological therapies such as cognitive behavioural therapy are recommended as a first-line therapy for panic disorder; benzodiazepine use has been found to interfere with therapeutic gains from these therapies.
Benzodiazepines are usually administered orally; however, very occasionally lorazepam or diazepam may be given intravenously for the treatment of panic attacks.
Generalized anxiety disorder
Benzodiazepines have robust efficacy in the short-term management of generalized anxiety disorder (GAD), but have not been shown to be effective in producing long-term improvement overall. According to the National Institute for Health and Clinical Excellence (NICE), benzodiazepines can be used in the immediate management of GAD, if necessary. However, they should not usually be given for longer than 2–4 weeks. The only medications NICE recommends for the longer-term management of GAD are antidepressants.
Likewise, the Canadian Psychiatric Association (CPA) recommends the benzodiazepines alprazolam, bromazepam, lorazepam, and diazepam only as a second-line choice, if treatment with two different antidepressants has been unsuccessful. Although they are second-line agents, benzodiazepines can be used for a limited time to relieve severe anxiety and agitation. CPA guidelines note that after 4–6 weeks the effect of benzodiazepines may decrease to the level of placebo, and that benzodiazepines are less effective than antidepressants in alleviating ruminative worry, the core symptom of GAD. However, in some cases, prolonged treatment with benzodiazepines as an add-on to an antidepressant may be justified.
A 2015 review found a larger effect with medications than with talk therapy. Medications with benefit include serotonin-noradrenaline reuptake inhibitors, benzodiazepines, and selective serotonin reuptake inhibitors.
Anxiety
Benzodiazepines are sometimes used in the treatment of acute anxiety, since they result in rapid and marked relief of symptoms in most individuals; however, they are not recommended beyond 2–4 weeks of use due to risks of tolerance and dependence and a lack of long-term effectiveness. As with insomnia, they may also be used on an irregular/"as-needed" basis, such as when anxiety is at its worst. Compared to other pharmacological treatments, benzodiazepines are twice as likely to lead to a relapse of the underlying condition upon discontinuation. Psychological therapies and other pharmacological therapies are recommended for the long-term treatment of generalized anxiety disorder. Antidepressants have higher remission rates and are, in general, safe and effective in the short and long term.
Insomnia
Benzodiazepines can be useful for short-term treatment of insomnia. Their use beyond 2 to 4 weeks is not recommended due to the risk of dependence. The Committee on Safety of Medicines report recommended that where long-term use of benzodiazepines for insomnia is indicated, treatment should be intermittent wherever possible. It is preferred that benzodiazepines be taken intermittently and at the lowest effective dose. They improve sleep-related problems by shortening the time spent in bed before falling asleep, prolonging the sleep time, and, in general, reducing wakefulness. However, they worsen sleep quality by increasing light sleep and decreasing deep sleep. Other drawbacks of hypnotics, including benzodiazepines, are possible tolerance to their effects, rebound insomnia, reduced slow-wave sleep, and a withdrawal period typified by rebound insomnia and a prolonged period of anxiety and agitation.
The list of benzodiazepines approved for the treatment of insomnia is fairly similar among most countries, but which benzodiazepines are officially designated as first-line hypnotics prescribed for the treatment of insomnia varies between countries. Longer-acting benzodiazepines such as nitrazepam and diazepam have residual effects that may persist into the next day and are, in general, not recommended.
Since the release of nonbenzodiazepines, also known as Z-drugs, in 1992 in response to safety concerns, individuals with insomnia and other sleep disorders have increasingly been prescribed nonbenzodiazepines (rising from 2.3% of Americans in 1993 to 13.7% in 2010) and less often benzodiazepines (falling from 23.5% in 1993 to 10.8% in 2010). It is not clear whether the newer nonbenzodiazepine hypnotics (Z-drugs) are better than the short-acting benzodiazepines. The efficacy of these two groups of medications is similar. According to the US Agency for Healthcare Research and Quality, indirect comparison indicates that side-effects from benzodiazepines may be about twice as frequent as from nonbenzodiazepines. Some experts suggest using nonbenzodiazepines preferentially as a first-line long-term treatment of insomnia. However, the UK National Institute for Health and Clinical Excellence did not find any convincing evidence in favor of Z-drugs. The NICE review pointed out that short-acting Z-drugs were inappropriately compared in clinical trials with long-acting benzodiazepines. There have been no trials comparing short-acting Z-drugs with appropriate doses of short-acting benzodiazepines. Based on this, NICE recommended choosing the hypnotic based on cost and the patient's preference.
Older adults should not use benzodiazepines to treat insomnia unless other treatments have failed. When benzodiazepines are used, patients, their caretakers, and their physician should discuss the increased risk of harms, including evidence that shows twice the incidence of traffic collisions among driving patients, and falls and hip fracture for older patients.
Seizures
Prolonged convulsive epileptic seizures are a medical emergency that can usually be dealt with effectively by administering fast-acting benzodiazepines, which are potent anticonvulsants. In a hospital environment, intravenous clonazepam, lorazepam, and diazepam are first-line choices. In the community, intravenous administration is not practical and so rectal diazepam or buccal midazolam are used, with a preference for midazolam as its administration is easier and more socially acceptable.
When benzodiazepines were first introduced, they were enthusiastically adopted for treating all forms of epilepsy. However, drowsiness and tolerance become problems with continued use and none are now considered first-line choices for long-term epilepsy therapy. Clobazam is widely used by specialist epilepsy clinics worldwide and clonazepam is popular in the Netherlands, Belgium and France. Clobazam was approved for use in the United States in 2011. In the UK, both clobazam and clonazepam are second-line choices for treating many forms of epilepsy. Clobazam also has a useful role for very short-term seizure prophylaxis and in catamenial epilepsy. Discontinuation after long-term use in epilepsy requires additional caution because of the risks of rebound seizures. Therefore, the dose is slowly tapered over a period of up to six months or longer.
Alcohol withdrawal
Chlordiazepoxide is the most commonly used benzodiazepine for alcohol detoxification, but diazepam may be used as an alternative. Both are used in the detoxification of individuals who are motivated to stop drinking, and are prescribed for a short period of time to reduce the risks of developing tolerance and dependence to the benzodiazepine medication itself. The benzodiazepines with a longer half-life make detoxification more tolerable, and dangerous (and potentially lethal) alcohol withdrawal effects are less likely to occur. On the other hand, short-acting benzodiazepines may lead to breakthrough seizures, and are, therefore, not recommended for detoxification in an outpatient setting. Oxazepam and lorazepam are often used in patients at risk of drug accumulation, in particular, the elderly and those with cirrhosis, because they are metabolized differently from other benzodiazepines, through conjugation.
Benzodiazepines are the preferred choice in the management of alcohol withdrawal syndrome, in particular, for the prevention and treatment of the dangerous complication of seizures and in subduing severe delirium. Lorazepam is the only benzodiazepine with predictable intramuscular absorption and it is the most effective in preventing and controlling acute seizures.
Other indications
Benzodiazepines are often prescribed for a wide range of conditions:
They can sedate patients receiving mechanical ventilation or those in extreme distress. Caution is exercised in this situation due to the risk of respiratory depression, and it is recommended that benzodiazepine overdose treatment facilities should be available. They have also been found to increase the likelihood of later PTSD after people have been removed from ventilators.
Benzodiazepines are indicated in the management of breathlessness (shortness of breath) in advanced diseases, in particular where other treatments have failed to adequately control symptoms.
Benzodiazepines are effective as medication given a couple of hours before surgery to relieve anxiety. They also produce amnesia, which can be useful, as patients may not remember unpleasantness from the procedure. They are also used in patients with dental phobia, as well as in some ophthalmic procedures like refractive surgery, although such use is controversial and only recommended for those who are very anxious. Midazolam is the most commonly prescribed for this use because of its strong sedative actions and fast recovery time, as well as its water solubility, which reduces pain upon injection. Diazepam and lorazepam are sometimes used. Lorazepam has particularly marked amnesic properties that may make it more effective when amnesia is the desired effect.
Benzodiazepines are well known for their strong muscle-relaxing properties and can be useful in the treatment of muscle spasms, although tolerance often develops to their muscle relaxant effects. Baclofen or tizanidine are sometimes used as an alternative to benzodiazepines. Tizanidine has been found to have superior tolerability compared to diazepam and baclofen.
Benzodiazepines are also used to treat the acute panic caused by hallucinogen intoxication. Benzodiazepines are also used to calm the acutely agitated individual and can, if required, be given via an intramuscular injection. They can sometimes be effective in the short-term treatment of psychiatric emergencies such as acute psychosis as in schizophrenia or mania, bringing about rapid tranquillization and sedation until the effects of lithium or neuroleptics (antipsychotics) take effect. Lorazepam is most commonly used but clonazepam is sometimes prescribed for acute psychosis or mania; their long-term use is not recommended due to risks of dependence. Further research investigating the use of benzodiazepines alone and in combination with antipsychotic medications for treating acute psychosis is warranted.
Clonazepam, a benzodiazepine, is used to treat many forms of parasomnia. Rapid eye movement behavior disorder responds well to low doses of clonazepam. Restless legs syndrome can be treated with clonazepam as a third-line option, as its use is still investigational.
Benzodiazepines are sometimes used for obsessive–compulsive disorder (OCD), although they are generally believed ineffective for this indication. Effectiveness was, however, found in one small study. Benzodiazepines can be considered a treatment option in treatment-resistant cases.
Antipsychotics are generally a first-line treatment for delirium; however, when delirium is caused by alcohol or sedative hypnotic withdrawal, benzodiazepines are a first-line treatment.
There is some evidence that low doses of benzodiazepines reduce adverse effects of electroconvulsive therapy.
Contraindications
Benzodiazepines require special precaution if used in the elderly, during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders.
Because of their muscle relaxant action, benzodiazepines may cause respiratory depression in susceptible individuals. For that reason, they are contraindicated in people with myasthenia gravis, sleep apnea, bronchitis, and COPD. Caution is required when benzodiazepines are used in people with personality disorders or intellectual disability because of frequent paradoxical reactions. In major depression, they may precipitate suicidal tendencies and are sometimes used for suicidal overdoses. Individuals with a history of excessive alcohol use or non-medical use of opioids or barbiturates should avoid benzodiazepines, as there is a risk of life-threatening interactions with these drugs.
Pregnancy
In the United States, the Food and Drug Administration has categorized benzodiazepines into either category D or X, meaning that potential for harm to the unborn has been demonstrated.
Exposure to benzodiazepines during pregnancy has been associated with a slightly increased (from 0.06 to 0.07%) risk of cleft palate in newborns, a controversial conclusion as some studies find no association between benzodiazepines and cleft palate. Their use by expectant mothers shortly before the delivery may result in a floppy infant syndrome. Newborns with this condition tend to have hypotonia, hypothermia, lethargy, and breathing and feeding difficulties. Cases of neonatal withdrawal syndrome have been described in infants chronically exposed to benzodiazepines in utero. This syndrome may be hard to recognize, as it starts several days after delivery, for example, as late as 21 days for chlordiazepoxide. The symptoms include tremors, hypertonia, hyperreflexia, hyperactivity, and vomiting and may last for up to three to six months. Tapering down the dose during pregnancy may lessen its severity. If used in pregnancy, those benzodiazepines with a better and longer safety record, such as diazepam or chlordiazepoxide, are recommended over potentially more harmful benzodiazepines, such as temazepam or triazolam. Using the lowest effective dose for the shortest period of time minimizes the risks to the unborn child.
Elderly
The benefits of benzodiazepines are least and the risks are greatest in the elderly. They are listed as a potentially inappropriate medication for older adults by the American Geriatrics Society. The elderly are at an increased risk of dependence and are more sensitive to the adverse effects such as memory problems, daytime sedation, impaired motor coordination, and increased risk of motor vehicle accidents and falls, and an increased risk of hip fractures. The long-term effects of benzodiazepines and benzodiazepine dependence in the elderly can resemble dementia, depression, or anxiety syndromes, and progressively worsen over time. Adverse effects on cognition can be mistaken for the effects of old age. The benefits of withdrawal include improved cognition, alertness, mobility, reduced risk of incontinence, and a reduced risk of falls and fractures. The success of gradually tapering benzodiazepines is as great in the elderly as in younger people. Benzodiazepines should be prescribed to the elderly only with caution and only for a short period at low doses. Short- to intermediate-acting benzodiazepines, such as oxazepam and temazepam, are preferred in the elderly. The high-potency benzodiazepines alprazolam and triazolam and long-acting benzodiazepines are not recommended in the elderly due to increased adverse effects. Nonbenzodiazepines such as zaleplon and zolpidem and low doses of sedating antidepressants are sometimes used as alternatives to benzodiazepines.
Long-term use of benzodiazepines is associated with increased risk of cognitive impairment and dementia, and reduction in prescribing levels is likely to reduce dementia risk. The association of a history of benzodiazepine use and cognitive decline is unclear, with some studies reporting a lower risk of cognitive decline in former users, some finding no association and some indicating an increased risk of cognitive decline.
Benzodiazepines are sometimes prescribed to treat behavioral symptoms of dementia. However, like antidepressants, they have little evidence of effectiveness, although antipsychotics have shown some benefit. Cognitive impairing effects of benzodiazepines that occur frequently in the elderly can also worsen dementia.
Adverse effects
The most common side-effects of benzodiazepines are related to their sedating and muscle-relaxing action. They include drowsiness, dizziness, and decreased alertness and concentration. Lack of coordination may result in falls and injuries particularly in the elderly. Another result is impairment of driving skills and increased likelihood of road traffic accidents. Decreased libido and erection problems are a common side effect. Depression and disinhibition may emerge. Hypotension and suppressed breathing (hypoventilation) may be encountered with intravenous use. Less common side effects include nausea and changes in appetite, blurred vision, confusion, euphoria, depersonalization and nightmares. Cases of liver toxicity have been described but are very rare.
The long-term effects of benzodiazepine use can include cognitive impairment as well as affective and behavioural problems. Feelings of turmoil, difficulty in thinking constructively, loss of sex-drive, agoraphobia and social phobia, increasing anxiety and depression, loss of interest in leisure pursuits and interests, and an inability to experience or express feelings can also occur. Not everyone, however, experiences problems with long-term use. Additionally, an altered perception of self, environment and relationships may occur. A study published in 2020 found that long-term use of prescription benzodiazepines is associated with an increase in all-cause mortality among those age 65 or younger, but not those older than 65. The study also found that all-cause mortality was increased further in cases in which benzodiazepines are co-prescribed with opioids, relative to cases in which benzodiazepines are prescribed without opioids, but again only in those age 65 or younger.
Compared to other sedative-hypnotics, hospital visits involving benzodiazepines had 66% greater odds of a serious adverse health outcome, including hospitalization, patient transfer, or death; visits involving a combination of benzodiazepines and non-benzodiazepine receptor agonists had almost four times the odds of a serious health outcome.
In September 2020, the US Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Cognitive effects
The short-term use of benzodiazepines adversely affects multiple areas of cognition, the most notable one being that it interferes with the formation and consolidation of memories of new material and may induce complete anterograde amnesia. However, researchers hold contrary opinions regarding the effects of long-term administration. One view is that many of the short-term effects continue into the long-term and may even worsen, and are not resolved after stopping benzodiazepine usage. Another view maintains that cognitive deficits in chronic benzodiazepine users occur only for a short period after the dose, or that the anxiety disorder is the cause of these deficits.
While definitive studies are lacking, the former view received support from a 2004 meta-analysis of 13 small studies. This meta-analysis found that long-term use of benzodiazepines was associated with moderate to large adverse effects on all areas of cognition, with visuospatial memory being the most commonly detected impairment. Some of the other impairments reported were decreased IQ, visuomotor coordination, information processing, verbal learning and concentration. The authors of the meta-analysis and a later reviewer noted that the applicability of this meta-analysis is limited because the subjects were taken mostly from withdrawal clinics; coexisting drug use, alcohol use, and psychiatric disorders were not defined; and several of the included studies conducted the cognitive measurements during the withdrawal period.
Paradoxical effects
Paradoxical reactions, such as increased seizures in epileptics, aggression, violence, impulsivity, irritability and suicidal behavior sometimes occur. These reactions have been explained as consequences of disinhibition and the subsequent loss of control over socially unacceptable behavior. Paradoxical reactions are rare in the general population, with an incidence rate below 1% and similar to placebo. However, they occur with greater frequency in recreational abusers, individuals with borderline personality disorder, children, and patients on high-dosage regimes. In these groups, impulse control problems are perhaps the most important risk factor for disinhibition; learning disabilities and neurological disorders are also significant risks. Most reports of disinhibition involve high doses of high-potency benzodiazepines. Paradoxical effects may also appear after chronic use of benzodiazepines.
Long-term worsening of psychiatric symptoms
While benzodiazepines may have short-term benefits for anxiety, sleep and agitation in some patients, long-term (i.e., greater than 2–4 weeks) use can result in a worsening of the very symptoms the medications are meant to treat. Potential explanations include exacerbating cognitive problems that are already common in anxiety disorders, causing or worsening depression and suicidality, disrupting sleep architecture by inhibiting deep stage sleep, withdrawal symptoms or rebound symptoms in between doses mimicking or exacerbating underlying anxiety or sleep disorders, inhibiting the benefits of psychotherapy by inhibiting memory consolidation and reducing fear extinction, and reducing coping with trauma/stress and increasing vulnerability to future stress. The latter two explanations may be why benzodiazepines are ineffective and/or potentially harmful in PTSD and phobias. Anxiety, insomnia and irritability may be temporarily exacerbated during withdrawal, but psychiatric symptoms after discontinuation are usually less than even while taking benzodiazepines. Functioning significantly improves within 1 year of discontinuation.
Physical dependence, withdrawal and post-withdrawal syndromes
Tolerance
The main problem of the chronic use of benzodiazepines is the development of tolerance and dependence. Tolerance manifests itself as diminished pharmacological effect and develops relatively quickly to the sedative, hypnotic, anticonvulsant, and muscle relaxant actions of benzodiazepines. Tolerance to anti-anxiety effects develops more slowly with little evidence of continued effectiveness beyond four to six months of continued use. In general, tolerance to the amnesic effects does not occur. However, controversy exists as to tolerance to the anxiolytic effects with some evidence that benzodiazepines retain efficacy and opposing evidence from a systematic review of the literature that tolerance frequently occurs and some evidence that anxiety may worsen with long-term use. The question of tolerance to the amnesic effects of benzodiazepines is, likewise, unclear. Some evidence suggests that partial tolerance does develop, and that, "memory impairment is limited to a narrow window within 90 minutes after each dose".
A major disadvantage of benzodiazepines is that tolerance to therapeutic effects develops relatively quickly while many adverse effects persist. Tolerance develops to hypnotic and myorelaxant effects within days to weeks, and to anticonvulsant and anxiolytic effects within weeks to months. Therefore, benzodiazepines are unlikely to be effective long-term treatments for sleep and anxiety. While BZD therapeutic effects disappear with tolerance, depression and impulsivity with high suicidal risk commonly persist. Several studies have confirmed that long-term benzodiazepines are not significantly different from placebo for sleep or anxiety. This may explain why patients commonly increase doses over time and many eventually take more than one type of benzodiazepine after the first loses effectiveness. Additionally, because tolerance to benzodiazepine sedating effects develops more quickly than does tolerance to brainstem depressant effects, those taking more benzodiazepines to achieve desired effects may experience sudden respiratory depression, hypotension or death. Most patients with anxiety disorders and PTSD have symptoms that persist for at least several months, making tolerance to therapeutic effects a distinct problem for them and necessitating the need for more effective long-term treatment (e.g., psychotherapy, serotonergic antidepressants).
Withdrawal symptoms and management
Discontinuation of benzodiazepines or abrupt reduction of the dose, even after a relatively short course of treatment (two to four weeks), may result in two groups of symptoms, rebound and withdrawal. Rebound symptoms are the return of the symptoms for which the patient was treated but worse than before. Withdrawal symptoms are the new symptoms that occur when the benzodiazepine is stopped. They are the main sign of physical dependence.
The most frequent symptoms of withdrawal from benzodiazepines are insomnia, gastric problems, tremors, agitation, fearfulness, and muscle spasms. The less frequent effects are irritability, sweating, depersonalization, derealization, hypersensitivity to stimuli, depression, suicidal behavior, psychosis, seizures, and delirium tremens. Severe symptoms usually occur as a result of abrupt or over-rapid withdrawal. Abrupt withdrawal can be dangerous and lead to excitotoxicity, causing damage and even death to nerve cells as a result of excessive levels of the excitatory neurotransmitter glutamate. Increased glutamatergic activity is thought to be part of a compensatory mechanism to chronic GABAergic inhibition from benzodiazepines. Therefore, a gradual reduction regimen is recommended.
Symptoms may also occur during a gradual dosage reduction, but are typically less severe and may persist as part of a protracted withdrawal syndrome for months after cessation of benzodiazepines. Approximately 10% of patients experience a notable protracted withdrawal syndrome, which can persist for many months or in some cases a year or longer. Protracted symptoms tend to resemble those seen during the first couple of months of withdrawal but usually are of a sub-acute level of severity. Such symptoms do gradually lessen over time, eventually disappearing altogether.
Benzodiazepines have a reputation with patients and doctors for causing a severe and traumatic withdrawal; however, this is in large part due to the withdrawal process being poorly managed. Over-rapid withdrawal from benzodiazepines increases the severity of the withdrawal syndrome and increases the failure rate. A slow and gradual withdrawal customised to the individual and, if indicated, psychological support is the most effective way of managing the withdrawal. Opinion as to the time needed to complete withdrawal ranges from four weeks to several years. A goal of less than six months has been suggested, but due to factors such as dosage and type of benzodiazepine, reasons for prescription, lifestyle, personality, environmental stresses, and amount of available support, a year or more may be needed to withdraw.
Withdrawal is best managed by transferring the physically dependent patient to an equivalent dose of diazepam because it has the longest half-life of all of the benzodiazepines, is metabolised into long-acting active metabolites and is available in low-potency tablets, which can be quartered for smaller doses. A further benefit is that it is available in liquid form, which allows for even smaller reductions. Chlordiazepoxide, which also has a long half-life and long-acting active metabolites, can be used as an alternative.
Nonbenzodiazepines are contraindicated during benzodiazepine withdrawal as they are cross tolerant with benzodiazepines and can induce dependence. Alcohol is also cross tolerant with benzodiazepines and more toxic and thus caution is needed to avoid replacing one dependence with another. During withdrawal, fluoroquinolone-based antibiotics are best avoided if possible; they displace benzodiazepines from their binding site and reduce GABA function and, thus, may aggravate withdrawal symptoms. Antipsychotics are not recommended for benzodiazepine withdrawal (or other CNS depressant withdrawal states) especially clozapine, olanzapine or low potency phenothiazines, e.g., chlorpromazine as they lower the seizure threshold and can worsen withdrawal effects; if used extreme caution is required.
Withdrawal from long-term benzodiazepine use is beneficial for most individuals. Withdrawal of benzodiazepines from long-term users, in general, leads to improved physical and mental health, particularly in the elderly; although some long-term users report continued benefit from taking benzodiazepines, this may be the result of suppression of withdrawal effects.
Controversial associations
Beyond the well-established link between benzodiazepines and psychomotor impairment resulting in motor vehicle accidents and falls leading to fracture, research in the 2000s and 2010s has raised the possibility of associations between benzodiazepines (and Z-drugs) and other, as yet unproven, adverse effects, including dementia, cancer, infections, pancreatitis and respiratory disease exacerbations.
Dementia
A number of studies have drawn an association between long-term benzodiazepine use and neuro-degenerative disease, particularly Alzheimer's disease. It has been determined that long-term use of benzodiazepines is associated with increased dementia risk, even after controlling for protopathic bias.
Infections
Some observational studies have detected significant associations between benzodiazepines and respiratory infections such as pneumonia, whereas others have not. A large meta-analysis of pre-marketing randomized controlled trials on the pharmacologically related Z-drugs suggests a small increase in infection risk as well. An immunodeficiency effect from the action of benzodiazepines on GABA-A receptors has been postulated from animal studies.
Cancer
A meta-analysis of observational studies has determined an association between benzodiazepine use and cancer, though the risk across different agents and different cancers varied significantly. In terms of experimental basic science evidence, an analysis of carcinogenicity and genotoxicity data for various benzodiazepines has suggested a small possibility of carcinogenesis for a small number of benzodiazepines.
Pancreatitis
The evidence suggesting a link between benzodiazepines (and Z-Drugs) and pancreatic inflammation is very sparse and limited to a few observational studies from Taiwan. A criticism of confounding can be applied to these findings as with the other controversial associations above. Further well-designed research from other populations as well as a biologically plausible mechanism is required to confirm this association.
Overdose
Although benzodiazepines are much safer in overdose than their predecessors, the barbiturates, they can still cause problems in overdose. Taken alone, they rarely cause severe complications in overdose; statistics in England showed that benzodiazepines were responsible for 3.8% of all deaths by poisoning from a single drug. However, combining these drugs with alcohol, opiates or tricyclic antidepressants markedly raises the toxicity. The elderly are more sensitive to the side effects of benzodiazepines, and poisoning may even occur from their long-term use. The various benzodiazepines differ in their toxicity; temazepam appears most toxic in overdose and when used with other drugs. The symptoms of a benzodiazepine overdose may include drowsiness, slurred speech, nystagmus, hypotension, ataxia, coma, respiratory depression, and cardiorespiratory arrest.
A reversal agent for benzodiazepines exists, flumazenil (Anexate), itself belonging to the chemical class of benzodiazepines. Its use as an antidote is not routinely recommended because of the high risk of resedation and seizures. In a double-blind, placebo-controlled trial of 326 people, 4 people had serious adverse events and 61% became resedated following the use of flumazenil. Numerous contraindications to its use exist. It is contraindicated in people with a history of long-term use of benzodiazepines, those having ingested a substance that lowers the seizure threshold or may cause an arrhythmia, and in those with abnormal vital signs. One study found that only 10% of the people presenting with a benzodiazepine overdose are suitable candidates for treatment with flumazenil.
Interactions
Individual benzodiazepines may have different interactions with certain drugs. Depending on their metabolism pathway, benzodiazepines can be divided roughly into two groups. The largest group consists of those that are metabolized by cytochrome P450 (CYP450) enzymes and possess significant potential for interactions with other drugs. The other group comprises those that are metabolized through glucuronidation, such as lorazepam, oxazepam, and temazepam, and, in general, have few drug interactions.
Many drugs, including oral contraceptives, some antibiotics, antidepressants, and antifungal agents, inhibit cytochrome enzymes in the liver. They reduce the rate of elimination of the benzodiazepines that are metabolized by CYP450, leading to possibly excessive drug accumulation and increased side-effects. In contrast, drugs that induce cytochrome P450 enzymes, such as St John's wort, the antibiotic rifampicin, and the anticonvulsants carbamazepine and phenytoin, accelerate elimination of many benzodiazepines and decrease their action. Taking benzodiazepines with alcohol, opioids and other central nervous system depressants potentiates their action. This often results in increased sedation, impaired motor coordination, suppressed breathing, and other adverse effects that have potential to be lethal. Antacids can slow down absorption of some benzodiazepines; however, this effect is marginal and inconsistent.
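As a purely illustrative, non-clinical sketch of the two metabolic groups described above, the Python snippet below (a hypothetical function, with drug lists limited to the examples named in this section) distinguishes glucuronidation-metabolized benzodiazepines, which have few interactions, from CYP450-metabolized ones, which may accumulate when combined with an enzyme inhibitor or be eliminated faster with an inducer.

```python
# Illustrative only; not clinical guidance. The list contains only the examples
# mentioned in the text and is deliberately incomplete.
GLUCURONIDATION_METABOLIZED = {"lorazepam", "oxazepam", "temazepam"}

def interaction_note(benzodiazepine: str, co_medication_cyp450_effect: str) -> str:
    """Rough note on CYP450-mediated interaction potential.

    co_medication_cyp450_effect: "inhibitor" (e.g., some antifungals), "inducer"
    (e.g., rifampicin, carbamazepine, phenytoin, St John's wort), or "none".
    """
    if benzodiazepine.lower() in GLUCURONIDATION_METABOLIZED:
        return "Metabolized by glucuronidation: few CYP450-mediated interactions expected."
    if co_medication_cyp450_effect == "inhibitor":
        return "CYP450-metabolized + inhibitor: slower elimination, possible accumulation and increased side-effects."
    if co_medication_cyp450_effect == "inducer":
        return "CYP450-metabolized + inducer: faster elimination and decreased action."
    return "No CYP450-mediated interaction flagged by this simplified example."

print(interaction_note("diazepam", "inducer"))    # e.g., diazepam with rifampicin
print(interaction_note("lorazepam", "inducer"))   # glucuronidation pathway, few interactions
```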
Pharmacology
Pharmacodynamics
Benzodiazepines work by increasing the effectiveness of the endogenous chemical, GABA, to decrease the excitability of neurons. This reduces the communication between neurons and, therefore, has a calming effect on many of the functions of the brain.
GABA controls the excitability of neurons by binding to the GABAA receptor. The GABAA receptor is a protein complex located in the synapses between neurons. All GABAA receptors contain an ion channel that conducts chloride ions across neuronal cell membranes and two binding sites for the neurotransmitter gamma-aminobutyric acid (GABA), while a subset of GABAA receptor complexes also contain a single binding site for benzodiazepines. Binding of benzodiazepines to this receptor complex does not alter binding of GABA. Unlike other positive allosteric modulators that increase ligand binding, benzodiazepine binding acts as a positive allosteric modulator by increasing the total conduction of chloride ions across the neuronal cell membrane when GABA is already bound to its receptor. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely.
Different GABAA receptor subtypes have varying distributions within different regions of the brain and, therefore, control distinct neuronal circuits. Hence, activation of different GABAA receptor subtypes by benzodiazepines may result in distinct pharmacological actions. In terms of the mechanism of action of benzodiazepines, their similarities are too great to separate them into individual categories such as anxiolytic or hypnotic. For example, a hypnotic administered in low doses produces anxiety-relieving effects, whereas a benzodiazepine marketed as an anti-anxiety drug at higher doses induces sleep.
The subset of GABAA receptors that also bind benzodiazepines are referred to as benzodiazepine receptors (BzR). The GABAA receptor is a heteromer composed of five subunits, the most common ones being two αs, two βs, and one γ (α2β2γ1). For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABAA receptors that are made up of different combinations of subunit subtypes have different properties, different distributions in the brain and different activities relative to pharmacological and clinical effects. Benzodiazepines bind at the interface of the α and γ subunits on the GABAA receptor. Binding also requires that alpha subunits contain a histidine amino acid residue, (i.e., α1, α2, α3, and α5 containing GABAA receptors). For this reason, benzodiazepines show no affinity for GABAA receptors containing α4 and α6 subunits with an arginine instead of a histidine residue. Once bound to the benzodiazepine receptor, the benzodiazepine ligand locks the benzodiazepine receptor into a conformation in which it has a greater affinity for the GABA neurotransmitter. This increases the frequency of the opening of the associated chloride ion channel and hyperpolarizes the membrane of the associated neuron. The inhibitory effect of the available GABA is potentiated, leading to sedative and anxiolytic effects. For instance, those ligands with high activity at the α1 are associated with stronger hypnotic effects, whereas those with higher affinity for GABAA receptors containing α2 and/or α3 subunits have good anti-anxiety activity.
GABAA receptors participate in the regulation of synaptic pruning by prompting microglial spine engulfment. Benzodiazepines have been shown to upregulate microglial spine engulfment and prompt overzealous eradication of synaptic connections. This mechanism may help explain the increased risk of dementia associated with long-term benzodiazepine treatment.
The benzodiazepine class of drugs also interacts with peripheral benzodiazepine receptors. Peripheral benzodiazepine receptors are present in peripheral nervous system tissues, glial cells, and to a lesser extent the central nervous system. These peripheral receptors are not structurally related or coupled to GABAA receptors. They modulate the immune system and are involved in the body's response to injury. Benzodiazepines also function as weak adenosine reuptake inhibitors, and it has been suggested that some of their anticonvulsant, anxiolytic, and muscle relaxant effects may be in part mediated by this action. Benzodiazepines have binding sites in the periphery; however, their effects on muscle tone are not mediated through these peripheral receptors. The peripheral binding sites for benzodiazepines are present in immune cells and the gastrointestinal tract.
Pharmacokinetics
A benzodiazepine can be placed into one of three groups by its elimination half-life, or the time it takes for the body to eliminate half of the dose. Some benzodiazepines, such as diazepam and chlordiazepoxide, have long-acting active metabolites: both are metabolised into desmethyldiazepam, which has a half-life of 36–200 hours, while flurazepam's main active metabolite, desalkylflurazepam, has a half-life of 40–250 hours. These long-acting metabolites are partial agonists.
Short-acting compounds have a median half-life of 1–12 hours. They have few residual effects if taken before bedtime, but rebound insomnia may occur upon discontinuation, and with prolonged use they might cause daytime withdrawal symptoms such as next-day rebound anxiety. Examples are brotizolam, midazolam, and triazolam.
Intermediate-acting compounds have a median half-life of 12–40 hours. They may have some residual effects in the first half of the day if used as a hypnotic. Rebound insomnia, however, is more common upon discontinuation of intermediate-acting benzodiazepines than longer-acting benzodiazepines. Examples are alprazolam, estazolam, flunitrazepam, clonazepam, lormetazepam, lorazepam, nitrazepam, and temazepam.
Long-acting compounds have a half-life of 40–250 hours. They have a risk of accumulation in the elderly and in individuals with severely impaired liver function, but they have a reduced severity of rebound effects and withdrawal. Examples are diazepam, clorazepate, chlordiazepoxide, and flurazepam.
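As a rough illustration of the grouping above, the following sketch classifies a benzodiazepine by its elimination half-life using only the approximate ranges quoted in this section (the exact boundary handling at 12 and 40 hours is an assumption made here, not a clinical rule, and no drug-specific data are included):

```python
def classify_by_half_life(half_life_hours: float) -> str:
    """Classify a benzodiazepine by elimination half-life in hours,
    using the approximate ranges described in the text above."""
    if half_life_hours <= 12:
        return "short-acting (roughly 1-12 h)"
    elif half_life_hours <= 40:
        return "intermediate-acting (roughly 12-40 h)"
    else:
        return "long-acting (roughly 40-250 h)"

# Example with an arbitrary, hypothetical half-life of 30 hours:
print(classify_by_half_life(30))  # intermediate-acting (roughly 12-40 h)
```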
Chemistry
Benzodiazepines share a similar chemical structure, and their effects in humans are mainly produced by the allosteric modification of a specific kind of neurotransmitter receptor, the GABAA receptor, which increases the overall conductance of these inhibitory channels; this results in the various therapeutic effects as well as adverse effects of benzodiazepines. Other less important modes of action are also known.
The term benzodiazepine is the chemical name for the heterocyclic ring system (see figure to the right), which is a fusion between the benzene and diazepine ring systems. Under Hantzsch–Widman nomenclature, a diazepine is a heterocycle with two nitrogen atoms, five carbon atoms and the maximum possible number of noncumulative double bonds. The "benzo" prefix indicates the benzene ring fused onto the diazepine ring.
Benzodiazepine drugs are substituted 1,4-benzodiazepines, although the chemical term can refer to many other compounds that do not have useful pharmacological properties. Different benzodiazepine drugs have different side groups attached to this central structure. The different side groups affect the binding of the molecule to the GABAA receptor and so modulate the pharmacological properties. Many of the pharmacologically active "classical" benzodiazepine drugs contain the 5-phenyl-1H-benzo[e] [1,4]diazepin-2(3H)-one substructure (see figure to the right). Benzodiazepines have been found to structurally mimic protein reverse turns, which in many cases accounts for their biological activity.
Nonbenzodiazepines also bind to the benzodiazepine binding site on the GABAA receptor and possess similar pharmacological properties. While the nonbenzodiazepines are by definition structurally unrelated to the benzodiazepines, both classes of drugs possess a common pharmacophore (see figure to the lower-right), which explains their binding to a common receptor site.
Types
2-keto compounds:
clorazepate, diazepam, flurazepam, halazepam, prazepam, and others
3-hydroxy compounds:
lorazepam, lormetazepam, oxazepam, temazepam
7-nitro compounds:
clonazepam, flunitrazepam, nimetazepam, nitrazepam
Triazolo compounds:
adinazolam, alprazolam, estazolam, triazolam
Imidazo compounds:
climazolam, loprazolam, midazolam
1,5-benzodiazepines:
clobazam
History
The first benzodiazepine, chlordiazepoxide (Librium), was synthesized in 1955 by Leo Sternbach while working at Hoffmann–La Roche on the development of tranquilizers. The pharmacological properties of the compounds prepared initially were disappointing, and Sternbach abandoned the project. Two years later, in April 1957, co-worker Earl Reeder noticed a "nicely crystalline" compound left over from the discontinued project while spring-cleaning in the lab. This compound, later named chlordiazepoxide, had not been tested in 1955 because of Sternbach's focus on other issues. Expecting pharmacology results to be negative, and hoping to publish the chemistry-related findings, researchers submitted it for a standard battery of animal tests. The compound showed very strong sedative, anticonvulsant, and muscle relaxant effects. These impressive findings led to its speedy introduction throughout the world in 1960 under the brand name Librium. Following chlordiazepoxide, diazepam was marketed by Hoffmann–La Roche under the brand name Valium in 1963, and for a while the two were the most commercially successful drugs. The introduction of benzodiazepines led to a decrease in the prescription of barbiturates, and by the 1970s they had largely replaced the older drugs for sedative and hypnotic uses.
The new group of drugs was initially greeted with optimism by the medical profession, but gradually concerns arose; in particular, the risk of dependence became evident in the 1980s. Benzodiazepines have a unique history in that they were responsible for the largest-ever class-action lawsuit against drug manufacturers in the United Kingdom, involving 14,000 patients and 1,800 law firms that alleged the manufacturers knew of the dependence potential but intentionally withheld this information from doctors. At the same time, 117 general practitioners and 50 health authorities were sued by patients to recover damages for the harmful effects of dependence and withdrawal. This led some doctors to require a signed consent form from their patients and to recommend that all patients be adequately warned of the risks of dependence and withdrawal before starting treatment with benzodiazepines. The court case against the drug manufacturers never reached a verdict; legal aid had been withdrawn and there were allegations that the consultant psychiatrists, the expert witnesses, had a conflict of interest. The court case fell through, at a cost of £30 million, and led to more cautious funding through legal aid for future cases. This made future class-action lawsuits less likely to succeed, due to the high cost of financing a smaller number of cases and the increased charges each person involved would face if the case were lost.
Although antidepressants with anxiolytic properties have been introduced, and there is increasing awareness of the adverse effects of benzodiazepines, prescriptions for short-term anxiety relief have not significantly dropped. For treatment of insomnia, benzodiazepines are now less popular than nonbenzodiazepines, which include zolpidem, zaleplon and eszopiclone. Nonbenzodiazepines are molecularly distinct, but nonetheless, they work on the same benzodiazepine receptors and produce similar sedative effects.
Benzodiazepines have been detected in plant specimens and brain samples of animals not exposed to synthetic sources, including a human brain from the 1940s. However, it is unclear whether these compounds are biosynthesized by microbes or by plants and animals themselves. A microbial biosynthetic pathway has been proposed.
Society and culture
Legal status
In the United States, benzodiazepines are Schedule IV drugs under the Federal Controlled Substances Act, even when not on the market (for example, flunitrazepam), with the exception of flualprazolam, etizolam, clonazolam, flubromazolam, and diclazepam which are placed in Schedule I.
In Canada, possession of benzodiazepines is legal for personal use. All benzodiazepines are categorized as Schedule IV substances under the Controlled Drugs and Substances Act.
In the United Kingdom, benzodiazepines are Class C controlled drugs, carrying the maximum penalty of 7 years imprisonment, an unlimited fine or both for possession and a maximum penalty of 14 years imprisonment, an unlimited fine or both for supplying benzodiazepines to others.
In the Netherlands, since October 1993, benzodiazepines, including formulations containing less than 20 mg of temazepam, are all placed on List 2 of the Opium Law. A prescription is needed for possession of all benzodiazepines. Temazepam formulations containing 20 mg or greater of the drug are placed on List 1, thus requiring doctors to write prescriptions in the List 1 format.
In East Asia and Southeast Asia, temazepam and nimetazepam are often heavily controlled and restricted. In certain countries, triazolam, flunitrazepam, flutoprazepam and midazolam are also restricted or controlled to certain degrees. In Hong Kong, all benzodiazepines are regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. Previously only brotizolam, flunitrazepam and triazolam were classed as dangerous drugs.
Internationally, benzodiazepines are categorized as Schedule IV controlled drugs, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances.
Recreational use
Benzodiazepines are considered major addictive substances. Non-medical benzodiazepine use is mostly limited to individuals who use other substances, i.e., people who engage in polysubstance use. On the international scene, benzodiazepines are categorized as Schedule IV controlled drugs by the INCB, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances. Some variation in drug scheduling exists in individual countries; for example, in the United Kingdom, midazolam and temazepam are Schedule III controlled drugs.
British law requires that temazepam (but not midazolam) be stored in safe custody. Safe custody requirements ensure that pharmacists and doctors holding stock of temazepam must store it in securely fixed double-locked steel safety cabinets and maintain a written register, which must be bound and contain separate entries for temazepam and must be written in ink with no use of correction fluid (although a written register is not required for temazepam in the United Kingdom). Disposal of expired stock must be witnessed by a designated inspector (either a local drug-enforcement police officer or an official from the health authority). Benzodiazepine use ranges from occasional binges on large doses to chronic and compulsive drug use of high doses.
Benzodiazepines are commonly used recreationally by poly-drug users. Mortality is higher among poly-drug users who also use benzodiazepines. Heavy alcohol use also increases mortality among poly-drug users. Polydrug use involving benzodiazepines and alcohol can result in an increased risk of blackouts, risk-taking behaviours, seizures, and overdose. Dependence on and tolerance to benzodiazepines, often coupled with dosage escalation, can develop rapidly among people who misuse drugs; withdrawal syndrome may appear after as little as three weeks of continuous use. Long-term use has the potential to cause both physical and psychological dependence and severe withdrawal symptoms such as depression, anxiety (often to the point of panic attacks), and agoraphobia. Benzodiazepines and, in particular, temazepam are sometimes used intravenously, which, if done incorrectly or in an unsterile manner, can lead to medical complications including abscesses, cellulitis, thrombophlebitis, arterial puncture, deep vein thrombosis, and gangrene. Sharing syringes and needles for this purpose also raises the possibility of transmission of hepatitis, HIV, and other diseases. Benzodiazepines are also misused intranasally, which may have additional health consequences. Once benzodiazepine dependence has been established, a clinician usually converts the patient to an equivalent dose of diazepam before beginning a gradual reduction program.
A 1999–2005 Australian police survey of detainees reported preliminary findings that self-reported users of benzodiazepines were less likely than non-user detainees to work full-time and more likely to receive government benefits, use methamphetamine or heroin, and be arrested or imprisoned. Benzodiazepines are sometimes used for criminal purposes; they serve to incapacitate a victim in cases of drug assisted rape or robbery.
Overall, anecdotal evidence suggests that temazepam may be the most psychologically habit-forming (addictive) benzodiazepine. Non-medical temazepam use reached epidemic proportions in some parts of the world, in particular, in Europe and Australia, and is a major addictive substance in many Southeast Asian countries. This led authorities of various countries to place temazepam under a more restrictive legal status. Some countries, such as Sweden, banned the drug outright. Temazepam also has certain pharmacokinetic properties of absorption, distribution, elimination, and clearance that make it more apt to non-medical use compared to many other benzodiazepines.
Veterinary use
Benzodiazepines are used in veterinary practice in the treatment of various disorders and conditions. As in humans, they are used in the first-line management of seizures, status epilepticus, and tetanus, and as maintenance therapy in epilepsy (in particular, in cats). They are widely used in small and large animals (including horses, swine, cattle and exotic and wild animals) for their anxiolytic and sedative effects, as pre-medication before surgery, for induction of anesthesia and as adjuncts to anesthesia.
References
External links
Chemical classes of psychoactive drugs
Glycine receptor antagonists
Hallucinogen antidotes
Hypnotics
Muscle relaxants
Sedatives | Benzodiazepine | [
"Biology"
] | 13,374 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
4,788 | https://en.wikipedia.org/wiki/Body%20mass%20index | Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is expressed in units of kg/m2, resulting from mass in kilograms (kg) and height in metres (m).
The BMI may be determined first by measuring its components by means of a weighing scale and a stadiometer. The multiplication and division may be carried out directly, by hand or using a calculator, or indirectly using a lookup table (or chart). The table displays BMI as a function of mass and height and may show other units of measurement (converted to metric units for the calculation). The table may also show contour lines or colours for different BMI categories.
The BMI is a convenient rule of thumb used to broadly categorize a person as based on tissue mass (muscle, fat, and bone) and height. Major adult BMI classifications are underweight (under 18.5 kg/m2), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (30 or more). When used to predict an individual's health, rather than as a statistical measurement for groups, the BMI has limitations that can make it less useful than some of the alternatives, especially when applied to individuals with abdominal obesity, short stature, or high muscle mass.
BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range.
History
Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". Quetelet himself never intended for the index, then called the Quetelet Index, to be used as a means of medical assessment. Instead, it was a component of his study of l'homme moyen, or the average man. Quetelet thought of the average man as a social ideal, and developed the body mass index as a means of discovering the socially ideal human person. According to Lars Grue and Arvid Heiberg in the Scandinavian Journal of Disability Research, Quetelet's idealization of the average man would be elaborated upon by Francis Galton a decade later in the development of eugenics.
The modern term "body mass index" (BMI) for the ratio of human body weight to squared height was coined in a paper published in the July 1972 edition of the Journal of Chronic Diseases by Ancel Keys and others. In this paper, Keys argued that what he termed the BMI was "if not fully satisfactory, at least as good as any other relative weight index as an indicator of relative obesity".
The interest in an index that measures body fat came with observed increasing obesity in prosperous Western societies. Keys explicitly judged BMI as appropriate for population studies and inappropriate for individual evaluation. Nevertheless, due to its simplicity, it has come to be widely used for preliminary diagnoses. Additional metrics, such as waist circumference, can be more useful.
The BMI is expressed in kg/m2, resulting from mass in kilograms and height in metres. If pounds and inches are used, a conversion factor of 703 (kg/m2)/(lb/in2) is applied. (If pounds and feet are used, a conversion factor of 4.88 is used.) When the term BMI is used informally, the units are usually omitted.
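A minimal sketch of the arithmetic described above, using the conversion factor 703 quoted in the text (the example heights and weights are arbitrary illustrative values):

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI in kg/m2: mass in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    """BMI from pounds and inches, applying the conversion factor 703."""
    return 703 * weight_lb / height_in ** 2

# Arbitrary example values: 70 kg at 1.75 m, and 154 lb at 69 in.
print(round(bmi_metric(70, 1.75), 1))    # about 22.9
print(round(bmi_imperial(154, 69), 1))   # about 22.7
```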
BMI provides a simple numeric measure of a person's thickness or thinness, allowing health professionals to discuss weight problems more objectively with their patients. BMI was designed to be used as a simple means of classifying average sedentary (physically inactive) populations, with an average body composition. For such individuals, the BMI value recommendations are as follows: 18.5 to 24.9 kg/m2 may indicate optimal weight, lower than 18.5 may indicate underweight, 25 to 29.9 may indicate overweight, and 30 or more may indicate obese. Lean male athletes often have a high muscle-to-fat ratio and therefore a BMI that is misleadingly high relative to their body-fat percentage.
Categories
A common use of the BMI is to assess how far an individual's body weight departs from what is normal for a person's height. The weight excess or deficiency may, in part, be accounted for by body fat (adipose tissue) although other factors such as muscularity also affect BMI significantly (see discussion below and overweight).
The WHO regards an adult BMI of less than 18.5 as underweight and possibly indicative of malnutrition, an eating disorder, or other health problems, while a BMI of 25 or more is considered overweight and 30 or more is considered obese. In addition to the principal international WHO BMI cut-off points (16, 17, 18.5, 25, 30, 35 and 40), four additional cut-off points for at-risk Asians were identified (23, 27.5, 32.5 and 37.5). These ranges of BMI values are valid only as statistical categories.
Children and youth
BMI is used differently for people aged 2 to 20. It is calculated in the same way as for adults but then compared to typical values for other children or youth of the same age. Instead of comparison against fixed thresholds for underweight and overweight, the BMI is compared against the percentiles for children of the same sex and age.
A BMI that is less than the 5th percentile is considered underweight and above the 95th percentile is considered obese. Children with a BMI between the 85th and 95th percentile are considered to be overweight.
Studies in Britain from 2013 have indicated that females between the ages 12 and 16 had a higher BMI than males of the same age by 1.0 kg/m2 on average.
International variations
These recommended distinctions along the linear scale may vary from time to time and country to country, making global, longitudinal surveys problematic. People from different populations and descent have different associations between BMI, percentage of body fat, and health risks, with a higher risk of type 2 diabetes mellitus and atherosclerotic cardiovascular disease at BMIs lower than the WHO cut-off point for overweight, 25 kg/m2, although the cut-off for observed risk varies among different populations. The cut-off for observed risk varies based on populations and subpopulations in Europe, Asia and Africa.
Hong Kong
The Hospital Authority of Hong Kong recommends the use of the following BMI ranges:
Japan
A 2000 study from the Japan Society for the Study of Obesity (JASSO) presents the following table of BMI categories:
Singapore
In Singapore, the BMI cut-off figures were revised in 2005 by the Health Promotion Board (HPB), motivated by studies showing that many Asian populations, including Singaporeans, have a higher proportion of body fat and increased risk for cardiovascular diseases and diabetes mellitus, compared with general BMI recommendations in other countries. The BMI cut-offs are presented with an emphasis on health risk rather than weight.
United Kingdom
In the UK, NICE guidance recommends prevention of type 2 diabetes should start at a BMI of 30 in White and 27.5 in Black African, African-Caribbean, South Asian, and Chinese populations.
Research since 2021 based on a large sample of almost 1.5 million people in England found that some ethnic groups would benefit from prevention at or above a BMI of (rounded):
30 in White
28 in Black
just below 30 in Black British
29 in Black African
27 in Black Other
26 in Black Caribbean
27 in Arab and Chinese
24 in South Asian
24 in Pakistani, Indian and Nepali
23 in Tamil and Sri Lankan
21 in Bangladeshi
United States
In 1998, the U.S. National Institutes of Health brought U.S. definitions in line with World Health Organization guidelines, lowering the normal/overweight cut-off from a BMI of 27.8 (men) and 27.3 (women) to a BMI of 25. This had the effect of redefining approximately 25 million Americans, previously healthy, to overweight.
This can partially explain the increase in the overweight diagnosis in the past 20 years, and the increase in sales of weight loss products during the same time. WHO also recommends lowering the normal/overweight threshold for southeast Asian body types to around BMI 23, and expects further revisions to emerge from clinical studies of different body types.
A survey in 2007 showed 63% of Americans were then overweight or obese, with 26% in the obese category (a BMI of 30 or more). By 2014, 37.7% of adults in the United States were obese, 35.0% of men and 40.4% of women; class 3 obesity (BMI over 40) values were 7.7% for men and 9.9% for women. The U.S. National Health and Nutrition Examination Survey of 2015–2016 showed that 71.6% of American men and women had BMIs over 25. Obesity—a BMI of 30 or more—was found in 39.8% of the US adults.
Consequences of elevated level in adults
The BMI ranges are based on the relationship between body weight and disease and death. Overweight and obese individuals are at an increased risk for the following diseases:
Coronary artery disease
Dyslipidemia
Type 2 diabetes
Gallbladder disease
Hypertension
Osteoarthritis
Sleep apnea
Stroke
Infertility
At least 10 cancers, including endometrial, breast, and colon cancer
Epidural lipomatosis
Among people who have never smoked, overweight/obesity is associated with a 51% increase in mortality compared with people who have always been at a normal weight.
Applications
Public health
The BMI is generally used as a means of correlation between groups related by general mass and can serve as a vague means of estimating adiposity. The duality of the BMI is that, while it is easy to use as a general calculation, it is limited as to how accurate and pertinent the data obtained from it can be. Generally, the index is suitable for recognizing trends within sedentary or overweight individuals because there is a smaller margin of error. The BMI has been used by the WHO as the standard for recording obesity statistics since the early 1980s.
This general correlation is particularly useful for consensus data regarding obesity or various other conditions because it can be used to build a semi-accurate representation from which a solution can be stipulated, or the RDA for a group can be calculated. Similarly, this is becoming more and more pertinent to the growth of children, since the majority of children are sedentary.
Cross-sectional studies indicated that sedentary people can decrease BMI by becoming more physically active. Smaller effects are seen in prospective cohort studies, which lend support to active mobility as a means to prevent a further increase in BMI.
Legislation
In France, Italy, and Spain, legislation has been introduced banning the usage of fashion show models having a BMI below 18. In Israel, a model with BMI below 18.5 is banned. This is done to fight anorexia among models and people interested in fashion.
Relationship to health
A study published by Journal of the American Medical Association (JAMA) in 2005 showed that overweight people had a death rate similar to normal weight people as defined by BMI, while underweight and obese people had a higher death rate.
A study published by The Lancet in 2009 involving 900,000 adults showed that overweight and underweight people both had a mortality rate higher than normal weight people as defined by BMI. The optimal BMI was found to be in the range of 22.5–25. The average BMI of athletes is 22.4 for women and 23.6 for men.
High BMI is associated with type 2 diabetes only in people with high serum gamma-glutamyl transpeptidase.
In an analysis of 40 studies involving 250,000 people, patients with coronary artery disease with normal BMIs were at higher risk of death from cardiovascular disease than people whose BMIs put them in the overweight range (BMI 25–29.9).
One study found that BMI had a good general correlation with body fat percentage, and noted that obesity has overtaken smoking as the world's number one cause of death. But it also notes that in the study 50% of men and 62% of women were obese according to body fat defined obesity, while only 21% of men and 31% of women were obese according to BMI, meaning that BMI was found to underestimate the number of obese subjects.
A 2010 study that followed 11,000 subjects for up to eight years concluded that BMI is not the most appropriate measure for the risk of heart attack, stroke or death. A better measure was found to be the waist-to-height ratio. A 2011 study that followed 60,000 participants for up to 13 years found that waist–hip ratio was a better predictor of ischaemic heart disease mortality.
Limitations
The medical establishment and statistical community have both highlighted the limitations of BMI.
Racial and gender differences
Part of the statistical limitations of the BMI scale is the result of Quetelet's original sampling methods. As noted in his primary work, A Treatise on Man and the Development of His Faculties, the data from which Quetelet derived his formula was taken mostly from Scottish Highland soldiers and French Gendarmerie. The BMI was always designed as a metric for European men. For women, and people of non-European origin, the scale is often biased. As noted by sociologist Sabrina Strings, the BMI is largely inaccurate for black people especially, disproportionately labelling them as overweight even for healthy individuals. A 2012 study of BMI in an ethnically diverse population showed that "adult overweight and obesity were associated with an increased risk of mortality ... across the five racial/ethnic groups".
Scaling
The exponent in the denominator of the formula for BMI is arbitrary. The BMI depends upon weight and the square of height. Since mass increases to the third power of linear dimensions, taller individuals with exactly the same body shape and relative composition have a larger BMI. BMI is proportional to the mass and inversely proportional to the square of the height. So, if all body dimensions double, and mass scales naturally with the cube of the height, then BMI doubles instead of remaining the same. This results in taller people having a reported BMI that is uncharacteristically high, compared to their actual body fat levels. In comparison, the Ponderal index is based on the natural scaling of mass with the third power of the height.
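To illustrate the scaling argument, the following sketch doubles every linear dimension of a hypothetical person, so that mass grows by the cube (a factor of 8), and shows that the conventional BMI doubles while a cube-based index such as the Ponderal index stays constant (all numeric values are illustrative assumptions, not data from the text):

```python
def bmi(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 2      # conventional BMI (exponent 2)

def ponderal_index(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 3      # Ponderal/corpulence index (exponent 3)

# Hypothetical person; doubling every linear dimension multiplies mass by 2**3 = 8.
mass, height = 70.0, 1.75
mass2, height2 = mass * 8, height * 2

print(round(bmi(mass, height), 1), round(bmi(mass2, height2), 1))            # 22.9 -> 45.7 (doubles)
print(round(ponderal_index(mass, height), 1), round(ponderal_index(mass2, height2), 1))  # 13.1 -> 13.1 (unchanged)
```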
However, many taller people are not just "scaled up" short people but tend to have narrower frames in proportion to their height. Carl Lavie has written that "The B.M.I. tables are excellent for identifying obesity and body fat in large populations, but they are far less reliable for determining fatness in individuals."
For US adults, exponent estimates range from 1.92 to 1.96 for males and from 1.45 to 1.95 for females.
Physical characteristics
The BMI overestimates roughly 10% for a large (or tall) frame and underestimates roughly 10% for a smaller frame (short stature). In other words, people with small frames would be carrying more fat than optimal, but their BMI indicates that they are normal. Conversely, large framed (or tall) individuals may be quite healthy, with a fairly low body fat percentage, but be classified as overweight by BMI.
For example, a height/weight chart may say the ideal weight (BMI 21.5) for a man is . But if that man has a slender build (small frame), he may be overweight at and should reduce by 10% to roughly (BMI 19.4). In the reverse, the man with a larger frame and more solid build should increase by 10%, to roughly (BMI 23.7). If one teeters on the edge of small/medium or medium/large, common sense should be used in calculating one's ideal weight. However, falling into one's ideal weight range for height and build is still not as accurate in determining health risk factors as waist-to-height ratio and actual body fat percentage.
Accurate frame size calculators use several measurements (wrist circumference, elbow width, neck circumference, and others) to determine what category an individual falls into for a given height. The BMI also fails to take into account loss of height through ageing. In this situation, BMI will increase without any corresponding increase in weight.
Muscle versus fat
Assumptions about the distribution between muscle mass and fat mass are inexact. BMI generally overestimates adiposity on those with leaner body mass (e.g., athletes) and underestimates excess adiposity on those with fattier body mass.
A study in June 2008 by Romero-Corral et al. examined 13,601 subjects from the United States' third National Health and Nutrition Examination Survey (NHANES III) and found that BMI-defined obesity (BMI ≥ 30) was present in 21% of men and 31% of women. Body fat-defined obesity was found in 50% of men and 62% of women. While BMI-defined obesity showed high specificity (95% for men and 99% for women), BMI showed poor sensitivity (36% for men and 49% for women). In other words, the BMI will be mostly correct when determining a person to be obese, but can err quite frequently when determining a person not to be. Despite this undercounting of obesity by BMI, BMI values in the intermediate BMI range of 20–30 were found to be associated with a wide range of body fat percentages. For men with a BMI of 25, about 20% have a body fat percentage below 20% and about 10% have body fat percentage above 30%.
Body composition for athletes is often better calculated using measures of body fat, as determined by such techniques as skinfold measurements or underwater weighing and the limitations of manual measurement have also led to alternative methods to measure obesity, such as the body volume indicator.
Variation in definitions of categories
It is not clear where on the BMI scale the threshold for overweight and obese should be set. Because of this, the standards have varied over the past few decades. Between 1980 and 2000 the U.S. Dietary Guidelines have defined overweight at a variety of levels ranging from a BMI of 24.9 to 27.1. In 1985, the National Institutes of Health (NIH) consensus conference recommended that overweight BMI be set at a BMI of 27.8 for men and 27.3 for women.
In 1998, an NIH report concluded that a BMI over 25 is overweight and a BMI over 30 is obese. In the 1990s the World Health Organization (WHO) decided that a BMI of 25 to 30 should be considered overweight and a BMI over 30 is obese, the standards the NIH set. This became the definitive guide for determining if someone is overweight.
One study found that the vast majority of people labelled 'overweight' and 'obese' according to current definitions do not in fact face any meaningful increased risk for early death. In a quantitative analysis of several studies, involving more than 600,000 men and women, the lowest mortality rates were found for people with BMIs between 23 and 29; most of the 25–30 range considered 'overweight' was not associated with higher risk.
Alternatives
Corpulence index (exponent of 3)
The corpulence index uses an exponent of 3 rather than 2. The corpulence index yields valid results even for very short and very tall people, which is a problem with BMI. For example, a tall person at an ideal body weight of gives a normal BMI of 20.74 and CI of 13.6, while a tall person with a weight of gives a BMI of 24.84, very close to an overweight BMI of 25, and a CI of 12.4, very close to a normal CI of 12.
New BMI (exponent of 2.5)
A study found that the best exponent E for predicting the fat percent would be between 2 and 2.5 in .
An exponent of 5/2 or 2.5 was proposed by Quetelet in the 19th century:
In general, we do not err much when we assume that during development the squares of the weight at different ages are as the fifth powers of the height
This exponent of 2.5 is used in a revised formula for Body Mass Index, proposed by Nick Trefethen, Professor of numerical analysis at the University of Oxford, which minimizes the distortions for shorter and taller individuals resulting from the use of an exponent of 2 in the traditional BMI formula:
The scaling factor of 1.3 was determined to make the proposed new BMI formula align with the traditional BMI formula for adults of average height, while the exponent of 2.5 is a compromise between the exponent of 2 in the traditional formula for BMI and the exponent of 3 that would be expected for the scaling of weight (which at constant density would theoretically scale with volume, i.e., as the cube of the height) with height. In Trefethen's analysis, an exponent of 2.5 was found to fit empirical data more closely with less distortion than either an exponent of 2 or 3.
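A minimal sketch of the revised formula, assuming the scaling factor of 1.3 and the exponent of 2.5 described above (the example values are arbitrary; 1.69 m is used because it is close to the average adult height, where the two formulas are intended to agree):

```python
def traditional_bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def new_bmi(weight_kg: float, height_m: float) -> float:
    """Trefethen's proposed revision: 1.3 * mass / height**2.5, as described above."""
    return 1.3 * weight_kg / height_m ** 2.5

# Near average height the two formulas give essentially the same value.
print(round(traditional_bmi(70, 1.69), 2))  # about 24.51
print(round(new_bmi(70, 1.69), 2))          # about 24.51
```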
BMI prime (exponent of 2, normalization factor)
BMI Prime, a modification of the BMI system, is the ratio of actual BMI to upper limit optimal BMI (currently defined at 25 kg/m2), i.e., the actual BMI expressed as a proportion of upper limit optimal. BMI Prime is a dimensionless number independent of units. Individuals with BMI Prime less than 0.74 are underweight; those with between 0.74 and 1.00 have optimal weight; and those at 1.00 or greater are overweight. BMI Prime is useful clinically because it shows by what ratio (e.g. 1.36) or percentage (e.g. 136%, or 36% above) a person deviates from the maximum optimal BMI.
For instance, a person with BMI 34 kg/m2 has a BMI Prime of 34/25 = 1.36, and is 36% over their upper mass limit. In South East Asian and South Chinese populations (see § international variations), BMI Prime should be calculated using an upper limit BMI of 23 in the denominator instead of 25. BMI Prime allows easy comparison between populations whose upper-limit optimal BMI values differ.
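A sketch of BMI Prime as described above, with the upper-limit optimal BMI as a parameter (25 by default, or 23 for the populations mentioned above):

```python
def bmi_prime(bmi: float, upper_limit: float = 25.0) -> float:
    """Actual BMI expressed as a ratio of the upper-limit optimal BMI."""
    return bmi / upper_limit

# Example from the text: BMI 34 against an upper limit of 25 gives 1.36.
print(round(bmi_prime(34), 2))       # 1.36
# The same BMI against an upper limit of 23 (South East Asian / South Chinese populations):
print(round(bmi_prime(34, 23), 2))   # about 1.48
```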
Waist circumference
Waist circumference is a good indicator of visceral fat, which poses more health risks than fat elsewhere. According to the U.S. National Institutes of Health (NIH), waist circumference in excess of for men and for (non-pregnant) women is considered to imply a high risk for type 2 diabetes, dyslipidemia, hypertension, and cardiovascular disease (CVD). Waist circumference can be a better indicator of obesity-related disease risk than BMI. For example, this is the case in populations of Asian descent and older people. for men and for women has been stated to pose "higher risk", with the NIH figures "even higher".
Waist-to-hip circumference ratio has also been used, but has been found to be no better than waist circumference alone, and more complicated to measure.
A related indicator is waist circumference divided by height. A 2013 study identified critical threshold values for waist-to-height ratio according to age, with consequent significant reduction in life expectancy if exceeded. These are: 0.5 for people under 40 years of age, 0.5 to 0.6 for people aged 40–50, and 0.6 for people over 50 years of age.
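A sketch applying the age-dependent waist-to-height thresholds quoted above; the text gives 0.5 below 40 years, a range of 0.5 to 0.6 between 40 and 50, and 0.6 above 50, so the linear interpolation used for the middle band is an assumption made here, and the example measurements are arbitrary:

```python
def critical_waist_to_height(age_years: float) -> float:
    """Approximate critical waist-to-height ratio by age, per the thresholds above.
    The linear rise between 40 and 50 years is an assumption, not from the study."""
    if age_years < 40:
        return 0.5
    if age_years <= 50:
        return 0.5 + 0.1 * (age_years - 40) / 10
    return 0.6

def exceeds_threshold(waist_cm: float, height_cm: float, age_years: float) -> bool:
    return waist_cm / height_cm > critical_waist_to_height(age_years)

# Arbitrary example: 90 cm waist at 175 cm height, age 35 (ratio about 0.51 > 0.5).
print(exceeds_threshold(waist_cm=90, height_cm=175, age_years=35))  # True
```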
Surface-based body shape index
The Surface-based Body Shape Index (SBSI) is far more rigorous and is based upon four key measurements: the body surface area (BSA), vertical trunk circumference (VTC), waist circumference (WC) and height (H). Data on 11,808 subjects from the National Health and Human Nutrition Examination Surveys (NHANES) 1999–2004, showed that SBSI outperformed BMI, waist circumference, and A Body Shape Index (ABSI), an alternative to BMI.
A simplified, dimensionless form of SBSI, known as SBSI*, has also been developed.
Modified body mass index
Within some medical contexts, such as familial amyloid polyneuropathy, serum albumin is factored in to produce a modified body mass index (mBMI). The mBMI can be obtained by multiplying the BMI by serum albumin, in grams per litre.
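A sketch of the modified BMI described above (the example values are arbitrary):

```python
def modified_bmi(bmi: float, serum_albumin_g_per_l: float) -> float:
    """mBMI: conventional BMI multiplied by serum albumin in grams per litre."""
    return bmi * serum_albumin_g_per_l

# Arbitrary example: BMI 22 and serum albumin 40 g/L give an mBMI of 880.
print(modified_bmi(22, 40))
```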
See also
Allometry
Body roundness index
Body water
History of anthropometry
List of countries by body mass index
Normal weight obesity
Obesity paradox
Relative Fat Mass
Somatotype and constitutional psychology
Explanatory notes
References
Further reading
External links
U.S. National Center for Health Statistics:
BMI Growth Charts for children and young adults
BMI calculator ages 20 and older
Belgian inventions
Body shape
Classification of obesity
Human body weight
Human height
Mathematics in medicine
Medical signs
Ratios | Body mass index | [
"Mathematics"
] | 5,356 | [
"Applied mathematics",
"Arithmetic",
"Mathematics in medicine",
"Ratios"
] |
4,801 | https://en.wikipedia.org/wiki/BeOS | BeOS is a discontinued operating system for personal computers that was developed by Be Inc. It was conceived for the company's BeBox personal computer which was released in 1995. BeOS was designed for multitasking, multithreading, and a graphical user interface. The OS was later sold to OEMs, retail, and directly to users; its last version was released as freeware.
Early BeOS releases are for PowerPC. It was ported to Macintosh and then x86. Be was ultimately unable to achieve a significant market share and ended development with dwindling finances, so Palm acquired the BeOS assets in 2001. Enthusiasts have since created derivative operating systems, including Haiku, which aims to retain BeOS 5 compatibility as of Release R1.
Development
BeOS is the product of Apple Computer's former business executive Jean-Louis Gassée, with the underlying philosophy of building a "media OS" capable of handling up-and-coming digital media and taking advantage of multiple processors. Development began in the early 1990s, initially designed to run on AT&T Hobbit-based hardware before being modified to run on PowerPC-based processors: first Be's own BeBox system, and later Apple Computer's PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Mac OS.
The first version of BeOS shipped with the BeBox to a limited number of developers in October 1995. It supported analog and digital audio and MIDI streams, multiple video sources, and 3D computation. Developer Release 6 (DR6) was the first officially available version.
The BeOS Developer Release 7 (DR7) was released in April 1996. This includes full 32-bit color graphics, "workspaces" (virtual desktops), an FTP file server, and a web server.
DR8 was released in September 1996 with a new browser with MPEG and QuickTime video formats. It supports OpenGL, remote access, and Power Macintosh.
In 1996, Apple Computer CEO Gil Amelio started negotiations to buy Be Inc., but stalled when Be CEO Jean-Louis Gassée wanted $300 million and Apple offered $125 million. Apple's board of directors preferred NeXTSTEP and purchased Steve Jobs's NeXT instead.
The final developer's release introduced a 64-bit file system. BeOS Preview Release (PR1), the first for the general public, was released in mid 1997. It supports AppleTalk, PostScript printing, and Unicode. The price for the Full Pack was $49.95. Later that year, Preview Release 2 shipped with support for Macintosh's Hierarchical File System (HFS), support for 512MB RAM, and improvements to the user interface.
Release 3 (R3) shipped in March 1998 (initially $69.95, later $99.95), as the first to be ported to the Intel x86 platform in addition to PowerPC, and the first commercially available version of BeOS. The adoption of x86 was partly due to Apple's moves, with Steve Jobs stopping the Macintosh clone market, and Be's mounting debt.
BeOS Release 4 had a claimed performance improvement of up to 30 percent. Keyboard shortcuts were changed to mimic those of Windows. However it still lacked Novell NetWare support. It also brought additional drivers and support for the most common SCSI controllers on the x86 platform - from Adaptec and Symbios Logic. The bootloader switched from LILO to Be's own bootman.
In 2000, BeOS Release 5 (R5) was released. This was split between a Pro Edition, and a free version known as Personal Edition (BeOS PE) which was released for free online and by CD-ROM. BeOS PE could be booted from within Windows or Linux, and was intended as a consumer and developer preview. Also with R5, Be open sourced elements of the user interface. Be CEO Gassée said in 2001 that he was open to the idea of releasing the entire operating system's source code, but this never materialized.
Release 5 raised BeOS's popularity but it remained commercially unsuccessful, and development of BeOS eventually halted following the introduction of a stripped-down version for Internet appliances, BeIA, which became the company's business focus in place of BeOS. R5 is the final official release of BeOS, as Be Inc. became defunct in 2001 following its sale to Palm Inc. BeOS R5.1 "Dano", which was under development before Be's sale to Palm and includes the BeOS Networking Environment (BONE) networking stack, was leaked to the public shortly after the company's close.
Version history table
Hardware support and licensees
After the discontinuation of the BeBox in January 1997, Power Computing began bundling BeOS (on a CD-ROM for optional installation) with its line of PowerPC-based Macintosh clones. These systems can dual boot either Mac OS or BeOS, with a start-up screen offering the choice. Motorola also announced in February 1997 that it would bundle BeOS with their Macintosh clones, the Motorola StarMax, along with MacOS. DayStar Digital was another licensee.
BeOS is compatible with many Macintosh models, but not PowerBook.
With BeOS Release 3 on the x86 platform, the operating system is compatible with most computers that run Windows. Hitachi is the first major x86 OEM to ship BeOS, selling the Hitachi Flora Prius line in Japan, and Fujitsu released the Silverline computers in Germany and the Nordic countries. Be was unable to attract further manufacturers due to their Microsoft contracts. Be closed in 2002, and sued Microsoft, claiming that Hitachi had been dissuaded from selling PCs loaded with BeOS. The case was eventually settled out of court for $23.25 million with no admission of liability on Microsoft's part.
Architecture
BeOS was developed as an original product, with a proprietary kernel, symmetric multiprocessing, preemptive multitasking, and pervasive multithreading. It runs in protected memory mode, with a C++ application framework based on shared libraries and modular code. Be initially offered CodeWarrior for application development, and later EGCS.
Its API is object-oriented. The user interface was largely multithreaded: each window ran in its own thread, relying heavily on sending messages to communicate between threads, and these concepts are reflected in the API.
BeOS uses modern hardware facilities such as modular I/O bandwidth, a multithreaded graphics engine (with the OpenGL library), and a 64-bit journaling file system named BFS supporting files up to one terabyte each. BeOS has partial POSIX compatibility and a command-line interface through Bash, although internally it is not a Unix-derived operating system. Many Unix applications were ported to the BeOS command-line interface.
BeOS uses Unicode as the default GUI encoding, though support for input methods such as bidirectional text input was never realized.
Applications
BeOS is bundled with a unique web browser named NetPositive, the BeMail email client, and the PoorMan web server. Be operated the marketplace site BeDepot for the purchase and downloading of software including third party, and a website named BeWare listing apps for the platform. Some third party BeOS apps include the Gobe Productive office suite, the Mozilla project, and multimedia apps like Cinema 4D. Quake and Quake II were officially ported, and SimCity 3000 was in development.
Reception
Be did not disclose the number of BeOS users, but it was estimated to be running on between 50,000 and 100,000 computers in 1999, and Release 5 reportedly had over one million downloads. For a time it was viewed as a viable competitor to Mac OS and Windows, but its status as the "alternative operating system" was quickly surpassed by Linux by 1998.
Reception of the operating system was largely positive citing its true and "reliable" multitasking and support for multiple processors. Though its market penetration was low, it gained a niche multimedia userbase and acceptance by the audio community. Consequently, it was styled as a "media OS" due to its well-regarded ability to handle audio and video. BeOS received significant interest in Japan, and was also appealing to Amiga developers and users, who were looking for a newer platform.
BeOS and its successors have been used in media appliances, such as the Edirol DV-7 video editors from Roland Corporation, which run on a modified BeOS, and the Tunetracker Radio Automation software, which used to run on BeOS and Zeta and was also sold as a "Station-in-a-Box" with the Zeta operating system included. In 2015, Tunetracker released a Haiku distribution bundled with its broadcasting software.
Legacy
The Tascam SX-1 digital audio recorder runs a heavily modified version of BeOS that will only launch the recording interface software. The RADAR 24, RADAR V and RADAR 6, hard disk-based, 24-track professional audio recorders from iZ Technology Corporation were based on BeOS 5. Magicbox, a manufacturer of signage and broadcast display machines, uses BeOS to power their Aavelin product line. Final Scratch, a 12-inch vinyl timecode record-driven DJ software and hardware system, was first developed on BeOS. The "ProFS" version was sold to a few dozen DJs prior to the 1.0 release, which ran on a Linux virtual partition.
Spiritual successors
After BeOS came to an end, Palm created PalmSource which used parts of BeOS's multimedia framework for its failed Palm OS Cobalt product (with the takeover of PalmSource, the BeOS rights were assigned to Access Co.). However, Palm refused the request of BeOS users to license the operating system. As a result, a few projects formed to recreate BeOS or its key elements with the eventual goal of then continuing where Be Inc. quit.
BeUnited, a BeOS oriented community, converted itself into a nonprofit organization in August 2001 to "define and promote open specifications for the delivery of the Open Standards BeOS-compatible Operating System (OSBOS) platform".
ZETA
Immediately after Palm's purchase of Be, a German company named yellowTAB started developing Zeta based on the BeOS R5.1 codebase and released it commercially. It was later distributed by magnussoft. During development by yellowTAB, the company received criticism from the BeOS community for refusing to discuss its legal position with regard to the BeOS codebase. Access Co. (which bought PalmSource, until then the holder of the intellectual property associated with BeOS) declared that yellowTAB had no right to distribute a modified version of BeOS, and magnussoft was forced to cease distribution of the operating system in 2007.
Haiku (OpenBeOS)
Haiku is a complete open source reimplementation of BeOS. It was originally named OpenBeOS and its first release in 2002 was a community update. Unlike Cosmoe and BlueEyedOS, it is directly compatible with BeOS applications. It is open source software. As of 2024, it was the only BeOS clone still under development, with the fifth beta in September 2024 still keeping BeOS 5 compatibility in its x86 32-bit images, with an increased number of ported modern drivers and GTK apps.
Others
BlueEyedOS tried to create a system under LGPL based on the Linux kernel and an X server that is compatible with BeOS. Work began under the name BlueOS in 2001 and a demo CD was released in 2003. The project was discontinued in February 2005.
Cosmoe, with an interface like BeOS, was designed by Bill Hayden as an open source operating system based on the source code of AtheOS and later OpenBeOS, but using the Linux kernel. ZevenOS was designed to continue where Cosmoe left off. In mid 2024, Cosmoe was resurrected by its original author after 17 years, with a much improved codebase based on contemporary Haiku.
BeFree started in 2003, initially developed under FreeBSD and later Linux.
See also
Access Co.
BeIA
Comparison of operating systems
Gobe Productive
Hitachi Flora Prius
References
Further reading
External links
The Dawn of Haiku, by Ryan Leavengood, IEEE Spectrum May 2012, p 40–43,51-54.
Mirror of the old www.be.com site Other Mirror of the old www.be.com site
BeOS Celebrating Ten Years
BeGroovy A blog and news archive for BeOS
BeOS: The Mac OS X might-have-been, reghardware.co.uk
Programming the Be Operating System: An O'Reilly Open Book
(BeOS)
BeOS
Discontinued operating systems
Object-oriented operating systems
PowerPC operating systems
X86 operating systems | BeOS | [
"Technology"
] | 2,660 | [
"BeOS",
"Computing platforms"
] |
4,805 | https://en.wikipedia.org/wiki/Behavior | Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. While some behavior is produced in response to an organism's environment (extrinsic motivation), behavior can also be the product of intrinsic motivation, also referred to as "agency" or "free will".
Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector.
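A minimal sketch of this idea as a data structure; the field names and types below are illustrative assumptions, not a standard representation from the behavior-informatics literature:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Behavior:
    """One behavior record: the actor, the operation performed,
    the entities interacted with, and the behavior's properties."""
    actor: str
    operation: str
    interactions: List[str] = field(default_factory=list)
    properties: Dict[str, float] = field(default_factory=dict)

# A sequence of such records can be analysed as a "behavior vector".
sequence = [
    Behavior("customer_1", "view_product", ["product_42"], {"duration_s": 30.0}),
    Behavior("customer_1", "purchase", ["product_42"], {"amount": 19.99}),
]
print(len(sequence))
```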
Models
Biology
Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli".
A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny).
Behaviors can be either innate or learned from the environment.
Behaviour can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.
Human behavior
The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior.
Animal behavior
Ethology is the scientific and objective study of animal behavior, usually with a focus on behavior under natural conditions, and viewing behavior as an evolutionarily adaptive trait. Behaviorism is a term that also describes the scientific and objective study of animal behavior, usually referring to measured responses to stimuli or trained behavioral responses in a laboratory context, without a particular emphasis on evolutionary adaptivity.
Consumer behavior
Consumers' behavior
Consumer behavior involves the processes consumers go through, and reactions they have towards products or services. It has to do with consumption, and the processes consumers go through around purchasing and consuming goods and services. Consumers recognize needs or wants, and go through a process to satisfy these needs. Consumer behavior is the process they go through as customers, which includes types of products purchased, amount spent, frequency of purchases and what influences them to make the purchase decision or not.
Circumstances that influence consumer behaviour are varied, with contributions from both internal and external factors. Internal factors include attitudes, needs, motives, preferences and perceptual processes, whilst external factors include marketing activities, social and economic factors, and cultural aspects. Doctor Lars Perner of the University of Southern California claims that there are also physical factors that influence consumer behavior, for example, if a consumer is hungry, then this physical feeling of hunger will influence them so that they go and purchase a sandwich to satisfy the hunger.
Consumer decision making
Lars Perner presents a model that outlines the decision-making process involved in consumer behaviour. The process begins with the identification of a problem, wherein the consumer acknowledges an unsatisfied need or desire. Subsequently, the consumer seeks information; for low-involvement products, the search tends to rely on internal resources, retrieving alternatives from memory. Conversely, for high-involvement products, the search is typically more extensive, involving activities like reviewing reports, reading reviews, or seeking recommendations from friends.
The consumer will then evaluate his or her alternatives, comparing price, and quality, doing trade-offs between products, and narrowing down the choice by eliminating the less appealing products until there is one left. After this has been identified, the consumer will purchase the product.
Finally, the consumer will evaluate the purchase decision and the purchased product, bringing in factors such as value for money, quality of goods, and purchase experience. However, this logical process does not always play out this way; people are emotional and irrational creatures. According to the psychologist Robert Cialdini, people make decisions with emotion and then justify them with logic.
How the 4P's influence consumer behavior
The Marketing mix (4 P's) are a marketing tool and stand for Price, Promotion, Product, and Placement.
Due to the significant impact of business-to-consumer marketing on consumer behavior, the four elements of the marketing mix, known as the 4 P's (product, price, place, and promotion), exert a notable influence on consumer behavior. The price of a good or service is largely determined by the market, as businesses will set their prices to be similar to that of other businesses so as to remain competitive whilst making a profit. When market prices for a product are high, it will cause consumers to purchase less and use purchased goods for longer periods of time, meaning they are purchasing the product less often. Alternatively, when market prices for a product are low, consumers are more likely to purchase more of the product, and more often.
The way that promotion influences consumer behavior has changed over time. In the past, large promotional campaigns and heavy advertising would convert into sales for a business, but nowadays businesses can have success on products with little or no advertising, largely because of the Internet and, in particular, social media. Businesses now rely on word of mouth from consumers using social media, and as products trend online, sales increase as products effectively promote themselves. Thus, promotion by businesses does not necessarily result in consumer behavior trending towards purchasing products.
The way that product influences consumer behavior is through consumer willingness to pay and consumer preferences. Even if a company has a long history of products in the market, consumers will still pick a cheaper, very similar product over that company's offering if it means paying less. This reflects consumer willingness to pay, or willingness to part with the money they have earned. The product also influences consumer behavior through customer preferences. Take Pepsi vs Coca-Cola, for example: a Pepsi drinker is less likely to purchase Coca-Cola, even if it is cheaper and more convenient. This is due to the preference of the consumer, and no matter how hard the opposing company tries, it will not be able to force the customer to change their mind.
Product placement in the modern era has little influence on consumer behavior, due to the availability of goods online. If a customer can purchase a good from the comfort of their home instead of purchasing in-store, then the placement of products is not going to influence their purchase decision.
In management
Behavior is also studied outside of psychology, for example in organizational and management settings.
Organizational
In management, behaviors are associated with desired or undesired outcomes. Managers generally specify the desired outcome, but behavioral patterns can take over; these patterns describe how often the desired behavior actually occurs. Before a behavior occurs, antecedents are the stimuli that influence the behavior about to happen. After the behavior occurs, consequences follow, consisting of rewards or punishments.
Social behavior
Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects another; it arises from an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when one gives, one will receive the same. It can be affected both by qualities of the individual and by environmental (situational) factors; therefore, social behavior arises as a result of an interaction between the two, the organism and its environment. This means that, in regard to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in.
Behavior informatics
Behavior informatics, also called behavior computing, explores behavior intelligence and behavior insights from the informatics and computing perspectives.
Different from applied behavior analysis from the psychological perspective, BI builds computational theories, systems and tools to qualitatively and quantitatively model, represent, analyze, and manage behaviors of individuals, groups and/or organizations.
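As a rough illustration of the kind of computational representation such work uses, a behavior can be modeled as a structured record and then summarized quantitatively; the record fields and helper below are assumptions for illustration, not anything defined by the field or the cited work:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class BehaviorRecord:
    """One observed behavior: who did what, to what, when, in which context."""
    actor: str
    action: str
    target: str
    timestamp: datetime
    context: str

def action_frequencies(records: list) -> Counter:
    """Quantitative summary: how often each actor performs each action."""
    return Counter((r.actor, r.action) for r in records)

if __name__ == "__main__":
    log = [
        BehaviorRecord("customer_1", "view", "product_42", datetime(2024, 1, 5), "web"),
        BehaviorRecord("customer_1", "purchase", "product_42", datetime(2024, 1, 6), "web"),
    ]
    print(action_frequencies(log))
```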
Health
Health behavior refers to a person's beliefs and actions regarding their health and well-being. Health behaviors are direct factors in maintaining a healthy lifestyle. Health behaviors are influenced by the social, cultural, and physical environments in which we live. They are shaped by individual choices and external constraints. Positive behaviors help promote health and prevent disease, while the opposite is true for risk behaviors. Health behaviors are early indicators of population health. Because of the time lag that often occurs between certain behaviors and the development of disease, these indicators may foreshadow the future burdens and benefits of health-risk and health-promoting behaviors.
Correlates
A variety of studies have examined the relationship between health behaviors and health outcomes (e.g., Blaxter 1990) and have demonstrated their role in both morbidity and mortality.
These studies have identified seven lifestyle features associated with lower morbidity and higher subsequent long-term survival (Belloc and Breslow 1972); a minimal scoring sketch follows the list:
Avoiding snacks
Eating breakfast regularly
Exercising regularly
Maintaining a desirable body weight
Moderate alcohol intake
Not smoking
Sleeping 7–8 hours per night
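In the spirit of the simple practice count used in that line of research, a minimal scoring sketch could look like this; the function name and exact scoring rule here are illustrative assumptions:

```python
SEVEN_PRACTICES = [
    "avoids_snacks",
    "eats_breakfast",
    "exercises_regularly",
    "desirable_body_weight",
    "moderate_alcohol",
    "non_smoker",
    "sleeps_7_to_8_hours",
]

def health_practice_score(person: dict) -> int:
    """Count how many of the seven lifestyle features a person reports (0-7).
    Higher counts were associated with lower morbidity and longer survival."""
    return sum(bool(person.get(p, False)) for p in SEVEN_PRACTICES)

if __name__ == "__main__":
    example = {"eats_breakfast": True, "non_smoker": True, "sleeps_7_to_8_hours": True}
    print(health_practice_score(example))  # -> 3
```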
Health behaviors impact upon individuals' quality of life, by delaying the onset of chronic disease and extending active lifespan. Smoking, alcohol consumption, diet, gaps in primary care services and low screening uptake are all significant determinants of poor health, and changing such behaviors should lead to improved health.
For example, in the US, Healthy People 2000 (United States Department of Health and Human Services) lists increased physical activity, changes in nutrition, and reductions in tobacco, alcohol, and drug use as important for health promotion and disease prevention.
Treatment approach
Any interventions are matched with the needs of each individual in an ethical and respectful manner. The health belief model encourages increasing individuals' perceived susceptibility to negative health outcomes and making individuals aware of the severity of such outcomes, for example through health promotion messages. In addition, the health belief model suggests focusing on the benefits of health behaviors and on the fact that barriers to action are easily overcome. The theory of planned behavior suggests using persuasive messages to tackle behavioral beliefs and increase readiness to perform a behavior, called intentions. It also advocates tackling normative beliefs and control beliefs in any attempt to change behavior. Challenging normative beliefs is not enough on its own: following through on the intention with self-efficacy, built from the individual's mastery in problem solving and task completion, is important to bring about positive change. Self-efficacy is often cemented through standard persuasive techniques.
See also
Applied behavior analysis
Behavioral cusp
Behavioral economics
Behavioral genetics
Behavioral sciences
Cognitive bias
Evolutionary physiology
Experimental analysis of behavior
Human sexual behavior
Herd behavior
Instinct
Mere-measurement effect
Motivation
Normality (behavior)
Organizational studies
Radical behaviorism
Reasoning
Rebellion
Social relation
Theories of political behavior
Work behavior
References
General
Cao, L. (2014). Behavior Informatics: A New Perspective. IEEE Intelligent Systems (Trends and Controversies), 29(4): 62–80.
Perner, L. (2008), Consumer behavior. University of Southern California, Marshall School of Business. Retrieved from http://www.consumerpsychologist.com/intro_Consumer_Behavior.html
Further reading
Bateson, P. (2017) Behaviour, Development and Evolution. Open Book Publishers, Cambridge.
External links
What is behavior? Baby don't ask me, don't ask me, no more at Earthling Nature.
behaviorinformatics.org
Links to review articles by Eric Turkheimer and co-authors on behavior research
Links to IJCAI2013 tutorial on behavior informatics and computing | Behavior | [
"Biology"
] | 2,444 | [
"Behavior"
] |
4,816 | https://en.wikipedia.org/wiki/Biosphere | The biosphere, also called the ecosphere, is the worldwide sum of all ecosystems. It can also be termed the zone of life on the Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences.
Narrow definition
Geochemists define the biosphere as being the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being geosphere, hydrosphere, and atmosphere. When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both biological and physical components of the planet.
The Second International Conference on Closed Life Systems defined biospherics as the science and technology of analogs and models of Earth's biosphere; i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics.
Earth's biosphere
Overview
Currently, the total number of living cells on the Earth is estimated to be about 10^30; the total number since the beginning of Earth, about 10^40; and the total number for the entire time of a habitable planet Earth, about 10^41. This is much larger than the total number of stars (and Earth-like planets) estimated in the observable universe, about 10^24, a number which is more than all the grains of beach sand on planet Earth; but it is less than the total number of atoms estimated in the observable universe, about 10^82, and the estimated total number of stars in an inflationary universe (observed and unobserved), about 10^100.
Age
The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe."
Extent
Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on Earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench.
There are more extreme examples for life on the planet: Rüppell's vulture has been found at altitudes of ; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs.
Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, and at least high in the atmosphere. Marine life under many forms has been found in the deepest reaches of the world ocean while much of the deep sea remains to be explored.
Under certain test conditions, microorganisms have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10^17 g. The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at more than a depth of in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, by the Challenger Deep, at depths of . Other researchers reported related studies that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including the type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is (Methanopyrus kandleri Strain 116). It is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth. On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica.
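Because the paragraph notes that the depth limit of the deep biosphere is set by temperature, a back-of-the-envelope estimate can relate an assumed geothermal gradient to a maximum habitable depth; the surface temperature, temperature ceiling, and gradient below are assumptions chosen for illustration, not values from this article:

```python
def max_depth_km(surface_temp_c: float, temp_limit_c: float, gradient_c_per_km: float) -> float:
    """Depth at which the crust reaches the assumed temperature ceiling for microbial life."""
    return (temp_limit_c - surface_temp_c) / gradient_c_per_km

if __name__ == "__main__":
    # Assumed values: ~15 C surface, ~120 C microbial ceiling, ~25 C/km continental gradient.
    print(f"{max_depth_km(15, 120, 25):.1f} km")  # roughly 4 km under these assumptions
```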
Earth's biosphere is divided into several biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life. In contrast, most of the more populous biomes lie near the equator.
Annual variation
Artificial biospheres
Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. These include spacecraft and the following terrestrial laboratories:
Biosphere 2 in Arizona, United States, 3.15 acres (13,000 m2).
BIOS-1, BIOS-2 and BIOS-3 at the Institute of Biophysics in Krasnoyarsk, Siberia, in what was then the Soviet Union.
Biosphere J (CEEF, Closed Ecology Experiment Facilities), an experiment in Japan.
Micro-Ecological Life Support System Alternative (MELiSSA) at Autonomous University of Barcelona
Extraterrestrial biospheres
No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres.
Based on observations by the Kepler Space Telescope team, it has been calculated that provided the probability of abiogenesis is higher than 1 to 1000, the closest alien biosphere should be within 100 light-years from the Earth.
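The kind of estimate behind such a statement can be sketched as a simple nearest-neighbour calculation; the stellar density and Earth-like planet fraction below are rough illustrative assumptions, not figures from the Kepler team's analysis:

```python
import math

def nearest_biosphere_ly(stars_per_ly3: float, earthlike_per_star: float, p_abiogenesis: float) -> float:
    """Radius of a sphere expected to contain one inhabited planet."""
    density = stars_per_ly3 * earthlike_per_star * p_abiogenesis
    return (3.0 / (4.0 * math.pi * density)) ** (1.0 / 3.0)

if __name__ == "__main__":
    # Assumed: ~0.004 stars per cubic light-year near the Sun, ~0.1 Earth-like planets
    # per star, and the 1-in-1000 abiogenesis probability quoted above.
    print(f"{nearest_biosphere_ly(0.004, 0.1, 1e-3):.0f} light-years")  # on the order of 100
```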
It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars.
See also
Biosphere model
Climate system
Cryosphere
Habitable zone
Homeostasis
Life-support system
Man and the Biosphere Programme
Montreal Biosphere
Noosphere
Rare biosphere
Shadow biosphere
Soil biomantle
Thomas Gold
Wardian case
Winogradsky column
References
Further reading
The Biosphere (A Scientific American Book), San Francisco, W.H. Freeman and Co., 1970, . This book, originally the December 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources (including solar energy), population trends, and environmental degradation (including global warming).
External links
Article on the Biosphere at Encyclopedia of Earth
GLOBIO.info, an ongoing programme to map the past, current and future impacts of human activities on the biosphere
Paul Crutzen Interview, freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone talking to Harry Kroto Nobel Laureate by the Vega Science Trust.
Atlas of the Biosphere
Oceanography
Superorganisms
Biological systems | Biosphere | [
"Physics",
"Biology",
"Environmental_science"
] | 2,131 | [
"Superorganisms",
"Hydrology",
"Symbiosis",
"Applied and interdisciplinary physics",
"Oceanography",
"nan"
] |
4,817 | https://en.wikipedia.org/wiki/Biological%20membrane | A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates the interior of a cell from the external environment or creates intracellular compartments by serving as a boundary between one part of the cell and another. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded, integral and peripheral proteins used in communication and transportation of chemicals and ions. The bulk of lipids in a cell membrane provides a fluid matrix for proteins to rotate and laterally diffuse for physiological functioning. Proteins are adapted to high membrane fluidity environment of the lipid bilayer with the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. The cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes.
Composition
Asymmetry
The lipid bilayer consists of two layers: an outer leaflet and an inner leaflet. The components of the bilayer are distributed unequally between the two surfaces, creating asymmetry between the outer and inner faces. This asymmetric organization is important for cell functions such as cell signaling, and it reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer and inner leaflets are asymmetrical in their composition: certain proteins and lipids rest only on one surface of the membrane and not the other.
Both the plasma membrane and internal membranes have cytosolic and exoplasmic faces.
This orientation is maintained during membrane trafficking – proteins, lipids, glycoconjugates facing the lumen of the ER and Golgi get expressed on the extracellular side of the plasma membrane. In eukaryotic cells, new phospholipids are manufactured by enzymes bound to the part of the endoplasmic reticulum membrane that faces the cytosol. These enzymes, which use free fatty acids as substrates, deposit all newly made phospholipids into the cytosolic half of the bilayer. To enable the membrane as a whole to grow evenly, half of the new phospholipid molecules then have to be transferred to the opposite monolayer. This transfer is catalyzed by enzymes called flippases. In the plasma membrane, flippases transfer specific phospholipids selectively, so that different types become concentrated in each monolayer.
Using selective flippases is not the only way to produce asymmetry in lipid bilayers, however. In particular, a different mechanism operates for glycolipids—the lipids that show the most striking and consistent asymmetric distribution in animal cells.
Lipids
The biological membrane is made up of lipids with hydrophobic tails and hydrophilic heads. The hydrophobic tails are hydrocarbon tails whose length and saturation is important in characterizing the cell. Lipid rafts occur when lipid species and proteins aggregate in domains in the membrane. These help organize membrane components into localized areas that are involved in specific processes, such as signal transduction.
Red blood cells, or erythrocytes, have a unique lipid composition. The bilayer of red blood cells is composed of cholesterol and phospholipids in equal proportions by weight. Erythrocyte membrane plays a crucial role in blood clotting. In the bilayer of red blood cells is phosphatidylserine. This is usually in the cytoplasmic side of the membrane. However, it is flipped to the outer membrane to be used during blood clotting.
Proteins
Phospholipid bilayers contain different proteins. These membrane proteins have various functions and characteristics and catalyze different chemical reactions. Integral proteins span the membranes with different domains on either side. Integral proteins hold strong association with the lipid bilayer and cannot easily become detached. They will dissociate only with chemical treatment that breaks the membrane. Peripheral proteins are unlike integral proteins in that they hold weak interactions with the surface of the bilayer and can easily become dissociated from the membrane. Peripheral proteins are located on only one face of a membrane and create membrane asymmetry.
Oligosaccharides
Oligosaccharides are sugar containing polymers. In the membrane, they can be covalently bound to lipids to form glycolipids or covalently bound to proteins to form glycoproteins. Membranes contain sugar-containing lipid molecules known as glycolipids. In the bilayer, the sugar groups of glycolipids are exposed at the cell surface, where they can form hydrogen bonds. Glycolipids provide the most extreme example of asymmetry in the lipid bilayer. Glycolipids perform a vast number of functions in the biological membrane that are mainly communicative, including cell recognition and cell-cell adhesion. Glycoproteins are integral proteins. They play an important role in the immune response and protection.
Formation
The phospholipid bilayer is formed due to the aggregation of membrane lipids in aqueous solutions. Aggregation is caused by the hydrophobic effect, where hydrophobic ends come into contact with each other and are sequestered away from water. This arrangement maximises hydrogen bonding between hydrophilic heads and water while minimising unfavorable contact between hydrophobic tails and water. The increase in available hydrogen bonding increases the entropy of the system, creating a spontaneous process.
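The spontaneity argument in this paragraph can be stated with the standard Gibbs free-energy criterion; this is a general thermodynamic relation, not a formula given in the source:

```latex
\Delta G = \Delta H - T\,\Delta S, \qquad \Delta G < 0 \;\Rightarrow\; \text{spontaneous}
```

On this reading, the increase in entropy described above makes the $-T\Delta S$ term more negative, which favors spontaneous bilayer assembly.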
Function
Biological molecules are amphiphilic or amphipathic, i.e. are simultaneously hydrophobic and hydrophilic. The phospholipid bilayer contains charged hydrophilic headgroups, which interact with polar water. The layers also contain hydrophobic tails, which meet with the hydrophobic tails of the complementary layer. The hydrophobic tails are usually fatty acids that differ in lengths. The interactions of lipids, especially the hydrophobic tails, determine the lipid bilayer physical properties such as fluidity.
Membranes in cells typically define enclosed spaces or compartments in which cells may maintain a chemical or biochemical environment that differs from the outside. For example, the membrane around peroxisomes shields the rest of the cell from peroxides, chemicals that can be toxic to the cell, and the cell membrane separates a cell from its surrounding medium. Peroxisomes are one form of vacuole found in the cell that contain by-products of chemical reactions within the cell. Most organelles are defined by such membranes, and are called membrane-bound organelles.
Selective permeability
Probably the most important feature of a biomembrane is that it is a selectively permeable structure. This means that the size, charge, and other chemical properties of the atoms and molecules attempting to cross it will determine whether they succeed in doing so. Selective permeability is essential for effective separation of a cell or organelle from its surroundings. Biological membranes also have certain mechanical or elastic properties that allow them to change shape and move as required.
Generally, small hydrophobic molecules can readily cross phospholipid bilayers by simple diffusion.
Particles that are required for cellular function but are unable to diffuse freely across a membrane enter through a membrane transport protein or are taken in by means of endocytosis, where the membrane allows for a vacuole to join onto it and push its contents into the cell. Many types of specialized plasma membranes can separate cell from external environment: apical, basolateral, presynaptic and postsynaptic ones, membranes of flagella, cilia, microvillus, filopodia and lamellipodia, the sarcolemma of muscle cells, as well as specialized myelin and dendritic spine membranes of neurons. Plasma membranes can also form different types of "supramembrane" structures such as caveolae, postsynaptic density, podosome, invadopodium, desmosome, hemidesmosome, focal adhesion, and cell junctions. These types of membranes differ in lipid and protein composition.
Distinct types of membranes also create intracellular organelles: endosome; smooth and rough endoplasmic reticulum; sarcoplasmic reticulum; Golgi apparatus; lysosome; mitochondrion (inner and outer membranes); nucleus (inner and outer membranes); peroxisome; vacuole; cytoplasmic granules; cell vesicles (phagosome, autophagosome, clathrin-coated vesicles, COPI-coated and COPII-coated vesicles) and secretory vesicles (including synaptosome, acrosomes, melanosomes, and chromaffin granules).
Different types of biological membranes have diverse lipid and protein compositions. The content of membranes defines their physical and biological properties. Some components of membranes play a key role in medicine, such as the efflux pumps that pump drugs out of a cell.
Fluidity
The hydrophobic core of the phospholipid bilayer is constantly in motion because of rotations around the bonds of lipid tails. Hydrophobic tails of a bilayer bend and lock together. However, because of hydrogen bonding with water, the hydrophilic head groups exhibit less movement as their rotation and mobility are constrained. This results in increasing viscosity of the lipid bilayer closer to the hydrophilic heads.
Below a transition temperature, a lipid bilayer loses fluidity when the highly mobile lipids exhibits less movement becoming a gel-like solid. The transition temperature depends on such components of the lipid bilayer as the hydrocarbon chain length and the saturation of its fatty acids. Temperature-dependence fluidity constitutes an important physiological attribute for bacteria and cold-blooded organisms. These organisms maintain a constant fluidity by modifying membrane lipid fatty acid composition in accordance with differing temperatures.
In animal cells, membrane fluidity is modulated by the inclusion of the sterol cholesterol. This molecule is present in especially large amounts in the plasma membrane, where it constitutes approximately 20% of the lipids in the membrane by weight. Because cholesterol molecules are short and rigid, they fill the spaces between neighboring phospholipid molecules left by the kinks in their unsaturated hydrocarbon tails. In this way, cholesterol tends to stiffen the bilayer, making it more rigid and less permeable.
For all cells, membrane fluidity is important for many reasons. It enables membrane proteins to diffuse rapidly in the plane of the bilayer and to interact with one another, as is crucial, for example, in cell signaling. It permits membrane lipids and proteins to diffuse from sites where they are inserted into the bilayer after their synthesis to other regions of the cell. It allows membranes to fuse with one another and mix their molecules, and it ensures that membrane molecules are distributed evenly between daughter cells when a cell divides. If biological membranes were not fluid, it is hard to imagine how cells could live, grow, and reproduce.
The fluidity property is at the center of the Helfrich model which allows for calculating the energy cost of an elastic deformation to the membrane.
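For reference, the bending energy in the Helfrich (spontaneous-curvature) model is usually written as follows, where $\kappa$ is the bending modulus, $\bar{\kappa}$ the Gaussian modulus, $H$ the mean curvature, $K$ the Gaussian curvature, and $c_0$ the spontaneous curvature; this is the standard form of the model rather than an equation quoted from this article:

```latex
E_{\text{bend}} = \int_{A} \left[ \frac{\kappa}{2}\left(2H - c_0\right)^{2} + \bar{\kappa}\,K \right] \mathrm{d}A
```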
See also
Collodion bag
Fluid mosaic model
Osmosis
Membrane biology
Soft matter
References
External links
membrane
Soft matter | Biological membrane | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,382 | [
"Membrane biology",
"Soft matter",
"Condensed matter physics",
"Molecular biology"
] |
4,827 | https://en.wikipedia.org/wiki/Biomedical%20engineering | Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME is also traditionally logical sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and nucleotides (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological component, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions
the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies and treatments, and for patient monitoring of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation); a simplified lookup sketch follows the list:
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA) or premarket notification (510k), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
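A simplified lookup that mirrors the three-class summary above might be organized as a plain mapping; the structure and field names are illustrative, and the actual FDA classification depends on the specific regulation for each device type:

```python
DEVICE_CLASSES = {
    "I":   {"controls": ["general controls"],
            "examples": ["tongue depressor", "elastic bandage"]},
    "II":  {"controls": ["general controls",
                         "special controls (labeling, performance standards, postmarket surveillance)"],
            "examples": ["X-ray machine", "infusion pump"]},
    "III": {"controls": ["general controls", "premarket approval (PMA)"],
            "examples": ["replacement heart valve", "implantable pacemaker pulse generator"]},
}

def required_controls(device_class: str) -> list:
    """Return the regulatory controls listed above for a given class."""
    return DEVICE_CLASSES[device_class]["controls"]

if __name__ == "__main__":
    print(required_controls("III"))
```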
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years, biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma. The sensor monitors the dielectric properties and can thus detect changes in tissue (bone, muscle, fat, etc.) under the skin, so when measurements are made at different times during the healing process, the response from the sensor changes as the trauma heals.
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, combining the perspective of being close to the point-of-use with training in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department typically comprises a manager, supervisor, engineers, and technicians, with a common staffing ratio of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices such as Walking aids intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death".
Regardless of country-specific legislation, the main regulatory objectives coincide worldwide. For example, in medical device regulations, a product must be 1) safe and 2) effective, and 3) these properties must be ensured for all manufactured devices.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards (death, injuries, ...) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks at an acceptable level if compared with the benefit derived from the use of it.
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance to performance standards or demonstrations of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510K "clearance" (typically for Class 2 devices) or pre-market "approval" (typically for drugs and class 3 devices).
In the European context, safety effectiveness and quality is ensured through the "Conformity Assessment" which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for Certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliveries such as the risk management file, the technical file, and the quality system deliveries. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced at an acceptable level with respect to the benefits expected for the patients for the use of the device. The technical file contains all the documentation data and records supporting medical device certification. FDA technical file has similar content although organized in a different structure. The Quality System deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2 is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
IEC 60601
The International Standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point of care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital). The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements including, procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., or MD-PhD) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc, an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but, in the US, such a license is not required for most engineering employment in industry (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require licensure only of practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam, the first (and more general) of two licensure examinations for most U.S. jurisdictions, now covers biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently no BME option, meaning that any biomedical engineer seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) was, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there were about 19,700 biomedical engineering jobs in the US, with average pay of around $100,730 per year (about $48.43 an hour), and employment was projected to grow by 7% from 2023 to 2033 (even faster than the earlier projection).
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University.
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell – coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
See also
Biomedical Engineering and Instrumentation Program (BEIP)
References
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, "Bioengineers and Biomedical Engineers", retrieved October 27, 2024.
Further reading
External links | Biomedical engineering | [
"Engineering",
"Biology"
] | 6,530 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
4,831 | https://en.wikipedia.org/wiki/Bohr%20model | In atomic physics, the Bohr model or Rutherford–Bohr model was the first successful model of the atom. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J J Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values).
In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics.
The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.
The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory.
Background
Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.
Planetary models
In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.
These models faced a significant constraint.
In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would be mechanically unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other. Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.
Thomson's atom model
When Bohr began his work on a new atomic theory in the summer of 1912, the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available. Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively-charged, spherical volume. Thomson showed by lengthy calculations that this model was mechanically stable, and it was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to the chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.
However Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.
Rutherford nuclear model
In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model.
In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom.
Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete.
Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons.
Atomic spectra
By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulas by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulas, which depended directly on the frequency.
In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series. Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom.
The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element. Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.
Haas atomic model
In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy, ke e²/a, on a sphere of radius a to equal the frequency, ν, of the electron's orbit on the sphere times the Planck constant: ke e²/a = hν,
where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force: ke e²/a² = me (2πν)² a,
where me is the mass of the electron. This combination relates the radius of the sphere to the Planck constant: a = h²/(4π² ke me e²).
Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom.
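As an added illustration (not part of the original article), the following Python sketch repeats Haas's calculation: given the two relations above and an assumed atomic radius equal to the modern Bohr radius, it recovers the Planck constant.

```python
import math

# Physical constants in SI units (illustrative values)
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
a = 0.529e-10      # assumed radius of the hydrogen atom, m

# Eliminating the orbital frequency from k_e e^2 / a = h*nu and
# k_e e^2 / a^2 = m_e (2*pi*nu)^2 a gives h = 2*pi*sqrt(k_e * m_e * e^2 * a).
h = 2 * math.pi * math.sqrt(k_e * m_e * e**2 * a)
print(f"Planck constant recovered from the atomic radius: {h:.3e} J s")
# roughly 6.6e-34 J s, close to the accepted value of 6.626e-34 J s
```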
Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict the radius of the orbit of the electron in the ground state of the hydrogen atom. This value is now called the Bohr radius.
Influence of the Solvay Conference
The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including
Ernest Rutherford, Bohr's mentor.
Bohr did not attend but he read the Solvay reports and discussed them with Rutherford.
The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators.
Planck's lecture at the conference ended with comments about atoms and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done or the Planck constant could be taken as determining the size of atoms. Bohr would adopt the second path.
The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics. While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories. Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom
Nicholson atom theory
In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant.
Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency.
This new concept gave the Planck constant an atomic meaning for the first time. In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom.
The other critical influence of Nicholson work was his detailed analysis of spectra. Before Nicholson's work Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.
Nicholson's model was based on classical electrodynamics along the lines of J. J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid immediate collapse of this system he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit. By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair. Bohr's atomic model would abandon classical electrodynamics.
Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency.
Bohr's previous work
Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals, which James Clerk Maxwell noted in 1875: every additional degree of freedom in a theory of metals, like subatomic electrons, causes more disagreement with experiment. The second was that the classical theory could not explain magnetism.
After his PhD, Bohr worked briefly in the lab of J. J. Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly, this allowed Bohr to conclude that hydrogen atoms have a single electron.
Development
Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula.
After this, Bohr declared, "everything became clear".
In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model:
The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones.
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: me v r = nħ, where n is called the principal quantum number and ħ = h/2π. The lowest value of n is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which, as Bohr admits, was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation.
Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation: ΔE = E2 − E1 = hν, where h is the Planck constant.
Other points are:
Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons.
According to the Maxwell theory the frequency ν of classical radiation is equal to the rotation frequency νrot of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels En and En−k when k is much smaller than n. These jumps reproduce the frequency of the k-th harmonic of orbit n. For sufficiently large values of n (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n (or large k), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers; a brief numerical illustration of this limit is given below.
The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average.
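The following Python sketch (an added illustration, not taken from the article) makes the correspondence-principle point above concrete: for hydrogen, the frequency of the n → n − 1 transition approaches the classical orbital frequency 2cR∞/n³ as n grows. The Rydberg frequency cR∞ ≈ 3.29 × 10¹⁵ Hz is assumed.

```python
# Correspondence principle sketch: transition vs. classical orbital frequency.
c_R = 3.2898e15  # Rydberg frequency c * R_infinity, Hz

def transition_frequency(n):
    """Bohr frequency of the n -> n-1 jump in hydrogen (Hz)."""
    return c_R * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)

def orbital_frequency(n):
    """Classical revolution frequency of the electron in orbit n (Hz)."""
    return 2.0 * c_R / n ** 3

for n in (2, 10, 100, 1000):
    ratio = transition_frequency(n) / orbital_frequency(n)
    print(f"n = {n:4d}: transition / orbital frequency = {ratio:.4f}")
# The ratio is 3.0 at n = 2 but approaches 1 for large n (about 1.0015 at n = 1000).
```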
Bohr's condition, that the angular momentum be an integer multiple of ħ, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
nλ = 2πr.
According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is
λ = h/(me v),
which implies that
nh/(me v) = 2πr,
or
nh/(2π) = me v r,
where me v r is the angular momentum of the orbiting electron. Writing L for this angular momentum, the previous equation becomes
L = nh/(2π) = nħ,
which is Bohr's second postulate.
Bohr described the angular momentum of the electron orbit as nh/(2π), while de Broglie's wavelength λ = h/p described h divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected.
In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge.
Electron energy levels
The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons.
Calculation of the orbits requires two assumptions.
Classical mechanics
The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force:
me v²/r = ke Z e²/r²,
where me is the electron's mass, e is the elementary charge, ke is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius:
v = √(ke Z e²/(me r)).
It also determines the electron's total energy at any radius:
E = ½ me v² − ke Z e²/r = −ke Z e²/(2r).
The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem.
A quantum rule
The angular momentum is an integer multiple of ħ: me v r = nħ.
Derivation
In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T.
However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation.
Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula.
Denoting the total energy as E, the negative electron charge as e, the positive nucleus charge as K=Z|e|, the electron mass as me, half the major axis of the ellipse as a, he starts with these equations:
E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance.
Eq. (1a) is obtained from equating the centripetal force to the Coulombian force acting between the nucleus and the electron, considering that (where T is the average kinetic energy and U the average electrostatic potential), and that for Kepler's second law, the average separation between the electron and the nucleus is a.
Eq. (1b) is obtained from the same premises of eq. (1a) plus the virial theorem, stating that, for an elliptical orbit,
Then Bohr assumes that is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency, i.e.:
From eq. (1a,1b,2), it descends:
He further assumes that the orbit is circular, i.e. , and, denoting the angular momentum of the electron as L, introduces the equation:
Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution.
From eq. (1c,2,4), it stems:
where:
that is:
This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.
Substituting the expression for the velocity gives an equation for r in terms of n:
me ke Z e² r = n²ħ²,
so that the allowed orbit radius at any n is
rn = n²ħ²/(me ke Z e²).
The smallest possible value of r in the hydrogen atom (Z = 1, n = 1) is called the Bohr radius and is equal to:
a0 = ħ²/(me ke e²) ≈ 0.0529 nm.
The energy of the n-th level for any atom is determined by the radius and quantum number:
En = −ke Z e²/(2rn) = −(me ke² Z² e⁴)/(2ħ²n²) ≈ −13.6 eV × Z²/n².
An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product.
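A short Python sketch (an added illustration; the constant values are standard SI figures, not quoted from the article) evaluating the Bohr radius and hydrogen-like energy levels from the formulas above:

```python
# Bohr radius and hydrogen-like energy levels from the Bohr model formulas.
hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
k_e = 8.9875517923e9     # Coulomb constant, N m^2 C^-2

a_0 = hbar**2 / (m_e * k_e * e**2)   # Bohr radius, m

def energy_eV(n, Z=1):
    """Energy of level n for a hydrogen-like ion of nuclear charge Z, in eV."""
    E_joule = -(m_e * k_e**2 * Z**2 * e**4) / (2 * hbar**2 * n**2)
    return E_joule / e

print(f"Bohr radius: {a_0 * 1e9:.4f} nm")               # about 0.0529 nm
for n in (1, 2, 3):
    print(f"E_{n} (hydrogen) = {energy_eV(n):.2f} eV")   # -13.6, -3.40, -1.51 eV
```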
The combination of natural constants in the energy formula is called the Rydberg energy (RE):
RE = (me ke² e⁴)/(2ħ²) ≈ 13.6 eV.
This expression is clarified by interpreting it in combinations that form more natural units:
me c² is the rest mass energy of the electron (511 keV),
α = ke e²/(ħc) ≈ 1/137 is the fine-structure constant,
RE = ½ α² me c².
Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation): En = −Z² RE/n² ≈ −13.6 eV × Z²/n².
The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force.
When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z² cancels the α² in RE; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei.
The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron: μ = me mp/(me + mp).
However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1+1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4.
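As an added numerical sketch (not from the article), the reduced-mass correction can be checked directly; the nuclear masses used below are standard approximate values.

```python
# Reduced-mass correction for hydrogen vs. singly ionized helium.
m_p = 1836.15       # proton mass in units of the electron mass
m_alpha = 7294.3    # helium-4 nucleus mass in units of the electron mass

def reduced_mass(m_nucleus):
    """Reduced mass of the electron-nucleus system, in electron-mass units."""
    return m_nucleus / (1.0 + m_nucleus)

mu_H = reduced_mass(m_p)
mu_He = reduced_mass(m_alpha)

# He+ frequencies scale as Z^2 * mu, so the He+/H line ratio is slightly above 4.
print(f"Reduced mass (H):   {mu_H:.5f} m_e")
print(f"Reduced mass (He+): {mu_He:.5f} m_e")
print(f"Predicted He+/H frequency ratio: {4.0 * mu_He / mu_H:.4f}")   # about 4.0016
```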
For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus.
RE (positronium) = RE/2 ≈ 6.8 eV.
Rydberg formula
Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines.
Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, R, now known as the Rydberg constant, and a pair of integers indexing the lines: 1/λ = R(1/n1² − 1/n2²), with n1 < n2.
Despite many attempts, no theory of the atom could reproduce these relatively simple formulas.
Bohr's theory, describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by n:
W = RE/n² = −En.
The energy difference between two such levels is then:
ΔE = RE(1/n1² − 1/n2²).
Therefore, Bohr's theory gives the Rydberg formula and, moreover, the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:
R = RE/(hc) = (me ke² e⁴)/(4πħ³c) ≈ 1.097 × 10⁷ m⁻¹.
Since the energy of a photon is
E = hν = hc/λ,
these results can be expressed in terms of the wavelength of the photon given off:
1/λ = R(1/n1² − 1/n2²).
Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (n1 = 1), Balmer (n1 = 2), and Paschen (n1 = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.
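As an added illustration (not part of the source text), the Rydberg formula above reproduces the visible Balmer lines; the value of R used here is an assumed standard figure.

```python
# Balmer series wavelengths from 1/lambda = R * (1/n1^2 - 1/n2^2) with n1 = 2.
R = 1.0974e7  # Rydberg constant, m^-1

def wavelength_nm(n1, n2):
    """Wavelength of the n2 -> n1 transition in nanometers."""
    inv_lambda = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

for n2 in (3, 4, 5, 6):
    print(f"n2 = {n2}: {wavelength_nm(2, n2):.1f} nm")
# Approximately 656, 486, 434 and 410 nm (H-alpha through H-delta).
```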
To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model.
Shell model (heavier atoms)
Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells, "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
In Bohr's third 1913 paper Part III called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail and John William Nicholson was one of the first to prove in 1914 that it couldn't work for lithium, but was an attractive theory for hydrogen and ionized helium.
In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time although these properties were proposed contemporarily with the identical work of chemist Charles Rugeley Bury
Bohr's partner in research during 1914 to 1916 was Walther Kossel who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings: "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, once an orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit.
This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit.
For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer.
The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas).
In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.
Moseley's law and calculation (K-alpha X-ray emission lines)
Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley."
In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line) and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later from Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). The derivation requires two additional assumptions: [1] that this X-ray line comes from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to Z − 1.
Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation.
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each only contain two electrons each, and these were arranged in "equidistant layers".
In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z, and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines,
E = hν = RE(Z − 1)²(1/1² − 1/2²) = (3/4) RE (Z − 1)²,
or, in terms of frequency,
ν = (3/4) Rv (Z − 1)².
Here, Rv = RE/h is the Rydberg constant in terms of frequency, equal to 3.28 × 10¹⁵ Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4).
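A hedged Python illustration (added here, not from the article) of how this form of Moseley's law estimates K-alpha energies; the elements listed are arbitrary examples.

```python
# K-alpha photon energy estimate from Moseley's law: E = (3/4) * R_E * (Z - 1)^2.
R_E = 13.6  # Rydberg energy, eV

def k_alpha_keV(Z):
    """Approximate K-alpha photon energy for atomic number Z, in keV."""
    return 0.75 * R_E * (Z - 1) ** 2 / 1000.0

for element, Z in [("Al", 13), ("Cu", 29), ("Mo", 42)]:
    print(f"{element} (Z = {Z}): K-alpha ~ {k_alpha_keV(Z):.2f} keV")
# Copper's measured K-alpha is about 8.05 keV; this estimate gives about 8.0 keV.
```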
The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation.
Shortcomings
The Bohr model gives an incorrect value for the ground state orbital angular momentum: The angular momentum in the true ground state is known to be zero from experiment. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum, may be thought of as not to revolve "around" the nucleus at all, but merely to go tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction.
In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics).
The Bohr model also failed to explain:
Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom.
The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect).
The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin.
The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields.
Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together.
Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium.
Refinements
Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition: ∫ pr dqr = nh, with the integral taken from 0 to T,
where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.
The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.
However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.
The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells".
Model of the chemical bond
Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
Symbolism of planetary atomic models
Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms.
The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy). Examples of its use over the past century include but are not limited to:
The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular.
The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches.
The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A".
A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general.
The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model.
The television show The Big Bang Theory uses a planetary-like image in its print logo.
The JavaScript library React uses a planetary-like image as its logo.
On maps, it is generally used to indicate a nuclear power installation.
See also
1913 in science
Balmer's Constant
Bohr–Sommerfeld model
The Franck–Hertz experiment provided early support for the Bohr model.
The inert-pair effect is adequately explained by means of the Bohr model.
Introduction to quantum mechanics
References
Footnotes
Primary sources
Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.)
Further reading
Reprint:
Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61
External links
Standing waves in Bohr's atomic model – An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic model
1913 in science
Atomic physics
Foundational quantum physics
Hydrogen physics
Niels Bohr
Old quantum theory | Bohr model | [
"Physics",
"Chemistry"
] | 9,410 | [
"Foundational quantum physics",
"Quantum mechanics",
"Old quantum theory",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
4,845 | https://en.wikipedia.org/wiki/Blood%20alcohol%20content | Blood alcohol content (BAC), also called blood alcohol concentration or blood alcohol level, is a measurement of alcohol intoxication used for legal or medical purposes.
BAC is expressed as mass of alcohol per volume of blood. In the US and many international publications, BAC levels are written as a percentage such as 0.08%, i.e. there are 0.8 grams of alcohol per liter of blood. In different countries, the maximum permitted BAC when driving ranges from the limit of detection (zero tolerance) to 0.08% (0.8 g/L). BAC levels above 0.40% (4 g/L) can be potentially fatal.
According to Guinness World Records, 1.374% (13.74 g/L) is the highest BAC ever recorded in a human who survived the ordeal. The record was set in July 2013 by an unidentified Polish man found unconscious by the side of a road in the village of Tarnowska Wola, in south-east Poland. First responders reportedly did not believe the initial BAC readings taken at the scene, possibly due to it being almost 69 times greater than the Polish legal limit of 0.02% (0.2 g/L). However, the reading was later confirmed after the man was transported to a nearby hospital.
Units of measurement
BAC is generally defined as a fraction of weight of alcohol per volume of blood, with an SI coherent derived unit of kg/m3 or equivalently grams per liter (g/L). Countries differ in how this quantity is normally expressed. Common formats are listed in the table below. For example, the US and many international publications present BAC as a percentage, such as 0.05%. This would be interpreted as 0.05 grams per deciliter of blood. This same concentration could be expressed as 0.5‰ or 50 mg% in other countries.
It is also possible to use other units. For example, in the 1930s Widmark measured alcohol and blood by mass, and thus reported his concentrations in units of g/kg or mg/g, weight alcohol per weight blood. Blood is denser than water and 1 mL of blood has a mass of approximately 1.055 grams, thus a mass-volume BAC of 1 g/L corresponds to a mass-mass BAC of 0.948 mg/g. Sweden, Denmark, Norway, Finland, Germany, and Switzerland use mass-mass concentrations in their laws, but this distinction is often skipped over in public materials, implicitly assuming that 1 L of blood weighs 1 kg.
In pharmacokinetics, it is common to use the amount of substance, in moles, to quantify the dose. As the molar mass of ethanol is 46.07 g/mol, a BAC of 1 g/L is 21.706 mmol/L (21.706 mM).
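As an added illustration (the function name and layout are ours, not from the source), the unit relationships above can be collected into a small Python conversion helper:

```python
# Convert a US-style percentage BAC (grams per deciliter) to other common units.
ETHANOL_MOLAR_MASS = 46.07  # g/mol

def convert_bac(percent):
    g_per_L = percent * 10.0        # 0.08% -> 0.8 g/L (numerically the per-mille figure)
    mg_percent = percent * 1000.0   # 0.08% -> 80 mg%
    mmol_per_L = g_per_L / ETHANOL_MOLAR_MASS * 1000.0
    return g_per_L, mg_percent, mmol_per_L

g_L, mg_pct, mM = convert_bac(0.08)
print(f"0.08% BAC = {g_L:.2f} g/L = {mg_pct:.0f} mg% = {mM:.1f} mmol/L")
# -> 0.80 g/L, 80 mg%, about 17.4 mmol/L
```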
Effects by alcohol level
The magnitude of sensory impairment may vary in people of differing weights. The NIAAA defines the term "binge drinking" as a pattern of drinking that brings a person's blood alcohol concentration (BAC) to 0.08 grams percent or above.
Estimation
Direct measurement
Blood samples for BAC analysis are typically obtained by taking a venous blood sample from the arm. A variety of methods exist for determining blood-alcohol concentration in a blood sample. Forensic laboratories typically use headspace-gas chromatography combined with mass spectrometry or flame ionization detection, as this method is accurate and efficient. Hospitals typically use enzyme multiplied immunoassay, which measures the co-enzyme NADH. This method is more subject to error but may be performed rapidly in parallel with other blood sample measurements.
In Germany, BAC is determined by measuring the serum level and then converting to whole blood by dividing by the factor 1.236. This calculation underestimates BAC by 4% to 10% compared to other methods.
By breathalyzer
The amount of alcohol on the breath can be measured, without requiring drawing blood, by blowing into a breathalyzer, resulting in a breath alcohol content (BrAC). The BrAC correlates most closely with the concentration of alcohol in arterial blood; its correlation with the standard BAC found by drawing venous blood is less strong. Jurisdictions vary in the statutory conversion factor from BrAC to BAC, from 2000 to 2400. Many factors may affect the accuracy of a breathalyzer test, but breathalyzers are the most common method for measuring alcohol concentrations in most jurisdictions.
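A hedged Python sketch of the statutory BrAC-to-BAC conversion (an added example; the 2100:1 default is only an assumed illustrative value within the 2000–2400 range quoted above):

```python
# Estimate blood alcohol content from a breath reading using a partition ratio.
def brac_to_bac(brac_g_per_L_breath, partition_ratio=2100):
    """Return the estimated BAC in grams of alcohol per liter of blood."""
    return brac_g_per_L_breath * partition_ratio

brac = 0.38e-3  # example breath reading: 0.38 mg of alcohol per liter of breath
print(f"Estimated BAC: {brac_to_bac(brac):.2f} g/L")  # about 0.80 g/L at 2100:1
```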
By intake
Blood alcohol content can be quickly estimated with a model developed by the Swedish professor Erik Widmark in the 1920s. The model corresponds to a pharmacokinetic single-compartment model with instantaneous absorption and zero-order elimination kinetics. It is most accurate when used to estimate BAC a few hours after drinking a single dose of alcohol in a fasted state, where it can come within a coefficient of variation of about 20% of the true value. It is not at all realistic for the absorption phase, and it is not accurate for BAC levels below 0.2 g/L (alcohol is not eliminated as quickly as predicted) or for consumption with food (it overestimates the peak BAC and the time to return to zero). The equation varies depending on the units and approximations used, but in its simplest form is given by:
\[ \mathrm{EBAC} = \frac{A}{V_d} - \beta\, t \]
where:
EBAC is the estimated blood alcohol concentration (in g/L);
A is the mass of alcohol consumed (g);
t is the amount of time during which alcohol was present in the blood (usually time since consumption began), in hours;
β is the rate at which alcohol is eliminated, averaging around 0.15 g/L per hour; and
Vd is the volume of distribution (L), typically body weight (kg) multiplied by 0.71 L/kg for men and 0.58 L/kg for women, although estimation using total body water (TBW) is more accurate.
A standard drink, defined by the WHO as 10 grams of pure alcohol, is the most frequently used measure in many countries. Examples:
An 80 kg man drinks 20 grams of ethanol. After one hour: EBAC ≈ 20 / (80 × 0.71) − 0.15 × 1 ≈ 0.35 − 0.15 ≈ 0.20 g/L.
A 70 kg woman drinks 10 grams of ethanol. After one hour: EBAC ≈ 10 / (70 × 0.58) − 0.15 × 1 ≈ 0.25 − 0.15 ≈ 0.10 g/L.
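The arithmetic in these examples can be checked with a minimal Python sketch of the simple Widmark model above. The function name is illustrative; the 0.71/0.58 L/kg distribution factors and the 0.15 g/L/h elimination rate are the values listed earlier, and this is a rough estimator, not a clinical tool.

```python
def widmark_ebac(alcohol_g, weight_kg, hours, sex, beta=0.15):
    """Estimate blood alcohol concentration (g/L) with the simple Widmark model.

    alcohol_g : mass of ethanol consumed, in grams
    weight_kg : body weight, in kilograms
    hours     : time since drinking began, in hours
    sex       : 'm' or 'f', selecting the 0.71 or 0.58 L/kg distribution factor
    beta      : elimination rate, in g/L per hour (about 0.15 on average)
    """
    r = 0.71 if sex == 'm' else 0.58      # volume of distribution per kg of body weight
    vd = weight_kg * r                    # volume of distribution, in litres
    ebac = alcohol_g / vd - beta * hours  # single compartment, zero-order elimination
    return max(ebac, 0.0)                 # the estimate cannot go below zero

print(widmark_ebac(20, 80, 1, 'm'))  # ~0.20 g/L, matching the first example
print(widmark_ebac(10, 70, 1, 'f'))  # ~0.10 g/L, matching the second example
```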
In terms of fluid ounces of alcohol consumed and weight in pounds, Widmark's formula can be approximated with separate distribution factors for men and for women, where EBAC and the factors are given in g/dL (% BAC), such as an elimination factor of 0.015% BAC per hour.
By standard drinks
Estimates are often framed in standard drinks rather than grams of ethanol. Definitions of a standard drink vary: in the United States it is 0.6 fluid ounces (14 g or 17.7 mL) of ethanol, whereas other definitions exist, such as the 10 grams used in the examples above.
By training
If individuals are asked to estimate their BAC, then given accurate feedback via a breathalyzer, and this procedure is repeated a number of times during a drinking session, studies show that these individuals can learn to discriminate their BAC, to within a mean error of 9 mg/100 mL (0.009% BAC). The ability is robust to different types of alcohol, different drink quantities, and drinks with unknown levels of alcohol. Trained individuals can even drink alcoholic drinks so as to adjust or maintain their BAC at a desired level. Training the ability does not appear to require any information or procedure besides breathalyzer feedback, although most studies have provided information such as intoxication symptoms at different BAC levels. Subjects continue to retain the ability one month after training.
Post-mortem
After fatal accidents, it is common to check the blood alcohol levels of involved persons. However, soon after death, the body begins to putrefy, a biological process which produces ethanol. This can make it difficult to conclusively determine the blood alcohol content in autopsies, particularly in bodies recovered from water. For instance, following the 1975 Moorgate tube crash, the driver's kidneys had a blood alcohol concentration of 80 mg/100 mL, but it could not be established how much of this could be attributed to natural decomposition. Newer research has shown that vitreous (eye) fluid provides an accurate estimate of blood alcohol concentration that is less subject to the effects of decomposition or contamination.
Legal limits
For purposes of law enforcement, blood alcohol content is used to define intoxication and provides a rough measure of impairment. Although the degree of impairment may vary among individuals with the same blood alcohol content, it can be measured objectively and is therefore legally useful and difficult to contest in court. Most countries forbid operation of motor vehicles and heavy machinery above prescribed levels of blood alcohol content. Operation of boats and aircraft is also regulated. Some jurisdictions also regulate bicycling under the influence. The alcohol level at which a person is considered legally impaired to drive varies by country.
Test assumptions
Extrapolation
Retrograde extrapolation is the mathematical process by which someone's blood alcohol concentration at the time of driving is estimated by projecting backwards from a later chemical test. This involves estimating the absorption and elimination of alcohol in the interim between driving and testing. The rate of elimination in the average person is commonly estimated at 0.015 to 0.020 grams per deciliter per hour (g/dL/h), although again this can vary from person to person and in a given person from one moment to another. Metabolism can be affected by numerous factors, including such things as body temperature, the type of alcoholic beverage consumed, and the amount and type of food consumed.
In an increasing number of states, laws have been enacted to facilitate this speculative task: the blood alcohol content at the time of driving is legally presumed to be the same as when later tested. There are usually time limits put on this presumption, commonly two or three hours, and the defendant is permitted to offer evidence to rebut this presumption.
Forward extrapolation can also be attempted. If the amount of alcohol consumed is known, along with such variables as the weight and sex of the subject and period and rate of consumption, the blood alcohol level can be estimated by extrapolating forward. Although subject to the same infirmities as retrograde extrapolation—guessing based upon averages and unknown variables—this can be relevant in estimating BAC when driving and/or corroborating or contradicting the results of a later chemical test.
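As an illustration of the arithmetic behind retrograde extrapolation, here is a minimal sketch that assumes the subject was post-absorptive for the whole interval. The function name is hypothetical; the 0.015–0.020 g/dL/h range is the one quoted above.

```python
def retrograde_bac(measured_bac, hours_elapsed, elimination_rate=0.015):
    """Project a measured BAC (g/dL) back to the time of driving.

    Assumes the subject was post-absorptive for the whole interval, so the BAC
    only declined (at elimination_rate g/dL per hour) between driving and
    testing. Real cases must also consider ongoing absorption.
    """
    return measured_bac + elimination_rate * hours_elapsed

# e.g. tested at 0.07 g/dL two hours after driving:
print(retrograde_bac(0.07, 2))          # 0.10 g/dL with the low-end rate
print(retrograde_bac(0.07, 2, 0.020))   # 0.11 g/dL with the high-end rate
```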
Metabolism
The pharmacokinetics of ethanol are well characterized by the ADME acronym (absorption, distribution, metabolism, excretion). Besides the dose ingested, factors such as the person's total body water, speed of drinking, the drink's nutritional content, and the contents of the stomach all influence the profile of blood alcohol content (BAC) over time. Breath alcohol content (BrAC) and BAC have similar profile shapes, so most forensic pharmacokinetic calculations can be done with either. Relatively few studies directly compare BrAC and BAC within subjects and characterize the difference in pharmacokinetic parameters. Comparing arterial and venous BAC, arterial BAC is higher during the absorption phase and lower in the postabsorptive declining phase.
Highest levels
Notes
References
Citations
General and cited references
Carnegie Library of Pittsburgh. Science and Technology Department. The Handy Science Answer Book. Pittsburgh: The Carnegie Library, 1997.
Taylor, L., and S. Oberman. Drunk Driving Defense, 6th edition. New York: Aspen Law and Business, 2006.
External links
Estimated alcohol
Alcohol law
Alcohol policy
Concentration indicators
Driving under the influence
Metabolism | Blood alcohol content | [
"Chemistry",
"Biology"
] | 2,360 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
4,864 | https://en.wikipedia.org/wiki/Bucket%20argument | Isaac Newton's rotating bucket argument (also known as Newton's bucket) is a thought experiment that was designed to demonstrate that true rotational motion cannot be defined as the relative rotation of the body with respect to the immediately surrounding bodies. It is one of five arguments from the "properties, causes, and effects" of "true motion and rest" that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space and with physics whose cause is external to the system, with the concept of geodesics of spacetime.
Background
These arguments, and a discussion of the distinctions between absolute and relative time, space, place and motion, appear in a scholium at the end of Definitions sections in Book I of Newton's work, The Mathematical Principles of Natural Philosophy (1687) (not to be confused with General Scholium at the end of Book III), which established the foundations of classical mechanics and introduced his law of universal gravitation, which yielded the first quantitatively adequate dynamical explanation of planetary motion.
Despite their embrace of the principle of rectilinear inertia and the recognition of the kinematical relativity of apparent motion (which underlies whether the Ptolemaic or the Copernican system is correct), natural philosophers of the seventeenth century continued to consider true motion and rest as physically separate descriptors of an individual body. The dominant view Newton opposed was devised by René Descartes, and was supported (in part) by Gottfried Leibniz. It held that empty space is a metaphysical impossibility because space is nothing other than the extension of matter, or, in other words, that when one speaks of the space between things one is actually making reference to the relationship that exists between those things and not to some entity that stands between them. Concordant with the above understanding, any assertion about the motion of a body boils down to a description over time in which the body under consideration is at t1 found in the vicinity of one group of "landmark" bodies and at some t2 is found in the vicinity of some other "landmark" body or bodies.
Descartes recognized that there would be a real difference, however, between a situation in which a body with movable parts and originally at rest with respect to a surrounding ring was itself accelerated to a certain angular velocity with respect to the ring, and another situation in which the surrounding ring were given a contrary acceleration with respect to the central object. With sole regard to the central object and the surrounding ring, the motions would be indistinguishable from each other assuming that both the central object and the surrounding ring were absolutely rigid objects. However, if neither the central object nor the surrounding ring were absolutely rigid then the parts of one or both of them would tend to fly out from the axis of rotation.
For contingent reasons having to do with the Inquisition, Descartes spoke of motion as both absolute and relative.
By the late 19th century, the contention that all motion is relative was re-introduced, notably by Ernst Mach (1883).
The argument
Newton discusses a bucket filled with water hung by a cord. If the cord is twisted up tightly on itself and then the bucket is released, it begins to spin rapidly, not only with respect to the experimenter, but also in relation to the water it contains. (This situation would correspond to diagram B above.)
Although the relative motion at this stage is the greatest, the surface of the water remains flat, indicating that the parts of the water have no tendency to recede from the axis of relative motion, despite proximity to the pail. Eventually, as the cord continues to unwind, the surface of the water assumes a concave shape as it acquires the motion of the bucket spinning relative to the experimenter. This concave shape shows that the water is rotating, despite the fact that the water is at rest relative to the pail. In other words, it is not the relative motion of the pail and water that causes concavity of the water, contrary to the idea that motions can only be relative, and that there is no absolute motion. (This situation would correspond to diagram D.) Possibly the concavity of the water shows rotation relative to something else: say absolute space? Newton says: "One can find out and measure the true and absolute circular motion of the water".
In the 1846 Andrew Motte translation of Newton's words:
The argument that the motion is absolute, not relative, is incomplete, as it limits the participants relevant to the experiment to only the pail and the water, a limitation that has not been established. In fact, the concavity of the water clearly involves gravitational attraction, and by implication the Earth also is a participant. Here is a critique due to Mach arguing that only relative motion is established:
The degree in which Mach's hypothesis is integrated in general relativity is discussed in the article Mach's principle; it is generally held that general relativity is not entirely Machian.
All observers agree that the surface of rotating water is curved. However, the explanation of this curvature involves centrifugal force for all observers with the exception of a truly stationary observer, who finds the curvature is consistent with the rate of rotation of the water as they observe it, with no need for an additional centrifugal force. Thus, a stationary frame can be identified, and it is not necessary to ask "Stationary with respect to what?":
A supplementary thought experiment with the same objective of determining the occurrence of absolute rotation also was proposed by Newton: the example of observing two identical spheres in rotation about their center of gravity and tied together by a string. Occurrence of tension in the string is indicative of absolute rotation; see Rotating spheres.
Detailed analysis
The historic interest of the rotating bucket experiment is its usefulness in suggesting one can detect absolute rotation by observation of the shape of the surface of the water. However, one might question just how rotation brings about this change. Below are two approaches to understanding the concavity of the surface of rotating water in a bucket.
Newton's laws of motion
The shape of the surface of a rotating liquid in a bucket can be determined using Newton's laws for the various forces on an element of the surface. For example, see Knudsen and Hjorth. The analysis begins with the free body diagram in the co-rotating frame where the water appears stationary. The height of the water h = h(r) is a function of the radial distance r from the axis of rotation Ω, and the aim is to determine this function. An element of water volume on the surface is shown to be subject to three forces: the vertical force due to gravity Fg, the horizontal, radially outward centrifugal force FCfgl, and the force normal to the surface of the water Fn due to the rest of the water surrounding the selected element of surface. The force due to surrounding water is known to be normal to the surface of the water because a liquid in equilibrium cannot support shear stresses. To quote Anthony and Brackett:
Moreover, because the element of water does not move, the sum of all three forces must be zero. To sum to zero, the force of the water must point oppositely to the sum of the centrifugal and gravity forces, which means the surface of the water must adjust so its normal points in this direction. (A very similar problem is the design of a banked turn, where the slope of the turn is set so a car will not slide off the road. The analogy in the case of rotating bucket is that the element of water surface will "slide" up or down the surface unless the normal to the surface aligns with the vector resultant formed by the vector addition Fg + FCfgl.)
As r increases, the centrifugal force increases according to the relation (the equations are written per unit mass):
\[ F_{\mathrm{Cfgl}} = \Omega^2 r \]
where Ω is the constant rate of rotation of the water. The gravitational force is unchanged at
\[ F_g = g \]
where g is the acceleration due to gravity. These two forces add to make a resultant at an angle φ from the vertical given by
\[ \tan\varphi = \frac{F_{\mathrm{Cfgl}}}{F_g} = \frac{\Omega^2 r}{g} \]
which clearly becomes larger as r increases. To ensure that this resultant is normal to the surface of the water, and therefore can be effectively nulled by the force of the water beneath, the normal to the surface must have the same angle, that is,
\[ \frac{dh}{dr} = \tan\varphi \]
leading to the ordinary differential equation for the shape of the surface:
\[ \frac{dh}{dr} = \frac{\Omega^2 r}{g} \]
or, integrating:
\[ h(r) = h(0) + \frac{\Omega^2}{2g} r^2 \]
where h(0) is the height of the water at r = 0. In other words, the surface of the water is parabolic in its dependence upon the radius.
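As a numerical check on this result, the following short sketch (illustrative function name, SI units assumed) evaluates the parabolic profile for a modest rotation rate:

```python
import math

def surface_height(r, omega, h0=0.0, g=9.81):
    """Height of the rotating water surface at radius r (all SI units).

    Implements h(r) = h(0) + omega^2 * r^2 / (2 g), the parabola derived above.
    """
    return h0 + (omega ** 2) * (r ** 2) / (2 * g)

# Example: a bucket spinning at 2 revolutions per second, radius up to 0.15 m
omega = 2 * 2 * math.pi                 # angular rate in rad/s
for r in (0.0, 0.05, 0.10, 0.15):
    print(f"r = {r:0.2f} m  ->  h - h(0) = {surface_height(r, omega):0.4f} m")
```

Doubling the rotation rate quadruples the rise of the surface at any given radius, consistent with the Ω² dependence.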
Potential energy
The shape of the water's surface can be found in a different, very intuitive way using the interesting idea of the potential energy associated with the centrifugal force in the co-rotating frame.
In a reference frame uniformly rotating at angular rate Ω, the fictitious centrifugal force is conservative and has a potential energy of the form:
\[ U(r) = -\tfrac{1}{2}\Omega^2 r^2 \]
where r is the radius from the axis of rotation. This result can be verified by taking the gradient of the potential to obtain the radially outward force:
\[ F_{\mathrm{Cfgl}} = -\frac{dU}{dr} = \Omega^2 r \]
The meaning of the potential energy (stored work) is that movement of a test body from a larger radius to a smaller radius involves doing work against the centrifugal force and thus gaining potential energy. But this test body at the smaller radius where its elevation is lower has now lost equivalent gravitational potential energy.
Potential energy therefore explains the concavity of the water surface in a rotating bucket. Notice that at equilibrium the surface adopts a shape such that an element of volume at any location on its surface has the same potential energy as at any other. That being so, no element of water on the surface has any incentive to move position, because all positions are equivalent in energy. That is, equilibrium is attained. On the other hand, were surface regions with lower energy available, the water occupying surface locations of higher potential energy would move to occupy these positions of lower energy, inasmuch as there is no barrier to lateral movement in an ideal liquid.
We might imagine deliberately upsetting this equilibrium situation by somehow momentarily altering the surface shape of the water to make it different from an equal-energy surface. This change in shape would not be stable, and the water would not stay in our artificially contrived shape, but engage in a transient exploration of many shapes until non-ideal frictional forces introduced by sloshing, either against the sides of the bucket or by the non-ideal nature of the liquid, killed the oscillations and the water settled down to the equilibrium shape.
To see the principle of an equal-energy surface at work, imagine gradually increasing the rate of rotation of the bucket from zero. The water surface is flat at first, and clearly a surface of equal potential energy because all points on the surface are at the same height in the gravitational field acting upon the water. At some small angular rate of rotation, however, an element of surface water can achieve lower potential energy by moving outward under the influence of the centrifugal force; think of an object moving with the force of gravity closer to the Earth's center: the object lowers its potential energy by complying with a force. Because water is incompressible and must remain within the confines of the bucket, this outward movement increases the depth of water at the larger radius, increasing the height of the surface at larger radius, and lowering it at smaller radius. The surface of the water becomes slightly concave, with the consequence that the potential energy of the water at the greater radius is increased by the work done against gravity to achieve the greater height. As the height of water increases, movement toward the periphery becomes no longer advantageous, because the reduction in potential energy from working with the centrifugal force is balanced against the increase in energy working against gravity. Thus, at a given angular rate of rotation, a concave surface represents the stable situation, and the more rapid the rotation, the more concave this surface. If rotation is arrested, the energy stored in fashioning the concave surface must be dissipated, for example through friction, before an equilibrium flat surface is restored.
To implement a surface of constant potential energy quantitatively, let the height of the water be h(r): then the potential energy per unit mass contributed by gravity is g h(r), and the total potential energy per unit mass on the surface is
\[ U_{\text{total}} = g\, h(r) - \tfrac{1}{2}\Omega^2 r^2 + \text{constant} \]
with the background energy level independent of r. In a static situation (no motion of the fluid in the rotating frame), this energy is constant independent of position r. Requiring the energy to be constant, we obtain the parabolic form:
\[ h(r) = \frac{\Omega^2}{2g} r^2 + h(0) \]
where h(0) is the height at r = 0 (the axis). See Figures 1 and 2.
The principle of operation of the centrifuge also can be simply understood in terms of this expression for the potential energy, which shows that it is favorable energetically when the volume far from the axis of rotation is occupied by the heavier substance.
See also
Centrifugal force
Inertial frame of reference
Mach's principle
Philosophy of space and time: Absolutism vs. relationalism
Rotating reference frame
Rotating spheres
Rotational gravity
Sagnac effect
References
Further reading
The isotropy of the cosmic microwave background radiation is another indicator that the universe does not rotate.
External links
Newton's Views on Space, Time, and Motion from Stanford Encyclopedia of Philosophy, article by Robert Rynasiewicz. At the end of this article, loss of fine distinctions in the translations as compared to the original Latin text is discussed.
Life and Philosophy of Leibniz see section on Space, Time and Indiscernibles for Leibniz arguing against the idea of space acting as a causal agent.
Newton's Bucket An interactive applet illustrating the water shape, and an attached PDF file with a mathematical derivation of a more complete water-shape model than is given in this article.
Classical mechanics
Isaac Newton
Thought experiments in physics
Rotation | Bucket argument | [
"Physics"
] | 2,917 | [
"Physical phenomena",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mechanics"
] |
4,868 | https://en.wikipedia.org/wiki/B.%20F.%20Skinner | Burrhus Frederic Skinner (March 20, 1904 – August 18, 1990) was an American psychologist, behaviorist, inventor, and social philosopher. He was the Edgar Pierce Professor of Psychology at Harvard University from 1958 until his retirement in 1974.
Skinner developed behavior analysis, especially the philosophy of radical behaviorism, and founded the experimental analysis of behavior, a school of experimental research psychology. He also used operant conditioning to strengthen behavior, considering the rate of response to be the most effective measure of response strength. To study operant conditioning, he invented the operant conditioning chamber (aka the Skinner box), and to measure rate he invented the cumulative recorder. Using these tools, he and Charles Ferster produced Skinner's most influential experimental work, outlined in their 1957 book Schedules of Reinforcement.
Skinner was a prolific author, publishing 21 books and 180 articles. He imagined the application of his ideas to the design of a human community in his 1948 utopian novel, Walden Two, while his analysis of human behavior culminated in his 1957 work, Verbal Behavior.
Skinner, John B. Watson, and Ivan Pavlov are considered to be the pioneers of modern behaviorism. Accordingly, a June 2002 survey listed Skinner as the most influential psychologist of the 20th century.
Biography
Skinner was born in Susquehanna, Pennsylvania, to Grace and William Skinner, the latter of whom was a lawyer. Skinner became an atheist after a Christian teacher tried to assuage his fear of the hell that his grandmother described. His brother Edward, two and a half years younger, died at age 16 of a cerebral hemorrhage.
Skinner's closest friend as a young boy was Raphael Miller, whom he called Doc because his father was a doctor. Doc and Skinner became friends through their parents' religiousness, and both had an interest in contraptions and gadgets. They set up a telegraph line between their houses to send messages to each other, although they had to call each other on the telephone because the messages they exchanged were so confusing. During one summer, Doc and Skinner started an elderberry business, gathering berries and selling them door to door. They found that when they picked the ripe berries, the unripe ones came off the branches too, so they built a device to separate them: a piece of metal bent to form a trough. They would pour water down the trough into a bucket; the ripe berries would sink into the bucket, and the unripe ones would be pushed over the edge to be thrown away.
Education
Skinner attended Hamilton College in Clinton, New York, with the intention of becoming a writer. He found himself at a social disadvantage at the college because of his intellectual attitude. He was a member of Lambda Chi Alpha fraternity.
He wrote for the school paper, but, as an atheist, he was critical of the traditional mores of his college. After receiving his Bachelor of Arts in English literature in 1926, he attended Harvard University, where he would later research and teach. While attending Harvard, a fellow student, Fred S. Keller, convinced Skinner that he could make an experimental science of the study of behavior. This led Skinner to invent a prototype for the Skinner box and to join Keller in the creation of other tools for small experiments.
After graduation, Skinner unsuccessfully tried to write a novel while he lived with his parents, a period that he later called the "Dark Years". He became disillusioned with his literary skills despite encouragement from the poet Robert Frost, concluding that he had little world experience and no strong personal perspective from which to write. His encounter with John B. Watson's behaviorism led him into graduate study in psychology and to the development of his own version of behaviorism.
Later life
Skinner received a PhD from Harvard in 1931, and remained there as a researcher for some years. In 1936, he went to the University of Minnesota in Minneapolis to teach. In 1945, he moved to Indiana University, where he was chair of the psychology department from 1946 to 1947, before returning to Harvard as a tenured professor in 1948. He remained at Harvard for the rest of his life. In 1973, Skinner was one of the signers of the Humanist Manifesto II.
In 1936, Skinner married Yvonne "Eve" Blue. The couple had two daughters, Julie (later Vargas) and Deborah (later Buzan; married Barry Buzan). Yvonne died in 1997, and is buried in Mount Auburn Cemetery, Cambridge, Massachusetts.
Skinner's public exposure increased in the 1970s, and he remained active even after his retirement in 1974, until his death. In 1989, Skinner was diagnosed with leukemia; he died on August 18, 1990, in Cambridge, Massachusetts. Ten days before his death, he was given the lifetime achievement award by the American Psychological Association and gave a talk concerning his work.
Contributions to psychology
Behaviorism
Skinner referred to his approach to the study of behavior as radical behaviorism, which originated in the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally. This philosophy of behavioral science assumes that behavior is a consequence of environmental histories of reinforcement (see applied behavior analysis). In his words:
Foundations of Skinner's behaviorism
Skinner's ideas about behaviorism were largely set forth in his first book, The Behavior of Organisms (1938). Here, he gives a systematic description of the manner in which environmental variables control behavior. He distinguished two sorts of behavior which are controlled in different ways:
Respondent behaviors are elicited by stimuli, and may be modified through respondent conditioning, often called classical (or pavlovian) conditioning, in which a neutral stimulus is paired with an eliciting stimulus. Such behaviors may be measured by their latency or strength.
Operant behaviors are 'emitted', meaning that initially they are not induced by any particular stimulus. They are strengthened through operant conditioning (aka instrumental conditioning), in which the occurrence of a response yields a reinforcer. Such behaviors may be measured by their rate.
Both of these sorts of behavior had already been studied experimentally, most notably: respondents, by Ivan Pavlov; and operants, by Edward Thorndike. Skinner's account differed in some ways from earlier ones, and was one of the first accounts to bring them under one roof.
The idea that behavior is strengthened or weakened by its consequences raises several questions. Among the most commonly asked are these:
Operant responses are strengthened by reinforcement, but where do they come from in the first place?
Once it is in the organism's repertoire, how is a response directed or controlled?
How can very complex and seemingly novel behaviors be explained?
1. Origin of operant behavior
Skinner's answer to the first question was very much like Darwin's answer to the question of the origin of a 'new' bodily structure, namely, variation and selection. Similarly, the behavior of an individual varies from moment to moment; a variation that is followed by reinforcement is strengthened and becomes prominent in that individual's behavioral repertoire. Shaping was Skinner's term for the gradual modification of behavior by the reinforcement of desired variations. Skinner believed that 'superstitious' behavior can arise when a response happens to be followed by reinforcement to which it is actually unrelated.
2. Control of operant behavior
The second question, "how is operant behavior controlled?" arises because, to begin with, the behavior is "emitted" without reference to any particular stimulus. Skinner answered this question by saying that a stimulus comes to control an operant if it is present when the response is reinforced and absent when it is not. For example, if lever-pressing only brings food when a light is on, a rat, or a child, will learn to press the lever only when the light is on. Skinner summarized this relationship by saying that a discriminative stimulus (e.g. light or sound) sets the occasion for the reinforcement (food) of the operant (lever-press). This three-term contingency (stimulus-response-reinforcer) is one of Skinner's most important concepts, and sets his theory apart from theories that use only pair-wise associations.
3. Explaining complex behavior
Most behavior of humans cannot easily be described in terms of individual responses reinforced one by one, and Skinner devoted a great deal of effort to the problem of behavioral complexity. Some complex behavior can be seen as a sequence of relatively simple responses, and here Skinner invoked the idea of "chaining". Chaining is based on the fact, experimentally demonstrated, that a discriminative stimulus not only sets the occasion for subsequent behavior, but it can also reinforce a behavior that precedes it. That is, a discriminative stimulus is also a "conditioned reinforcer". For example, the light that sets the occasion for lever pressing may also be used to reinforce "turning around" in the presence of a noise. This results in the sequence "noise – turn-around – light – press lever – food." Much longer chains can be built by adding more stimuli and responses.
However, Skinner recognized that a great deal of behavior, especially human behavior, cannot be accounted for by gradual shaping or the construction of response sequences. Complex behavior often appears suddenly in its final form, as when a person first finds his way to the elevator by following instructions given at the front desk. To account for such behavior, Skinner introduced the concept of rule-governed behavior. First, relatively simple behaviors come under the control of verbal stimuli: the child learns to "jump," "open the book," and so on. After a large number of responses come under such verbal control, a sequence of verbal stimuli can evoke an almost unlimited variety of complex responses.
Reinforcement
Reinforcement, a key concept of behaviorism, is the primary process that shapes and controls behavior, and occurs in two ways: positive and negative. In The Behavior of Organisms (1938), Skinner defines negative reinforcement to be synonymous with punishment, i.e. the presentation of an aversive stimulus. This definition would subsequently be re-defined in Science and Human Behavior (1953).
In what has now become the standard set of definitions, positive reinforcement is the strengthening of behavior by the occurrence of some event (e.g., praise after some behavior is performed), whereas negative reinforcement is the strengthening of behavior by the removal or avoidance of some aversive event (e.g., opening and raising an umbrella over your head on a rainy day is reinforced by the cessation of rain falling on you).
Both types of reinforcement strengthen behavior, or increase the probability of a behavior reoccurring; the difference being in whether the reinforcing event is something applied (positive reinforcement) or something removed or avoided (negative reinforcement). Punishment can be the application of an aversive stimulus/event (positive punishment or punishment by contingent stimulation) or the removal of a desirable stimulus (negative punishment or punishment by contingent withdrawal). Though punishment is often used to suppress behavior, Skinner argued that this suppression is temporary and has a number of other, often unwanted, consequences. Extinction is the absence of a rewarding stimulus, which weakens behavior.
Writing in 1981, Skinner pointed out that Darwinian natural selection is, like reinforced behavior, "selection by consequences". Though, as he said, natural selection has now "made its case," he regretted that essentially the same process, "reinforcement", was less widely accepted as underlying human behavior.
Schedules of reinforcement
Skinner recognized that behavior is typically reinforced more than once, and, together with Charles Ferster, he did an extensive analysis of the various ways in which reinforcements could be arranged over time, calling it the schedules of reinforcement.
The most notable schedules of reinforcement studied by Skinner were continuous, interval (fixed or variable), and ratio (fixed or variable). All are methods used in operant conditioning.
Continuous reinforcement (CRF): each time a specific action is performed the subject receives a reinforcement. This method is effective when teaching a new behavior because it quickly establishes an association between the target behavior and the reinforcer.
Interval schedule: based on the time intervals between reinforcements.
Fixed interval schedule (FI): A procedure in which reinforcements are presented at fixed time periods, provided that the appropriate response is made. This schedule yields a response rate that is low just after reinforcement and becomes rapid just before the next reinforcement is scheduled.
Variable interval schedule (VI): A procedure in which behavior is reinforced after scheduled but unpredictable time durations following the previous reinforcement. This schedule yields the most stable rate of responding, with the average frequency of reinforcement determining the frequency of response.
Ratio schedules: based on the ratio of responses to reinforcements.
Fixed ratio schedule (FR): A procedure in which reinforcement is delivered after a specific number of responses have been made.
Variable ratio schedule (VR): A procedure in which reinforcement comes after a number of responses that is randomized from one reinforcement to the next (e.g. slot machines). The lower the number of responses required, the higher the response rate tends to be. Variable ratio schedules tend to produce very rapid and steady responding rates in contrast with fixed ratio schedules where the frequency of response usually drops after the reinforcement occurs.
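To make the mechanical difference between ratio schedules concrete, here is a small, purely illustrative simulation sketch; the function names and parameters are hypothetical and are not drawn from Skinner's apparatus.

```python
import random

def fixed_ratio(n_required):
    """Reinforce every n_required-th response (FR schedule)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= n_required:
            count = 0
            return True        # reinforcer delivered
        return False
    return respond

def variable_ratio(mean_required):
    """Reinforce after a randomized number of responses (VR schedule, like a slot machine)."""
    target = random.randint(1, 2 * mean_required - 1)
    count = 0
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = random.randint(1, 2 * mean_required - 1)
            return True
        return False
    return respond

fr5, vr5 = fixed_ratio(5), variable_ratio(5)
print(sum(fr5() for _ in range(100)))   # exactly 20 reinforcers in 100 responses
print(sum(vr5() for _ in range(100)))   # roughly 20, but unpredictably spaced
```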
Token economy
"Skinnerian" principles have been used to create token economies in a number of institutions, such as psychiatric hospitals. When participants behave in desirable ways, their behavior is reinforced with tokens that can be changed for such items as candy, cigarettes, coffee, or the exclusive use of a radio or television set.
Verbal Behavior
Challenged by Alfred North Whitehead during a casual discussion while at Harvard to provide an account of a randomly provided piece of verbal behavior, Skinner set about attempting to extend his then-new functional, inductive approach to the complexity of human verbal behavior. Developed over two decades, his work appeared in the book Verbal Behavior. Although Noam Chomsky was highly critical of Verbal Behavior, he conceded that Skinner's "S-R psychology" was worth a review. Behavior analysts reject Chomsky's appraisal of Skinner's work as merely "stimulus-response psychology," and some have argued that this mischaracterization highlights a poor understanding of Skinner's work and the field of behavior analysis as a whole.
Verbal Behavior had an uncharacteristically cool reception, partly as a result of Chomsky's review, partly because of Skinner's failure to address or rebut any of Chomsky's criticisms. Skinner's peers may have been slow to adopt the ideas presented in Verbal Behavior because of the absence of experimental evidence—unlike the empirical density that marked Skinner's experimental work.
Scientific inventions
Operant conditioning chamber
An operant conditioning chamber (also known as a "Skinner box") is a laboratory apparatus used in the experimental analysis of animal behavior. It was invented by Skinner while he was a graduate student at Harvard University. As used by Skinner, the box had a lever (for rats), or a disk in one wall (for pigeons). A press on this "manipulandum" could deliver food to the animal through an opening in the wall, and responses reinforced in this way increased in frequency. By controlling this reinforcement together with discriminative stimuli such as lights and tones, or punishments such as electric shocks, experimenters have used the operant box to study a wide variety of topics, including schedules of reinforcement, discriminative control, delayed response ("memory"), punishment, and so on. By channeling research in these directions, the operant conditioning chamber has had a huge influence on the course of research in animal learning and its applications. It enabled great progress on problems that could be studied by measuring the rate, probability, or force of a simple, repeatable response. However, it discouraged the study of behavioral processes not easily conceptualized in such terms—spatial learning, in particular, which is now studied in quite different ways, for example, by the use of the water maze.
Cumulative recorder
The cumulative recorder makes a pen-and-ink record of simple repeated responses. Skinner designed it for use with the operant chamber as a convenient way to record and view the rate of responses such as a lever press or a key peck. In this device, a sheet of paper gradually unrolls over a cylinder. Each response steps a small pen across the paper, starting at one edge; when the pen reaches the other edge, it quickly resets to the initial side. The slope of the resulting ink line graphically displays the rate of the response; for example, rapid responses yield a steeply sloping line on the paper, slow responding yields a line of low slope. The cumulative recorder was a key tool used by Skinner in his analysis of behavior, and it was very widely adopted by other experimenters, gradually falling out of use with the advent of the laboratory computer and use of line graphs. Skinner's major experimental exploration of response rates, presented in his book with Charles Ferster, Schedules of Reinforcement, is full of cumulative records produced by this device.
Air crib
The air crib is an easily cleaned, temperature- and humidity-controlled box-bed intended to replace the standard infant crib. After raising one baby, Skinner felt that he could simplify the process for parents and improve the experience for children. He primarily thought of the idea to help his wife cope with the day-to-day tasks of child rearing. Skinner had some specific concerns about raising a baby in the rough climate of Minnesota, where he lived; keeping the child warm was a central priority (Faye, 2010). Though this was the main goal, the crib was also designed to reduce laundry, diaper rash, and cradle cap, while still allowing the baby to be more mobile and comfortable. Reportedly it had some success in these goals: it was advertised commercially, and an estimated 300 children were raised in air cribs. Psychology Today tracked down 50 of these children and ran a short piece on the effects of the air crib; the reports came back positive, and the children and parents enjoyed using it (Epstein, 2005). One of these air cribs resides in the gallery at the Center for the History of Psychology in Akron, Ohio (Faye, 2010).
The air crib was designed with three solid walls and a safety-glass panel at the front which could be lowered to move the baby in and out of the crib. The floor was stretched canvas. Sheets were intended to be used over the canvas and were easily rolled off when soiled. Addressing Skinner's concern for temperature, a control box on top of the crib regulated temperature and humidity. Filtered air flowed through the crib from below. This crib was higher than most standard cribs, allowing easier access to the child without the need to bend over (Faye, 2010).
The air crib was a controversial invention. It was popularly characterized as a cruel pen, and it was often compared to Skinner's operant conditioning chamber (or "Skinner box"). Skinner's article in Ladies Home Journal, titled "Baby in a Box", caught the eye of many and contributed to skepticism about the device (Bjork, 1997). A picture published with the article showed the Skinners' daughter, Deborah, peering out of the crib with her hands and face pressed upon the glass. Skinner also used the term "experiment" when describing the crib, and this association with laboratory animal experimentation discouraged the crib's commercial success, although several companies attempted to produce and sell it.
In 2004, therapist Lauren Slater repeated a claim that Skinner may have used his baby daughter in some of his experiments. His outraged daughter publicly accused Slater of not making a good-faith effort to check her facts before publishing. Deborah was quoted by the Guardian as saying: "According to Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century, my father, who was a psychologist based at Harvard from the 1950s to the 90s, 'used his infant daughter, Deborah, to prove his theories by putting her for a few hours a day in a laboratory box . . . in which all her needs were controlled and shaped'. But it's not true. My father did nothing of the sort."
Teaching machine
The teaching machine was a mechanical device whose purpose was to administer a curriculum of programmed learning. The machine embodies key elements of Skinner's theory of learning and had important implications for education in general and classroom instruction in particular.
In one incarnation, the machine was a box that housed a list of questions that could be viewed one at a time through a small window. (see picture.) There was also a mechanism through which the learner could respond to each question. Upon delivering a correct answer, the learner would be rewarded.
Skinner advocated the use of teaching machines for a broad range of students (e.g., preschool aged to adult) and instructional purposes (e.g., reading and music). For example, one machine that he envisioned could teach rhythm. He wrote:
The instructional potential of the teaching machine stemmed from several factors: it provided automatic, immediate and regular reinforcement without the use of aversive control; the material presented was coherent, yet varied and novel; the pace of learning could be adjusted to suit the individual. As a result, students were interested, attentive, and learned efficiently by producing the desired behavior, "learning by doing."
Teaching machines, though perhaps rudimentary, were not rigid instruments of instruction. They could be adjusted and improved based upon the students' performance. For example, if a student made many incorrect responses, the machine could be reprogrammed to provide less advanced prompts or questions—the idea being that students acquire behaviors most efficiently if they make few errors. Multiple-choice formats were not well-suited for teaching machines because they tended to increase student mistakes, and the contingencies of reinforcement were relatively uncontrolled.
Not only useful in teaching explicit skills, machines could also promote the development of a repertoire of behaviors that Skinner called self-management. Effective self-management means attending to stimuli appropriate to a task, avoiding distractions, reducing the opportunity of reward for competing behaviors, and so on. For example, machines encourage students to pay attention before receiving a reward. Skinner contrasted this with the common classroom practice of initially capturing students' attention (e.g., with a lively video) and delivering a reward (e.g., entertainment) before the students have actually performed any relevant behavior. This practice fails to reinforce correct behavior and actually counters the development of self-management.
Skinner pioneered the use of teaching machines in the classroom, especially at the primary level. Today computers run software that performs similar teaching tasks, and there has been a resurgence of interest in the topic related to the development of adaptive learning systems.
Pigeon-guided missile
During World War II, the US Navy required a weapon effective against surface ships, such as the German Bismarck-class battleships. Although missile and TV technology existed, the size of the primitive guidance systems available rendered automatic guidance impractical. To solve this problem, Skinner initiated Project Pigeon, which was intended to provide a simple and effective guidance system. Skinner trained pigeons through operant conditioning to peck at incoming targets shown on a camera obscura screen (Schultz-Figueroa, 2019). The system divided the nose cone of a missile into three compartments, with a pigeon placed in each. Three lenses projected an image of distant objects onto a screen in front of each bird, so that when the missile was launched from an aircraft within sight of an enemy ship, an image of the ship would appear on the screens. The screens were hinged and connected to the bomb's guidance system through four small rubber pneumatic tubes attached to each side of the frame, which directed a constant airflow to a pneumatic pickup system controlling the thrusters of the bomb. In this way the pigeons' pecking alone guided the missile toward the targeted ship (Schultz-Figueroa, 2019).
Despite an effective demonstration, the project was abandoned, and eventually more conventional solutions, such as those based on radar, became available. Skinner complained that "our problem was no one would take us seriously." Before the project was completely abandoned, it was tested extensively in the laboratory. After the United States Army ultimately rejected it, the United States Naval Research Laboratory picked up Skinner's research and renamed it Project ORCON, a contraction of "organic" and "control". Skinner worked closely with the US Naval Research Laboratory, continuously testing the pigeons' tracking capacity for guiding missiles to their intended targets. In the end, the pigeons' performance and accuracy relied on so many uncontrollable factors that Project ORCON, like Project Pigeon before it, was discontinued. It was never used in the field.
Verbal summator
Early in his career Skinner became interested in "latent speech" and experimented with a device he called the verbal summator. This device can be thought of as an auditory version of the Rorschach inkblots. When using the device, human participants listened to incomprehensible auditory "garbage" but often read meaning into what they heard. Thus, as with the Rorschach blots, the device was intended to yield overt behavior that projected subconscious thoughts. Skinner's interest in projective testing was brief, but he later used observations with the summator in creating his theory of verbal behavior. The device also led other researchers to invent new tests such as the tautophone test, the auditory apperception test, and the Azzageddi test.
Influence on teaching
Along with psychology, education has also been influenced by Skinner's views, which are extensively presented in his book The Technology of Teaching, as well as reflected in Fred S. Keller's Personalized System of Instruction and Ogden R. Lindsley's Precision Teaching.
Skinner argued that education has two major purposes:
to teach repertoires of both verbal and nonverbal behavior; and
to interest students in learning.
He recommended bringing students' behavior under appropriate control by providing reinforcement only in the presence of stimuli relevant to the learning task. Because he believed that human behavior can be affected by small consequences, something as simple as "the opportunity to move forward after completing one stage of an activity" can be an effective reinforcer. Skinner was convinced that, to learn, a student must engage in behavior, and not just passively receive information.
Skinner believed that effective teaching must be based on positive reinforcement, which is, he argued, more effective at changing and establishing behavior than punishment. He suggested that the main thing people learn from being punished is how to avoid punishment. For example, if a child is forced to practice playing an instrument, the child comes to associate practicing with punishment, develops a sense of dread, and wishes to avoid practicing the instrument. This view had obvious implications for the then-widespread practice of rote learning and punitive discipline in education. The use of educational activities as punishment may induce rebellious behavior such as vandalism or absence.
Because teachers are primarily responsible for modifying student behavior, Skinner argued that teachers must learn effective ways of teaching. In The Technology of Teaching (1968), Skinner has a chapter on why teachers fail: He says that teachers have not been given an in-depth understanding of teaching and learning. Without knowing the science underpinning teaching, teachers fall back on procedures that work poorly or not at all, such as:
using aversive techniques (which produce escape and avoidance and undesirable emotional effects);
relying on telling and explaining ("Unfortunately, a student does not learn simply when he is shown or told.");
failing to adapt learning tasks to the student's current level; and
failing to provide positive reinforcement frequently enough.
Skinner suggests that any age-appropriate skill can be taught. The steps are
Clearly specify the action or performance the student is to learn.
Break down the task into small achievable steps, going from simple to complex.
Let the student perform each step, reinforcing correct actions.
Adjust so that the student is always successful until finally the goal is reached.
Shift to intermittent reinforcement to maintain the student's performance.
Contributions to social theory
Skinner is popularly known mainly for his books Walden Two (1948) and Beyond Freedom and Dignity (for which he made the cover of Time magazine). The former describes a fictional "experimental community" in 1940s United States. The productivity and happiness of citizens in this community is far greater than in the outside world because the residents practice scientific social planning and use operant conditioning in raising their children.
Walden Two, like Thoreau's Walden, champions a lifestyle that does not support war, or foster competition and social strife. It encourages a lifestyle of minimal consumption, rich social relationships, personal happiness, satisfying work, and leisure. In 1967, Kat Kinkade and others founded the Twin Oaks Community, using Walden Two as a blueprint. The community still exists and continues to use the Planner-Manager system and other aspects of the community described in Skinner's book, though behavior modification is not a community practice.
In Beyond Freedom and Dignity, Skinner suggests that a technology of behavior could help to make a better society. We would, however, have to accept that an autonomous agent is not the driving force of our actions. Skinner offers alternatives to punishment, and challenges his readers to use science and modern technology to construct a better society.
Political views
Skinner's political writings emphasized his hopes that an effective and human science of behavioral control – a technology of human behavior – could help with problems as yet unsolved and often aggravated by advances in technology such as the atomic bomb. Indeed, one of Skinner's goals was to prevent humanity from destroying itself. He saw political activity as the use of aversive or non-aversive means to control a population. Skinner favored the use of positive reinforcement as a means of control, citing Jean-Jacques Rousseau's novel Emile: or, On Education as an example of literature that "did not fear the power of positive reinforcement."
Skinner's book, Walden Two, presents a vision of a decentralized, localized society, which applies a practical, scientific approach and behavioral expertise to deal peacefully with social problems. (For example, his views led him to oppose corporal punishment in schools, and he wrote a letter to the California Senate that helped lead it to a ban on spanking.) Skinner's utopia is both a thought experiment and a rhetorical piece. In Walden Two, Skinner answers the problem that exists in many utopian novels – "What is the Good Life?" The book's answer is a life of friendship, health, art, a healthy balance between work and leisure, a minimum of unpleasantness, and a feeling that one has made worthwhile contributions to a society in which resources are ensured, in part, by minimizing consumption.
Skinner described his novel as "my New Atlantis", in reference to Bacon's utopia.
"'Superstition' in the Pigeon" experiment
One of Skinner's experiments examined the formation of superstition in one of his favorite experimental animals, the pigeon. Skinner placed a series of hungry pigeons in a cage attached to an automatic mechanism that delivered food to the pigeon "at regular intervals with no reference whatsoever to the bird's behavior." He discovered that the pigeons associated the delivery of the food with whatever chance actions they had been performing as it was delivered, and that they subsequently continued to perform these same actions. Skinner suggested that the pigeons behaved as if they were influencing the automatic mechanism with their "rituals", and that this experiment shed light on human behavior.
Modern behavioral psychologists have disputed Skinner's "superstition" explanation for the behaviors he recorded. Subsequent research (e.g. Staddon and Simmelhag, 1971), while finding similar behavior, failed to find support for Skinner's "adventitious reinforcement" explanation for it. By looking at the timing of different behaviors within the interval, Staddon and Simmelhag were able to distinguish two classes of behavior: the terminal response, which occurred in anticipation of food, and interim responses, which occurred earlier in the interfood interval and were rarely contiguous with food. Terminal responses seem to reflect classical (as opposed to operant) conditioning, rather than adventitious reinforcement, guided by a process like that observed in 1968 by Brown and Jenkins in their "autoshaping" procedures. The causation of interim activities (such as the schedule-induced polydipsia seen in a similar situation with rats) also cannot be traced to adventitious reinforcement and its details are still obscure (Staddon, 1977).
Criticism
Noam Chomsky
American linguist Noam Chomsky published a review of Skinner's Verbal Behavior in the linguistics journal Language in 1959. Chomsky argued that Skinner's attempt to use behaviorism to explain human language amounted to little more than word games. Conditioned responses could not account for a child's ability to create or understand an infinite variety of novel sentences. Chomsky's review has been credited with launching the cognitive revolution in psychology and other disciplines. Skinner, who rarely responded directly to critics, never formally replied to Chomsky's critique, but endorsed Kenneth MacCorquodale's 1972 reply.
Many academics in the 1960s believed that Skinner's silence on the question meant Chomsky's criticism had been justified. But MacCorquodale wrote that Chomsky's criticism did not focus on Skinner's Verbal Behavior, but rather attacked a confusion of ideas from behavioral psychology. MacCorquodale also regretted Chomsky's aggressive tone. Furthermore, Chomsky had aimed at delivering a definitive refutation of Skinner by citing dozens of animal instinct and animal learning studies. On the one hand, he argued that the studies on animal instinct proved that animal behavior is innate, and therefore Skinner was mistaken. On the other, Chomsky's opinion of the studies on learning was that one cannot draw an analogy from animal studies to human behavior—or, that research on animal instinct refutes research on animal learning.
Chomsky also reviewed Skinner's Beyond Freedom and Dignity, using the same basic motives as his Verbal Behavior review. Among Chomsky's criticisms were that Skinner's laboratory work could not be extended to humans, that when it was extended to humans it represented "scientistic" behavior attempting to emulate science but which was not scientific, that Skinner was not a scientist because he rejected the hypothetico-deductive model of theory testing, and that Skinner had no science of behavior.
Psychodynamic psychology
Skinner has been repeatedly criticized for his supposed animosity towards Sigmund Freud, psychoanalysis, and psychodynamic psychology. Some have argued, however, that Skinner shared several of Freud's assumptions, and that he was influenced by Freudian points of view in more than one field, among them the analysis of defense mechanisms, such as repression. To study such phenomena, Skinner even designed his own projective test, the "verbal summator" described above.
J. E. R. Staddon
As understood by Skinner, ascribing dignity to individuals involves giving them credit for their actions. To say "Skinner is brilliant" means that Skinner is an originating force. If Skinner's determinist theory is right, he is merely the focus of his environment. He is not an originating force and he had no choice in saying the things he said or doing the things he did. Skinner's environment and genetics both allowed and compelled him to write his book. Similarly, the environment and genetic potentials of the advocates of freedom and dignity cause them to resist the reality that their own activities are deterministically grounded. J. E. R. Staddon has argued the compatibilist position: contrary to what Skinner believed, his determinism is not in any way contradictory to traditional notions of reward and punishment.
Professional career
Roles
1936–1937 Instructor, University of Minnesota
1937–1939 Assistant Professor, University of Minnesota
1939–1945 Associate Professor, University of Minnesota
1945–1948 Professor and chair, Indiana University
1947–1948 William James Lecturer, Harvard University
1948–1958 Professor, Harvard University
1958–1974 Professor of Psychology, Harvard University
1949–1950 President, Midwestern Psychological Association
1954–1955 President, Eastern Psychological Association
1966–1967 President, Pavlovian Society of North America
1974–1990 Professor of Psychology and Social Relations Emeritus, Harvard University
Awards
1926 AB, Hamilton College
1930 MA, Harvard University
1930–1931 Thayer Fellowship
1931 PhD, Harvard University
1931–1932 Walker Fellowship
1931–1933 National Research Council Fellowship
1933–1936 Junior Fellowship, Harvard Society of Fellows
1942 Guggenheim Fellowship (postponed until 1944–1945)
1942 Howard Crosby Warren Medal, Society of Experimental Psychologists
1958 Distinguished Scientific Contribution Award, American Psychological Association
1958–1974 Edgar Pierce Professor of Psychology, Harvard University
1964–1974 Career Award, National Institute of Mental Health
1966 Edward Lee Thorndike Award, American Psychological Association
1968 National Medal of Science, National Science Foundation
1969 Overseas Fellow in Churchill College, Cambridge
1971 Gold Medal Award, American Psychological Foundation
1971 Joseph P. Kennedy Jr. Foundation for Mental Retardation International Award
1972 Humanist of the Year, American Humanist Association
1972 Creative Leadership in Education Award, New York University
1972 Career Contribution Award, Massachusetts Psychological Association
1978 Distinguished Contributions to Educational Research and Development Award, American Educational Research Association
1978 National Association for Retarded Citizens Award
1985 Award for Excellence in Psychiatry, Albert Einstein School of Medicine
1985 President's Award, New York Academy of Science
1990 William James Fellow Award, American Psychological Society
1990 Lifetime Achievement Award, American Psychological Association
1991 Outstanding Member and Distinguished Professional Achievement Award, Society for Performance Improvement
1997 Scholar Hall of Fame Award, Academy of Resource and Development
2011 Committee for Skeptical Inquiry Pantheon of Skeptics—Inducted
2024 Ig Nobel Peace Prize for his work on the pigeon-guided bomb project.
Honorary degrees
Skinner received honorary degrees from:
Alfred University
Ball State University
Dickinson College
Hamilton College
Harvard University
Hobart and William Smith Colleges
Johns Hopkins University
Keio University
Long Island University C. W. Post Campus
McGill University
North Carolina State University
Ohio Wesleyan University
Ripon College
Rockford College
Tufts University
University of Chicago
University of Exeter
University of Missouri
University of North Texas
Western Michigan University
University of Maryland, Baltimore County.
Honorary societies
Skinner was inducted to the following honorary societies:
PSI CHI International Honor Society in Psychology
American Philosophical Society
American Academy of Arts and Sciences
United States National Academy of Sciences
Bibliography
1938. The Behavior of Organisms: An Experimental Analysis.
1948. Walden Two (revised 1976 ed.).
1953. Science and Human Behavior.
1957. Schedules of Reinforcement, with C. B. Ferster.
1957. Verbal Behavior.
1961. The Analysis of Behavior: A Program for Self Instruction, with James G. Holland.
1968. The Technology of Teaching. New York: Appleton-Century-Crofts.
1969. Contingencies of Reinforcement: A Theoretical Analysis.
1971. Beyond Freedom and Dignity.
1974. About Behaviorism.
1976. Particulars of My Life: Part One of an Autobiography.
1978. Reflections on Behaviorism and Society.
1979. The Shaping of a Behaviorist: Part Two of an Autobiography.
1980. Notebooks, edited by Robert Epstein.
1982. Skinner for the Classroom, edited by R. Epstein.
1983. Enjoy Old Age: A Program of Self-Management, with M. E. Vaughan.
1983. A Matter of Consequences: Part Three of an Autobiography.
1987. Upon Further Reflection.
1989. Recent Issues in the Analysis of Behavior.
Cumulative Record: A Selection of Papers, 1959, 1961, 1972, and 1999 as Cumulative Record: Definitive Edition (paperback).
Includes reprint: Skinner, B. F. 1945. "Baby in a Box." Ladies' Home Journal. — Skinner's original, personal account of the much-misrepresented "Baby in a box" device.
See also
Applied behavior analysis
Back to Freedom and Dignity
References
Notes
Citations
Further reading
Chiesa, M. (2004). Radical Behaviorism: The Philosophy and the Science.
Epstein, Robert (1997). "Skinner as self-manager." Journal of Applied Behavior Analysis 30:545–69. Retrieved 2 June 2005 – via ENVMED.rochester.edu
Sundberg, M. L. (2008) The VB-MAPP: The Verbal Behavior Milestones Assessment and Placement Program
Basil-Curzon, L. (2004) Teaching in Further Education: An Outline of Principles and Practice
Hardin, C.J. (2004) Effective Classroom Management
Kaufhold, J. A. (2002) The Psychology of Learning and the Art of Teaching
Bjork, D. W. (1993) B. F. Skinner: A Life
Dews, P. B., ed. (1970) Festschrift for B. F. Skinner. New York: Appleton-Century-Crofts.
Evans, R. I. (1968) B. F. Skinner: the man and his ideas
Nye, Robert D. (1979) What Is B. F. Skinner Really Saying? Englewood Cliffs, NJ: Prentice-Hall.
Rutherford, A. (2009) Beyond the box: B. F. Skinner's technology of behavior from laboratory to life, 1950s–1970s. Toronto: University of Toronto Press.
Sagal, P. T. (1981) Skinner's Philosophy. Washington, DC: University Press of America.
Smith, D. L. (2002). On Prediction and Control. B. F. Skinner and the Technological Ideal of Science. In W. E. Pickren & D. A. Dewsbury, (Eds.), Evolving Perspectives on the History of Psychology, Washington, D.C.: American Psychological Association.
Swirski, Peter (2011) "How I Stopped Worrying and Loved Engineering or Communal Life, Adaptations, and B.F. Skinner's Walden Two". American Utopia and Social Engineering in Literature, Social Thought, and Political History. New York, Routledge.
Wiener, D. N. (1996) B. F. Skinner: benign anarchist
Wolfgang, C. H. and Glickman, Carl D. (1986) Solving Discipline Problems. Allyn and Bacon, Inc.
External links
B. F. Skinner Foundation homepage
National Academy of Sciences biography
I was not a lab rat, response by Skinner's daughter about the "baby box"
Audio Recordings Society for Experimental Analysis of Behavior
Reprint of "the Minotaur of the Behaviorist Maze: Surviving Stanford's Learning House in the 1970s: Journal of Humanistic Psychology, Vol. 51, Number 3, July 2011. 266–272.
1904 births
1990 deaths
20th-century American inventors
20th-century atheists
20th-century American non-fiction writers
20th-century American philosophers
Action theorists
American atheists
20th-century American psychologists
American skeptics
Behaviourist psychologists
Burials at Mount Auburn Cemetery
Deaths from leukemia in Massachusetts
Determinists
Ethologists
Hamilton College (New York) alumni
Harvard Graduate School of Arts and Sciences alumni
Harvard University Department of Psychology faculty
Ig Nobel laureates
Members of the United States National Academy of Sciences
National Medal of Science laureates
People from Susquehanna County, Pennsylvania
Philosophers from Massachusetts
Philosophers from Pennsylvania
Philosophers from Minnesota
American philosophers of culture
American philosophers of education
American philosophers of language
American philosophers of mind
Philosophers of psychology
American philosophers of science
American philosophers of technology
American political philosophers
University of Minnesota faculty
Writers from Cambridge, Massachusetts
20th-century American zoologists
American educational psychologists
Members of the American Philosophical Society
APA Distinguished Scientific Award for an Early Career Contribution to Psychology recipients | B. F. Skinner | [
"Biology"
] | 8,968 | [
"Behaviourist psychologists",
"Behavior",
"Behaviorism"
] |
4,882 | https://en.wikipedia.org/wiki/Background%20radiation | Background radiation is a measure of the level of ionizing radiation present in the environment at a particular location which is not due to the deliberate introduction of radiation sources.
Background radiation originates from a variety of sources, both natural and artificial. These include both cosmic radiation and environmental radioactivity from naturally occurring radioactive materials (such as radon and radium), as well as man-made medical X-rays, fallout from nuclear weapons testing and nuclear accidents.
Definition
Background radiation is defined by the International Atomic Energy Agency as "Dose or the dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified." A distinction is thus made between the dose which is already in a location, which is defined here as being "background", and the dose due to a deliberately introduced and specified source. This is important where radiation measurements are taken of a specified radiation source, where the existing background may affect this measurement. An example would be measurement of radioactive contamination in a gamma radiation background, which could increase the total reading above that expected from the contamination alone.
However, if no radiation source is specified as being of concern, then the total radiation dose measurement at a location is generally called the background radiation, and this is usually the case where an ambient dose rate is measured for environmental purposes.
Background dose rate examples
Background radiation varies with location and time, and the following table gives examples:
Natural background radiation
Radioactive material is found throughout nature. Detectable amounts occur naturally in soil, rocks, water, air, and vegetation, from which it is inhaled and ingested into the body. In addition to this internal exposure, humans also receive external exposure from radioactive materials that remain outside the body and from cosmic radiation from space. The worldwide average natural dose to humans is about 2.4 mSv per year. This is four times the worldwide average artificial radiation exposure, which in 2008 amounted to about 0.6 mSv per year. In some developed countries, like the US and Japan, artificial exposure is, on average, greater than the natural exposure, due to greater access to medical imaging. In Europe, average natural background exposure by country ranges from under 2 mSv annually in the United Kingdom to more than 7 mSv annually for some groups of people in Finland.
The International Atomic Energy Agency states:
"Exposure to radiation from natural sources is an inescapable feature of everyday life in both working and public environments. This exposure is in most cases of little or no concern to society, but in certain situations the introduction of health protection measures needs to be considered, for example when working with uranium and thorium ores and other Naturally Occurring Radioactive Material (NORM). These situations have become the focus of greater attention by the Agency in recent years."
Terrestrial sources
Terrestrial background radiation, for the purpose of the table above, only includes sources that remain external to the body. The major radionuclides of concern are potassium, uranium and thorium and their decay products, some of which, like radium and radon are intensely radioactive but occur in low concentrations. Most of these sources have been decreasing, due to radioactive decay since the formation of the Earth, because there is no significant amount currently transported to the Earth. Thus, the present activity on Earth from uranium-238 is only half as much as it originally was because of its 4.5 billion year half-life, and potassium-40 (half-life 1.25 billion years) is only at about 8% of original activity. But during the time that humans have existed the amount of radiation has decreased very little.
Many shorter half-life (and thus more intensely radioactive) isotopes have not decayed out of the terrestrial environment because of their on-going natural production. Examples of these are radium-226 (decay product of thorium-230 in decay chain of uranium-238) and radon-222 (a decay product of radium-226 in said chain).
Thorium and uranium (and their daughters) primarily undergo alpha and beta decay, and are not easily detectable. However, many of their daughter products are strong gamma emitters. Thorium-232 is detectable via a 239 keV peak from lead-212, 511, 583 and 2614 keV from thallium-208, and 911 and 969 keV from actinium-228. Uranium-238 manifests as 609, 1120, and 1764 keV peaks of bismuth-214 (cf. the same peak for atmospheric radon). Potassium-40 is detectable directly via its 1461 keV gamma peak.
The level over the sea and other large bodies of water tends to be about a tenth of the terrestrial background. Conversely, coastal areas (and areas by the side of fresh water) may have an additional contribution from dispersed sediment.
Airborne sources
The biggest source of natural background radiation is airborne radon, a radioactive gas that emanates from the ground. Radon and its isotopes, parent radionuclides, and decay products all contribute to an average inhaled dose of 1.26 mSv/a (millisievert per year). Radon is unevenly distributed and varies with weather, such that much higher doses apply to many areas of the world, where it represents a significant health hazard. Concentrations over 500 times the world average have been found inside buildings in Scandinavia, the United States, Iran, and the Czech Republic. Radon is a decay product of uranium, which is relatively common in the Earth's crust, but more concentrated in ore-bearing rocks scattered around the world. Radon seeps out of these ores into the atmosphere or into ground water or infiltrates into buildings. It can be inhaled into the lungs, along with its decay products, where they will reside for a period of time after exposure.
Although radon is naturally occurring, exposure can be enhanced or diminished by human activity, notably house construction. A poorly sealed dwelling floor, or poor basement ventilation, in an otherwise well insulated house can result in the accumulation of radon within the dwelling, exposing its residents to high concentrations. The widespread construction of well insulated and sealed homes in the northern industrialized world has led to radon becoming the primary source of background radiation in some localities in northern North America and Europe. Basement sealing and suction ventilation reduce exposure. Some building materials, for example lightweight concrete with alum shale, phosphogypsum and Italian tuff, may emanate radon if they contain radium and are porous to gas.
Radiation exposure from radon is indirect. Radon has a short half-life (4 days) and decays into other solid particulate radium-series radioactive nuclides. These radioactive particles are inhaled and remain lodged in the lungs, causing continued exposure. Radon is thus assumed to be the second leading cause of lung cancer after smoking, and accounts for 15,000 to 22,000 cancer deaths per year in the US alone. However, debate over conflicting experimental results on this point is ongoing.
About 100,000 Bq/m3 of radon was found in Stanley Watras's basement in 1984. He and his neighbours in Boyertown, Pennsylvania, United States may hold the record for the most radioactive dwellings in the world. International radiation protection organizations estimate that a committed dose may be calculated by multiplying the equilibrium equivalent concentration (EEC) of radon by a factor of 8 to 9 and the EEC of thoron by a factor of 40.
Most of the atmospheric background is caused by radon and its decay products. The gamma spectrum shows prominent peaks at 609, 1120, and 1764 keV, belonging to bismuth-214, a radon decay product. The atmospheric background varies greatly with wind direction and meteorological conditions. Radon also can be released from the ground in bursts and then form "radon clouds" capable of traveling tens of kilometers.
Cosmic radiation
The Earth and all living things on it are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions from protons to iron and larger nuclei derived from outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary radiation, including X-rays, muons, protons, alpha particles, pions, electrons, and neutrons. The immediate dose from cosmic radiation is largely from muons, neutrons, and electrons, and this dose varies in different parts of the world based largely on the geomagnetic field and altitude. For example, the city of Denver in the United States (at 1650 meters elevation) receives a cosmic ray dose roughly twice that of a location at sea level. This radiation is much more intense in the upper troposphere, around 10 km altitude, and is thus of particular concern for airline crews and frequent passengers, who spend many hours per year in this environment. During their flights airline crews typically get an additional occupational dose of up to 2.19 mSv per year, according to various studies.
Similarly, cosmic rays cause higher background exposure in astronauts than in humans on the surface of Earth. Astronauts in low orbits, such as in the International Space Station or the Space Shuttle, are partially shielded by the magnetic field of the Earth, but also suffer from the Van Allen radiation belt which accumulates cosmic rays and results from the Earth's magnetic field. Outside low Earth orbit, as experienced by the Apollo astronauts who traveled to the Moon, this background radiation is much more intense, and represents a considerable obstacle to potential future long term human exploration of the Moon or Mars.
Cosmic rays also cause elemental transmutation in the atmosphere, in which secondary radiation generated by the cosmic rays combines with atomic nuclei in the atmosphere to generate different nuclides. Many so-called cosmogenic nuclides can be produced, but probably the most notable is carbon-14, which is produced by interactions with nitrogen atoms. These cosmogenic nuclides eventually reach the Earth's surface and can be incorporated into living organisms. The production of these nuclides varies slightly with short-term variations in solar cosmic ray flux, but is considered practically constant over long scales of thousands to millions of years. The constant production, incorporation into organisms and relatively short half-life of carbon-14 are the principles used in radiocarbon dating of ancient biological materials, such as wooden artifacts or human remains.
The cosmic radiation at sea level usually manifests as 511 keV gamma rays from annihilation of positrons created by nuclear reactions of high energy particles and gamma rays. At higher altitudes there is also the contribution of continuous bremsstrahlung spectrum.
Food and water
Two of the essential elements that make up the human body, namely potassium and carbon, have radioactive isotopes that add significantly to our background radiation dose. An average human contains about 17 milligrams of potassium-40 (40K) and about 24 nanograms (10⁻⁹ g) of carbon-14 (14C), (half-life 5,730 years). Excluding internal contamination by external radioactive material, these two are the largest components of internal radiation exposure from biologically functional components of the human body. About 4,000 nuclei of 40K decay per second, and a similar number of 14C. The energy of beta particles produced by 40K is about 10 times that from the beta particles from 14C decay.
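The "about 4,000 decays per second" figure for potassium-40 can be checked from the 17 mg body burden and the 1.25-billion-year half-life quoted elsewhere in this article, using the standard relation A = λN with λ = ln 2 / T½. The following sketch is only a back-of-the-envelope check of those quoted values, not a calculation taken from any independent source:

```python
import math

AVOGADRO = 6.022e23                              # atoms per mole
MOLAR_MASS_K40 = 40.0                            # g/mol, approximate
HALF_LIFE_K40_S = 1.25e9 * 365.25 * 24 * 3600    # 1.25 billion years, in seconds

mass_g = 17e-3                                   # ~17 mg of 40K in an average human body
n_atoms = mass_g / MOLAR_MASS_K40 * AVOGADRO
decay_constant = math.log(2) / HALF_LIFE_K40_S   # lambda = ln(2) / T_half
activity_bq = decay_constant * n_atoms           # A = lambda * N

print(f"{activity_bq:.0f} decays per second")    # ~4,500 Bq, consistent with "about 4,000"
```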
14C is present in the human body at a level of about 3700 Bq (0.1 μCi) with a biological half-life of 40 days. This means there are about 3700 beta particles per second produced by the decay of 14C. However, a 14C atom is in the genetic information of about half the cells, while potassium is not a component of DNA. The decay of a 14C atom inside DNA in one person happens about 50 times per second, changing a carbon atom to one of nitrogen.
The global average internal dose from radionuclides other than radon and its decay products is 0.29 mSv/a, of which 0.17 mSv/a comes from 40K, 0.12 mSv/a comes from the uranium and thorium series, and 12 μSv/a comes from 14C.
Areas with high natural background radiation
Some areas have greater dosage than the country-wide averages. In the world in general, exceptionally high natural background locales include Ramsar in Iran, Guarapari in Brazil, Karunagappalli in India, Arkaroola in Australia and Yangjiang in China.
The highest level of purely natural radiation ever recorded on the Earth's surface was 90 μGy/h on a Brazilian black beach (areia preta in Portuguese) composed of monazite. This rate would convert to 0.8 Gy/a for year-round continuous exposure, but in fact the levels vary seasonally and are much lower in the nearest residences. The record measurement has not been duplicated and is omitted from UNSCEAR's latest reports. Nearby tourist beaches in Guarapari and Cumuruxatiba were later evaluated at 14 and 15 μGy/h. Note that the values quoted here are in Grays. To convert to Sieverts (Sv) a radiation weighting factor is required; these weighting factors vary from 1 (beta & gamma) to 20 (alpha particles).
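The conversion from the peak dose rate to the annual figure quoted above is simple arithmetic: multiply the hourly rate by the number of hours in a year, then apply a radiation weighting factor to express the result in sieverts. The sketch below assumes the beach dose is dominated by beta and gamma radiation (weighting factor 1); that assumption is illustrative, not taken from the sources:

```python
HOURS_PER_YEAR = 365.25 * 24            # ~8,766 hours

dose_rate_gy_per_h = 90e-6              # 90 uGy/h, the peak value quoted above
annual_absorbed_dose_gy = dose_rate_gy_per_h * HOURS_PER_YEAR
print(f"{annual_absorbed_dose_gy:.2f} Gy/a")    # ~0.79 Gy/a, i.e. the ~0.8 Gy/a quoted above

# Equivalent dose in sieverts requires a radiation weighting factor w_R;
# assuming a field dominated by beta/gamma radiation, w_R = 1.
w_R = 1
annual_equivalent_dose_sv = annual_absorbed_dose_gy * w_R
print(f"{annual_equivalent_dose_sv:.2f} Sv/a")
```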
The highest background radiation in an inhabited area is found in Ramsar, primarily due to the use of local naturally radioactive limestone as a building material. The 1000 most exposed residents receive an average external effective radiation dose of 6 mSv per year, six times the ICRP recommended limit for exposure to the public from artificial sources. They additionally receive a substantial internal dose from radon. Record radiation levels were found in a house where the combined effective dose from ambient radiation fields and the internal committed dose from radon was over 80 times higher than the world average natural human exposure to radiation.
Epidemiological studies are underway to identify health effects associated with the high radiation levels in Ramsar. It is much too early to draw unambiguous statistically significant conclusions. While so far support for beneficial effects of chronic radiation (like longer lifespan) has been observed in few places only, a protective and adaptive effect is suggested by at least one study whose authors nonetheless caution that data from Ramsar are not yet sufficiently strong to relax existing regulatory dose limits. However, recent statistical analyses indicate that there is no correlation between the risk of negative health effects and elevated levels of natural background radiation.
Photoelectric
Background radiation doses in the immediate vicinity of particles of high atomic number materials, within the human body, have a small enhancement due to the photoelectric effect.
Neutron background
Most of the natural neutron background is a product of cosmic rays interacting with the atmosphere. The neutron energy peaks at around 1 MeV and rapidly drops above. At sea level, the production of neutrons is about 20 neutrons per second per kilogram of material interacting with the cosmic rays (or, about 100–300 neutrons per square meter per second). The flux is dependent on geomagnetic latitude, with a maximum near the magnetic poles. At solar minimum, due to lower solar magnetic field shielding, the flux is about twice as high as at solar maximum. It also dramatically increases during solar flares. In the vicinity of larger heavier objects, e.g. buildings or ships, the neutron flux measures higher; this is known as "cosmic ray induced neutron signature", or "ship effect" as it was first detected with ships at sea.
Artificial background radiation
Atmospheric nuclear testing
Frequent above-ground nuclear explosions between the 1940s and 1960s scattered a substantial amount of radioactive contamination. Some of this contamination is local, rendering the immediate surroundings highly radioactive, while some of it is carried longer distances as nuclear fallout; some of this material is dispersed worldwide. The increase in background radiation due to these tests peaked in 1963 at about 0.15 mSv per year worldwide, or about 7% of average background dose from all sources. The Limited Test Ban Treaty of 1963 prohibited above-ground tests; thus, by the year 2000 the worldwide dose from these tests had decreased to only 0.005 mSv per year.
This global fallout is estimated to have caused up to 2.4 million deaths by 2020.
Occupational exposure
The International Commission on Radiological Protection recommends limiting occupational radiation exposure to 50 mSv (5 rem) per year, and 100 mSv (10 rem) in 5 years.
However, background radiation for occupational doses includes radiation that is not measured by radiation dose instruments in potential occupational exposure conditions. This includes both offsite "natural background radiation" and any medical radiation doses. This value is not typically measured or known from surveys, so variations in the total dose to individual workers are not known. This can be a significant confounding factor in assessing radiation exposure effects in a population of workers who may have significantly different natural background and medical radiation doses. This is most significant when the occupational doses are very low.
At an IAEA conference in 2002, it was recommended that occupational doses below 1–2 mSv per year do not warrant regulatory scrutiny.
Nuclear accidents
Under normal circumstances, nuclear reactors release small amounts of radioactive gases, which cause small radiation exposures to the public. Events classified on the International Nuclear Event Scale as incidents typically do not release any additional radioactive substances into the environment. Large releases of radioactivity from nuclear reactors are extremely rare. To date, there have been two major civilian accidents – the Chernobyl accident and the Fukushima I nuclear accidents – which caused substantial contamination. The Chernobyl accident was the only one to cause immediate deaths.
Total doses from the Chernobyl accident ranged from 10 to 50 mSv over 20 years for the inhabitants of the affected areas, with most of the dose received in the first years after the disaster, and over 100 mSv for liquidators. There were 28 deaths from acute radiation syndrome.
Total doses from the Fukushima I accidents were between 1 and 15 mSv for the inhabitants of the affected areas. Thyroid doses for children were below 50 mSv. 167 cleanup workers received doses above 100 mSv, with 6 of them receiving more than 250 mSv (the Japanese exposure limit for emergency response workers).
The average dose from the Three Mile Island accident was 0.01 mSv.
Non-civilian: In addition to the civilian accidents described above, several accidents at early nuclear weapons facilities – such as the Windscale fire, the contamination of the Techa River by the nuclear waste from the Mayak compound, and the Kyshtym disaster at the same compound – released substantial radioactivity into the environment. The Windscale fire resulted in thyroid doses of 5–20 mSv for adults and 10–60 mSv for children. The doses from the accidents at Mayak are unknown.
Nuclear fuel cycle
The Nuclear Regulatory Commission, the United States Environmental Protection Agency, and other U.S. and international agencies, require that licensees limit radiation exposure to individual members of the public to 1 mSv (100 mrem) per year.
Energy sources
Per UNECE life-cycle assessment, nearly all sources of energy result in some level of occupational and public exposure to radionuclides as result of their manufacturing or operations. The following table uses man·Sievert/GW-annum:
Coal burning
Coal plants emit radiation in the form of radioactive fly ash which is inhaled and ingested by neighbours, and incorporated into crops. A 1978 paper from Oak Ridge National Laboratory estimated that coal-fired power plants of that time may contribute a whole-body committed dose of 19 μSv/a to their immediate neighbours in a radius of 500 m. The United Nations Scientific Committee on the Effects of Atomic Radiation's 1988 report estimated the committed dose 1 km away to be 20 μSv/a for older plants or 1 μSv/a for newer plants with improved fly ash capture, but was unable to confirm these numbers by test. When coal is burned, uranium, thorium and all the uranium daughters accumulated by disintegration – radium, radon, polonium – are released. Radioactive materials previously buried underground in coal deposits are released as fly ash or, if fly ash is captured, may be incorporated into concrete manufactured with fly ash.
Other sources of dose uptake
Medical
The global average human exposure to artificial radiation is 0.6 mSv/a, primarily from medical imaging. This medical component can range much higher, with an average of 3 mSv per year across the USA population. Other human contributors include smoking, air travel, radioactive building materials, historical nuclear weapons testing, nuclear power accidents and nuclear industry operation.
A typical chest x-ray delivers 20 μSv (2 mrem) of effective dose. A dental x-ray delivers a dose of 5 to 10 μSv. A CT scan delivers an effective dose to the whole body ranging from 1 to 20 mSv (100 to 2000 mrem). The average American receives about 3 mSv of diagnostic medical dose per year; countries with the lowest levels of health care receive almost none. Radiation treatment for various diseases also accounts for some dose, both in individuals and in those around them.
Consumer items
Cigarettes contain polonium-210, originating from the decay products of radon, which stick to tobacco leaves. Heavy smoking results in a radiation dose of 160 mSv/year to localized spots at the bifurcations of segmental bronchi in the lungs from the decay of polonium-210. This dose is not readily comparable to the radiation protection limits, since the latter deal with whole body doses, while the dose from smoking is delivered to a very small portion of the body.
Radiation metrology
In a radiation metrology laboratory, background radiation refers to the measured value from any incidental sources that affect an instrument when a specific radiation source sample is being measured. This background contribution, which is established as a stable value by multiple measurements, usually before and after sample measurement, is subtracted from the rate measured when the sample is being measured.
This is in accordance with the International Atomic Energy Agency definition of background as being "Dose or dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified."
The same issue occurs with radiation protection instruments, where a reading from an instrument may be affected by the background radiation. An example of this is a scintillation detector used for surface contamination monitoring. In an elevated gamma background the scintillator material will be affected by the background gamma, which will add to the reading obtained from any contamination which is being monitored. In extreme cases it will make the instrument unusable as the background swamps the lower level of radiation from the contamination. In such instruments the background can be continually monitored in the "Ready" state, and subtracted from any reading obtained when being used in "Measuring" mode.
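A minimal sketch of the subtraction described above, assuming simple Poisson counting statistics for both the background and the sample measurement; the counts and counting times are invented purely for illustration:

```python
import math

def net_count_rate(gross_counts, gross_time_s, bkg_counts, bkg_time_s):
    """Subtract a separately measured background rate from a gross measurement.

    Returns the net count rate and its 1-sigma uncertainty, assuming
    Poisson statistics for both counting periods.
    """
    gross_rate = gross_counts / gross_time_s
    bkg_rate = bkg_counts / bkg_time_s
    net_rate = gross_rate - bkg_rate
    sigma = math.sqrt(gross_counts / gross_time_s ** 2 + bkg_counts / bkg_time_s ** 2)
    return net_rate, sigma

# Illustrative numbers only: 600 s background count, then 600 s with the sample present.
rate, sigma = net_count_rate(gross_counts=4200, gross_time_s=600,
                             bkg_counts=1800, bkg_time_s=600)
print(f"net rate = {rate:.2f} +/- {sigma:.2f} counts per second")
```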
Regular radiation measurement is carried out at multiple levels. Government agencies compile radiation readings as part of environmental monitoring mandates, often making the readings available to the public and sometimes in near-real-time. Collaborative groups and private individuals may also make real-time readings available to the public. Instruments used for radiation measurement include the Geiger–Müller tube and the scintillation detector. The former is usually more compact and affordable and reacts to several radiation types, while the latter is more complex and can detect specific radiation energies and types. Readings indicate radiation levels from all sources including background, and real-time readings are in general unvalidated, but correlation between independent detectors increases confidence in measured levels.
List of near-real-time government radiation measurement sites, employing multiple instrument types:
Europe and Canada: European Radiological Data Exchange Platform (EURDEP) Simple map of Gamma Dose Rates
USA: EPA Radnet near-real-time and laboratory data by state
List of international near-real-time collaborative/private measurement sites, employing primarily Geiger-Muller detectors:
GMC map: http://www.gmcmap.com/ (mix of old-data detector stations and some near-real-time ones)
Netc: http://www.netc.com/
Radmon: http://www.radmon.org/
Radiation Network: http://radiationnetwork.com/
Radioactive@Home: http://radioactiveathome.org/map/
Safecast: http://safecast.org/tilemap (the green circles are real-time detectors)
uRad Monitor: http://www.uradmonitor.com/
See also
Background radiation equivalent time (BRET)
Banana equivalent dose
Environmental radioactivity
Flight-time equivalent dose
Noise (electronics)
Low-background steel
References
External links
Background radiation description from the Radiation Effects Research Foundation
Environmental and Background Radiation FAQ from the Health Physics Society
Radiation Dose Chart from the American Nuclear Society
Radiation Dose Calculator from the United States Environmental Protection Agency
Cosmic rays
Ionizing radiation
Radioactivity | Background radiation | [
"Physics",
"Chemistry"
] | 5,107 | [
"Ionizing radiation",
"Physical phenomena",
"Cosmic rays",
"Astrophysics",
"Radiation",
"Nuclear physics",
"Radioactivity"
] |
4,890 | https://en.wikipedia.org/wiki/Bayesian%20probability | Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses; that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability.
Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies a prior probability. This, in turn, is then updated to a posterior probability in the light of new, relevant data (evidence). The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.
The term Bayesian derives from the 18th-century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Mathematician Pierre-Simon Laplace pioneered and popularized what is now called Bayesian probability.
Bayesian methodology
Bayesian methods are characterized by concepts and procedures as follows:
The use of random variables, or more generally unknown quantities, to model all sources of uncertainty in statistical models including uncertainty resulting from lack of information (see also aleatoric and epistemic uncertainty).
The need to determine the prior probability distribution taking into account the available (prior) information.
The sequential use of Bayes' theorem: as more data become available, calculate the posterior distribution using Bayes' theorem; subsequently, the posterior distribution becomes the next prior.
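As a concrete sketch of this sequential updating, the example below uses the conjugate Beta–Bernoulli model (the same combination of a beta prior with Bernoulli data mentioned in the History section below), in which the update reduces to adding observed successes and failures to the prior's parameters. The batch counts are invented purely for illustration:

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate Bayesian update: Beta(alpha, beta) prior + Bernoulli data -> Beta posterior."""
    return alpha + successes, beta + failures

# Start from a uniform prior Beta(1, 1) over the unknown success probability.
alpha, beta = 1.0, 1.0

# As each batch of data arrives, the posterior becomes the prior for the next batch.
for successes, failures in [(3, 1), (2, 2), (7, 5)]:
    alpha, beta = update_beta(alpha, beta, successes, failures)
    posterior_mean = alpha / (alpha + beta)
    print(f"posterior Beta({alpha:.0f}, {beta:.0f}), mean = {posterior_mean:.3f}")
```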
While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.
Objective and subjective Bayesian probabilities
Broadly speaking, there are two interpretations of Bayesian probability. For objectivists, who interpret probability as an extension of logic, probability quantifies the reasonable expectation that everyone (even a "robot") who shares the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem. For subjectivists, probability corresponds to a personal belief. Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by decision theory and de Finetti's theorem. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.
History
The term Bayesian derives from Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem in a paper titled "An Essay Towards Solving a Problem in the Doctrine of Chances". In that special case, the prior and posterior distributions were beta distributions and the data came from Bernoulli trials. It was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence. Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.
In the 20th century, the ideas of Laplace developed in two directions, giving rise to objective and subjective currents in Bayesian practice.
Harold Jeffreys' Theory of Probability (first published in 1939) played an important role in the revival of the Bayesian view of probability, followed by works by Abraham Wald (1950) and Leonard J. Savage (1954). The adjective Bayesian itself dates to the 1950s; the derived terms Bayesianism and neo-Bayesianism are of 1960s coinage. In the objectivist stream, the statistical analysis depends on only the model assumed and the data analysed. No subjective decisions need to be involved. In contrast, "subjectivist" statisticians deny the possibility of fully objective analysis for the general case.
In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods and the consequent removal of many of the computational problems, and to an increasing interest in nonstandard, complex applications. While frequentist statistics remains strong (as demonstrated by the fact that much of undergraduate teaching is based on it), Bayesian methods are widely accepted and used, e.g., in the field of machine learning.
Justification
The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as Cox axioms, the Dutch book argument, arguments based on decision theory and de Finetti's theorem.
Axiomatic approach
Richard T. Cox showed that Bayesian updating follows from several axioms, including two functional equations and a hypothesis of differentiability. The assumption of differentiability or even continuity is controversial; Halpern found a counterexample based on his observation that the Boolean algebra of statements may be finite. Other axiomatizations have been suggested by various authors with the purpose of making the theory more rigorous.
Dutch book approach
Bruno de Finetti proposed the Dutch book argument based on betting. A clever bookmaker makes a Dutch book by setting the odds and bets to ensure that the bookmaker profits—at the expense of the gamblers—regardless of the outcome of the event (a horse race, for example) on which the gamblers bet. It is associated with probabilities implied by the odds not being coherent.
However, Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. For example, Hacking writes "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
In fact, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics" following the publication of Richard C. Jeffrey's rule, which is itself regarded as Bayesian). The additional hypotheses sufficient to (uniquely) specify Bayesian updating are substantial and not universally seen as satisfactory.
Decision theory approach
A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Conversely, every Bayesian procedure is admissible.
Personal probabilities and objective methods for constructing priors
Following the work on expected utility theory of Ramsey and von Neumann, decision-theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience. Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated ... [the question whether probabilities] might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".
Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledge their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.
The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century.
This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability-criterion was popularized by Karl Popper.)
Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce-Jastrow experiment. Since individuals act according to different probability judgments, these agents' probabilities are "personal" (but amenable to objective study).
Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability-distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.
Indeed, some Bayesians have argued the prior state of knowledge defines the (unique) prior probability-distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors (Unfortunately, it is not always clear how to assess the relative "objectivity" of the priors proposed under these methods):
Maximum entropy
Transformation group analysis
Reference analysis
Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science. The quest for "the universal method for constructing priors" continues to attract statistical theorists.
Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.
See also
An Essay Towards Solving a Problem in the Doctrine of Chances
Bayesian epistemology
Bertrand paradox—a paradox in classical probability
Credal network
Credence (statistics)
De Finetti's game—a procedure for evaluating someone's subjective probability
Evidence under Bayes' theorem
Monty Hall problem
QBism—an interpretation of quantum mechanics based on subjective Bayesian probability
Reference class problem
References
Bibliography
Goertz, Gary and James Mahoney. 2012. A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton University Press.
Probability
Justification (epistemology)
Probability interpretations
Philosophy of mathematics
Philosophy of science | Bayesian probability | [
"Mathematics"
] | 2,482 | [
"Probability interpretations",
"nan"
] |
4,906 | https://en.wikipedia.org/wiki/Beta%20sheet | The beta sheet (β-sheet, also β-pleated sheet) is a common motif of the regular protein secondary structure. Beta sheets consist of beta strands (β-strands) connected laterally by at least two or three backbone hydrogen bonds, forming a generally twisted, pleated sheet. A β-strand is a stretch of polypeptide chain typically 3 to 10 amino acids long with backbone in an extended conformation. The supramolecular association of β-sheets has been implicated in the formation of the fibrils and protein aggregates observed in amyloidosis, Alzheimer's disease and other proteinopathies.
History
The first β-sheet structure was proposed by William Astbury in the 1930s. He proposed the idea of hydrogen bonding between the peptide bonds of parallel or antiparallel extended β-strands. However, Astbury did not have the necessary data on the bond geometry of the amino acids in order to build accurate models, especially since he did not then know that the peptide bond was planar. A refined version was proposed by Linus Pauling and Robert Corey in 1951. Their model incorporated the planarity of the peptide bond which they previously explained as resulting from keto-enol tautomerization.
Structure and orientation
Geometry
The majority of β-strands are arranged adjacent to other strands and form an extensive hydrogen bond network with their neighbors in which the N−H groups in the backbone of one strand establish hydrogen bonds with the C=O groups in the backbone of the adjacent strands. In the fully extended β-strand, successive side chains point straight up and straight down in an alternating pattern. Adjacent β-strands in a β-sheet are aligned so that their Cα atoms are adjacent and their side chains point in the same direction. The "pleated" appearance of β-strands arises from tetrahedral chemical bonding at the Cα atom; for example, if a side chain points straight up, then the bonds to the C′ must point slightly downwards, since its bond angle is approximately 109.5°. The pleating causes the distance between the Cα atoms of residues i and i + 2 to be approximately 6 Å, rather than the 7.6 Å expected from two fully extended trans peptides. The "sideways" distance between adjacent Cα atoms in hydrogen-bonded β-strands is roughly 5 Å.
However, β-strands are rarely perfectly extended; rather, they exhibit a twist. The energetically preferred dihedral angles near (φ, ψ) = (–135°, 135°) (broadly, the upper left region of the Ramachandran plot) diverge significantly from the fully extended conformation (φ, ψ) = (–180°, 180°). The twist is often associated with alternating fluctuations in the dihedral angles to prevent the individual β-strands in a larger sheet from splaying apart. A good example of a strongly twisted β-hairpin can be seen in the protein BPTI.
The side chains point outwards from the folds of the pleats, roughly perpendicularly to the plane of the sheet; successive amino acid residues point outwards on alternating faces of the sheet.
Hydrogen bonding patterns
Because peptide chains have a directionality conferred by their N-terminus and C-terminus, β-strands too can be said to be directional. They are usually represented in protein topology diagrams by an arrow pointing toward the C-terminus. Adjacent β-strands can form hydrogen bonds in antiparallel, parallel, or mixed arrangements.
In an antiparallel arrangement, the successive β-strands alternate directions so that the N-terminus of one strand is adjacent to the C-terminus of the next. This is the arrangement that produces the strongest inter-strand stability because it allows the inter-strand hydrogen bonds between carbonyls and amines to be planar, which is their preferred orientation. The peptide backbone dihedral angles (φ, ψ) are about (–140°, 135°) in antiparallel sheets. In this case, if the Cα atoms of two residues i and j are adjacent in two hydrogen-bonded β-strands, then the residues form two mutual backbone hydrogen bonds to each other's flanking peptide groups; this is known as a close pair of hydrogen bonds.
In a parallel arrangement, all of the N-termini of successive strands are oriented in the same direction; this orientation may be slightly less stable because it introduces nonplanarity in the inter-strand hydrogen bonding pattern. The dihedral angles (φ, ψ) are about (–120°, 115°) in parallel sheets. It is rare to find fewer than five interacting parallel strands in a motif, suggesting that a smaller number of strands may be unstable; however, it is also fundamentally more difficult for parallel β-sheets to form because strands with aligned N- and C-termini must necessarily be very distant in sequence. There is also evidence that parallel β-sheet may be more stable, since small amyloidogenic sequences generally appear to aggregate into β-sheet fibrils composed primarily of parallel β-sheet strands, whereas one would expect anti-parallel fibrils if anti-parallel were more stable.
In parallel β-sheet structure, if the Cα atoms of two residues i and j are adjacent in two hydrogen-bonded β-strands, then they do not hydrogen bond to each other; rather, one residue forms hydrogen bonds to the residues that flank the other (but not vice versa). For example, residue i may form hydrogen bonds to residues j − 1 and j + 1; this is known as a wide pair of hydrogen bonds. By contrast, residue j may hydrogen-bond to different residues altogether, or to none at all.
The hydrogen bond arrangement in parallel beta sheet resembles that in an amide ring motif with 11 atoms.
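The "close pair" and "wide pair" patterns described above amount to a simple bookkeeping rule for which residues are backbone hydrogen-bond partners. The sketch below encodes that rule schematically (it is not a structural calculation, and the residue numbers are arbitrary examples):

```python
def hbond_partners(i, j, arrangement):
    """Schematic backbone H-bond partners for residue i on one strand,
    given that it sits opposite residue j on the neighbouring strand."""
    if arrangement == "antiparallel":
        # Close pair: i and j form two mutual hydrogen bonds with each other.
        return [(i, j), (j, i)]
    if arrangement == "parallel":
        # Wide pair: residue i bonds to the residues flanking j.
        return [(i, j - 1), (i, j + 1)]
    raise ValueError("arrangement must be 'antiparallel' or 'parallel'")

print(hbond_partners(10, 42, "antiparallel"))   # [(10, 42), (42, 10)]
print(hbond_partners(10, 42, "parallel"))       # [(10, 41), (10, 43)]
```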
Finally, an individual strand may exhibit a mixed bonding pattern, with a parallel strand on one side and an antiparallel strand on the other. Such arrangements are less common than a random distribution of orientations would predict, suggesting that this pattern is less stable than the anti-parallel arrangement; however, bioinformatic analysis always struggles to extract structural thermodynamics, since numerous other structural features are present in whole proteins. Proteins are also inherently constrained by folding kinetics as well as folding thermodynamics, so one must always be careful in concluding stability from bioinformatic analysis.
The hydrogen bonding of β-strands need not be perfect, but can exhibit localized disruptions known as β-bulges.
The hydrogen bonds lie roughly in the plane of the sheet, with the peptide carbonyl groups pointing in alternating directions with successive residues; for comparison, successive carbonyls point in the same direction in the alpha helix.
Amino acid propensities
Large aromatic residues (tyrosine, phenylalanine, tryptophan) and β-branched amino acids (threonine, valine, isoleucine) are favored to be found in β-strands in the middle of β-sheets. Different types of residues (such as proline) are likely to be found in the edge strands in β-sheets, presumably to avoid the "edge-to-edge" association between proteins that might lead to aggregation and amyloid formation.
Common structural motifs
β-hairpin motif
A very simple structural motif involving β-sheets is the β-hairpin, in which two antiparallel strands are linked by a short loop of two to five residues, of which one is frequently a glycine or a proline, both of which can assume the dihedral-angle conformations required for a tight turn or a β-bulge loop. Individual strands can also be linked in more elaborate ways with longer loops that may contain α-helices.
Greek key motif
The Greek key motif consists of four adjacent antiparallel strands and their linking loops. Three of the strands are connected by hairpins, while the fourth is adjacent to the first and is linked to the third by a longer loop. This type of structure forms easily during the protein folding process. It was named after a pattern common to Greek ornamental artwork (see meander).
β-α-β motif
Due to the chirality of their component amino acids, all strands exhibit right-handed twist evident in most higher-order β-sheet structures. In particular, the linking loop between two parallel strands almost always has a right-handed crossover chirality, which is strongly favored by the inherent twist of the sheet. This linking loop frequently contains a helical region, in which case it is called a β-α-β motif. A closely related motif called a β-α-β-α motif forms the basic component of the most commonly observed protein tertiary structure, the TIM barrel.
β-meander motif
A simple supersecondary protein topology composed of two or more consecutive antiparallel β-strands linked together by hairpin loops. This motif is common in β-sheets and can be found in several structural architectures including β-barrels and β-propellers.
The vast majority of β-meander regions in proteins are found packed against other motifs or sections of the polypeptide chain, forming portions of the hydrophobic core that canonically drives formation of the folded structure. However, several notable exceptions include the Outer Surface Protein A (OspA) variants and the Single Layer β-sheet Proteins (SLBPs), which contain single-layer β-sheets in the absence of a traditional hydrophobic core. These β-rich proteins feature extended single-layer β-meander β-sheets that are primarily stabilized via inter-β-strand interactions and hydrophobic interactions in the turn regions connecting individual strands.
Psi-loop motif
The psi-loop (Ψ-loop) motif consists of two antiparallel strands with one strand in between that is connected to both by hydrogen bonds. There are four possible strand topologies for single Ψ-loops. This motif is rare as the process resulting in its formation seems unlikely to occur during protein folding. The Ψ-loop was first identified in the aspartic protease family.
Structural architectures of proteins with β-sheets
β-sheets are present in all-β, α+β and α/β domains, and in many peptides or small proteins with poorly defined overall architecture. All-β domains may form β-barrels, β-sandwiches, β-prisms, β-propellers, and β-helices.
Structural topology
The topology of a β-sheet describes the order of hydrogen-bonded β-strands along the backbone. For example, the flavodoxin fold has a five-stranded, parallel β-sheet with topology 21345; thus, the edge strands are β-strand 2 and β-strand 5 along the backbone. Spelled out explicitly, β-strand 2 is H-bonded to β-strand 1, which is H-bonded to β-strand 3, which is H-bonded to β-strand 4, which is H-bonded to β-strand 5, the other edge strand. In the same system, the Greek key motif described above has a 4123 topology. The secondary structure of a β-sheet can be described roughly by giving the number of strands, their topology, and whether their hydrogen bonds are parallel or antiparallel.
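To make this notation concrete, the short Python sketch below (purely illustrative; the function name is invented here, and strands are assumed to be numbered 1–9 so that a plain digit string suffices) recovers the edge strands and the hydrogen-bonded strand pairs from a topology string:

    def sheet_neighbours(topology):
        # Strand labels in spatial order across the sheet, e.g. "21345" for the flavodoxin fold.
        strands = [int(c) for c in topology]
        edges = (strands[0], strands[-1])         # edge strands have only one neighbour
        pairs = list(zip(strands, strands[1:]))   # hydrogen-bonded strand pairs
        return edges, pairs

    print(sheet_neighbours("21345"))  # ((2, 5), [(2, 1), (1, 3), (3, 4), (4, 5)])
    print(sheet_neighbours("4123"))   # ((4, 3), [(4, 1), (1, 2), (2, 3)])

For "21345" this reproduces the description above: edge strands 2 and 5, with strand 2 bonded to 1, 1 to 3, 3 to 4, and 4 to 5.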
β-sheets can be open, meaning that they have two edge strands (as in the flavodoxin fold or the immunoglobulin fold) or they can be closed β-barrels (such as the TIM barrel). β-Barrels are often described by their stagger or shear. Some open β-sheets are very curved and fold over on themselves (as in the SH3 domain) or form horseshoe shapes (as in the ribonuclease inhibitor). Open β-sheets can assemble face-to-face (such as the β-propeller domain or immunoglobulin fold) or edge-to-edge, forming one big β-sheet.
Dynamic features
β-pleated sheet structures are made from extended β-strand polypeptide chains, with strands linked to their neighbours by hydrogen bonds. Due to this extended backbone conformation, β-sheets resist stretching. β-sheets in proteins may carry out low-frequency accordion-like motion as observed by Raman spectroscopy and analyzed with the quasi-continuum model.
Parallel β-helices
A β-helix is formed from repeating structural units consisting of two or three short β-strands linked by short loops. These units "stack" atop one another in a helical fashion so that successive repetitions of the same strand hydrogen-bond with each other in a parallel orientation. See the β-helix article for further information.
In left-handed β-helices, the strands themselves are quite straight and untwisted; the resulting helical surfaces are nearly flat, forming a regular triangular prism shape, as seen in the 1QRE archaeal carbonic anhydrase. Other examples are the lipid A synthesis enzyme LpxA and insect antifreeze proteins with a regular array of Thr sidechains on one face that mimic the structure of ice.
Right-handed β-helices, typified by the pectate lyase enzyme or the P22 phage tailspike protein, have a less regular cross-section that is longer and indented on one side; of the three linker loops, one is consistently just two residues long and the others are variable, often elaborated to form a binding or active site.
A two-sided β-helix (right-handed) is found in some bacterial metalloproteases; its two loops are each six residues long and bind stabilizing calcium ions to maintain the integrity of the structure, using the backbone and the Asp side chain oxygens of a GGXGXD sequence motif. This fold is called a β-roll in the SCOP classification.
In pathology
Some proteins that are disordered or helical as monomers, such as amyloid β (see amyloid plaque) can form β-sheet-rich oligomeric structures associated with pathological states. The amyloid β protein's oligomeric form is implicated as a cause of Alzheimer's. Its structure has yet to be determined in full, but recent data suggest that it may resemble an unusual two-strand β-helix.
The side chains from the amino acid residues found in a β-sheet structure may also be arranged such that many of the adjacent sidechains on one side of the sheet are hydrophobic, while many of those adjacent to each other on the alternate side of the sheet are polar or charged (hydrophilic), which can be useful if the sheet is to form a boundary between polar/watery and nonpolar/greasy environments.
See also
Collagen helix
Foldamers
Folding (chemistry)
Tertiary structure
α-helix
Structural motif
References
Further reading
External links
Anatomy & Taxonomy of Protein Structures -survey
NetSurfP - Secondary Structure and Surface Accessibility predictor
Protein structural motifs | Beta sheet | ["Biology"] | 3,070 | ["Protein structural motifs", "Protein classification"] |
4,921 | https://en.wikipedia.org/wiki/Bacardi | Bacardi Limited is the largest privately held, family-owned spirits company in the world. Originally known for its Bacardí brand of white rum, it now has a portfolio of more than 200 brands and labels. Founded in Cuba in 1862 by the Spanish businessman Facundo Bacardí Massó, Bacardi Limited has been family-owned for seven generations and employs more than 8,000 people with sales in approximately 170 countries. Bacardi Limited is the group of companies as a whole and includes Bacardi International Limited.
Bacardi Limited is headquartered in Hamilton, Bermuda, and has a board of directors led by the original founder's great-great-grandson, Facundo L. Bacardí, the board's chairman.
History
Early history
Facundo Bacardí Massó, a Spanish wine merchant, was born in Sitges, Catalonia, Spain, on October 16, 1814, and immigrated to Santiago, Cuba, in 1830. At the time, rum was cheaply made, not considered a refined drink, and rarely sold in upmarket taverns or purchased by the emerging middle class on the island. Facundo began attempting to "tame" rum by isolating a proprietary strain of yeast, harvested from local sugar cane, that is still used in Bacardí production today. This yeast gives Bacardí rum its flavour profile. After experimenting with several techniques for close to ten years, Facundo pioneered charcoal rum filtration, which removed impurities from his rum. Facundo then created two separate distillates that he could blend together, balancing a variety of flavors: Aguardiente (a robust, flavorful distillate) and Redestillado (a refined, delicate distillate). Once Facundo achieved the desired balance of flavors by marrying the two distillates together, he purposefully aged the rum in white oak barrels to develop subtle flavors and characteristics while mellowing out those that were unwanted. The final product was the first clear, light-bodied, and mixable "white" rum in the world.
Moving from the experimental stage to a more commercial endeavour as local sales began to grow, Facundo and his brother José purchased a Santiago de Cuba distillery on October 16, 1862, which housed a still made of copper and cast iron. In the rafters of this building lived fruit bats – the inspiration for the Bacardi bat logo. It was the idea of Doña Amalia, Facundo's wife, to adopt the bat to the rum bottle when she recognized its symbolism of family unity, good health, and good fortune to her husband's homeland of Spain. This logo was pragmatic considering the high illiteracy rate in the 19th century, enabling customers to easily identify the product.
The 1880s and 1890s were turbulent times for Cuba and the company. Emilio Bacardí, Don Facundo's eldest son, known for his forward thinking in both his professional and personal life and a passionate advocate for Cuban independence, was imprisoned twice for having fought in the rebel army against Spain in the Cuban War of Independence.
Emilio's brothers, Facundo and José, and their brother-in-law Enrique 'Henri' Schueg, remained in Cuba with the difficult task of sustaining the company during a period of war. With Don Facundo's passing in 1886, Doña Amalia sought refuge in exile in Kingston, Jamaica. At the end of the Cuban War of Independence, during the US occupation of Cuba, "The Original Cuba Libre" and the Daiquiri cocktails were both created with the then Cuban-based Bacardí rum. In 1899, Emilio Bacardí became the first democratically elected mayor of Santiago, appointed by US General Leonard Wood.
During his time in public office, Emilio established schools and hospitals, completed municipal projects such as the famous Padre Pico Street and the Bacardi Dam, financed the creation of parks, and decorated the city of Santiago with monuments and sculptures. In 1912, Emilio and his wife travelled to Egypt, where he purchased a mummy (still on display) for the future Emilio Bacardi Moreau Municipal Museum in Santiago de Cuba. In Santiago, his brother Facundo M. Bacardí continued to manage the company along with Schueg, who began the company's international expansion by opening bottling plants in Barcelona (1910) and New York City (1916). The New York plant was soon shut down due to Prohibition, yet during this time Cuba became a hotspot for US tourists, kicking off a period of rapid growth for the Bacardi company and the onset of cocktail culture in America.
In 1922, the family completed the expansion and renovation of the original distillery in Santiago, increasing the site's rum production capacity. In 1930 Schueg oversaw the construction and opening of Edificio Bacardí in Havana, regarded as one of the finest Art Deco buildings in Latin America, as the third generation of the Bacardí family entered the business. In 1927, Bacardi ventured outside the realm of spirits for the first time, with the introduction of an authentic Cuban Malt beer: Hatuey beer.
Bacardi's success in transitioning into an international brand and company was due mostly to Schueg, who branded Cuba as "The home of rum", and Bacardí as "The king of rums and the rum of Kings". Expansion began overseas, first to Mexico in 1931, where architects Ludwig Mies van der Rohe and Felix Candela designed office buildings and a bottling plant in Mexico City during the 1950s. The building complex was added to UNESCO's tentative World Heritage Site list on 20 November 2001. In 1936, Bacardi began producing rum on U.S. territory in Puerto Rico after Prohibition, which enabled the company to sell rum tariff-free in the United States. The company later expanded to the United States in 1944 with the opening of Bacardi Imports, Inc. in Manhattan, New York City.
During World War II, the company was led by Schueg's son-in-law, José "Pepin" Bosch. Pepin founded Bacardi Imports in New York City and became Cuba's Minister of the Treasury in 1949.
Cuban Revolution
During the Cuban Revolution in 1959, the Bacardí family (and hence the company) supported and aided the rebels. However, after the triumph of the revolutionaries, and turn to communism, the family maintained a fierce opposition to Fidel Castro's policies in Cuba in the 1960s. In his book, Bacardi and the Long Fight for Cuba, Tom Gjelten describes how the Bacardí family and the company left Cuba in exile after the Cuban government confiscated the company's Cuban assets without compensation on 14 October 1960, particularly nationalizing and banning all private property on the island as well as all bank accounts. However, due to concerns over the previous Cuban leader, Fulgencio Batista, the company had started foreign branches a few years before the revolution; the company moved the ownership of its trademarks, assets and proprietary formulas out of the country to the Bahamas prior to the revolution and already produced Bacardí rum at other distillery sites in Puerto Rico and Mexico. This helped the company survive after the Cuban government confiscated all Bacardí assets without compensation.
In 1965, over 100 years after the company was established in Cuba, Bacardi established new roots and found a new home with global headquarters in Hamilton, Bermuda. In February 2019, Bacardi's CEO, Mahesh Madhavan, stated that Bacardí's global headquarters would remain in Bermuda for the next "500 years" and that "Bermuda is our home now."
In 1999, Otto Reich, a lobbyist in Washington on behalf of Bacardí, drafted section 211 of the Omnibus Consolidated and Emergency Appropriations Act, FY1999, a bill that became known as the Bacardi Act. Section 211 denied trademark protection to products of Cuban businesses expropriated after the Cuban revolution, a provision sought by Bacardí. The act was aimed primarily at the Havana Club brand in the US. The brand was created by José Arechabala S.A. and nationalised without compensation in the Cuban revolution; the Arechabala family left Cuba and stopped producing rum. They therefore allowed the US trademark registration for "Havana Club" to lapse in 1973. Taking advantage of the lapse, the Cuban government registered the mark in the US in 1976. This new law was drafted to invalidate the trademark registration. Section 211 has been challenged unsuccessfully by the Cuban government and the European Union in US courts. It was ruled illegal by the World Trade Organization (WTO) in 2001 and 2002. The US Congress has yet to re-examine the matter. The Cuban government assigned the brand to Pernod Ricard in 1993.
Bacardi rekindled the story of the Arechabala family and Havana Club in the United States when it launched the AMPARO Experience in 2018, an immersive play experience based in Miami, the city with the highest population of Cuban exiles. AMPARO "is the story of the family's entire history being erased and their heritage 'stolen'" according to playwright Vanessa Garcia.
Bacardi in the United States
In 1964, Bacardi opened new US offices in Miami, Florida. Exiled Cuban architect Enrique Gutierrez created a hurricane-proof building using a system of steel cables and pulleys that allowed the building to move slightly in the event of a strong shock. The steel cables are anchored into the bedrock and extend through marble-covered shafts up to the top floor, where they are led over large pulleys. Outside, on both sides of the eight-story building, more than 28,000 tiles painted and fired by Brazilian artist Francisco Brennand, depicting abstract blue flowers, were placed on the walls according to the artist's exact specifications.
In 1973, the company commissioned the square building in the plaza. Architect Ignacio Carrera-Justiz used cantilevered construction, a style invented by Frank Lloyd Wright. Wright observed how well trees with taproots withstood hurricane-force winds. The building, raised off the ground around a central core, features four massive walls made of sections of inch-thick hammered glass mural tapestries designed and manufactured in France. The striking design of the annex, affectionately known as the 'Jewel Box' building, came from a painting by German artist Johannes M. Dietz.
In 2006, Bacardi USA leased a 15-story headquarters complex in Coral Gables, Florida. At the time, Bacardi had employees in seven buildings across Miami-Dade County.
Bacardi vacated its former headquarters buildings on Biscayne Boulevard in Midtown Miami. The building currently serves as the headquarters of the National YoungArts Foundation. Miami citizens began a campaign to label the buildings as "historic". The Bacardi Buildings Complex has been a locally protected historic resource since Oct. 6, 2009, when it was designated by unanimous decision by the Historic and Environmental Preservation Board.
In 2007 Chad Oppenheim, the head of Oppenheim Architecture + Design, described the Bacardi buildings as "elegant, with a Modernist [look combined with] a local flavour". In April 2009, University of Miami professor of architecture Allan Schulman said "Miami's brand is its identity as a tropical city. The Bacardi buildings are exactly the sort that resonate with our consciousness of what Miami is about."
The American headquarters is in Coral Gables, Florida.
Bacardi and Cuba today
Bacardi drinks are not easily found in Cuba today. The main brand of rum in Cuba is Havana Club, produced by a company that was confiscated and nationalized by the government following the revolution. Bacardi later bought the brand from the original owners, the Arechabala family. In partnership with the French company Pernod Ricard, the Cuban government sells its Havana Club products internationally, except in the United States and its territories. Bacardi created the Real Havana Club rum based on the original recipe from the Arechabala family, manufactures it in Puerto Rico, and sells it in the United States. Bacardi continues to fight in the courts, attempting to legalize their Havana Club trademark outside the United States.
Acquisitions
Bacardi Limited has made numerous acquisitions to diversify away from the eponymous Bacardí rum brand.
In 1993, Bacardi merged with Martini & Rossi, the Italian producer of Martini vermouth and sparkling wines, creating the Bacardi-Martini group. Other associated brands include the Real Havana Club, Drambuie Scotch whisky liqueur, DiSaronno Amaretto, Eristoff vodka, Cazadores Tequila, B&B and Bénédictine liqueurs.
In 1998, Bacardi acquired John Dewar & Sons, Ltd and Bombay Sapphire from Diageo for $2 billion.
In 2004, Bacardi purchased Grey Goose, a French-made vodka, from Sidney Frank for $2 billion.
In 2006 Bacardi purchased New Zealand vodka brand 42 Below.
In 2018, Bacardi purchased tequila manufacturer Patrón Spirits Company for $5.1 billion.
In 2023, Bacardi acquired the super-premium mezcal brand, Ilegal Mezcal.
In December 2023, Bacardi took majority control of Irish whiskey producer Teeling.
Brands
Bacardi beverage brands include:
Beer: Hatuey
Cachaça: Leblon
Cognac: Baron Otard, D'ussé
Gin: Bombay Sapphire, Bosford Rose, Oxley
Liqueur: Bénédictine, Cedilla, Get, Martini Spirito, Patrón Citrónge, St-Germain
Mezcal: Ilegal
Rum: Bacardí, Banks, Castillo, Facundo, Havana Club (USA only), Pyrat, Santa Teresa, Single Cane Estate
Sparkling wine: Martini Alta Langa, Martini Asti, Martini Grandi Augur, Martini Magici Istanti, Martini Prosecco, Martini Riserva di Montellera, Martini Rosé
Tequila: Camino Real, Cazadores, Corzo, Patrón
Vermouth: Martini, Noilly Prat
Vodka: Eristoff, Grey Goose, Russian Prince, Ultimat Vodka, 42 Below
American whiskey: Angel's Envy, Stillhouse
Irish whiskey: Teeling
Scotch whisky:
Single malt Scotch whisky: Aberfeldy, Aultmore, Craigellachie, Deveron, Royal Brackla
Blended Scotch whisky: Dewar's, William Lawson's
Main brand
Bacardi Superior
Bacardi 8
Bacardi Gran Reserva
Bacardi Dark Rum
Bacardi White Rum
Bacardi Spiced Rum
Bacardi Gold Rum
Bacardi 151
Bacardi Gold
Bacardi Mojito
Bacardi Breezers
Bacardi Apple
Bacardi Lemon
Bacardi Carta Blanca
Awards
Bacardí rums have been entered for a number of international spirit ratings awards. Several Bacardí spirits have performed notably well. In 2020, Bacardí Superior, Bacardí Gold, Bacardí Black, Bacardí Añejo Cuatro were each awarded a gold medal by the International Quality Institute Monde Selection. In addition, both Bacardí Reserva Ocho and Bacardí Gran Reserva Diez were awarded the top honor of Grand Gold quality award.
Hemingway connection
Ernest Hemingway lived in Cuba from 1939 until shortly after the Cuban Revolution. He lived at Finca Vigía, in the small town of San Francisco de Paula, located very close to Bacardi's Modelo Brewery for Hatuey Beer in Cotorro, Havana.
In 1954, Compañía Ron Bacardi S.A. threw Hemingway a party when he was awarded the Nobel Prize in Literature – soon after the publication of his novel The Old Man and the Sea (1952) – in which he honored the company by mentioning its Hatuey beer. Hemingway also mentioned Bacardí and Hatuey in his novels To Have and Have Not (1937) and For Whom the Bell Tolls (1940). Guillermo Cabrera Infante wrote an account of the festivities for the periodical Ciclón, titled "El Viejo y la Marca" ("The Old Man and the Brand", a play on "El Viejo y el Mar", the book's Spanish title). In his account he described how "on one side there was a wooden stage with two streamers – Hatuey beer and Bacardi rum – on each end and a Cuban flag in the middle. Next to the stage was a bar, at which people crowded, ordering daiquiris and beer, all free." A sign at the event read "Bacardi rum welcomes the author of The Old Man and the Sea".
In his article "The Old Man and the Daiquiri", Wayne Curtis writes about how Hemingway's "home bar also held a bottle of Bacardí rum". Hemingway wrote in Islands in the Stream, "...this frozen daiquirí, so well beaten as it is, looks like the sea where the wave falls away from the bow of a ship when she is doing thirty knots."
Mishaps
Death of Day Davis
On August 16, 2012, temporary worker Lawrence Daquan "Day" Davis was crushed to death when a faulty palletizer he was cleaning was activated. It was his first day on the job at the Jacksonville, Florida, Bacardi Bottling facility. No lockout/tagout procedures had been implemented. An Occupational Safety and Health Administration (OSHA) investigation found 12 safety violations, and Bacardi was fined $192,000 but reached an agreement under which it paid $110,000.
Russian invasion of Ukraine
In March 2022, after Russia's invasion of Ukraine, Bacardi announced that it would halt all exports to Russia and freeze investment and advertising programs. From June 2022 through June 2023, the company imported $169 million worth of products and tripled its profits, and through the summer of 2023 it increased its business in Russia and sought new employees for its Russian branch. When this gained international media attention, the pledge disappeared from their company website. On August 10, 2023, Ukrainian authorities added Bacardi to their list of International Sponsors of War.
See also
Lubee Bat Conservancy, an organization in Gainesville, Florida, founded by Facundo's great-grandson Luis
References
External links
Map of Distillery in Puerto Rico from Google Maps
Bacardi brands
Rums
Distilleries
Alcoholic drink companies
Food and drink companies established in 1862
Food and drink companies of Bermuda
Privately held companies of Bermuda
1862 establishments in Cuba
Food and drink companies of Cuba
Sugar industry of British Overseas Territories | Bacardi | ["Chemistry"] | 3,865 | ["Distilleries", "Distillation"] |
4,924 | https://en.wikipedia.org/wiki/Bunsen%20burner | A Bunsen burner, named after Robert Bunsen, is a kind of ambient air gas burner used as laboratory equipment; it produces a single open gas flame, and is used for heating, sterilization, and combustion.
The gas can be natural gas (which is mainly methane) or a liquefied petroleum gas, such as propane, butane, or a mixture. Combustion temperature achieved depends in part on the adiabatic flame temperature of the chosen fuel mixture.
History
In 1852, the University of Heidelberg hired Bunsen and promised him a new laboratory building. The city of Heidelberg had begun to install coal-gas street lighting, and the university laid gas lines to the new laboratory.
The designers of the building intended to use the gas not just for lighting, but also as fuel for burners for laboratory operations. For any burner lamp, it was desirable to maximize the temperature of its flame, and minimize its luminosity (which represented lost heating energy). Bunsen sought to improve existing laboratory burner lamps as regards economy, simplicity, and flame temperature, and adapt them to coal-gas fuel.
While the building was under construction in late 1854, Bunsen suggested certain design principles to the university's mechanic, Peter Desaga, and asked him to construct a prototype. Similar principles had been used in an earlier burner design by Michael Faraday, and in a device patented in 1856 by gas engineer R. W. Elsner. The Bunsen/Desaga design generated a hot, sootless, non-luminous flame by mixing the gas with air in a controlled fashion before combustion. Desaga created adjustable slits for air at the bottom of the cylindrical burner, with the flame issuing at the top. When the building opened early in 1855, Desaga had made 50 burners for Bunsen's students. Two years later Bunsen published a description, and many of his colleagues soon adopted the design. Bunsen burners are now used in laboratories around the world.
Operation
The device in use today safely burns a continuous stream of a flammable gas such as natural gas (which is principally methane) or a liquefied petroleum gas such as propane, butane, or a mixture of both.
The hose barb is connected to a gas nozzle on the laboratory bench with rubber tubing. Most laboratory benches are equipped with multiple gas nozzles connected to a central gas source, as well as vacuum, nitrogen, and steam nozzles. The gas then flows up through the base through a small hole at the bottom of the barrel and is directed upward. There are open slots in the side of the tube bottom to admit air into the stream using the Venturi effect, and the gas burns at the top of the tube once ignited by a flame or spark. The most common methods of lighting the burner are using a match or a spark lighter.
The amount of air mixed with the gas stream affects the completeness of the combustion reaction. Less air yields an incomplete and thus cooler reaction, while a gas stream well mixed with air provides oxygen in a stoichiometric amount and thus a complete and hotter reaction. The air flow can be controlled by opening or closing the slot openings at the base of the barrel, similar in function to the choke in a carburettor.
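For a rough sense of how much air complete combustion demands, the Python sketch below gives an estimate (ideal-gas behaviour is assumed, air is taken as roughly 21% oxygen by volume, and the figures are illustrative rather than quoted from any source):

    O2_FRACTION_IN_AIR = 0.21   # approximate volume fraction of O2 in air

    fuels = {
        # fuel and balanced combustion equation: moles of O2 needed per mole of fuel
        "methane (CH4 + 2 O2 -> CO2 + 2 H2O)": 2.0,
        "propane (C3H8 + 5 O2 -> 3 CO2 + 4 H2O)": 5.0,
        "butane (C4H10 + 6.5 O2 -> 4 CO2 + 5 H2O)": 6.5,
    }

    for fuel, o2_per_fuel in fuels.items():
        air_volumes = o2_per_fuel / O2_FRACTION_IN_AIR
        print(f"{fuel}: about {air_volumes:.1f} volumes of air per volume of fuel")

In a Bunsen burner the slot openings supply only part of this requirement; the rest of the air is entrained from the surroundings at the flame itself.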
If the collar at the bottom of the tube is adjusted so more air can mix with the gas before combustion, the flame will burn hotter, appearing blue as a result. If the holes are closed, the gas will only mix with ambient air at the point of combustion, that is, only after it has exited the tube at the top. This reduced mixing produces an incomplete reaction, producing a cooler but brighter yellow flame, which is often called the "safety flame" or "luminous flame". The yellow flame is luminous due to small soot particles in the flame, which are heated to incandescence. The yellow flame is considered "dirty" because it leaves a layer of carbon on whatever it is heating. When the burner is regulated to produce a hot, blue flame, it can be nearly invisible against some backgrounds. The hottest part of the flame is at the tip of the inner blue cone, while the coolest part is the unburned gas–air mixture inside the inner cone itself. Increasing the amount of fuel gas flow through the tube by opening the needle valve will increase the size of the flame. However, unless the airflow is adjusted as well, the flame temperature will decrease because an increased amount of gas is now mixed with the same amount of air, starving the flame of oxygen.
Generally, the burner is placed underneath a laboratory tripod, which supports a beaker or other container. The burner will often be placed on a suitable heatproof mat to protect the laboratory bench surface.
A Bunsen burner is also used in microbiology laboratories to sterilise pieces of equipment and to produce an updraft that forces airborne contaminants away from the working area.
Variants
Other burners based on the same principle exist. The most important alternatives to the Bunsen burner are:
Teclu burner – The lower part of its tube is conical, with a round screw nut below its base. The gap, set by the distance between the nut and the end of the tube, regulates the influx of the air in a way similar to the open slots of the Bunsen burner. The Teclu burner provides better mixing of air and fuel and can achieve higher flame temperatures than the Bunsen burner.
Meker burner – The lower part of its tube has more openings with larger total cross-section, admitting more air and facilitating better mixing of air and gas. The tube is wider and its top is covered with a wire grid. The grid separates the flame into an array of smaller flames with a common external envelope, and also prevents flashback to the bottom of the tube, which is a risk at high air-to-fuel ratios and limits the maximum rate of air intake in a conventional Bunsen burner. Flame temperatures higher than those of a Bunsen burner are achievable when the burner is properly used. The flame also burns without noise, unlike the Bunsen or Teclu burners.
Tirrill burner – The base of the burner has a needle valve which allows the regulation of gas intake directly from the burner, rather than from the gas source. Maximum temperature of flame can reach 1560 °C.
See also
Alcohol burner
Heating mantle
Meker-Fisher burner
References
External links
Burners
Combustion engineering
German inventions
Laboratory equipment | Bunsen burner | ["Engineering"] | 1,322 | ["Combustion engineering", "Industrial engineering"] |
4,940 | https://en.wikipedia.org/wiki/Brass%20instrument | A brass instrument is a musical instrument that produces sound by sympathetic vibration of air in a tubular resonator in sympathy with the vibration of the player's lips. The term labrosone, from Latin elements meaning "lip" and "sound", is also used for the group, since instruments employing this "lip reed" method of sound production can be made from other materials like wood or animal horn, particularly early or traditional instruments such as the cornett, alphorn or shofar.
There are several factors involved in producing different pitches on a brass instrument. Slides, valves, crooks (though they are rarely used today), or keys are used to change vibratory length of tubing, thus changing the available harmonic series, while the player's embouchure, lip tension and air flow serve to select the specific harmonic produced from the available series.
The view of most scholars (see organology) is that the term "brass instrument" should be defined by the way the sound is made, as above, and not by whether the instrument is actually made of brass. Thus one finds brass instruments made of wood, like the alphorn, the cornett, the serpent and the didgeridoo, while some woodwind instruments are made of brass, like the saxophone.
Families
Modern brass instruments generally come in one of two families:
Valved brass instruments use a set of valves (typically three or four but as many as seven or more in some cases) operated by the player's fingers that introduce additional tubing, or crooks, into the instrument, changing its overall length. This family includes all of the modern brass instruments except the trombone: the trumpet, horn (also called French horn), euphonium, and tuba, as well as the cornet, flugelhorn, tenor horn (alto horn), baritone horn, sousaphone, and the mellophone. As valved instruments are predominant among the brasses today, a more thorough discussion of their workings can be found below. The valves are usually piston valves, but can be rotary valves; the latter are the norm for the horn (except in France) and are also common on the tuba.
Slide brass instruments use a slide to change the length of tubing. The main instruments in this category are the trombone family, though valve trombones are occasionally used, especially in jazz. The trombone family's ancestor, the sackbut, and the folk instrument bazooka are also in the slide family.
There are two other families that have, in general, become functionally obsolete for practical purposes. Instruments of both types, however, are sometimes used for period-instrument performances of Baroque or Classical pieces. In more modern compositions, they are occasionally used for their intonation or tone color.
Natural brass instruments only play notes in the instrument's harmonic series. These include the bugle and older variants of the trumpet and horn. The trumpet was a natural brass instrument prior to about 1795, and the horn before about 1820. In the 18th century, makers developed interchangeable crooks of different lengths, which let players use a single instrument in more than one key. Natural instruments are still played for period performances and some ceremonial functions, and are occasionally found in more modern scores, such as those by Richard Wagner and Richard Strauss.
Keyed or Fingered brass instruments used holes along the body of the instrument, which were covered by fingers or by finger-operated pads (keys) in a similar way to a woodwind instrument. These included the cornett, serpent, ophicleide, keyed bugle and keyed trumpet. They are more difficult to play than valved instruments.
Bore taper and diameter
Brass instruments may also be characterised by two generalizations about geometry of the bore, that is, the tubing between the mouthpiece and the flaring of the tubing into the bell. Those two generalizations are with regard to
the degree of taper or conicity of the bore and
the diameter of the bore with respect to its length.
Cylindrical vs. conical bore
While all modern valved and slide brass instruments consist in part of conical and in part of cylindrical tubing, they are divided as follows:
Cylindrical bore brass instruments are those in which approximately constant diameter tubing predominates. Cylindrical bore brass instruments are generally perceived as having a brighter, more penetrating tone quality compared to conical bore brass instruments. The trumpet, and all trombones are cylindrical bore. In particular, the slide design of the trombone necessitates this.
Conical bore brass instruments are those in which tubing of constantly increasing diameter predominates. Conical bore instruments are generally perceived as having a more mellow tone quality than the cylindrical bore brass instruments. The "British brass band" group of instruments fall into this category. This includes the flugelhorn, cornet, tenor horn (alto horn), baritone horn, horn, euphonium and tuba. Some conical bore brass instruments are more conical than others. For example, the flugelhorn differs from the cornet by having a higher percentage of its tubing length conical than does the cornet, in addition to possessing a wider bore than the cornet. In the 1910s and 1920s, the E. A. Couturier company built brass band instruments utilizing a patent for a continuous conical bore without cylindrical portions even for the valves or tuning slide.
Whole-tube vs. half-tube
The resonances of a brass instrument resemble a harmonic series, with the exception of the lowest resonance, which is significantly lower than the fundamental frequency of the series that the other resonances are overtones of. Depending on the instrument and the skill of the player, the missing fundamental of the series can still be played as a pedal tone, which relies mainly on vibration at the overtone frequencies to produce the fundamental pitch. The bore diameter in relation to length determines whether the fundamental tone or the first overtone is the lowest partial practically available to the player in terms of playability and musicality, dividing brass instruments into whole-tube and half-tube instruments. These terms stem from a comparison to organ pipes, which produce the same pitch as the fundamental pedal tone of a brass instrument of equal length.
Whole-tube instruments have larger bores in relation to tubing length, and can play the fundamental tone with ease and precision. The tuba and euphonium are examples of whole-tube brass instruments.
Half-tube instruments have smaller bores in relation to tubing length and cannot easily or accurately play the fundamental tone. The second partial (first overtone) is the lowest note of each tubing length practical to play on half-tube instruments. The trumpet and horn are examples of half-tube brass instruments.
Other brass instruments
The instruments in this list fall for various reasons outside the scope of much of the discussion above regarding families of brass instruments.
Alphorn (wood)
Conch (shell)
Didgeridoo (wood, Australia)
Natural horn (no valves or slides—except tuning crooks in some cases)
Jazzophone
Keyed bugle (keyed brass)
Keyed trumpet (keyed brass)
Serpent (keyed brass)
Ophicleide (keyed brass)
Shofar (animal horn)
Vladimirskiy rozhok (wood, Russia)
Vuvuzela (simple short horn, origins disputed but achieved fame or notoriety through many plastic examples in the 2010 World Cup)
Lur
Valves
Valves are used to change the length of tubing of a brass instrument, allowing the player to reach the notes of various harmonic series. Each valve pressed diverts the air stream through additional tubing, individually or in conjunction with other valves. This lengthens the vibrating air column, thus lowering the fundamental tone and associated harmonic series produced by the instrument. Designs exist, although rare, in which this behaviour is reversed, i.e., pressing a valve removes a length of tubing rather than adding one. One modern example of such an ascending valve is the Yamaha YSL-350C trombone, in which the extra valve tubing is normally engaged to pitch the instrument in B♭, and pressing the thumb lever removes a whole step to pitch the instrument in C. Valves require regular lubrication.
A core standard valve layout based on the action of three valves had become almost universal by (at latest) 1864 as witnessed by Arban's method published in that year. The effect of a particular combination of valves may be seen in the table below. This table is correct for the core three-valve layout on almost any modern valved brass instrument. The most common four-valve layout is a superset of the well-established three-valve layout and is noted in the table, despite the exposition of four-valve and also five-valve systems (the latter used on the tuba) being incomplete in this article.
Tuning
Since valves lower the pitch, a valve that makes a pitch too low (flat) creates an interval wider than desired, while a valve that plays sharp creates an interval narrower than desired. Intonation deficiencies of brass instruments that are independent of the tuning or temperament system are inherent in the physics of the most popular valve design, which uses a small number of valves in combination to avoid redundant and heavy lengths of tubing (this is entirely separate from the slight deficiencies between Western music's dominant equal (even) temperament system and the just (not equal) temperament of the harmonic series itself). Since each lengthening of the tubing has an inversely proportional effect on pitch (Pitch of brass instruments), while pitch perception is logarithmic, there is no way for a simple, uncompensated addition of length to be correct in every combination when compared with the pitches of the open tubing and the other valves.
Absolute tube length
For example, given a length of tubing equaling 100 units of length when open, one may obtain the following tuning discrepancies:
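A minimal Python sketch of that arithmetic follows; it is an illustration rather than a reproduction of any published table, and it assumes that pitch is inversely proportional to tube length and that each single valve is tuned to equal temperament:

    OPEN = 100.0   # length of the open tube, in arbitrary units

    def in_tune_length(semitones):
        # Tube length needed to lower the open pitch by the given number of semitones.
        return OPEN * 2 ** (semitones / 12)

    # Extra tubing added by each valve alone, sized so that the single valve is in tune.
    extra = {1: in_tune_length(2) - OPEN,   # whole step,  about 12.2 units
             2: in_tune_length(1) - OPEN,   # half step,   about 5.9 units
             3: in_tune_length(3) - OPEN}   # minor third, about 18.9 units

    for valves, semitones in [((1, 2), 3), ((2, 3), 4), ((1, 3), 5), ((1, 2, 3), 6)]:
        actual = OPEN + sum(extra[v] for v in valves)
        shortfall = in_tune_length(semitones) - actual   # positive: the combination plays sharp
        print(f"valves {valves}: tubing is {shortfall:.1f} units too short, so the note is sharp")

The shortfall grows with each added valve, which is why the 1–3 and 1–2–3 combinations mentioned below require the most correction.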
Playing notes using valves (notably 1st + 3rd and 1st + 2nd + 3rd) requires compensation to adjust the tuning appropriately, either by the player's lip-and-breath control, via mechanical assistance of some sort, or, in the case of horns, by the position of the stopping hand in the bell. 'T' stands for trigger on a trombone.
Relative tube length
Traditionally the valves lower the pitch of the instrument by adding extra lengths of tubing based on a just tuning:
1st valve: 1/8 of the main tube, making an interval of 9:8, a Pythagorean major second
2nd valve: 1/15 of the main tube, making an interval of 16:15, a just minor second
3rd valve: 1/5 of the main tube, making an interval of 6:5, a just minor third
Combining the valves and the harmonics of the instrument leads to the following ratios and comparisons to 12-tone equal tuning and to a common five-limit tuning in C:
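The comparison can also be made numerically; the sketch below (illustrative only) expresses each valve combination's just length ratio in cents and shows how far it falls from 12-tone equal temperament:

    import math

    def cents(ratio):
        return 1200 * math.log2(ratio)   # 1200 cents per octave

    fraction = {1: 1/8, 2: 1/15, 3: 1/5}   # extra tube length per valve, from the list above

    for valves, et_semitones in [((2,), 1), ((1,), 2), ((3,), 3),
                                 ((1, 2), 3), ((2, 3), 4), ((1, 3), 5), ((1, 2, 3), 6)]:
        drop = cents(1 + sum(fraction[v] for v in valves))   # pitch drop, since pitch ~ 1/length
        deviation = drop - 100 * et_semitones                # negative: lowers too little, i.e. sharp
        print(f"valves {valves}: lowers {drop:5.1f} cents, {deviation:+6.1f} cents vs equal temperament")

Single valves and the 1–2 combination lower the pitch slightly more than equal temperament requires, while the 1–3 and 1–2–3 combinations lower it too little and therefore sound sharp.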
Tuning compensation
The additional tubing for each valve usually features a short tuning slide of its own for fine adjustment of the valve's tuning, except when it is too short to make this practicable. For the first and third valves this is often designed to be adjusted as the instrument is played, to account for the deficiencies in the valve system.
In most trumpets and cornets, the compensation must be provided by extending the third valve slide with the third or fourth finger, and the first valve slide with the left-hand thumb (see Trigger or throw below). This is used to lower the pitch of the 1–3 and 1–2–3 valve combinations. On the trumpet and cornet, these valve combinations correspond to low D, low C♯, low G, and low F♯, so chromatically, to stay in tune, one must use this method.
In instruments with a fourth valve, such as tubas, euphoniums, piccolo trumpets, etc. that valve lowers the pitch by a perfect fourth; this is used to compensate for the sharpness of the valve combinations 1–3 and 1–2–3 (4 replaces 1–3, 2–4 replaces 1–2–3). All three normal valves may be used in addition to the fourth to increase the instrument's range downwards by a perfect fourth, although with increasingly severe intonation problems.
When four-valved models without any kind of compensation play in the corresponding register, the sharpness becomes so severe that players must finger the note a half-step below the one they are trying to play. This eliminates the note a half-step above their open fundamental.
Manufacturers of low brass instruments may choose one or a combination of four basic approaches to compensate for the tuning difficulties, whose respective merits are subject to debate:
Compensation system
In the Compensation system, each of the first two (or three) valves has an additional set of tubing extending from the back of the valve. When the third (or fourth) valve is depressed in combination with another one, the air is routed through both the usual set of tubing plus the extra one, so that the pitch is lowered by an appropriate amount. This allows compensating instruments to play with accurate intonation in the octave below their open second partial, which is critical for tubas and euphoniums in much of their repertoire.
The compensating system was applied to horns to serve a different purpose. It was used to allow a double horn in F and B♭ to ease playing difficulties in the high register. In contrast to the system in use in tubas and euphoniums, the default 'side' of the horn is the longer F horn, with secondary lengths of tubing coming into play when the first, second or third valves are pressed; pressing the thumb valve takes these secondary valve slides and the extra length of main tubing out of play to produce a shorter B♭ horn. A later "full double" design has completely separate valve section tubing for the two sides, and is considered superior, although rather heavier in weight.
Additional valves
Initially, compensated instruments tended to sound stuffy and blow less freely due to the air being doubled back through the main valves. In early designs, this led to sharp bends in the tubing and other obstructions of the air-flow. Some manufacturers therefore preferred adding more 'straight' valves instead, which for example could be pitched a little lower than the 2nd and 1st valves and were intended to be used instead of these in the respective valve combinations. While no longer featured in euphoniums for decades, many professional tubas are still built like this, with five valves being common on CC- and BB-tubas and five or six valves on F-tubas.
Compensating double horns can also suffer from the stuffiness resulting from the air being passed through the valve section twice, but as this really only affects the longer F side, a compensating double can be very useful for a 1st or 3rd horn player, who uses the F side less.
Additional sets of slides on each valve
Another approach was the addition of two sets of slides for different parts of the range. Some euphoniums and tubas were built like this, but today, this approach has become highly exotic for all instruments except horns, where it is the norm, usually in a double, sometimes even triple configuration.
Trigger or throw
Some valved brass instruments provide triggers or throws that manually lengthen (or, less commonly, shorten) the main tuning slide, a valve slide, or the main tubing. These mechanisms alter the pitch of notes that are naturally sharp in a specific register of the instrument, or shift the instrument to another playing range. Triggers and throws permit speedy adjustment while playing.
Trigger is used in two senses:
A trigger can be a mechanical lever that lengthens a slide when pressed in a contrary direction. Triggers are sprung in such a way that they return the slide to its original position when released.
The term "trigger" also describes a device engaging a valve to lengthen the main tubing, e.g. lowering the key of certain trombones from B to F.
A throw is a simple metal grip for the player's finger or thumb, attached to a valve slide. The general term "throw" can describe a u-hook, a saddle (u-shaped grips), or a ring (ring-shape grip) in which a player's finger or thumb rests. A player extends a finger or thumb to lengthen a slide, and retracts the finger to return the slide to its original position.
Examples of instruments that use triggers or throws
Trumpet or cornet
Triggers or throws are sometimes found on the first valve slide. They are operated by the player's thumb and are used to adjust a large range of notes using the first valve, most notably the player's written top-line F, the A directly above that, and the B♭ above that. Other notes that require the first valve slide, but are not as problematic without it, include the first-line E, the F above that, the A above that, and the third-line B♭.
Triggers or throws are often found on the third valve slide. They are operated by the player's fourth finger, and are used to adjust the lower D and C♯. Trumpets typically use throws, whilst cornets may have a throw or trigger.
Trombone
Trombone triggers are primarily but not exclusively installed on the F-trigger, bass, and contrabass trombones to alter the length of tubing, thus making certain ranges and pitches more accessible.
Euphoniums
A euphonium occasionally has a trigger on valves other than 2 (especially 3), although many professional quality euphoniums, and indeed other brass band instruments, have a trigger for the main tuning slide.
Mechanism
The two major types of valve mechanisms are rotary valves and piston valves. The first piston valve instruments were developed just after the start of the 19th century. The Stölzel valve (invented by Heinrich Stölzel in 1814) was an early variety. In the mid 19th century the Vienna valve was an improved design. However many professional musicians preferred rotary valves for quicker, more reliable action, until better designs of piston valves were mass manufactured towards the end of the 19th century. Since the early decades of the 20th century, piston valves have been the most common on brass instruments except for the orchestral horn and the tuba. See also the article Brass Instrument Valves.
Sound production in brass instruments
Because the player of a brass instrument has direct control of the prime vibrator (the lips), brass instruments exploit the player's ability to select the harmonic at which the instrument's column of air vibrates. By making the instrument about twice as long as the equivalent woodwind instrument and starting with the second harmonic, players can get a good range of notes simply by varying the tension of their lips (see embouchure).
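As a rough illustration of the available notes, the Python sketch below lists the first eight partials above one tube length and the nearest equal-tempered note names; the fundamental frequency is an assumed round value, approximately the pedal tone of a B♭ trumpet, and is not taken from this article:

    import math

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def nearest_note(freq, a4=440.0):
        # Map a frequency to the nearest equal-tempered note name (sharps only) and octave.
        n = round(12 * math.log2(freq / a4)) + 57   # semitones above C0
        return f"{NOTE_NAMES[n % 12]}{n // 12}"

    fundamental = 116.5   # assumed value: roughly the pedal B-flat (B♭2) of a B♭ trumpet
    for partial in range(1, 9):
        f = fundamental * partial
        print(f"partial {partial}: {f:7.1f} Hz ~ {nearest_note(f)}")

Most partials fall close to usable chord tones, while the seventh lands nearest A♭ (printed here as G#5) but is noticeably flat of it, which is why it is generally avoided in practice.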
Most brass instruments are fitted with a removable mouthpiece. Different shapes, sizes and styles of mouthpiece may be used to suit different embouchures, or to more easily produce certain tonal characteristics. Trumpets, trombones, and tubas are characteristically fitted with a cupped mouthpiece, while horns are fitted with a conical mouthpiece.
One interesting difference between a woodwind instrument and a brass instrument is that woodwind instruments are non-directional. This means that the sound produced propagates in all directions with approximately equal volume. Brass instruments, on the other hand, are highly directional, with most of the sound produced traveling straight outward from the bell. This difference makes it significantly more difficult to record a brass instrument accurately. It also plays a major role in some performance situations, such as in marching bands.
Manufacture
Metal
Traditionally the instruments are normally made of brass, polished and then lacquered to prevent corrosion. Some higher quality and higher cost instruments use gold or silver plating to prevent corrosion.
Alternatives to brass include other alloys containing significant amounts of copper or silver. These alloys are biostatic due to the oligodynamic effect, and thus suppress growth of molds, fungi or bacteria. Brass instruments constructed from stainless steel or aluminium have good sound quality but are rapidly colonized by microorganisms and become unpleasant to play.
Most higher quality instruments are designed to prevent or reduce galvanic corrosion between any steel in the valves and springs, and the brass of the tubing. This may take the form of desiccant design, to keep the valves dry, sacrificial zincs, replaceable valve cores and springs, plastic insulating washers, or nonconductive or noble materials for the valve cores and springs. Some instruments use several such features.
The process of making the large open end (bell) of a brass instrument is called metal beating. In making the bell of, for example, a trumpet, a person lays out a pattern and shapes sheet metal into a bell-shape using templates, machine tools, handtools, and blueprints. The maker cuts out the bell blank, using hand or power shears. He hammers the blank over a bell-shaped mandrel, and butts the seam, using a notching tool. The seam is brazed, using a torch and smoothed using a hammer or file. A draw bench or arbor press equipped with expandable lead plug is used to shape and smooth the bell and bell neck over a mandrel. A lathe is used to spin the bell head and to form a bead at the edge of bell head. Previously shaped bell necks are annealed, using a hand torch to soften the metal for further bending. Scratches are removed from the bell using abrasive-coated cloth.
Other materials
A few specialty instruments are made from wood.
Instruments made mostly from plastic emerged in the 2010s as a cheaper and more robust alternative to brass. Plastic instruments can come in almost any colour. The sound plastic instruments produce is different from that of brass instruments, whether lacquered, gold-plated, or silver-plated, because plastic is much less dense than those metals, which causes vibrations to occur differently. While originally seen as a gimmick, these plastic models have found increasing popularity during the last decade and are now viewed as practice tools that make for more convenient travel as well as a cheaper option for beginning players.
Ensembles
Brass instruments are one of the major classical instrument families and are played across a range of musical ensembles.
Orchestras include a varying number of brass instruments depending on music style and era, typically:
two or three trumpets
four to eight French horns
two or three tenor trombones
one bass trombone
one tuba
Baroque and classical period orchestras may include valveless trumpets or bugles, or have valved trumpets/cornets playing these parts, and they may include valveless horns, or have valved horns playing these parts.
Romantic, modern, and contemporary orchestras may include larger numbers of brass including more exotic instruments.
Concert bands generally have a larger brass section than an orchestra, typically:
four to six trumpets or cornets
four French horns
two to four tenor trombones
one or two bass trombones
two or three euphoniums or baritone horns
two or three tubas
British brass bands are made up entirely of brass, mostly conical bore instruments. Typical membership is:
one soprano cornet
nine cornets
one flugelhorn
three tenor (alto) horns
two baritone horns
two tenor trombones
one bass trombone
two euphoniums
two E♭ tubas
two B♭ tubas
Quintets are common small brass ensembles; a quintet typically contains:
two trumpets
one horn
one trombone
one tuba or bass trombone
Big bands and other jazz bands commonly contain cylindrical bore brass instruments.
A big band typically includes:
four trumpets
four tenor trombones
one bass trombone (in place of one of the tenor trombones)
Smaller jazz ensembles may include a single trumpet or trombone soloist.
Mexican bandas have:
three trumpets
three trombones
two alto horns, also called "charchetas" and "saxores"
one sousaphone, called "tuba"
Single brass instruments are also often used to accompany other instruments or ensembles such as an organ or a choir.
See also
Brass instrument valve
Drum and bugle corps (modern)
Haas (brass instrument makers)
Horn section
Pitch of brass instruments
Wind instruments
References
External links
Brass Instruments Information on individual Brass Instruments
The traditional manufacture of brass instruments, a 1991 video (RealPlayer format) featuring maker Robert Barclay; from the web site of the Canadian Museum of Civilization.
The Orchestra: A User's Manual – Brass
Brassmusic.Ru – Russian Brass Community
Acoustics of Brass Instruments from Music Acoustics at the University of New South Wales
Early Valve designs, John Ericson
3-Valve and 4-Valve Compensating Systems, David Werden
Metallic objects | Brass instrument | ["Physics"] | 5,010 | ["Metallic objects", "Physical objects", "Matter"] |
4,944 | https://en.wikipedia.org/wiki/Naive%20set%20theory | Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics.
Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics.
Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments.
Method
A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. Such a theory treats sets as platonic absolute objects. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself.
The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik.
Naive set theory may refer to several very distinct notions. It may refer to
Informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos.
Early or later versions of Georg Cantor's theory and other informal systems.
Decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind.
Paradoxes
The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets.
Cantor's theory
Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption—that any property may be used to form a set—using for " is a cardinal number". Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand Russell actually addressed when he presented his paradox, not necessarily a theory Cantor (who, as mentioned, was aware of several paradoxes) presumably had in mind.
Axiomatic theories
Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when.
Consistency
A naive set theory is not necessarily inconsistent, if it correctly specifies the sets allowed to be considered. This can be done by the means of definitions, which are implicit axioms. It is possible to state all the axioms explicitly, as in the case of Halmos' Naive Set Theory, which is actually an informal presentation of the usual axiomatic Zermelo–Fraenkel set theory. It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system.
Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows from Gödel's incompleteness theorems that a sufficiently complicated first order logic system (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do exclude some paradoxes, like Russell's paradox. Based on Gödel's theorem, it is just not known – and never can be – if there are no paradoxes at all in these theories or in any first-order set theory.
The term naive set theory is still today also used in some literature to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory.
Utility
The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. References to particular axioms typically then occur only when demanded by tradition, e.g. the axiom of choice is often mentioned when used. Likewise, formal proofs occur only when warranted by exceptional circumstances. This informal usage of axiomatic set theory can have (depending on notation) precisely the appearance of naive set theory as outlined below. It is considerably easier to read and write (in the formulation of most statements, proofs, and lines of discussion) and is less error-prone than a strictly formal approach.
Sets, membership and equality
In naive set theory, a set is described as a well-defined collection of objects. These objects are called the elements or members of the set. Objects can be anything: numbers, people, other sets, etc. For instance, 4 is a member of the set of all even integers. Clearly, the set of even numbers is infinitely large; there is no requirement that a set be finite.
The definition of sets goes back to Georg Cantor. He wrote in his 1915 article Beiträge zur Begründung der transfiniten Mengenlehre:
Note on consistency
It does not follow from this definition how sets can be formed, and what operations on sets again will produce a set. The term "well-defined" in "well-defined collection of objects" cannot, by itself, guarantee the consistency and unambiguity of what exactly constitutes and what does not constitute a set. Attempting to achieve this would be the realm of axiomatic set theory or of axiomatic class theory.
The problem, in this context, with informally formulated set theories, not derived from (and implying) any particular axiomatic theory, is that there may be several widely differing formalized versions, that have both different sets and different rules for how new sets may be formed, that all conform to the original informal definition. For example, Cantor's verbatim definition allows for considerable freedom in what constitutes a set. On the other hand, it is unlikely that Cantor was particularly interested in sets containing cats and dogs, but rather only in sets containing purely mathematical objects. An example of such a class of sets could be the von Neumann universe. But even when fixing the class of sets under consideration, it is not always clear which rules for set formation are allowed without introducing paradoxes.
For the purpose of fixing the discussion below, the term "well-defined" should instead be interpreted as an intention, with either implicit or explicit rules (axioms or definitions), to rule out inconsistencies. The purpose is to keep the often deep and difficult issues of consistency away from the, usually simpler, context at hand. An explicit ruling out of all conceivable inconsistencies (paradoxes) cannot be achieved for an axiomatic set theory anyway, due to Gödel's second incompleteness theorem, so this does not at all hamper the utility of naive set theory as compared to axiomatic set theory in the simple contexts considered below. It merely simplifies the discussion. Consistency is henceforth taken for granted unless explicitly mentioned.
Membership
If x is a member of a set A, then it is also said that x belongs to A, or that x is in A. This is denoted by x ∈ A. The symbol ∈ is a derivation from the lowercase Greek letter epsilon, "ε", introduced by Giuseppe Peano in 1889 and is the first letter of the word ἐστί (means "is"). The symbol ∉ is often used to write x ∉ A, meaning "x is not in A".
Equality
Two sets A and B are defined to be equal when they have precisely the same elements, that is, if every element of A is an element of B and every element of B is an element of A. (See axiom of extensionality.) Thus a set is completely determined by its elements; the description is immaterial. For example, the set with elements 2, 3, and 5 is equal to the set of all prime numbers less than 6.
If the sets A and B are equal, this is denoted symbolically as A = B (as usual).
Empty set
The empty set, denoted as and sometimes , is a set with no members at all. Because a set is determined completely by its elements, there can be only one empty set. (See axiom of empty set.) Although the empty set has no members, it can be a member of other sets. Thus ∅ ≠ {∅}, because the former has no members and the latter has one member.
Specifying sets
The simplest way to describe a set is to list its elements between curly braces (known as defining a set extensionally). Thus denotes the set whose only elements are and .
(See axiom of pairing.)
Note the following points:
The order of elements is immaterial; for example, {1, 2} = {2, 1}.
Repetition (multiplicity) of elements is irrelevant; for example, {1, 1, 2} = {1, 2}.
(These are consequences of the definition of equality in the previous section.)
This notation can be informally abused by saying something like {dogs} to indicate the set of all dogs, but this example would usually be read by mathematicians as "the set containing the single element dogs".
An extreme (but correct) example of this notation is , which denotes the empty set.
The notation , or sometimes , is used to denote the set containing all objects for which the condition holds (known as defining a set intensionally).
For example, {x : x is a real number} denotes the set of real numbers, and {x : x has blonde hair} denotes the set of everything with blonde hair.
This notation is called set-builder notation (or "set comprehension", particularly in the context of Functional programming).
Some variants of set builder notation are:
denotes the set of all that are already members of such that the condition holds for . For example, if is the set of integers, then is the set of all even integers. (See axiom of specification.)
denotes the set of all objects obtained by putting members of the set into the formula . For example, is again the set of all even integers. (See axiom of replacement.)
is the most general form of set builder notation. For example, {x's owner : x is a dog} is the set of all dog owners.
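Python's set comprehensions mirror this notation closely; the following sketch is purely illustrative, with a finite range standing in for the integers (the variable names are not from the article):

<syntaxhighlight lang="python">
Z = range(-10, 11)                        # a finite stand-in for "the integers"

evens   = {x for x in Z if x % 2 == 0}    # { x in Z : x is even }   (specification)
doubled = {2 * x for x in Z}              # { 2x : x in Z }          (replacement)

assert evens == {x for x in doubled if -10 <= x <= 10}
</syntaxhighlight>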
Subsets
Given two sets A and B, A is a subset of B if every element of A is also an element of B.
In particular, each set B is a subset of itself; a subset of B that is not equal to B is called a proper subset.
If A is a subset of B, then one can also say that B is a superset of A, that A is contained in B, or that B contains A. In symbols, means that A is a subset of B, and means that B is a superset of A.
Some authors use the symbols ⊂ and ⊃ for subsets, and others use these symbols only for proper subsets. For clarity, one can explicitly use the symbols ⊊ and ⊋ to indicate non-equality.
As an illustration, let R be the set of real numbers, let Z be the set of integers, let O be the set of odd integers, and let P be the set of current or former U.S. Presidents.
Then O is a subset of Z, Z is a subset of R, and (hence) O is a subset of R, where in all cases subset may even be read as proper subset.
Not all sets are comparable in this way. For example, it is not the case either that R is a subset of P nor that P is a subset of R.
It follows immediately from the definition of equality of sets above that, given two sets A and B, if and only if and . In fact this is often given as the definition of equality. Usually when trying to prove that two sets are equal, one aims to show these two inclusions. The empty set is a subset of every set (the statement that all elements of the empty set are also members of any set A is vacuously true).
The set of all subsets of a given set A is called the power set of A and is denoted by or ; the "" is sometimes in a script font: . If the set A has n elements, then will have elements.
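As an informal illustration of subsets and the power set, here is a small Python sketch (the helper power_set and the example set are illustrative only; frozensets are used so that sets can themselves be elements of a set):

<syntaxhighlight lang="python">
from itertools import chain, combinations

def power_set(s):
    """Return the set of all subsets of s, each as a frozenset."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

A = {2, 3, 5}
P = power_set(A)
assert len(P) == 2 ** len(A)        # a 3-element set has 2^3 = 8 subsets
assert frozenset() in P             # the empty set is a subset of every set
assert all(s <= A for s in P)       # every member of P is a subset of A
assert {2, 3} < A                   # a proper subset of A
</syntaxhighlight>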
Universal sets and absolute complements
In certain contexts, one may consider all sets under consideration as being subsets of some given universal set.
For instance, when investigating properties of the real numbers R (and subsets of R), R may be taken as the universal set. A true universal set is not included in standard set theory (see Paradoxes below), but is included in some non-standard set theories.
Given a universal set U and a subset A of U, the complement of A (in U) is defined as
AC = {x ∈ U : x ∉ A}.
In other words, AC ("A-complement"; sometimes simply A′, "A-prime") is the set of all members of U which are not members of A.
Thus with R, Z and O defined as in the section on subsets, if Z is the universal set, then OC is the set of even integers, while if R is the universal set, then OC is the set of all real numbers that are either even integers or not integers at all.
Unions, intersections, and relative complements
Given two sets A and B, their union is the set consisting of all objects which are elements of A or of B or of both (see axiom of union). It is denoted by .
The intersection of A and B is the set of all objects which are both in A and in B. It is denoted by .
Finally, the relative complement of B relative to A, also known as the set theoretic difference of A and B, is the set of all objects that belong to A but not to B. It is written as or .
Symbolically, these are respectively
;
;
.
The set B doesn't have to be a subset of A for to make sense; this is the difference between the relative complement and the absolute complement () from the previous section.
To illustrate these ideas, let A be the set of left-handed people, and let B be the set of people with blond hair. Then is the set of all left-handed blond-haired people, while is the set of all people who are left-handed or blond-haired or both. , on the other hand, is the set of all people that are left-handed but not blond-haired, while is the set of all people who have blond hair but aren't left-handed.
Now let E be the set of all human beings, and let F be the set of all living things over 1000 years old. What is in this case? No living human being is over 1000 years old, so must be the empty set {}.
For any set A, the power set is a Boolean algebra under the operations of union and intersection.
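Python's built-in set type provides these three operations directly; in the following sketch the example sets merely stand in for A and B above:

<syntaxhighlight lang="python">
A = {1, 2, 3, 4}                 # e.g. "left-handed people", encoded as labels
B = {3, 4, 5, 6}                 # e.g. "blond-haired people"

union        = A | B             # objects in A or B or both
intersection = A & B             # objects in both A and B
difference   = A - B             # relative complement: in A but not in B

assert union == {1, 2, 3, 4, 5, 6}
assert intersection == {3, 4}
assert difference == {1, 2}
assert (A - B) | (A & B) == A    # A splits into the parts inside and outside B
</syntaxhighlight>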
Ordered pairs and Cartesian products
Intuitively, an ordered pair is simply a collection of two objects such that one can be distinguished as the first element and the other as the second element, and having the fundamental property that, two ordered pairs are equal if and only if their first elements are equal and their second elements are equal.
Formally, an ordered pair with first coordinate a, and second coordinate b, usually denoted by (a, b), can be defined as the set {{a}, {a, b}}.
It follows that, two ordered pairs (a,b) and (c,d) are equal if and only if and .
Alternatively, an ordered pair can be formally thought of as a set {a,b} with a total order.
(The notation (a, b) is also used to denote an open interval on the real number line, but the context should make it clear which meaning is intended. Otherwise, the notation ]a, b[ may be used to denote the open interval whereas (a, b) is used for the ordered pair).
If A and B are sets, then the Cartesian product (or simply product) is defined to be:
That is, is the set of all ordered pairs whose first coordinate is an element of A and whose second coordinate is an element of B.
This definition may be extended to a set of ordered triples, and more generally to sets of ordered n-tuples for any positive integer n.
It is even possible to define infinite Cartesian products, but this requires a more recondite definition of the product.
Cartesian products were first developed by René Descartes in the context of analytic geometry. If R denotes the set of all real numbers, then represents the Euclidean plane and represents three-dimensional Euclidean space.
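A brief Python sketch of the product construction, with tuples serving as ordered pairs (nothing here depends on the set-theoretic encoding of pairs, and the example sets are arbitrary):

<syntaxhighlight lang="python">
from itertools import product

A = {1, 2}
B = {'x', 'y', 'z'}

AxB = set(product(A, B))            # all ordered pairs (a, b) with a in A and b in B
assert len(AxB) == len(A) * len(B)
assert (1, 'x') in AxB and ('x', 1) not in AxB    # order matters

# Ordered n-tuples generalize the construction in the same way.
AxAxA = set(product(A, repeat=3))
assert len(AxAxA) == len(A) ** 3
</syntaxhighlight>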
Some important sets
There are some ubiquitous sets for which the notation is almost universal. Some of these are listed below. In the list, a, b, and c refer to natural numbers, and r and s are real numbers.
Natural numbers are used for counting. A blackboard bold capital N () often represents this set.
Integers appear as solutions for x in equations like x + a = b. A blackboard bold capital Z () often represents this set (from the German Zahlen, meaning numbers).
Rational numbers appear as solutions to equations like a + bx = c. A blackboard bold capital Q () often represents this set (for quotient, because R is used for the set of real numbers).
Algebraic numbers appear as solutions to polynomial equations (with integer coefficients) and may involve radicals (including ) and certain other irrational numbers. A Q with an overline () often represents this set. The overline denotes the operation of algebraic closure.
Real numbers represent the "real line" and include all numbers that can be approximated by rationals. These numbers may be rational or algebraic but may also be transcendental numbers, which cannot appear as solutions to polynomial equations with rational coefficients. A blackboard bold capital R () often represents this set.
Complex numbers are sums of a real and an imaginary number: . Here either or (or both) can be zero; thus, the set of real numbers and the set of strictly imaginary numbers are subsets of the set of complex numbers, which form an algebraic closure for the set of real numbers, meaning that every polynomial with coefficients in has at least one root in this set. A blackboard bold capital C () often represents this set. Note that since a number can be identified with a point in the plane, is basically "the same" as the Cartesian product ("the same" meaning that any point in one determines a unique point in the other and for the result of calculations, it doesn't matter which one is used for the calculation, as long as multiplication rule is appropriate for ).
Paradoxes in early set theory
The unrestricted formation principle of sets referred to as the axiom schema of unrestricted comprehension,
is the source of several early appearing paradoxes:
led, in the year 1897, to the Burali-Forti paradox, the first published antinomy.
produced Cantor's paradox in 1897.
yielded Cantor's second antinomy in the year 1899. Here the property is true for all , whatever may be, so would be a universal set, containing everything.
, i.e. the set of all sets that do not contain themselves as elements, gave Russell's paradox in 1902.
If the axiom schema of unrestricted comprehension is weakened to the axiom schema of specification or axiom schema of separation,
then all the above paradoxes disappear. There is a corollary. With the axiom schema of separation as an axiom of the theory, it follows, as a theorem of the theory:
Or, more spectacularly (Halmos' phrasing): There is no universe. Proof: Suppose that it exists and call it . Now apply the axiom schema of separation with and for use . This leads to Russell's paradox again. Hence cannot exist in this theory.
Related to the above constructions is formation of the set
,
where the statement following the implication certainly is false. It follows, from the definition of , using the usual inference rules (and some afterthought when reading the proof in the linked article below) both that and holds, hence . This is Curry's paradox.
It is (perhaps surprisingly) not the possibility of that is problematic. It is again the axiom schema of unrestricted comprehension allowing for . With the axiom schema of specification instead of unrestricted comprehension, the conclusion does not hold and hence is not a logical consequence.
Nonetheless, the possibility of is often removed explicitly or, e.g. in ZFC, implicitly, by demanding the axiom of regularity to hold. One consequence of it is
or, in other words, no set is an element of itself.
The axiom schema of separation is simply too weak (while unrestricted comprehension is a very strong axiom—too strong for set theory) to develop set theory with its usual operations and constructions outlined above. The axiom of regularity is of a restrictive nature as well. Therefore, one is led to the formulation of other axioms to guarantee the existence of enough sets to form a set theory. Some of these have been described informally above and many others are possible. Not all conceivable axioms can be combined freely into consistent theories. For example, the axiom of choice of ZFC is incompatible with the conceivable "every set of reals is Lebesgue measurable". The former implies the latter is false.
See also
Algebra of sets
Axiomatic set theory
Internal set theory
List of set identities and relations
Set theory
Set (mathematics)
Partially ordered set
Notes
References
Bourbaki, N., Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany, 1994.
; see also pdf version
Devlin, K.J., The Joy of Sets: Fundamentals of Contemporary Set Theory, 2nd edition, Springer-Verlag, New York, NY, 1993.
Frápolli, María J., 1991, "Is Cantorian set theory an iterative conception of set?". Modern Logic, v. 1 n. 4, 1991, 302–318.
Kelley, J.L., General Topology, Van Nostrand Reinhold, New York, NY, 1955.
van Heijenoort, J., From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, 1967. Reprinted with corrections, 1977.
External links
Beginnings of set theory page at St. Andrews
Earliest Known Uses of Some of the Words of Mathematics (S)
Set theory
Systems of set theory | Naive set theory | [
"Mathematics"
] | 4,900 | [
"Mathematical logic",
"Set theory"
] |
4,947 | https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity | In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem:
Here the greatest common divisor of and is taken to be . The integers and are called Bézout coefficients for ; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers one of the two pairs such that and ; equality occurs only if one of and is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as 3 = 15 × (−9) + 69 × 2, with Bézout coefficients −9 and 2.
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
Structure of solutions
If and are not both zero and one pair of Bézout coefficients has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
where is an arbitrary integer, is the greatest common divisor of and , and the fractions simplify to integers.
If and are both nonzero and none of them divides the other, then exactly two of the pairs of Bézout coefficients satisfy
If and are both positive, one has and for one of these pairs, and and for the other. If is a divisor of (including the case ), then one pair of Bézout coefficients is .
This relies on a property of Euclidean division: given two non-zero integers and , if does not divide , there is exactly one pair such that and , and another one such that and .
The two pairs of small Bézout's coefficients are obtained from the given one by choosing for in the above formula either of the two integers next to .
The extended Euclidean algorithm always produces one of these two minimal pairs.
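A minimal Python sketch of the extended Euclidean algorithm, which returns one pair of Bézout coefficients (the function name is illustrative, not from a library); the last lines show how further pairs arise by the shifting described above:

<syntaxhighlight lang="python">
def extended_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) and a*x + b*y == d."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

d, x, y = extended_gcd(15, 69)
assert (d, 15 * x + 69 * y) == (3, 3)      # reproduces 3 = 15*(-9) + 69*2

# Every other pair of Bézout coefficients has the form (x - k*b/d, y + k*a/d).
k = 1
assert 15 * (x - k * 69 // d) + 69 * (y + k * 15 // d) == d
</syntaxhighlight>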
Example
Let and , then . The following Bézout's identities hold, with the Bézout coefficients written in red for the minimal pairs and in blue for the other ones.
If is the original pair of Bézout coefficients, then yields the minimal pairs via , respectively ; that is, , and .
Existence proof
Given any nonzero integers and , let . The set is nonempty since it contains either or (with and ). Since is a nonempty set of positive integers, it has a minimum element , by the well-ordering principle. To prove that is the greatest common divisor of and , it must be proven that is a common divisor of and , and that for any other common divisor , one has .
The Euclidean division of by may be written as
The remainder is in , because
Thus is of the form , and hence . However, , and is the smallest positive integer in : the remainder can therefore not be in , making necessarily 0. This implies that is a divisor of . Similarly is also a divisor of , and therefore is a common divisor of and .
Now, let be any common divisor of and ; that is, there exist and such that and . One has thus
That is, is a divisor of . Since , this implies .
Generalizations
For three or more integers
Bézout's identity can be extended to more than two integers: if
then there are integers such that
has the following properties:
is the smallest positive integer of this form
every number of this form is a multiple of
For polynomials
Bézout's identity does not always hold for polynomials. For example, when working in the ring of polynomials with integer coefficients: the greatest common divisor of and is x, but there do not exist any integer-coefficient polynomials and satisfying .
However, Bézout's identity works for univariate polynomials over a field in exactly the same way as for integers. In particular, the Bézout coefficients and the greatest common divisor may be computed with the extended Euclidean algorithm.
As the common roots of two polynomials are the roots of their greatest common divisor, Bézout's identity and fundamental theorem of algebra imply the following result:
The generalization of this result to any number of polynomials and indeterminates is Hilbert's Nullstellensatz.
For principal ideal domains
As noted in the introduction, Bézout's identity works not only in the ring of integers, but also in any other principal ideal domain (PID).
That is, if is a PID, and and are elements of , and is a greatest common divisor of and ,
then there are elements and in such that . The reason is that the ideal is principal and equal to .
An integral domain in which Bézout's identity holds is called a Bézout domain.
History and attribution
The French mathematician Étienne Bézout (1730–1783) proved this identity for polynomials. The statement for integers can be found already in the work of an earlier French mathematician, Claude Gaspard Bachet de Méziriac (1581–1638).
Andrew Granville traced the association of Bézout's name with the identity to Bourbaki, arguing that it is a misattribution since the identity is implicit in Euclid's Elements.
See also
, an analogue of Bézout's identity for homogeneous polynomials in three indeterminates
Notes
External links
Online calculator for Bézout's identity.
Articles containing proofs
Diophantine equations
Lemmas in number theory | Bézout's identity | [
"Mathematics"
] | 1,208 | [
"Mathematical objects",
"Equations",
"Diophantine equations",
"Theorems in number theory",
"Articles containing proofs",
"Lemmas in number theory",
"Lemmas",
"Number theory"
] |
4,964 | https://en.wikipedia.org/wiki/Bernoulli%20number | In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.
The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by and ; they differ only for , where and . For every odd , . For every even , is negative if is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials , with and .
The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine; it is disputed whether Lovelace or Babbage developed the algorithm. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.
Notation
The superscript used in this article distinguishes the two sign conventions for Bernoulli numbers. Only the term is affected:
with ( / ) is the sign convention prescribed by NIST and most modern textbooks.
with ( / ) was used in the older literature, and (since 2022) by Donald Knuth following Peter Luschny's "Bernoulli Manifesto".
In the formulas below, one can switch from one sign convention to the other with the relation , or, for integer indices of 2 or greater, simply ignore it.
Since for all odd , and many formulas only involve even-index Bernoulli numbers, a few authors write "" instead of . This article does not follow that notation.
History
Early history
The Bernoulli numbers are rooted in the early history of the computation of sums of integer powers, which have been of interest to mathematicians since antiquity.
Methods to calculate the sum of the first positive integers, the sum of the squares and of the cubes of the first positive integers were known, but there were no real 'formulas', only descriptions given entirely in words. Among the great mathematicians of antiquity to consider this problem were Pythagoras (c. 572–497 BCE, Greece), Archimedes (287–212 BCE, Italy), Aryabhata (b. 476, India), Abu Bakr al-Karaji (d. 1019, Persia) and Abu Ali al-Hasan ibn al-Hasan ibn al-Haytham (965–1039, Iraq).
During the late sixteenth and early seventeenth centuries mathematicians made significant progress. In the West Thomas Harriot (1560–1621) of England, Johann Faulhaber (1580–1635) of Germany, Pierre de Fermat (1601–1665) and fellow French mathematician Blaise Pascal (1623–1662) all played important roles.
Thomas Harriot seems to have been the first to derive and write formulas for sums of powers using symbolic notation, but even he calculated only up to the sum of the fourth powers. Johann Faulhaber gave formulas for sums of powers up to the 17th power in his 1631 Academia Algebrae, far higher than anyone before him, but he did not give a general formula.
Blaise Pascal in 1654 proved Pascal's identity relating to the sums of the th powers of the first positive integers for .
The Swiss mathematician Jakob Bernoulli (1654–1705) was the first to realize the existence of a single sequence of constants which provide a uniform formula for all sums of powers.
The joy Bernoulli experienced when he hit upon the pattern needed to compute quickly and easily the coefficients of his formula for the sum of the th powers for any positive integer can be seen from his comment. He wrote:
"With the help of this table, it took me less than half of a quarter of an hour to find that the tenth powers of the first 1000 numbers being added together will yield the sum 91,409,924,241,424,243,424,241,924,242,500."
Bernoulli's result was published posthumously in Ars Conjectandi in 1713. Seki Takakazu independently discovered the Bernoulli numbers and his result was published a year earlier, also posthumously, in 1712. However, Seki did not present his method as a formula based on a sequence of constants.
Bernoulli's formula for sums of powers is the most useful and generalizable formulation to date. The coefficients in Bernoulli's formula are now called Bernoulli numbers, following a suggestion of Abraham de Moivre.
Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who found remarkable ways to calculate sum of powers but never stated Bernoulli's formula. According to Knuth a rigorous proof of Faulhaber's formula was first published by Carl Jacobi in 1834. Knuth's in-depth study of Faulhaber's formula concludes (the nonstandard notation on the LHS is explained further on):
"Faulhaber never discovered the Bernoulli numbers; i.e., he never realized that a single sequence of constants ... would provide a uniform
for all sums of powers. He never mentioned, for example, the fact that almost half of the coefficients turned out to be zero after he had converted his formulas for from polynomials in to polynomials in ."
In the above Knuth meant ; instead using the formula avoids subtraction:
Reconstruction of "Summae Potestatum"
The Bernoulli numbers (n)/(n) were introduced by Jakob Bernoulli in the book Ars Conjectandi published posthumously in 1713 page 97. The main formula can be seen in the second half of the corresponding facsimile. The constant coefficients denoted , , and by Bernoulli are mapped to the notation which is now prevalent as , , , . The expression means – the small dots are used as grouping symbols. Using today's terminology these expressions are falling factorial powers . The factorial notation as a shortcut for was not introduced until 100 years later. The integral symbol on the left hand side goes back to Gottfried Wilhelm Leibniz in 1675 who used it as a long letter for "summa" (sum). The letter on the left hand side is not an index of summation but gives the upper limit of the range of summation which is to be understood as . Putting things together, for positive , today a mathematician is likely to write Bernoulli's formula as:
This formula suggests setting when switching from the so-called 'archaic' enumeration which uses only the even indices 2, 4, 6... to the modern form (more on different conventions in the next paragraph). Most striking in this context is the fact that the falling factorial has for the value . Thus Bernoulli's formula can be written
if , recapturing the value Bernoulli gave to the coefficient at that position.
The formula for in the first half of the quotation by Bernoulli above contains an error at the last term; it should be instead of .
Definitions
Many characterizations of the Bernoulli numbers have been found in the last 300 years, and each could be used to introduce these numbers. Here only four of the most useful ones are mentioned:
a recursive equation,
an explicit formula,
a generating function,
an integral expression.
For the proof of the equivalence of the four approaches.
Recursive definition
The Bernoulli numbers obey the sum formulas
where and denotes the Kronecker delta.
The first of these is sometimes written as the formula (for m > 1)
where the power is expanded formally using the binomial theorem and is replaced by .
Solving for gives the recursive formulas
Explicit definition
In 1893 Louis Saalschütz listed a total of 38 explicit formulas for the Bernoulli numbers, usually giving some reference in the older literature. One of them is (for ):
Generating function
The exponential generating functions are
where the substitution is . The two generating functions only differ by t.
If we let and then
Then and for the m term in the series for is:
If
then we find that
showing that the values of obey the recursive formula for the Bernoulli numbers .
The (ordinary) generating function
is an asymptotic series. It contains the trigamma function .
Integral Expression
From the generating functions above, one can obtain the following integral formula for the even Bernoulli numbers:
Bernoulli numbers and the Riemann zeta function
The Bernoulli numbers can be expressed in terms of the Riemann zeta function:
for .
Here the argument of the zeta function is 0 or negative. As is zero for negative even integers (the trivial zeroes), if n>1 is odd, is zero.
By means of the zeta functional equation and the gamma reflection formula the following relation can be obtained:
for .
Now the argument of the zeta function is positive.
It then follows from () and Stirling's formula that
for .
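A quick numerical check of this growth rate: since the zeta factor tends to 1, the quantity 2(2n)!/(2π)^(2n) already approximates |B_{2n}| closely. The value of B_20 used below, −174611/330, is taken from the table of Bernoulli numbers; the check is illustrative only:

<syntaxhighlight lang="python">
from math import factorial, pi, isclose

def bernoulli_size_estimate(n):
    """Leading-order size of |B_{2n}|: 2*(2n)!/(2*pi)**(2n), zeta factor taken as 1."""
    return 2 * factorial(2 * n) / (2 * pi) ** (2 * n)

# |B_20| = 174611/330; the estimate already agrees to several digits.
assert isclose(bernoulli_size_estimate(10), 174611 / 330, rel_tol=1e-4)
</syntaxhighlight>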
Efficient computation of Bernoulli numbers
In some applications it is useful to be able to compute the Bernoulli numbers through modulo , where is a prime; for example to test whether Vandiver's conjecture holds for , or even just to determine whether is an irregular prime. It is not feasible to carry out such a computation using the above recursive formulae, since at least (a constant multiple of) arithmetic operations would be required. Fortunately, faster methods have been developed which require only operations (see big O notation).
David Harvey describes an algorithm for computing Bernoulli numbers by computing modulo for many small primes , and then reconstructing via the Chinese remainder theorem. Harvey writes that the asymptotic time complexity of this algorithm is and claims that this implementation is significantly faster than implementations based on other methods. Using this implementation Harvey computed for . Harvey's implementation has been included in SageMath since version 3.1. Prior to that, Bernd Kellner computed to full precision for in December 2002 and Oleksandr Pavlyk for with Mathematica in April 2008.
{| class="wikitable defaultright col1left"
! Computer !! Year !! n !! Digits*
|-
| J. Bernoulli || ~1689 || 10 || 1
|-
| L. Euler || 1748 || 30 || 8
|-
| J. C. Adams || 1878 || 62 || 36
|-
| D. E. Knuth, T. J. Buckholtz || 1967 || ||
|-
| G. Fee, S. Plouffe || 1996 || ||
|-
| G. Fee, S. Plouffe || 1996 || ||
|-
| B. C. Kellner || 2002 || ||
|-
| O. Pavlyk || 2008 || ||
|-
| D. Harvey || 2008 || ||
|}
* Digits is to be understood as the exponent of 10 when is written as a real number in normalized scientific notation.
Applications of the Bernoulli numbers
Asymptotic analysis
Arguably the most important application of the Bernoulli numbers in mathematics is their use in the Euler–Maclaurin formula. Assuming that is a sufficiently often differentiable function the Euler–Maclaurin formula can be written as
This formulation assumes the convention . Using the convention the formula becomes
Here (i.e. the zeroth-order derivative of is just ). Moreover, let denote an antiderivative of . By the fundamental theorem of calculus,
Thus the last formula can be further simplified to the following succinct form of the Euler–Maclaurin formula
This form is for example the source for the important Euler–Maclaurin expansion of the zeta function
Here denotes the rising factorial power.
Bernoulli numbers are also frequently used in other kinds of asymptotic expansions. The following example is the classical Poincaré-type asymptotic expansion of the digamma function .
Sum of powers
Bernoulli numbers feature prominently in the closed form expression of the sum of the th powers of the first positive integers. For define
This expression can always be rewritten as a polynomial in of degree . The coefficients of these polynomials are related to the Bernoulli numbers by Bernoulli's formula:
where denotes the binomial coefficient.
For example, taking to be 1 gives the triangular numbers .
Taking to be 2 gives the square pyramidal numbers .
Some authors use the alternate convention for Bernoulli numbers and state Bernoulli's formula in this way:
Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber who also found remarkable ways to calculate sums of powers.
Faulhaber's formula was generalized by V. Guo and J. Zeng to a q-analog.
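A short Python sketch of Bernoulli's formula for sums of powers, assuming the convention B_1 = +1/2 used in the statement above (the helper names are illustrative); the last assertion reproduces, by direct summation, Bernoulli's own example of the tenth powers of the first 1000 integers:

<syntaxhighlight lang="python">
from fractions import Fraction
from math import comb

def bernoulli_plus(n_max):
    """Bernoulli numbers B_0..B_n_max with the B_1 = +1/2 convention."""
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(1 - s / (m + 1))
    return B

def sum_of_powers(n, m):
    """Closed form for 1^m + 2^m + ... + n^m via Bernoulli's formula."""
    B = bernoulli_plus(m)
    total = sum(comb(m + 1, j) * B[j] * Fraction(n) ** (m + 1 - j)
                for j in range(m + 1))
    return total / (m + 1)

n = 1000
assert sum_of_powers(n, 1) == n * (n + 1) // 2                  # triangular numbers
assert sum_of_powers(n, 2) == n * (n + 1) * (2 * n + 1) // 6    # square pyramidal numbers
assert sum_of_powers(n, 10) == sum(k ** 10 for k in range(1, n + 1))
</syntaxhighlight>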
Taylor series
The Bernoulli numbers appear in the Taylor series expansion of many trigonometric functions and hyperbolic functions.
Laurent series
The Bernoulli numbers appear in the following Laurent series:
Digamma function:
Use in topology
The Kervaire–Milnor formula for the order of the cyclic group of diffeomorphism classes of exotic -spheres which bound parallelizable manifolds involves Bernoulli numbers. Let be the number of such exotic spheres for , then
The Hirzebruch signature theorem for the genus of a smooth oriented closed manifold of dimension 4n also involves Bernoulli numbers.
Connections with combinatorial numbers
The connection of the Bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the Bernoulli numbers as an instance of a fundamental combinatorial principle, the inclusion–exclusion principle.
Connection with Worpitzky numbers
The definition to proceed with was developed by Julius Worpitzky in 1883. Besides elementary arithmetic, only the factorial function and the power function are employed. The signless Worpitzky numbers are defined as
They can also be expressed through the Stirling numbers of the second kind
A Bernoulli number is then introduced as an inclusion–exclusion sum of Worpitzky numbers weighted by the harmonic sequence 1, 1/2, 1/3, ...
This representation has .
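Since the displayed formulas are not reproduced here, the following Python sketch assumes the usual definitions: W(n, k) as an alternating binomial sum of the powers (v + 1)^n, and B_n as the inclusion–exclusion sum of (−1)^k W(n, k)/(k + 1), which yields the convention B_1 = +1/2. It is an illustration under these assumptions rather than a transcription of the original formulas:

<syntaxhighlight lang="python">
from fractions import Fraction
from math import comb

def worpitzky(n, k):
    """Signless Worpitzky number W(n, k) (assumed definition)."""
    return sum((-1) ** (v + k) * comb(k, v) * (v + 1) ** n for v in range(k + 1))

def bernoulli_via_worpitzky(n):
    """Inclusion-exclusion sum of W(n, k), weighted by 1, 1/2, 1/3, ..."""
    return sum(Fraction((-1) ** k * worpitzky(n, k), k + 1) for k in range(n + 1))

assert worpitzky(3, 2) == 12    # equals 2! * S(4, 3), with S a Stirling number of the second kind
assert [bernoulli_via_worpitzky(n) for n in range(5)] == \
       [Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30)]
</syntaxhighlight>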
Consider the sequence , . From Worpitzky's numbers , applied to is identical to the Akiyama–Tanigawa transform applied to (see Connection with Stirling numbers of the first kind). This can be seen via the table:
{| style="text-align:center"
|+ Identity of Worpitzky's representation and Akiyama–Tanigawa transform
|-
|1|| || || || || ||0||1|| || || || ||0||0||1|| || || ||0||0||0||1|| || ||0||0||0||0||1||
|-
|1||−1|| || || || ||0||2||−2|| || || ||0||0||3||−3|| || ||0||0||0||4||−4|| || || || || || ||
|-
|1||−3||2|| || || ||0||4||−10||6|| || ||0||0||9||−21||12|| || || || || || || || || || || || ||
|-
|1||−7||12||−6|| || ||0||8||−38||54||−24|| || || || || || || || || || || || || || || || || || ||
|-
|1||−15||50||−60||24|| || || || || || || || || || || || || || || || || || || || || || || || ||
|-
|}
The first row represents .
Hence for the second fractional Euler numbers () / ():
A second formula representing the Bernoulli numbers by the Worpitzky numbers is for
The simplified second Worpitzky's representation of the second Bernoulli numbers is:
() / () = × () / ()
which links the second Bernoulli numbers to the second fractional Euler numbers. The beginning is:
The numerators of the first parentheses are (see Connection with Stirling numbers of the first kind).
Connection with Stirling numbers of the second kind
If one defines the Bernoulli polynomials as:
where for are the Bernoulli numbers,
and is a Stirling number of the second kind.
One also has the following for Bernoulli polynomials,
The coefficient of in is .
Comparing the coefficient of in the two expressions of Bernoulli polynomials, one has:
(resulting in ) which is an explicit formula for Bernoulli numbers and can be used to prove the von Staudt–Clausen theorem.
Connection with Stirling numbers of the first kind
The two main formulas relating the unsigned Stirling numbers of the first kind to the Bernoulli numbers (with ) are
and the inversion of this sum (for , )
Here the number are the rational Akiyama–Tanigawa numbers, the first few of which are displayed in the following table.
{| class="wikitable" style="text-align:center"
|+ Akiyama–Tanigawa number
! !!0!!1!!2!!3!!4
|-
! 0
| 1 || || || ||
|-
! 1
| || || || || ...
|-
! 2
| || || || ... || ...
|-
! 3
| 0 || || ... || ... || ...
|-
! 4
| − || ... || ... || ... || ...
|}
The Akiyama–Tanigawa numbers satisfy a simple recurrence relation which can be exploited to iteratively compute the Bernoulli numbers. This leads to the algorithm shown in the section 'algorithmic description' above. See /.
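A minimal sketch of the Akiyama–Tanigawa algorithm, assuming the usual starting row 1/(m + 1) and the update a[m] ← (m + 1)·(a[m] − a[m + 1]); with this choice the head of each row is the Bernoulli number in the B_1 = +1/2 convention:

<syntaxhighlight lang="python">
from fractions import Fraction

def akiyama_tanigawa(n_max):
    """Heads of the Akiyama-Tanigawa rows, i.e. B_0..B_n_max with B_1 = +1/2."""
    a = [Fraction(1, m + 1) for m in range(n_max + 1)]   # row 0: 1, 1/2, 1/3, ...
    B = [a[0]]
    for n in range(1, n_max + 1):
        a = [(m + 1) * (a[m] - a[m + 1]) for m in range(len(a) - 1)]
        B.append(a[0])
    return B

B = akiyama_tanigawa(8)
assert B[:5] == [Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30)]
assert B[6] == Fraction(1, 42) and B[8] == Fraction(-1, 30)
</syntaxhighlight>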
An autosequence is a sequence which has its inverse binomial transform equal to the signed sequence. If the main diagonal is zeroes = , the autosequence is of the first kind. Example: , the Fibonacci numbers. If the main diagonal is the first upper diagonal multiplied by 2, it is of the second kind. Example: /, the second Bernoulli numbers (see ). The Akiyama–Tanigawa transform applied to = 1/ leads to (n) / (n + 1). Hence:
{| class="wikitable" style="text-align:center"
|+ Akiyama–Tanigawa transform for the second Euler numbers
|-
! !! 0 !! 1 !! 2 !! 3 !! 4
|-
! 0
| 1 || || || ||
|-
! 1
| || || || || ...
|-
! 2
| 0 || || || ... || ...
|-
! 3
| − || − || ... || ... || ...
|-
! 4
| 0 || ... || ... || ... || ...
|}
See and . () / () are the second (fractional) Euler numbers and an autosequence of the second kind.
( = ) × ( = ) = = .
Also valuable for / (see Connection with Worpitzky numbers).
Connection with Pascal's triangle
There are formulas connecting Pascal's triangle to Bernoulli numbers
where is the determinant of an n-by-n Hessenberg matrix, part of Pascal's triangle, whose elements are:
Example:
Connection with Eulerian numbers
There are formulas connecting Eulerian numbers to Bernoulli numbers:
Both formulae are valid for if is set to . If is set to − they are valid only for and respectively.
A binary tree representation
The Stirling polynomials are related to the Bernoulli numbers by . S. C. Woon described an algorithm to compute as a binary tree:
Woon's recursive algorithm (for ) starts by assigning to the root node . Given a node of the tree, the left child of the node is and the right child . A node is written as in the initial part of the tree represented above with ± denoting the sign of .
Given a node the factorial of is defined as
Restricted to the nodes of a fixed tree-level the sum of is , thus
For example:
Integral representation and continuation
The integral
has as special values for .
For example, and . Here, is the Riemann zeta function, and is the imaginary unit. Leonhard Euler (Opera Omnia, Ser. 1, Vol. 10, p. 351) considered these numbers and calculated
Another similar integral representation is
The relation to the Euler numbers and
The Euler numbers are a sequence of integers intimately connected with the Bernoulli numbers. Comparing the
asymptotic expansions of the Bernoulli and the Euler numbers shows that the Euler numbers are in magnitude approximately times larger than the Bernoulli numbers . In consequence:
This asymptotic equation reveals that lies in the common root of both the Bernoulli and the Euler numbers. In fact could be computed from these rational approximations.
Bernoulli numbers can be expressed through the Euler numbers and vice versa. Since, for odd , (with the exception ), it suffices to consider the case when is even.
These conversion formulas express a connection between the Bernoulli and the Euler numbers. But more important, there is a deep arithmetic root common to both kinds of numbers, which can be expressed through a more fundamental sequence of numbers, also closely tied to . These numbers are defined for as
The magic of these numbers lies in the fact that they turn out to be rational numbers. This was first proved by Leonhard Euler in a landmark paper De summis serierum reciprocarum (On the sums of series of reciprocals) and has fascinated mathematicians ever since. The first few of these numbers are
( / )
These are the coefficients in the expansion of .
The Bernoulli numbers and Euler numbers can be understood as special views of these numbers, selected from the sequence and scaled for use in special applications.
The expression [ even] has the value 1 if is even and 0 otherwise (Iverson bracket).
These identities show that the quotient of Bernoulli and Euler numbers at the beginning of this section is just the special case of when is even. The are rational approximations to and two successive terms always enclose the true value of . Beginning with the sequence starts ( / ):
These rational numbers also appear in the last paragraph of Euler's paper cited above.
Consider the Akiyama–Tanigawa transform for the sequence () / ():
{| class="wikitable" style="text-align:right;"
! 0
|1||||0||−||−||−||0
|-
! 1
| || 1|| || 0|| −|| −||
|-
! 2
| −|| || || || || ||
|-
! 3
| −1|| −|| −|| || || ||
|-
! 4
| || −|| −|| || || ||
|-
! 5
| 8|| || || || || ||
|-
! 6
| −|| || || || || ||
|}
From the second, the numerators of the first column are the denominators of Euler's formula. The first column is − × .
An algorithmic view: the Seidel triangle
The sequence Sn has another unexpected yet important property: The denominators of Sn+1 divide the factorial . In other words: the numbers , sometimes called Euler zigzag numbers, are integers.
(). See ().
Their exponential generating function is the sum of the secant and tangent functions.
.
Thus the above representations of the Bernoulli and Euler numbers can be rewritten in terms of this sequence as
These identities make it easy to compute the Bernoulli and Euler numbers: the Euler numbers are given immediately by and the Bernoulli numbers are fractions obtained from by some easy shifting, avoiding rational arithmetic.
What remains is to find a convenient way to compute the numbers . However, already in 1877 Philipp Ludwig von Seidel published an ingenious algorithm, which makes it simple to calculate .
Start by putting 1 in row 0 and let denote the number of the row currently being filled
If is odd, then put the number on the left end of the row in the first position of the row , and fill the row from the left to the right, with every entry being the sum of the number to the left and the number to the upper
At the end of the row duplicate the last number.
If is even, proceed similar in the other direction.
Seidel's algorithm is in fact much more general (see the exposition of Dominique Dumont ) and was rediscovered several times thereafter.
Similar to Seidel's approach D. E. Knuth and T. J. Buckholtz gave a recurrence equation for the numbers and recommended this method for computing and 'on electronic computers using only simple operations on integers'.
V. I. Arnold rediscovered Seidel's algorithm and later Millar, Sloane and Young popularized Seidel's algorithm under the name boustrophedon transform.
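The following Python sketch is an equivalent boustrophedon formulation of Seidel's rule: each new row starts from zero, is filled with running sums of the previous row read in reverse (so the fill direction alternates), and its last entry is the next zigzag number. Up to the alternating reading direction, the intermediate rows reproduce the triangular form shown below:

<syntaxhighlight lang="python">
def zigzag_numbers(n_max):
    """Euler zigzag (secant and tangent) numbers T_0..T_n_max via a
    boustrophedon filling equivalent to Seidel's triangle."""
    row = [1]
    zigzag = [1]
    for n in range(1, n_max + 1):
        prev = row[::-1]                  # previous row, read in the opposite direction
        row = [0]
        for entry in prev:
            row.append(row[-1] + entry)   # neighbour in this row + entry above
        zigzag.append(row[-1])
    return zigzag

assert zigzag_numbers(9) == [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936]
</syntaxhighlight>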
Triangular form:
{| style="text-align:right"
| || || || || || || 1|| || || || || ||
|-
| || || || || || 1|| || 1|| || || || ||
|-
| || || || || 2|| || 2|| || 1|| || || ||
|-
| || || || 2|| || 4|| || 5|| || 5|| || ||
|-
| || || 16|| || 16|| || 14|| || 10|| || 5|| ||
|-
| || 16|| || 32|| || 46|| || 56|| || 61|| || 61||
|-
|272|| ||272|| ||256|| ||224|| ||178|| ||122|| || 61
|}
Only , with one 1, and , with two 1s, are in the OEIS.
Distribution with a supplementary 1 and one 0 in the following rows:
{| style="text-align:right"
| || || || || || || 1|| || || || || ||
|-
| || || || || || 0|| || 1|| || || || ||
|-
| || || || || −1|| || −1|| || 0|| || || ||
|-
| || || || 0|| || −1|| || −2|| || −2|| || ||
|-
| || || 5|| || 5|| || 4|| || 2|| || 0|| ||
|-
| || 0|| || 5|| || 10|| || 14|| || 16|| || 16||
|-
|−61|| ||−61|| ||−56|| ||−46|| ||−32|| ||−16|| || 0
|}
This is , a signed version of . The main antidiagonal is . The main diagonal is . The central column is . Row sums: 1, 1, −2, −5, 16, 61.... See . See the array beginning with 1, 1, 0, −2, 0, 16, 0 below.
The Akiyama–Tanigawa algorithm applied to () / () yields:
{| style="text-align:right"
| 1|| 1|| || 0|| −|| −|| −
|-
| 0|| 1|| || 1|| 0|| −
|-
| −1|| −1|| || 4||
|-
| 0|| −5|| −|| 1
|-
| 5|| 5|| −
|-
| 0|| 61
|-
| −61
|}
1. The first column is . Its binomial transform leads to:
{| style="text-align:right"
|-
| 1|| 1|| 0|| −2|| 0|| 16|| 0
|-
|0||−1||−2||2||16||−16
|-
|−1||−1||4||14||−32
|-
|0||5||10||−46
|-
|5||5||−56
|-
|0||−61
|-
|−61
|}
The first row of this array is . The absolute values of the increasing antidiagonals are . The sum of the antidiagonals is
2. The second column is . Its binomial transform yields:
{| style="text-align:right"
|-
| 1|| 2|| 2|| −4|| −16|| 32|| 272
|-
|1||0||−6||−12||48||240
|-
|−1||−6||−6||60||192
|-
|−5||0||66||32
|-
|5||66||66
|-
|61||0
|-
|−61
|}
The first row of this array is . The absolute values of the second bisection are the double of the absolute values of the first bisection.
Consider the Akiyama-Tanigawa algorithm applied to () / ( () = abs( ()) + 1 = .
{| style="text-align:right"
|1||2||2||||1||||
|-
|−1||0||||2||||0
|-
|−1||−3||−||3||
|-
|2||−3||−||−13
|-
|5||21||−
|-
|−16||45
|-
|−61
|}
The first column, whose absolute values are , could be the numerators of a trigonometric function.
is an autosequence of the first kind (the main diagonal is ). The corresponding array is:
{| style="text-align:right"
|0||−1||−1||2||5||−16||−61
|-
|−1||0||3||3||−21||−45
|-
|1||3||0||−24||−24
|-
|2||−3||−24||0
|-
|−5||−21||24
|-
|−16||45
|-
|−61
|}
The first two upper diagonals are = × . The sum of the antidiagonals is = 2 × (n + 1).
− is an autosequence of the second kind, like for instance / . Hence the array:
{| style="text-align:right"
|-
|2||1||−1||−2||5||16||−61
|-
|−1||−2||−1||7||11||−77
|-
|−1||1||8||4||−88
|-
|2||7||−4||−92
|-
|5||−11||−88
|-
|−16||−77
|-
|−61
|}
The main diagonal, here , is the double of the first upper one, here . The sum of the antidiagonals is = 2 × (1). − = 2 × .
A combinatorial view: alternating permutations
Around 1880, three years after the publication of Seidel's algorithm, Désiré André proved a now classic result of combinatorial analysis. Looking at the first terms of the Taylor expansion of the trigonometric functions
and André made a startling discovery.
The coefficients are the Euler numbers of odd and even index, respectively. In consequence the ordinary expansion of has as coefficients the rational numbers .
André then succeeded by means of a recurrence argument to show that the alternating permutations of odd size are enumerated by the Euler numbers of odd index (also called tangent numbers) and the alternating permutations of even size by the Euler numbers of even index (also called secant numbers).
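A brute-force Python sketch of André's result, counting up-down alternating permutations directly and comparing the counts with the zigzag (secant and tangent) numbers; it is illustrative only and feasible just for small sizes:

<syntaxhighlight lang="python">
from itertools import permutations

def is_alternating(p):
    """Up-down alternating: p[0] < p[1] > p[2] < p[3] > ..."""
    return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
               for i in range(len(p) - 1))

def count_alternating(n):
    return sum(is_alternating(p) for p in permutations(range(1, n + 1)))

# Sizes 1..7 give the zigzag numbers 1, 1, 2, 5, 16, 61, 272.
assert [count_alternating(n) for n in range(1, 8)] == [1, 1, 2, 5, 16, 61, 272]
</syntaxhighlight>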
Related sequences
The arithmetic mean of the first and the second Bernoulli numbers are the associate Bernoulli numbers:
, , , , , / . Via the second row of its inverse Akiyama–Tanigawa transform , they lead to Balmer series / .
The Akiyama–Tanigawa algorithm applied to () / () leads to the Bernoulli numbers / , / , or without , named intrinsic Bernoulli numbers .
{| style="text-align:center; padding-left; padding-right: 2em;"
|-
|1||||||||
|-
|||||||||
|-
|0||||||||
|-
|−||−||−||−||0
|-
|0||−||−||−||−
|}
Hence another link between the intrinsic Bernoulli numbers and the Balmer series via ().
() = 0, 2, 1, 6,... is a permutation of the non-negative numbers.
The terms of the first row are f(n) = . 2, f(n) is an autosequence of the second kind. 3/2, f(n) leads by its inverse binomial transform to 3/2 −1/2 1/3 −1/4 1/5 ... = 1/2 + log 2.
Consider g(n) = 1/2 – 1 / (n+2) = 0, 1/6, 1/4, 3/10, 1/3. The Akiyama–Tanigawa transform gives:
{| style="text-align:center; padding-left; padding-right:2em;"
|-
|0||||||||||||...
|-
|−||−||−||−||−||−||...
|-
|0||−||−||−||−||−||...
|-
|||||||||0||−||...
|}
0, g(n), is an autosequence of the second kind.
Euler () / () without the second term () are the fractional intrinsic Euler numbers. The corresponding Akiyama–Tanigawa transform is:
{| style="text-align:center; padding-left; padding-right: 2em;"
|-
|1||1||||||
|-
|0||||||||
|-
|−||−||0||||
|-
|0||−||−||−||−
|-
|||||−||−||−
|}
The first line is . preceded by a zero is an autosequence of the first kind. It is linked to the Oresme numbers. The numerators of the second line are preceded by 0. The difference table is:
{| style="text-align:center; padding-left; padding-right: 2em;"
|-
|0||1||1||||||||
|-
|1||0||−||−||−||−||−
|-
|−1||−||0||||||||
|}
Arithmetical properties of the Bernoulli numbers
The Bernoulli numbers can be expressed in terms of the Riemann zeta function as for integers provided for the expression is understood as the limiting value and the convention is used. This intimately relates them to the values of the zeta function at negative integers. As such, they could be expected to have and do have deep arithmetical properties. For example, the Agoh–Giuga conjecture postulates that is a prime number if and only if is congruent to −1 modulo . Divisibility properties of the Bernoulli numbers are related to the ideal class groups of cyclotomic fields by a theorem of Kummer and its strengthening in the Herbrand-Ribet theorem, and to class numbers of real quadratic fields by Ankeny–Artin–Chowla.
The Kummer theorems
The Bernoulli numbers are related to Fermat's Last Theorem (FLT) by Kummer's theorem, which says:
If the odd prime does not divide any of the numerators of the Bernoulli numbers then has no solutions in nonzero integers.
Prime numbers with this property are called regular primes. Another classical result of Kummer are the following congruences.
Let be an odd prime and an even number such that does not divide . Then for any non-negative integer
A generalization of these congruences goes by the name of p-adic continuity.
p-adic continuity
If , and are positive integers such that and are not divisible by and , then
Since , this can also be written
where and , so that and are nonpositive and not congruent to 1 modulo . This tells us that the Riemann zeta function, with taken out of the Euler product formula, is continuous in the p-adic numbers on odd negative integers congruent modulo to a particular , and so can be extended to a continuous function for all p-adic integers, the p-adic zeta function.
Ramanujan's congruences
The following relations, due to Ramanujan, provide a method for calculating Bernoulli numbers that is more efficient than the one given by their original recursive definition:
Von Staudt–Clausen theorem
The von Staudt–Clausen theorem was given by Karl Georg Christian von Staudt and Thomas Clausen independently in 1840. The theorem states that for every ,
is an integer. The sum extends over all primes for which divides .
A consequence of this is that the denominator of is given by the product of all primes for which divides . In particular, these denominators are square-free and divisible by 6.
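As an illustration of this consequence (assuming, as stated above, that for even n the denominator of B_n is the product of the primes p with p − 1 dividing n), a short Python sketch checks it against a few tabulated Bernoulli numbers:

<syntaxhighlight lang="python">
from fractions import Fraction

def von_staudt_clausen_denominator(n):
    """Product of all primes p such that (p - 1) divides n (n even)."""
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    result = 1
    for p in range(2, n + 2):
        if is_prime(p) and n % (p - 1) == 0:
            result *= p
    return result

# A few Bernoulli numbers taken from the table of values.
known = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42),
         8: Fraction(-1, 30), 10: Fraction(5, 66), 12: Fraction(-691, 2730)}
for n, b in known.items():
    assert von_staudt_clausen_denominator(n) == b.denominator
</syntaxhighlight>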
Why do the odd Bernoulli numbers vanish?
The sum
can be evaluated for negative values of the index . Doing so will show that it is an odd function for even values of , which implies that the sum has only terms of odd index. This and the formula for the Bernoulli sum imply that is 0 for even and ; and that the term for is cancelled by the subtraction. The von Staudt–Clausen theorem combined with Worpitzky's representation also gives a combinatorial answer to this question (valid for n > 1).
From the von Staudt–Clausen theorem it is known that for odd the number is an integer. This seems trivial if one knows beforehand that the integer in question is zero. However, by applying Worpitzky's representation one gets
as a sum of integers, which is not trivial. Here a combinatorial fact comes to the surface which explains the vanishing of the Bernoulli numbers at odd index. Let Surj(n, m) denote the number of surjective maps from {1, 2, ..., n} onto {1, 2, ..., m}; then Surj(n, m) = m!·S(n, m), where S(n, m) is a Stirling number of the second kind. The last equation can only hold if
This equation can be proved by induction. The first two examples of this equation are the cases n = 3 and n = 5.
Thus the Bernoulli numbers vanish at odd index because some non-obvious combinatorial identities are embodied in the Bernoulli numbers.
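A numerical illustration of this combinatorial balance, assuming one standard form of Worpitzky's representation, B_n = Σ_k (−1)^k · Surj(n, k)/(k + 1); under this convention B_1 = −1/2, and the alternating sum collapses to 0 for every odd n > 1.

```python
from fractions import Fraction
from math import comb

def surj(n, k):
    """Number of surjections from an n-set onto a k-set (inclusion-exclusion)."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))

def bernoulli_worpitzky(n):
    return sum(Fraction((-1) ** k * surj(n, k), k + 1) for k in range(n + 1))

for n in range(1, 12):
    print(n, bernoulli_worpitzky(n))   # -1/2 at n = 1, then 0 at every odd n > 1
```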
A restatement of the Riemann hypothesis
The connection between the Bernoulli numbers and the Riemann zeta function is strong enough to provide an alternate formulation of the Riemann hypothesis (RH) which uses only the Bernoulli numbers. In fact Marcel Riesz proved that the RH is equivalent to the following assertion:
For every ε > 0 there exists a constant C_ε > 0 (depending on ε) such that |R(x)| < C_ε x^(1/4 + ε) as x → ∞.
Here is the Riesz function
denotes the rising factorial power in the notation of D. E. Knuth. The numbers β_n = B_n/n occur frequently in the study of the zeta function and are significant because β_n is a p-integer for primes p where p − 1 does not divide n. The β_n are called divided Bernoulli numbers.
Generalized Bernoulli numbers
The generalized Bernoulli numbers are certain algebraic numbers, defined similarly to the Bernoulli numbers, that are related to special values of Dirichlet -functions in the same way that Bernoulli numbers are related to special values of the Riemann zeta function.
Let χ be a Dirichlet character modulo f. The generalized Bernoulli numbers attached to χ are defined by Σ_{a=1}^{f} χ(a) · t e^{at} / (e^{ft} − 1) = Σ_{k≥0} B_{k,χ} · t^k / k!.
Apart from the exceptional B_{1,χ}, we have, for any Dirichlet character χ, that B_{k,χ} = 0 if χ(−1) ≠ (−1)^k.
Generalizing the relation between Bernoulli numbers and values of the Riemann zeta function at non-positive integers, one has, for all integers k ≥ 1, L(1 − k, χ) = −B_{k,χ} / k,
where L(s, χ) is the Dirichlet L-function of χ.
Eisenstein–Kronecker number
Eisenstein–Kronecker numbers are an analogue of the generalized Bernoulli numbers for imaginary quadratic fields. They are related to critical L-values of Hecke characters.
Appendix
Assorted identities
See also
Bernoulli polynomial
Bernoulli polynomials of the second kind
Bernoulli umbra
Bell number
Euler number
Genocchi number
Kummer's congruences
Poly-Bernoulli number
Hurwitz zeta function
Euler summation
Stirling polynomial
Sums of powers
Notes
References
Bibliography
External links
The first 498 Bernoulli Numbers from Project Gutenberg
A multimodular algorithm for computing Bernoulli numbers
The Bernoulli Number Page
Bernoulli number programs at LiteratePrograms
Number theory
Topology
Integer sequences
Eponymous numbers in mathematics | Bernoulli number | [
"Physics",
"Mathematics"
] | 9,313 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Numbers",
"Number theory"
] |
5,009 | https://en.wikipedia.org/wiki/Zebrafish | The zebrafish (Danio rerio) is a freshwater fish belonging to the minnow family (Cyprinidae) of the order Cypriniformes. Native to South Asia, it is a popular aquarium fish, frequently sold under the trade name zebra danio (and thus often called a "tropical fish" although it is both tropical and subtropical).
The zebrafish is an important and widely used vertebrate model organism in scientific research, particularly developmental biology, but also gene function, oncology, teratology, and drug development, in particular pre-clinical development. It is also notable for its regenerative abilities, and has been modified by researchers to produce many transgenic strains.
Taxonomy
The zebrafish is a derived member of the genus Brachydanio, of the family Cyprinidae. It has a sister-group relationship with Danio aesculapii. Zebrafish are also closely related to the genus Devario, as demonstrated by a phylogenetic tree of close species.
Distribution
Range
The zebrafish is native to freshwater habitats in South Asia where it is found in India, Pakistan, Bangladesh, Nepal and Bhutan. The northern limit is in the South Himalayas, ranging from the Sutlej river basin in the Pakistan–India border region to the state of Arunachal Pradesh in northeast India. Its range is concentrated in the Ganges and Brahmaputra River basins, and the species was first described from Kosi River (lower Ganges basin) of India. Its range further south is more local, with scattered records from the Western and Eastern Ghats regions. It has frequently been said to occur in Myanmar (Burma), but this is entirely based on pre-1930 records and likely refers to close relatives only described later, notably Danio kyathit. Likewise, old records from Sri Lanka are highly questionable and remain unconfirmed.
Zebrafish have been introduced to California, Connecticut, Florida and New Mexico in the United States, presumably by deliberate release by aquarists or by escape from fish farms. The New Mexico population had been extirpated by 2003 and it is unclear if the others survive, as the last published records were decades ago. Elsewhere the species has been introduced to Colombia and Malaysia.
Habitats
Zebrafish typically inhabit moderately flowing to stagnant clear water of quite shallow depth in streams, canals, ditches, oxbow lakes, ponds and rice paddies. There is usually some vegetation, either submerged or overhanging from the banks, and the bottom is sandy, muddy or silty, often mixed with pebbles or gravel. In surveys of zebrafish locations throughout much of its Bangladeshi and Indian distribution, the water had a near-neutral to somewhat basic pH and mostly ranged from in temperature. One unusually cold site was only and another unusually warm site was , but the zebrafish still appeared healthy. The unusually cold temperature was at one of the highest known zebrafish locations at above sea level, although the species has been recorded to .
Description
The zebrafish is named for the five uniform, pigmented, horizontal, blue stripes on the side of the body, which are reminiscent of a zebra's stripes, and which extend to the end of the caudal fin. Its shape is fusiform and laterally compressed, with its mouth directed upwards. The male is torpedo-shaped, with gold stripes between the blue stripes; the female has a larger, whitish belly and silver stripes instead of gold. Adult females exhibit a small genital papilla in front of the anal fin origin. The zebrafish can reach up to in length, although they typically are in the wild with some variations depending on location. Its lifespan in captivity is around two to three years, although in ideal conditions, this may be extended to over five years. In the wild it is typically an annual species.
Psychology
In 2015, a study was published about zebrafishes' capacity for episodic memory. The individuals showed a capacity to remember context with respect to objects, locations and occasions (what, when, where). Episodic memory is a capacity of explicit memory systems, typically associated with conscious experience.
The Mauthner cells integrate a wide array of sensory stimuli to produce the escape reflex. Those stimuli are found to include the lateral line signals by McHenry et al. 2009 and visual signals consistent with looming objects by Temizer et al. 2015, Dunn et al. 2016, and Yao et al. 2016.
Reproduction
The approximate generation time for Danio rerio is three months. A male must be present for ovulation and spawning to occur. Zebrafish are asynchronous spawners and under optimal conditions (such as food availability and favorable water parameters) can spawn successfully at frequent intervals, even on a daily basis. Females are able to spawn at intervals of two to three days, laying hundreds of eggs in each clutch. Upon release, embryonic development begins; in the absence of sperm, growth stops after the first few cell divisions. Fertilized eggs almost immediately become transparent, a characteristic that makes D. rerio a convenient research model species. Sex determination of common laboratory strains was shown to be a complex genetic trait, rather than to follow a simple ZW or XY system.
The zebrafish embryo develops rapidly, with precursors to all major organs appearing within 36 hours of fertilization. The embryo begins as a yolk with a single enormous cell on top (see image, 0 h panel), which divides into two (0.75 h panel) and continues dividing until there are thousands of small cells (3.25 h panel). The cells then migrate down the sides of the yolk (8 h panel) and begin forming a head and tail (16 h panel). The tail then grows and separates from the body (24 h panel). The yolk shrinks over time because the fish uses it for food as it matures during the first few days (72 h panel). After a few months, the adult fish reaches reproductive maturity (bottom panel).
To encourage the fish to spawn, some researchers use a fish tank with a sliding bottom insert, which reduces the depth of the pool to simulate the shore of a river. Zebrafish spawn best in the morning due to their circadian rhythms. Researchers have been able to collect 10,000 embryos in 10 minutes using this method. In particular, one pair of adult fish is capable of laying 200–300 eggs in one morning, approximately 5 to 10 at a time. Male zebrafish are furthermore known to respond to more pronounced markings on females, i.e., "good stripes", but in a group, males will mate with whichever females they can find. What attracts females is not currently understood. The presence of plants, even plastic plants, also apparently encourages spawning.
Exposure to environmentally relevant concentrations of diisononyl phthalate (DINP), commonly used in a large variety of plastic items, disrupts the endocannabinoid system and thereby affects reproduction in a sex-specific manner.
Feeding
Zebrafish feeding practices vary significantly across different developmental stages, reflecting their changing nutritional needs. For newly hatched larvae, which begin feeding at approximately 5 days post-fertilization (dpf), small live prey such as Paramecium or rotifers are commonly used until they reach 9–15 dpf. This early diet is crucial for their growth and survival, as these small organisms provide essential nutrients. As the larvae develop, from 15 dpf onwards, they are typically transitioned to a diet that includes brine shrimp nauplii and dry feeds, which are more nutritionally balanced and easier to manage in laboratory settings. For larvae aged 25 dpf, feeding rates can range from 50% to 300% of their body weight (BW) per day, depending on their size and growth requirements. As zebrafish grow into juveniles (30–90 dpf), the recommended feeding rate decreases to about 6–8% of their BW per day, with a focus on high-quality dry feeds that meet their protein and energy needs. Upon reaching adulthood (over 90 dpf), zebrafish typically require a feeding rate of around 5% of their BW per day. Throughout these stages, it is essential to adjust the particle size of the feed: less than 100 μm for newly hatched larvae, 100–200 μm for those between 16 and 30 dpf, and larger particles for juveniles and adults. This structured approach to feeding not only supports optimal growth and health but also enhances the reliability of experimental outcomes in research settings
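Purely as an illustration of the stage-dependent guidelines above, the following hypothetical helper maps age in days post-fertilization to an approximate regime; the function name and cut-offs are this sketch's own simplification of the quoted ranges, not a published protocol.

```python
def feeding_guideline(age_dpf):
    """Return (diet, feed rate as % body weight per day, particle size in micrometres)."""
    if age_dpf < 5:
        return ("yolk reserves only", None, None)
    if age_dpf <= 15:
        return ("live prey (Paramecium, rotifers)", None, "<100")
    if age_dpf <= 30:
        return ("brine shrimp nauplii and dry feed", "50-300", "100-200")
    if age_dpf <= 90:
        return ("dry feed", "6-8", ">200")
    return ("dry feed", "~5", ">200")

for age in (3, 10, 25, 60, 120):
    print(age, feeding_guideline(age))
```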
In the aquarium
Zebrafish are hardy fish and considered good for beginner aquarists. Their enduring popularity can be attributed to their playful disposition, as well as their rapid breeding, aesthetics, cheap price and broad availability. They also do well in schools or shoals of six or more, and interact well with other fish species in the aquarium. However, they are susceptible to Oodinium or velvet disease, microsporidia (Pseudoloma neurophilia), and Mycobacterium species. Given the opportunity, adults eat hatchlings, which may be protected by separating the two groups with a net, breeding box or separate tank.
In captivity, zebrafish live approximately forty-two months. Some captive zebrafish can develop a curved spine.
They can range from a few centimeters to a few inches, and provide movement in a freshwater fish tank.
The zebra danio was also used to make genetically modified fish and was the first species to be sold as GloFish (fluorescent colored fish).
Strains
In late 2003, transgenic zebrafish that express green, red, and yellow fluorescent proteins became commercially available in the United States. The fluorescent strains are trade-named GloFish; other cultivated varieties include "golden", "sandy", "longfin" and "leopard".
The leopard danio, previously known as Danio frankei, is a spotted colour morph of the zebrafish which arose due to a pigment mutation. Xanthistic forms of both the zebra and leopard pattern, along with long-finned strains, have been obtained via selective breeding programs for the aquarium trade.
Various transgenic and mutant strains of zebrafish were stored at the China Zebrafish Resource Center (CZRC), a non-profit organization, which was jointly supported by the Ministry of Science and Technology of China and the Chinese Academy of Sciences.
Wild-type strains
The Zebrafish Information Network (ZFIN) provides up-to-date information about current known wild-type (WT) strains of D. rerio, some of which are listed below.
AB (AB)
AB/C32 (AB/C32)
AB/TL (AB/TL)
AB/Tuebingen (AB/TU)
C32 (C32)
Cologne (KOLN)
Darjeeling (DAR)
Ekkwill (EKW)
HK/AB (HK/AB)
HK/Sing (HK/SING)
Hong Kong (HK)
India (IND)
Indonesia (INDO)
Nadia (NA)
RIKEN WT (RW)
Singapore (SING)
SJA (SJA)
SJD (SJD)
SJD/C32 (SJD/C32)
Tuebingen (TU)
Tupfel long fin (TL)
Tupfel long fin nacre (TLN)
WIK (WIK)
WIK/AB (WIK/AB)
Hybrids
Hybrids between different Danio species may be fertile: for example, between D. rerio and D. nigrofasciatus.
Scientific research
D. rerio is a common and useful scientific model organism for studies of vertebrate development and gene function. Its use as a laboratory animal was pioneered by the American molecular biologist George Streisinger and his colleagues at the University of Oregon in the 1970s and 1980s; Streisinger's zebrafish clones were among the earliest successful vertebrate clones created. Its importance has been consolidated by successful large-scale forward genetic screens (commonly referred to as the Tübingen/Boston screens). The fish has a dedicated online database of genetic, genomic, and developmental information, the Zebrafish Information Network (ZFIN). The Zebrafish International Resource Center (ZIRC) is a genetic resource repository with 29,250 alleles available for distribution to the research community. D. rerio is also one of the few fish species to have been sent into space.
Research with D. rerio has yielded advances in the fields of developmental biology, oncology, toxicology, reproductive studies, teratology, genetics, neurobiology, environmental sciences, stem cell research, regenerative medicine, muscular dystrophies and evolutionary theory.
Model characteristics
As a model biological system, the zebrafish possesses numerous advantages for scientists. Its genome has been fully sequenced, and it has well-understood, easily observable and testable developmental behaviors. Its embryonic development is very rapid, and its embryos are relatively large, robust, and transparent, and able to develop outside their mother. Furthermore, well-characterized mutant strains are readily available.
Other advantages include the species' nearly constant size during early development, which enables simple staining techniques to be used, and the fact that its two-celled embryo can be fused into a single cell to create a homozygous embryo. The zebrafish embryos are transparent and they develop outside of the uterus, which allows scientists to study the details of development starting from fertilization and continuing throughout development. The zebrafish is also demonstrably similar to mammalian models and humans in toxicity testing, and exhibits a diurnal sleep cycle with similarities to mammalian sleep behavior. However, zebrafish are not a universally ideal research model; there are a number of disadvantages to their scientific use, such as the absence of a standard diet and the presence of small but important differences between zebrafish and mammals in the roles of some genes related to human disorders.
Regeneration
Zebrafish have the ability to regenerate their heart and lateral line hair cells during their larval stages. The cardiac regenerative process likely involves signaling pathways such as Notch and Wnt; hemodynamic changes in the damaged heart are sensed by ventricular endothelial cells and their associated cardiac cilia by way of the mechanosensitive ion channel TRPV4, subsequently facilitating the Notch signaling pathway via KLF2 and activating various downstream effectors such as BMP-2 and HER2/neu. In 2011, the British Heart Foundation ran an advertising campaign publicising its intention to study the applicability of this ability to humans, stating that it aimed to raise £50 million in research funding.
Zebrafish have also been found to regenerate photoreceptor cells and retinal neurons following injury, which has been shown to be mediated by the dedifferentiation and proliferation of Müller glia. Researchers frequently amputate the dorsal and ventral tail fins and analyze their regrowth to test for mutations. It has been found that histone demethylation occurs at the site of the amputation, switching the zebrafish's cells to an "active", regenerative, stem cell-like state. In 2012, Australian scientists published a study revealing that zebrafish use a specialised protein, known as fibroblast growth factor, to ensure their spinal cords heal without glial scarring after injury. In addition, hair cells of the posterior lateral line have also been found to regenerate following damage or developmental disruption. Study of gene expression during regeneration has allowed for the identification of several important signaling pathways involved in the process, such as Wnt signaling and Fibroblast growth factor.
In probing disorders of the nervous system, including neurodegenerative diseases, movement disorders, psychiatric disorders and deafness, researchers are using the zebrafish to understand how the genetic defects underlying these conditions cause functional abnormalities in the human brain, spinal cord and sensory organs. Researchers have also studied the zebrafish to gain new insights into the complexities of human musculoskeletal diseases, such as muscular dystrophy. Another focus of zebrafish research is to understand how a gene called Hedgehog, a biological signal that underlies a number of human cancers, controls cell growth.
Genetics
Background genetics
Inbred strains and traditional outbred stocks have not been developed for laboratory zebrafish, and the genetic variability of wild-type lines among institutions may contribute to the replication crisis in biomedical research. Genetic differences in wild-type lines among populations maintained at different research institutions have been demonstrated using both Single-nucleotide polymorphisms and microsatellite analysis.
Gene expression
Due to their fast and short life cycles and relatively large clutch sizes, D. rerio or zebrafish are a useful model for genetic studies. A common reverse genetics technique is to reduce gene expression or modify splicing using Morpholino antisense technology. Morpholino oligonucleotides (MO) are stable, synthetic macromolecules that contain the same bases as DNA or RNA; by binding to complementary RNA sequences, they can reduce the expression of specific genes or block other processes from occurring on RNA. MO can be injected into one cell of an embryo after the 32-cell stage, reducing gene expression in only cells descended from that cell. However, cells in the early embryo (less than 32 cells) are permeable to large molecules, allowing diffusion between cells. Guidelines for using Morpholinos in zebrafish describe appropriate control strategies. Morpholinos are commonly microinjected in 500pL directly into 1–2 cell stage zebrafish embryos. The morpholino is able to integrate into most cells of the embryo.
A known problem with gene knockdowns is that, because the genome underwent a duplication after the divergence of ray-finned fishes and lobe-finned fishes, it is not always easy to silence the activity of one of the two gene paralogs reliably due to complementation by the other paralog. Despite the complications of the zebrafish genome, a number of commercially available global platforms exist for analysis of both gene expression by microarrays and promoter regulation using ChIP-on-chip.
Genome sequencing
The Wellcome Trust Sanger Institute started the zebrafish genome sequencing project in 2001, and the full genome sequence of the Tuebingen reference strain is publicly available at the National Center for Biotechnology Information (NCBI)'s Zebrafish Genome Page. The zebrafish reference genome sequence is annotated as part of the Ensembl project, and is maintained by the Genome Reference Consortium.
In 2009, researchers at the Institute of Genomics and Integrative Biology in Delhi, India, announced the sequencing of the genome of a wild zebrafish strain, containing an estimated 1.7 billion genetic letters. The genome of the wild zebrafish was sequenced at 39-fold coverage. Comparative analysis with the zebrafish reference genome revealed over 5 million single nucleotide variations and over 1.6 million insertion deletion variations. The zebrafish reference genome sequence of 1.4GB and over 26,000 protein coding genes was published by Kerstin Howe et al. in 2013.
Mitochondrial DNA
In October 2001, researchers from the University of Oklahoma published D. rerio's complete mitochondrial DNA sequence. Its length is 16,596 base pairs. This is within 100 base pairs of other related species of fish, and it is notably only 18 pairs longer than the goldfish (Carassius auratus) and 21 longer than the carp (Cyprinus carpio). Its gene order and content are identical to the common vertebrate form of mitochondrial DNA. It contains 13 protein-coding genes and a noncoding control region containing the origin of replication for the heavy strand. In between a grouping of five tRNA genes, a sequence resembling vertebrate origin of light strand replication is found. It is difficult to draw evolutionary conclusions because it is difficult to determine whether base pair changes have adaptive significance via comparisons with other vertebrates' nucleotide sequences.
Developmental genetics
T-boxes and homeoboxes are vital in Danio similarly to other vertebrates. The Bruce et al. team are known for this area, and in Bruce et al. 2003 & Bruce et al. 2005 uncover the role of two of these elements in oocytes of this species. By interfering via a dominant nonfunctional allele and a morpholino they find the T-box transcription activator Eomesodermin and its target mtx2 – a transcription factor – are vital to epiboly. (In Bruce et al. 2003 they failed to support the possibility that Eomesodermin behaves like Vegt. Neither they nor anyone else has been able to locate any mutation which – in the mother – will prevent initiation of the mesoderm or endoderm development processes in this species.)
Pigmentation genes
In 1999, the nacre mutation was identified in the zebrafish ortholog of the mammalian MITF transcription factor. Mutations in human MITF result in eye defects and loss of pigment, a type of Waardenburg Syndrome. In December 2005, a study of the golden strain identified the gene responsible for its unusual pigmentation as SLC24A5, a solute carrier that appeared to be required for melanin production, and confirmed its function with a Morpholino knockdown. The orthologous gene was then characterized in humans and a one base pair difference was found to strongly segregate fair-skinned Europeans and dark-skinned Africans. Zebrafish with the nacre mutation have since been bred with fish with a roy orbison (roy) mutation to make Casper strain fish that have no melanophores or iridophores, and are transparent into adulthood. These fish are characterized by uniformly pigmented eyes and translucent skin.
Transgenesis
Transgenesis is a popular approach to study the function of genes in zebrafish. Construction of transgenic zebrafish is relatively easy using the Tol2 transposon system. The Tol2 element encodes a gene for a fully functional transposase capable of catalyzing transposition in the zebrafish germ lineage. Tol2 is the only natural DNA transposable element in vertebrates from which an autonomous member has been identified. Examples include the artificial interaction produced between LEF1 and Catenin beta-1/β-catenin/CTNNB1. Dorsky et al. 2002 investigated the developmental role of Wnt by transgenically expressing a Lef1/β-catenin reporter. The Tol2 transposon system was used to develop transgenic zebrafish as sensitive biosensors for heavy metal detection. This involved creating a transgenic zebrafish line expressing a fluorescent protein under the control of a heavy metal-responsive promoter, enabling the detection of low concentrations of cadmium (Cd2+) and zinc (Zn2+).
There are well-established protocols for editing zebrafish genes using CRISPR-Cas9 and this tool has been used to generate genetically modified models.
Transparent adult bodies
In 2008, researchers at Boston Children's Hospital developed a new strain of zebrafish, named Casper, whose adult bodies had transparent skin. This allows for detailed visualization of cellular activity, circulation, metastasis and many other phenomena. In 2019 researchers published a crossing of a prkdc-/- and a IL2rga-/- strain that produced transparent, immunodeficient offspring, lacking natural killer cells as well as B- and T-cells. This strain can be adapted to warm water and the absence of an immune system makes the use of patient derived xenografts possible. In January 2013, Japanese scientists genetically modified a transparent zebrafish specimen to produce a visible glow during periods of intense brain activity.
In January 2007, Chinese researchers at Fudan University genetically modified zebrafish to detect oestrogen pollution in lakes and rivers, which is linked to male infertility. The researchers cloned oestrogen-sensitive genes and injected them into the fertile eggs of zebrafish. The modified fish turned green if placed into water that was polluted by oestrogen.
RNA splicing
In 2015, researchers at Brown University discovered that 10% of zebrafish genes do not need to rely on the U2AF2 protein to initiate RNA splicing. These genes have the DNA base pairs AC and TG as repeated sequences at the ends of each intron. On the 3'ss (3' splicing site), the base pairs adenine and cytosine alternate and repeat, and on the 5'ss (5' splicing site), their complements thymine and guanine alternate and repeat as well. They found that there was less reliance on U2AF2 protein than in humans, in which the protein is required for the splicing process to occur. The pattern of repeating base pairs around introns that alters RNA secondary structure was found in other teleosts, but not in tetrapods. This indicates that an evolutionary change in tetrapods may have led to humans relying on the U2AF2 protein for RNA splicing while these genes in zebrafish undergo splicing regardless of the presence of the protein.
Orthology
D. rerio has three transferrins, all of which cluster closely with other vertebrates.
Inbreeding depression
When close relatives mate, progeny may exhibit the detrimental effects of inbreeding depression. Inbreeding depression is predominantly caused by the homozygous expression of recessive deleterious alleles. For zebrafish, inbreeding depression might be expected to be more severe in stressful environments, including those caused by anthropogenic pollution. Exposure of zebrafish to environmental stress induced by the chemical clotrimazole, an imidazole fungicide used in agriculture and in veterinary and human medicine, amplified the effects of inbreeding on key reproductive traits. Embryo viability was significantly reduced in inbred exposed fish and there was a tendency for inbred males to sire fewer offspring.
Aquaculture research
Zebrafish are common models for research into fish farming, including pathogens and parasites causing yield loss or spreading to adjacent wild populations.
This usefulness is less than it might be because of Danio's taxonomic distance from the most common aquaculture species. Because the most commonly farmed fish are salmonids and cod in the Protacanthopterygii and sea bass, sea bream,
tilapia, and flatfish in the Percomorpha, zebrafish results may not be perfectly applicable. Various other models, such as the goldfish (Carassius auratus), medaka (Oryzias latipes), stickleback (Gasterosteus aculeatus), roach (Rutilus rutilus), pufferfish (Takifugu rubripes) and swordtail (Xiphophorus hellerii), are used less often but are closer to particular target species.
The main exceptions are the carps (including the grass carp, Ctenopharyngodon idella), which, like Danio, belong to the Cyprinidae, and the milkfish (Chanos chanos), another relatively close ostariophysan. However, Danio consistently proves to be a useful model for mammals in many cases, and there is dramatically more genetic distance between zebrafish and mammals than between Danio and any farmed fish.
Neurochemistry
In a glucocorticoid receptor-defective mutant with reduced exploratory behavior, fluoxetine rescued the normal exploratory behavior. This demonstrates relationships between glucocorticoids, fluoxetine, and exploration in this fish.
DNA repair
Zebrafish have been used as a model for studying DNA repair pathways. Embryos of externally fertilized fish species, such as zebrafish during their development, are directly exposed to environmental conditions such as pollutants and reactive oxygen species that may cause damage to their DNA. To cope with such DNA damages, a variety of different DNA repair pathways are expressed during development. Zebrafish have, in recent years, proven to be a useful model for assessing environmental pollutants that might cause DNA damage.
Drug discovery and development
The zebrafish and zebrafish larva is a suitable model organism for drug discovery and development. As a vertebrate with 70% genetic homology with humans, it can be predictive of human health and disease, while its small size and fast development facilitates experiments on a larger and quicker scale than with more traditional in vivo studies, including the development of higher-throughput, automated investigative tools. As demonstrated through ongoing research programmes, the zebrafish model enables researchers not only to identify genes that might underlie human disease, but also to develop novel therapeutic agents in drug discovery programmes. Zebrafish embryos have proven to be a rapid, cost-efficient, and reliable teratology assay model.
Drug screens
Drug screens in zebrafish can be used to identify novel classes of compounds with biological effects, or to repurpose existing drugs for novel uses; an example of the latter would be a screen which found that a commonly used statin (rosuvastatin) can suppress the growth of prostate cancer. To date, 65 small-molecule screens have been carried out and at least one has led to clinical trials. Within these screens, many technical challenges remain to be resolved, including differing rates of drug absorption resulting in levels of internal exposure that cannot be extrapolated from the water concentration, and high levels of natural variation between individual animals.
Toxico- or pharmacokinetics
To understand drug effects, the internal drug exposure is essential, as this drives the pharmacological effect. Translating experimental results from zebrafish to higher vertebrates (like humans) requires concentration-effect relationships, which can be derived from pharmacokinetic and pharmacodynamic analysis.
Because of its small size, however, it is very challenging to quantify the internal drug exposure. Traditionally, multiple blood samples would be drawn to characterize the drug concentration profile over time, but in zebrafish larvae this technique remains to be developed. To date, only a single pharmacokinetic model, for paracetamol, has been developed in zebrafish larvae.
Computational data analysis
Using smart data analysis methods, pathophysiological and pharmacological processes can be understood and subsequently translated to higher vertebrates, including humans. An example is the use of systems pharmacology, which is the integration of systems biology and pharmacometrics.
Systems biology characterizes (part of) an organism by a mathematical description of all relevant processes. These can be for example different signal transduction pathways that upon a specific signal lead to a certain response. By quantifying these processes, their behaviour in healthy and diseased situation can be understood and predicted.
Pharmacometrics uses data from preclinical experiments and clinical trials to characterize the pharmacological processes that are underlying the relation between the drug dose and its response or clinical outcome. These can be for example the drug absorption in or clearance from the body, or its interaction with the target to achieve a certain effect. By quantifying these processes, their behaviour after different doses or in different patients can be understood and predicted to new doses or patients.
By integrating these two fields, systems pharmacology has the potential to improve the understanding of the interaction of the drug with the biological system by mathematical quantification and subsequent prediction to new situations, like new drugs or new organisms or patients.
Using these computational methods, the previously mentioned analysis of paracetamol internal exposure in zebrafish larvae showed reasonable correlation between paracetamol clearance in zebrafish with that of higher vertebrates, including humans.
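As a minimal sketch of the kind of model such analyses build on (not the published paracetamol model itself), a one-compartment pharmacokinetic model with first-order elimination gives C(t) = (dose / V) · exp(−(CL / V) · t); the parameter values below are invented for illustration and the units are hypothetical.

```python
import math

def concentration(t_h, dose, volume, clearance):
    """Concentration at time t_h (hours) after an instantaneous dose (one compartment)."""
    return (dose / volume) * math.exp(-(clearance / volume) * t_h)

# Illustrative, made-up parameters (e.g. ng dose, nL volume, nL/h clearance).
dose, volume, clearance = 10.0, 500.0, 50.0
for t in (0, 1, 2, 4, 8):
    print(t, round(concentration(t, dose, volume, clearance), 4))
```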
Medical research
Cancer
Zebrafish have been used to make several transgenic models of cancer, including melanoma, leukemia, pancreatic cancer and hepatocellular carcinoma. Zebrafish expressing mutated forms of either the BRAF or NRAS oncogenes develop melanoma when placed onto a p53 deficient background. Histologically, these tumors strongly resemble the human disease, are fully transplantable, and exhibit large-scale genomic alterations. The BRAF melanoma model was utilized as a platform for two screens published in March 2011 in the journal Nature. In one study, the model was used as a tool to understand the functional importance of genes known to be amplified and overexpressed in human melanoma. One gene, SETDB1, markedly accelerated tumor formation in the zebrafish system, demonstrating its importance as a new melanoma oncogene. This was particularly significant because SETDB1 is known to be involved in the epigenetic regulation that is increasingly appreciated to be central to tumor cell biology.
In another study, an effort was made to therapeutically target the genetic program present in the tumor's origin neural crest cell using a chemical screening approach. This revealed that an inhibition of the DHODH protein (by a small molecule called leflunomide) prevented development of the neural crest stem cells which ultimately give rise to melanoma via interference with the process of transcriptional elongation. Because this approach would aim to target the "identity" of the melanoma cell rather than a single genetic mutation, leflunomide may have utility in treating human melanoma.
Cardiovascular disease
In cardiovascular research, the zebrafish has been used to model human myocardial infarction. The zebrafish heart completely regenerates within about two months of injury without any scar formation. The alpha-1 adrenergic signalling mechanism involved in this process was identified in a 2023 study. Zebrafish is also used as a model for blood clotting, blood vessel development, and congenital heart and kidney disease.
Immune system
In programmes of research into acute inflammation, a major underpinning process in many diseases, researchers have established a zebrafish model of inflammation, and its resolution. This approach allows detailed study of the genetic controls of inflammation and the possibility of identifying potential new drugs.
Zebrafish has been extensively used as a model organism to study vertebrate innate immunity. The innate immune system is capable of phagocytic activity by 28 to 30 h postfertilization (hpf) while adaptive immunity is not functionally mature until at least 4 weeks postfertilization.
Infectious diseases
As the immune system is relatively conserved between zebrafish and humans, many human infectious diseases can be modeled in zebrafish. The transparent early life stages are well suited for in vivo imaging and genetic dissection of host-pathogen interactions. Zebrafish models for a wide range of bacterial, viral and parasitic pathogens have already been established; for example, the zebrafish model for tuberculosis provides fundamental insights into the mechanisms of pathogenesis of mycobacteria. Other bacteria commonly studied using zebrafish models include Clostridioides difficile, Staphylococcus aureus, and Pseudomonas aeruginosa. Furthermore, robotic technology has been developed for high-throughput antimicrobial drug screening using zebrafish infection models.
Repairing retinal damage
Another notable characteristic of the zebrafish is that it possesses four types of cone cell, with ultraviolet-sensitive cells supplementing the red, green and blue cone cell subtypes found in humans. Zebrafish can thus observe a very wide spectrum of colours. The species is also studied to better understand the development of the retina; in particular, how the cone cells of the retina become arranged into the so-called 'cone mosaic'. Zebrafish, in addition to certain other teleost fish, are particularly noted for having extreme precision of cone cell arrangement.
This study of the zebrafish's retinal characteristics has also extrapolated into medical enquiry. In 2007, researchers at University College London grew a type of zebrafish adult stem cell found in the eyes of fish and mammals that develops into neurons in the retina. These could be injected into the eye to treat diseases that damage retinal neurons—nearly every disease of the eye, including macular degeneration, glaucoma, and diabetes-related blindness. The researchers studied Müller glial cells in the eyes of humans aged from 18 months to 91 years, and were able to develop them into all types of retinal neurons. They were also able to grow them easily in the lab. The stem cells successfully migrated into diseased rats' retinas, and took on the characteristics of the surrounding neurons. The team stated that they intended to develop the same approach in humans.
Muscular dystrophies
Muscular dystrophies (MD) are a heterogeneous group of genetic disorders that cause muscle weakness, abnormal contractions and muscle wasting, often leading to premature death. Zebrafish is widely used as model organism to study muscular dystrophies. For example, the sapje (sap) mutant is the zebrafish orthologue of human Duchenne muscular dystrophy (DMD). The Machuca-Tzili and co-workers applied zebrafish to determine the role of alternative splicing factor, MBNL, in myotonic dystrophy type 1 (DM1) pathogenesis. More recently, Todd et al. described a new zebrafish model designed to explore the impact of CUG repeat expression during early development in DM1 disease. Zebrafish is also an excellent animal model to study congenital muscular dystrophies including CMD Type 1 A (CMD 1A) caused by mutation in the human laminin α2 (LAMA2) gene. The zebrafish, because of its advantages discussed above, and in particular the ability of zebrafish embryos to absorb chemicals, has become a model of choice in screening and testing new drugs against muscular dystrophies.
Bone physiology and pathology
Zebrafish have been used as model organisms for bone metabolism, tissue turnover, and resorbing activity. These processes are largely evolutionary conserved. They have been used to study osteogenesis (bone formation), evaluating differentiation, matrix deposition activity, and cross-talk of skeletal cells, to create and isolate mutants modeling human bone diseases, and test new chemical compounds for the ability to revert bone defects. The larvae can be used to follow new (de novo) osteoblast formation during bone development. They start mineralising bone elements as early as 4 days post fertilisation. Recently, adult zebrafish are being used to study complex age related bone diseases such as osteoporosis and osteogenesis imperfecta. The (elasmoid) scales of zebrafish function as a protective external layer and are little bony plates made by osteoblasts. These exoskeletal structures are formed by bone matrix depositing osteoblasts and are remodeled by osteoclasts. The scales also act as the main calcium storage of the fish. They can be cultured ex-vivo (kept alive outside of the organism) in a multi-well plate, which allows manipulation with drugs and even screening for new drugs that could change bone metabolism (between osteoblasts and osteoclasts).
Diabetes
Zebrafish pancreas development is highly homologous to that of mammals, such as mice. The signaling mechanisms and the way the pancreas functions are very similar. The pancreas has an endocrine compartment, which contains a variety of cells. Pancreatic PP cells that produce polypeptides and β-cells that produce insulin are two examples of such cells. This structure of the pancreas, along with the glucose homeostasis system, is helpful in studying diseases, such as diabetes, that are related to the pancreas. Models for pancreas function, such as fluorescent staining of proteins, are useful in determining the processes of glucose homeostasis and the development of the pancreas. Glucose tolerance tests have been developed using zebrafish, and can now be used to test for glucose intolerance or diabetes in humans. The function of insulin is also being tested in zebrafish, which will further contribute to human medicine. Much of the knowledge of glucose homeostasis has come from work on zebrafish that was later transferred to humans.
Obesity
Zebrafish have been used as a model system to study obesity, with research into both genetic obesity and over-nutrition induced obesity. Obese zebrafish, similar to obese mammals, show dysregulation of lipid controlling metabolic pathways, which leads to weight gain without normal lipid metabolism. Also like mammals, zebrafish store excess lipids in visceral, intramuscular, and subcutaneous adipose deposits. These reasons and others make zebrafish good models for studying obesity in humans and other species. Genetic obesity is usually studied in transgenic or mutated zebrafish with obesogenic genes. As an example, transgenic zebrafish with overexpressed AgRP, an endogenous melanocortin antagonist, showed increased body weight and adipose deposition during growth. Though zebrafish genes may not be the exact same as human genes, these tests could provide important insight into possible genetic causes and treatments for human genetic obesity. Diet-induced obesity zebrafish models are useful, as diet can be modified from a very early age. High fat diets and general overfeeding diets both show rapid increases in adipose deposition, increased BMI, hepatosteatosis, and hypertriglyceridemia. However, the normal fat, overfed specimens are still metabolically healthy, while high-fat diet specimens are not. Understanding differences between types of feeding-induced obesity could prove useful in human treatment of obesity and related health conditions.
Environmental toxicology
Zebrafish have been used as a model system in environmental toxicology studies.
Epilepsy
Zebrafish have been used as a model system to study epilepsy. Mammalian seizures can be recapitulated molecularly, behaviorally, and electrophysiologically, using a fraction of the resources required for experiments in mammals.
See also
Japanese rice fish or medaka, another fish used for genetic, developmental, and biomedical research
List of freshwater aquarium fish species
Denison barb
References
Further reading
External links
British Association of Zebrafish Husbandry
International Zebrafish Society (IZFS)
European Society for Fish Models in Biology and Medicine (EuFishBioMed)
The Zebrafish Information Network (ZFIN)
The Zebrafish International Resource Center (ZIRC)
The European Zebrafish Resource Center (EZRC)
The China Zebrafish Resource Center (CZRC)
The Zebrafish Genome Sequencing Project at the Wellcome Trust Sanger Institute
FishMap: The Zebrafish Community Genomics Browser at the Institute of Genomics and Integrative Biology (IGIB)
WebHome Zebrafish GenomeWiki Beta Preview at the IGIB
Genome sequencing initiative at the IGIB
Danio rerio at Danios.info
Sanger Institute Zebrafish Mutation Resource
Zebrafish genome via Ensembl
FishforScience.com – using zebrafish for medical research
FishForPharma
Breeding Zebrafish
Fish described in 1822
Danio
Fish of Bangladesh
Freshwater fish of India
Freshwater fish of Pakistan
Animal models
Stem cell research
Regenerative biomedicine
Animal models in neuroscience
Taxa named by Francis Buchanan-Hamilton
Fish of Nepal
Fish of Bhutan | Zebrafish | [
"Chemistry",
"Biology"
] | 8,921 | [
"Stem cell research",
"Model organisms",
"Translational medicine",
"Animal models",
"Tissue engineering"
] |
5,014 | https://en.wikipedia.org/wiki/Bistability | In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems.
In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them.
A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time.
Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger.
Optical bistability is an attribute of certain optical devices where two resonant transmissions states are possible and stable, dependent on the input.
Bistability can also arise in biochemical systems, where it creates digital, switch-like outputs from the constituent chemical concentrations and activities. It is often associated with hysteresis in such systems.
Mathematical modelling
In the mathematical language of dynamic systems analysis, one of the simplest bistable systems is dy/dt = y − y^3.
This system describes a ball rolling down a curve with shape y^4/4 − y^2/2, and has three equilibrium points: y = 1, y = 0, and y = −1. The middle point y = 0 is unstable, while the other two points are stable. The direction of change of y over time depends on the initial condition y(0). If the initial condition is positive (y(0) > 0), then the solution y(t) approaches 1 over time, but if the initial condition is negative (y(0) < 0), then y(t) approaches −1 over time. Thus, the dynamics are "bistable". The final state of the system can be either y = 1 or y = −1, depending on the initial conditions.
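A short numerical sketch of this system: forward-Euler trajectories started from positive and negative initial conditions settle into the two different stable states, +1 and −1.

```python
def simulate(y0, dt=0.01, steps=2000):
    """Integrate dy/dt = y - y**3 by forward Euler from initial condition y0."""
    y = y0
    for _ in range(steps):
        y += dt * (y - y ** 3)
    return y

for y0 in (0.1, 1.8, -0.1, -1.8):
    print(y0, round(simulate(y0), 4))   # approximately +1, +1, -1, -1
```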
The appearance of a bistable region can be understood for the model system dy/dt = ry − y^3,
which undergoes a supercritical pitchfork bifurcation with bifurcation parameter r.
In biological and chemical systems
Bistability is key for understanding basic phenomena of cellular functioning, such as decision-making processes in cell cycle progression, cellular differentiation, and apoptosis. It is also involved in loss of cellular homeostasis associated with early events in cancer onset and in prion diseases as well as in the origin of new species (speciation).
Bistability can be generated by a positive feedback loop with an ultrasensitive regulatory step. Positive feedback loops, such as the simple X activates Y and Y activates X motif, essentially link output signals to their input signals and have been noted to be an important regulatory motif in cellular signal transduction because positive feedback loops can create switches with an all-or-nothing decision. Studies have shown that numerous biological systems, such as Xenopus oocyte maturation, mammalian calcium signal transduction, and polarity in budding yeast, incorporate multiple positive feedback loops with different time scales (slow and fast). Having multiple linked positive feedback loops with different time scales ("dual-time switches") allows for (a) increased regulation: two switches that have independent changeable activation and deactivation times; and (b) noise filtering.
Bistability can also arise in a biochemical system only for a particular range of parameter values, where the parameter can often be interpreted as the strength of the feedback. In several typical examples, the system has only one stable fixed point at low values of the parameter. A saddle-node bifurcation gives rise to a pair of new fixed points emerging, one stable and the other unstable, at a critical value of the parameter. The unstable solution can then form another saddle-node bifurcation with the initial stable solution at a higher value of the parameter, leaving only the higher fixed solution. Thus, at values of the parameter between the two critical values, the system has two stable solutions. An example of a dynamical system that demonstrates similar features is
where is the output, and is the parameter, acting as the input.
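A generic illustrative model of this behaviour (not the article's specific equation, which is not reproduced here): a sigmoidal positive-feedback production term plus linear decay, dy/dt = r + y^2/(1 + y^2) − 0.5·y, has a single steady state for small or large r and two stable steady states for intermediate r.

```python
def steady_state(y0, r, dt=0.01, steps=20000):
    """Integrate dy/dt = r + y**2/(1 + y**2) - 0.5*y to (near) steady state."""
    y = y0
    for _ in range(steps):
        y += dt * (r + y ** 2 / (1 + y ** 2) - 0.5 * y)
    return round(y, 3)

r = 0.03   # inside the bistable range for these constants
print(steady_state(0.0, r), steady_state(2.0, r))       # two distinct stable states
print(steady_state(0.0, 0.2), steady_state(2.0, 0.2))   # larger r: a single high state
```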
Bistability can be modified to be more robust and to tolerate significant changes in concentrations of reactants, while still maintaining its "switch-like" character. Feedback on both the activator of a system and inhibitor make the system able to tolerate a wide range of concentrations. An example of this in cell biology is that activated CDK1 (Cyclin Dependent Kinase 1) activates its activator Cdc25 while at the same time inactivating its inactivator, Wee1, thus allowing for progression of a cell into mitosis. Without this double feedback, the system would still be bistable, but would not be able to tolerate such a wide range of concentrations.
Bistability has also been described in the embryonic development of Drosophila melanogaster (the fruit fly). Examples are anterior-posterior and dorso-ventral axis formation and eye development.
A prime example of bistability in biological systems is that of Sonic hedgehog (Shh), a secreted signaling molecule, which plays a critical role in development. Shh functions in diverse processes in development, including patterning limb bud tissue differentiation. The Shh signaling network behaves as a bistable switch, allowing the cell to abruptly switch states at precise Shh concentrations. gli1 and gli2 transcription is activated by Shh, and their gene products act as transcriptional activators for their own expression and for targets downstream of Shh signaling. Simultaneously, the Shh signaling network is controlled by a negative feedback loop wherein the Gli transcription factors activate the enhanced transcription of a repressor (Ptc). This signaling network illustrates the simultaneous positive and negative feedback loops whose exquisite sensitivity helps create a bistable switch.
Bistability can only arise in biological and chemical systems if three necessary conditions are fulfilled: positive feedback, a mechanism to filter out small stimuli and a mechanism to prevent increase without bound.
Bistable chemical systems have been studied extensively to analyze relaxation kinetics, non-equilibrium thermodynamics, stochastic resonance, as well as climate change. In bistable spatially extended systems the onset of local correlations and propagation of traveling waves have been analyzed.
Bistability is often accompanied by hysteresis. On a population level, if many realisations of a bistable system are considered (e.g. many bistable cells (speciation)), one typically observes bimodal distributions. In an ensemble average over the population, the result may simply look like a smooth transition, thus showing the value of single-cell resolution.
A specific type of instability is known as mode hopping, which is bistability in frequency space. Here trajectories can jump between two stable limit cycles, and thus show characteristics similar to normal bistability when measured inside a Poincaré section.
In mechanical systems
Bistability as applied in the design of mechanical systems is more commonly said to be "over centre": that is, work is done on the system to move it just past the peak, at which point the mechanism goes "over centre" to its secondary stable position. The result is a toggle-type action: work applied to the system below a threshold sufficient to send it "over centre" results in no change to the mechanism's state.
Springs are a common method of achieving an "over centre" action. A spring attached to a simple two position ratchet-type mechanism can create a button or plunger that is clicked or toggled between two mechanical states. Many ballpoint and rollerball retractable pens employ this type of bistable mechanism.
An even more common example of an over-center device is an ordinary electric wall switch. These switches are often designed to snap firmly into the "on" or "off" position once the toggle handle has been moved a certain distance past the center-point.
A ratchet-and-pawl is an elaboration—a multi-stable "over center" system used to create irreversible motion. The pawl goes over center as it is turned in the forward direction. In this case, "over center" refers to the ratchet being stable and "locked" in a given position until clicked forward again; it has nothing to do with the ratchet being unable to turn in the reverse direction.
Gallery
See also
Multistability – the generalized case of more than two stable points
In psychology
ferroelectric, ferromagnetic, hysteresis, bistable perception
Schmitt trigger
strong Allee effect
Interferometric modulator display, a bistable reflective display technology found in mirasol displays by Qualcomm
References
External links
BiStable Reed Sensor
Digital electronics
2 (number)
es:Biestable | Bistability | [
"Engineering"
] | 1,985 | [
"Electronic engineering",
"Digital electronics"
] |
5,036 | https://en.wikipedia.org/wiki/Berry%20paradox | The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters).
Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic". The paradox was called "Richard's paradox" by Jean-Yves Girard.
Overview
Consider the expression:
"The smallest positive integer not definable in under sixty letters."
Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, this means that there are positive integers that cannot be defined by phrases of under sixty letters. If there are positive integers that satisfy a given property, then there is a smallest positive integer that satisfies that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long, therefore it is definable in under sixty letters, and is not the smallest positive integer not definable in under sixty letters, and is not defined by this expression. This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it.
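The finiteness step in this argument can be made concrete with a rough bound. The short Python sketch below is an illustration added here, not part of Russell's presentation: it counts the strings of fewer than sixty letters over a 26-letter alphabet, ignoring spaces and punctuation, and since each phrase defines at most one integer, only finitely many positive integers can be defined this way.

```python
# Upper bound on the number of phrases of fewer than sixty letters, treating a
# phrase as any string over the 26-letter English alphabet (spaces and
# punctuation are ignored, purely for illustration).
phrase_bound = sum(26 ** k for k in range(60))  # lengths 0 through 59

print(f"{phrase_bound:e}")  # about 3.2e+83 possible phrases
# Each phrase defines at most one integer, so at most phrase_bound positive
# integers are "definable in under sixty letters", while there are infinitely
# many positive integers.
```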
Mathematician and computer scientist Gregory Chaitin in The Unknowable (1999) adds this comment: "Well, the Mexican mathematical historian Alejandro Garcidiego has taken the trouble to find that letter [of Berry's from which Russell penned his remarks], and it is rather a different paradox. Berry’s letter actually talks about the first ordinal that can’t be named in a finite number of words. According to Cantor’s theory such an ordinal must exist, but we’ve just named it in a finite number of words, which is a contradiction."
Resolution
The Berry paradox as formulated above arises because of systematic ambiguity in the word "definable". In other formulations of the Berry paradox, such as one that instead reads: "...not nameable in less..." the term "nameable" is also one that has this systematic ambiguity. Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. To resolve one of these paradoxes means to pinpoint exactly where our use of language went wrong and to provide restrictions on the use of language which may avoid them.
This family of paradoxes can be resolved by incorporating stratifications of meaning in language. Terms with systematic ambiguity may be written with subscripts denoting that one level of meaning is considered a higher priority than another in their interpretation. "The number not nameable₀ in less than eleven words" may be nameable₁ in less than eleven words under this scheme.
However, one can read Alfred Tarski's contributions to the Liar Paradox to find how this resolution in languages falls short. Alfred Tarski diagnosed the paradox as arising only in languages that are "semantically closed", by which he meant a language in which it is possible for one sentence to predicate truth (or falsehood) of another sentence in the same language (or even of itself). To avoid self-contradiction, it is necessary when discussing truth values to envision levels of languages, each of which can predicate truth (or falsehood) only of languages at a lower level. So, when one sentence refers to the truth-value of another, it is semantically higher. The sentence referred to is part of the "object language", while the referring sentence is considered to be a part of a "meta-language" with respect to the object language. It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a system from becoming self-referential.
However, this system is incomplete. One would like to be able to make statements such as "For every statement in level α of the hierarchy, there is a statement at level α+1 which asserts that the first statement is false." This is a true, meaningful statement about the hierarchy that Tarski defines, but it refers to statements at every level of the hierarchy, so it must be above every level of the hierarchy, and is therefore not possible within the hierarchy (although bounded versions of the sentence are possible). Saul Kripke is credited with identifying this incompleteness in Tarski's hierarchy in his highly cited paper "Outline of a theory of truth," and it is recognized as a general problem in hierarchical languages.
Formal analogues
Using programs or proofs of bounded lengths, it is possible to construct an analogue of the Berry expression in a formal mathematical language, as has been done by Gregory Chaitin. Though the formal analogue does not lead to a logical contradiction, it does prove certain impossibility results.
George Boolos (1989) built on a formalized version of Berry's paradox to prove Gödel's incompleteness theorem in a new and much simpler way. The basic idea of his proof is that a proposition that holds of x if and only if x = n for some natural number n can be called a definition for n, and that the set {(n, k): n has a definition that is k symbols long} can be shown to be representable (using Gödel numbers). Then the proposition "m is the first number not definable in less than k symbols" can be formalized and shown to be a definition in the sense just stated.
Relationship with Kolmogorov complexity
It is not possible in general to unambiguously define what is the minimal number of symbols required to describe a given string (given a specific description mechanism). In this context, the terms string and number may be used interchangeably, since a number is actually a string of symbols, e.g. an English word (like the word "eleven" used in the paradox) while, on the other hand, it is possible to refer to any word with a number, e.g. by the number of its position in a given dictionary or by suitable encoding. Some long strings can be described exactly using fewer symbols than those required by their full representation, as is often achieved using data compression. The complexity of a given string is then defined as the minimal length that a description requires in order to (unambiguously) refer to the full representation of that string.
The Kolmogorov complexity is defined using formal languages, or Turing machines which avoids ambiguities about which string results from a given description. It can be proven that the Kolmogorov complexity is not computable. The proof by contradiction shows that if it were possible to compute the Kolmogorov complexity, then it would also be possible to systematically generate paradoxes similar to this one, i.e. descriptions shorter than what the complexity of the described string implies. That is to say, the definition of the Berry number is paradoxical because it is not actually possible to compute how many words are required to define a number, and we know that such computation is not possible because of the paradox.
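The contradiction behind that proof can be sketched in code. The Python fragment below is only an illustration of the argument, not a working algorithm: it assumes a hypothetical computable function K(s) returning the Kolmogorov complexity of a string, which, as stated above, cannot exist.

```python
from itertools import count, product

def K(s: str) -> int:
    """Hypothetical oracle for Kolmogorov complexity.

    No such computable function exists; it is assumed here only to show how
    its existence would reproduce a Berry-style paradox.
    """
    raise NotImplementedError("Kolmogorov complexity is not computable")

def first_string_with_complexity_above(n: int) -> str:
    # Enumerate binary strings in order of length and return the first one
    # whose (hypothetically computable) complexity exceeds n.
    for length in count(1):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if K(s) > n:
                return s

# If K were computable, this short program (plus the constant n) would itself
# be a description of the returned string of length roughly log2(n) plus a
# fixed constant, far below n for large n, contradicting K(s) > n.
```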
See also
Self-reference
List of self-referential paradoxes
Notes
References
Reprinted in
Further reading
French, James D. (1988) "The False Assumption Underlying Berry's Paradox," Journal of Symbolic Logic 53: 1220–1223.
Russell, Bertrand (1906) "Les paradoxes de la logique", Revue de métaphysique et de morale 14: 627–650
External links
Roosen-Runge, Peter H. (1997) "Berry's Paradox."
Eponymous paradoxes
Mathematical paradoxes
Self-referential paradoxes
Algorithmic information theory
Logical paradoxes | Berry paradox | [
"Mathematics"
] | 1,700 | [
"Mathematical problems",
"Mathematical paradoxes"
] |
5,165 | https://en.wikipedia.org/wiki/Country | A country is a distinct part of the world, such as a state, nation, or other political entity. When referring to a specific polity, the term "country" may refer to a sovereign state, states with limited recognition, constituent country, or a dependent territory. Most sovereign states, but not all countries, are members of the United Nations. There is no universal agreement on the number of "countries" in the world since several states have disputed sovereignty status, limited recognition and a number of non-sovereign entities are commonly considered countries.
The definition and usage of the word "country" are flexible and have changed over time. The Economist wrote in 2010 that "any attempt to find a clear definition of a country soon runs into a thicket of exceptions and anomalies."
Areas much smaller than a political entity may be referred to as a "country", such as the West Country in England, "big sky country" (used in various contexts of the American West), "coal country" (used to describe coal-mining regions), or simply "the country" (used to describe a rural area). The term "country" is also used as a qualifier descriptively, such as country music or country living.
Etymology
The word country comes from Old French , which derives from Vulgar Latin () ("(land) lying opposite"; "(land) spread before"), derived from ("against, opposite"). It most likely entered the English language after the Franco-Norman invasion during the 11th century.
Definition of a country
In English the word has increasingly become associated with political divisions, so that one sense, associated with the indefinite article – "a country" – is now frequently applied as a synonym for a state or a former sovereign state. It may also be used as a synonym for "nation". Taking as examples Canada, Sri Lanka, and Yugoslavia, cultural anthropologist Clifford Geertz wrote in 1997 that "it is clear that the relationships between 'country' and 'nation' are so different from one [place] to the next as to be impossible to fold into a dichotomous opposition as they are into a promiscuous fusion."
Areas much smaller than a political state may be referred to as countries, such as the West Country in England, "big sky country" (used in various contexts of the American West), "coal country" (used to describe coal-mining regions in several sovereign states) and many other terms. The word "country" is also used for the sense of native sovereign territory, such as the widespread use of Indian country in the United States.
The term "country" in English may also be wielded to describe rural areas, or used in the form "countryside." Raymond Williams, a Welsh scholar, wrote in 1975:
The unclear definition of "country" in modern English was further commented upon by philosopher Simon Keller:
Melissa Lucashenko, an Aboriginal Australian writer, expressed the difficulty of defining "country" in a 2005 essay, "Unsettlement":
Statehood
When referring to a specific polity, the term "country" may refer to a sovereign state, states with limited recognition, constituent country, or a dependent territory. A sovereign state is a political entity that has supreme legitimate authority over a part of the world. There is no universal agreement on the number of "countries" in the world since several states have disputed sovereignty status, and a number of non-sovereign entities are commonly called countries. No definition is binding on all the members of the community of nations on the criteria for statehood. State practice relating to the recognition of a country typically falls somewhere between the declaratory and constitutive approaches. International law defines sovereign states as having a permanent population, defined territory, a government not under another, and the capacity to interact with other states.
The declarative theory outlined in the 1933 Montevideo Convention describes a state in Article 1 as:
Having a permanent population
Having a defined territory
Having a government
Having the ability to enter into relations with other states
The Montevideo Convention in Article 3 implies that a sovereign state can still be a sovereign state even if no other countries recognise that it exists. As a restatement of customary international law, the Montevideo Convention merely codified existing legal norms and its principles, and therefore does not apply merely to the signatories of international organizations (such as the United Nations), but to all subjects of international law as a whole. A similar opinion has been expressed by the European Economic Community, reiterated by the European Union, in the principal statement of its Badinter Committee, and by James Crawford, Challis Professor of International Law and later a judge of the International Court of Justice.
According to the constitutive theory a state is a legal entity of international law if, and only if, it is recognised as sovereign by at least one other country. Because of this, new states could not immediately become part of the international community or be bound by international law, and recognised nations did not have to respect international law in their dealings with them. In 1912, L. F. L. Oppenheim said the following, regarding constitutive theory:
In 1976, the Organisation of African Unity defined state recognition as:
Some countries, such as Taiwan, Sahrawi Republic and Kosovo have disputed sovereignty and/or limited recognition among some countries. Some sovereign states are unions of separate polities, each of which may also be considered a country in its own right, called constituent countries. The Danish Realm consists of Denmark proper, the Faroe Islands, and Greenland. The Kingdom of the Netherlands consists of the Netherlands proper, Aruba, Curaçao, and Sint Maarten. The United Kingdom consists of England, Scotland, Wales, and Northern Ireland.
Dependent territories are the territories of a sovereign state that are outside of its proper territory. These include the overseas territories of New Zealand, the dependencies of Norway, the British Overseas Territories and Crown Dependencies, the territories of the United States, the external territories of Australia, the special administrative regions of China, the autonomous regions of the Danish Realm, Åland, Overseas France, and the Caribbean Netherlands. Some dependent territories are treated as a separate "country of origin" in international trade, such as Hong Kong, Greenland, and Macau.
Identification
Symbols of a country may incorporate cultural, religious or political symbols of any nation that the country includes. Many categories of symbols can be seen in flags, coats of arms, or seals.
Name
Most countries have a long name and a short name. The long name is typically used in formal contexts and often describes the country's form of government. The short name is the country's common name by which it is typically identified. The International Organization for Standardization maintains a list of country codes as part of ISO 3166 to designate each country with a two-letter country code. The name of a country can hold cultural and diplomatic significance. Upper Volta changed its name to Burkina Faso to reflect the end of French colonization, and the name of North Macedonia was disputed for years due to a conflict with the similarly named Macedonia region in Greece. The ISO 3166-1 standard currently comprises 249 countries, 193 of which are sovereign states that are members of the United Nations.
Flags
Originally, flags representing a country would generally be the personal flag of its rulers; however, over time, the practice of using personal banners as flags of places was abandoned in favor of flags that had some significance to the nation, often its patron saint. Early examples of these were the maritime republics such as Genoa which could be said to have a national flag as early as the 12th century. However, these were still mostly used in the context of marine identification.
Although some flags date back earlier, widespread use of flags outside of military or naval context begins only with the rise of the idea of the nation state at the end of the 18th century and particularly are a product of the Age of Revolution. Revolutions such as those in France and America called for people to begin thinking of themselves as citizens as opposed to subjects under a king, and thus necessitated flags that represented the collective citizenry, not just the power and right of a ruling family. With nationalism becoming common across Europe in the 19th century, national flags came to represent most of the states of Europe. Flags also began fostering a sense of unity between different peoples, such as the Union Jack representing a union between England and Scotland, or began to represent unity between nations in a perceived shared struggle, for example, the Pan-Slavic colors or later Pan-Arab colors.
As Europeans colonized significant portions of the world, they exported ideas of nationhood and national symbols, including flags, with the adoption of a flag becoming seen as integral to the nation-building process. Political change, social reform, and revolutions combined with a growing sense of nationhood among ordinary people in the 19th and 20th centuries led to the birth of new nations and flags around the globe. With so many flags being created, interest in these designs began to develop and the study of flags, vexillology, at both professional and amateur levels, emerged. After World War II, Western vexillology went through a phase of rapid development, with many research facilities and publications being established.
National anthems
A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. Though the custom of an officially adopted national anthem became popular only in the 19th century, some national anthems predate this period, often existing as patriotic songs long before designation as national anthem. Several countries remain without an official national anthem. In these cases, there are established de facto anthems played at sporting events or diplomatic receptions. These include the United Kingdom ("God Save the King") and Sweden (). Some sovereign states that are made up of multiple countries or constituencies have associated musical compositions for each of them (such as with the United Kingdom, Russia, and the Soviet Union). These are sometimes referred to as national anthems even though they are not sovereign states (for example, "Hen Wlad Fy Nhadau" is used for Wales, part of the United Kingdom).
Other symbols
Coats of arms or national emblems
Seals or stamps
National mottos
National colors
Patriotism
A positive emotional connection to a country a person belongs to is called patriotism. Patriotism is a sense of love for, devotion to, and sense of attachment to one's country. This attachment can be a combination of many different feelings, and language relating to one's homeland, including ethnic, cultural, political, or historical aspects. It encompasses a set of concepts closely related to nationalism, mostly civic nationalism and sometimes cultural nationalism.
Economy
Several organizations seek to identify trends to produce economy country classifications. Countries are often distinguished as developing countries or developed countries.
The United Nations Department of Economic and Social Affairs annually produces the World Economic Situation and Prospects report, which classifies states as developed countries, economies in transition, or developing countries. The report classifies country development based on per capita gross national income (GNI). The UN identifies subgroups within broad categories based on geographical location or ad hoc criteria. The UN outlines the geographical regions for developing economies: Africa, East Asia, South Asia, Western Asia, and Latin America and the Caribbean. The 2019 report recognizes developed countries only in North America, Europe, Asia, and the Pacific. The majority of economies in transition and developing countries are found in Africa, Asia, Latin America, and the Caribbean.
The World Bank also classifies countries based on GNI per capita. The World Bank Atlas method classifies countries as low-income economies, lower-middle-income economies, upper-middle-income economies, or high-income economies. For the 2020 fiscal year, the World Bank defines low-income economies as countries with a GNI per capita of $1,025 or less in 2018; lower-middle-income economies as countries with a GNI per capita between $1,026 and $3,995; upper-middle-income economies as countries with a GNI per capita between $3,996 and $12,375; and high-income economies as countries with a GNI per capita of $12,376 or more.
It also identifies regional trends. The World Bank defines its regions as East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, North America, South Asia, and Sub-Saharan Africa. Lastly, the World Bank distinguishes countries based on its operational policies. The three categories include International Development Association (IDA) countries, International Bank for Reconstruction and Development (IBRD) countries, and Blend countries.
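As a concrete reading of the income thresholds quoted above, the sketch below classifies an economy by its 2018 GNI per capita using the fiscal-year-2020 cut-offs. The function name and the treatment of the boundaries as inclusive upper limits are illustrative assumptions, not part of the World Bank's published methodology text.

```python
def world_bank_income_group(gni_per_capita_usd: float) -> str:
    """Classify an economy by 2018 GNI per capita (World Bank Atlas method),
    using the fiscal-year-2020 thresholds quoted above."""
    if gni_per_capita_usd <= 1_025:
        return "low-income"
    elif gni_per_capita_usd <= 3_995:
        return "lower-middle-income"
    elif gni_per_capita_usd <= 12_375:
        return "upper-middle-income"
    else:
        return "high-income"

print(world_bank_income_group(980))     # low-income
print(world_bank_income_group(5_400))   # upper-middle-income
```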
See also
Country (identity)
Lists by country
List of former sovereign states
Lists of sovereign states and dependent territories
List of sovereign states and dependent territories by continent
List of transcontinental countries
Micronation
Quasi-state
Notes
References
Works cited
Further reading
Defining what makes a country The Economist
External links
The CIA World Factbook
Country Studies from the United States Library of Congress
Foreign Information by Country and Country & Territory Guides from GovPubs at UCB Libraries
United Nations statistics division
Human geography | Country | [
"Environmental_science"
] | 2,664 | [
"Environmental social science",
"Human geography"
] |
5,170 | https://en.wikipedia.org/wiki/Combinatorics | Combinatorics is an area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
Definition
The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with:
the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
the existence of such structures that satisfy certain given criteria,
the construction of these structures, perhaps in many ways, and
optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.
History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) of a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. Earlier, in the Ostomachion, Archimedes (3rd century BCE) may have considered the number of configurations of a tiling puzzle, while combinatorial interests possibly were present in lost works by Apollonius.
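Sushruta's count can be checked directly. The short sketch below enumerates the non-empty selections from six tastes and compares the total with 2^6 − 1; the particular taste names are the traditional Ayurvedic labels and are used here only for illustration.

```python
from itertools import combinations

tastes = ["sweet", "sour", "salty", "bitter", "pungent", "astringent"]

# Count the non-empty selections of tastes, taken one, two, ..., six at a time.
total = sum(len(list(combinations(tastes, k))) for k in range(1, len(tastes) + 1))
print(total)       # 63
print(2 ** 6 - 1)  # 63, the same count obtained as 2^6 - 1
```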
In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra () provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra () established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.
The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem.
In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field.
Approaches and subfields of combinatorics
Enumerative combinatorics
Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.
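As a small concrete instance of enumeration, the sketch below computes Fibonacci numbers iteratively. A standard combinatorial interpretation, stated here as background rather than taken from the text above, is that the (n + 1)-th Fibonacci number counts the tilings of a 1 × n strip by squares and dominoes.

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number, with fibonacci(0) = 0 and fibonacci(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fibonacci(n + 1) equals the number of tilings of a 1 x n strip by squares
# and dominoes (a standard enumerative interpretation).
print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```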
Analytic combinatorics
Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Partition theory
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
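A minimal computational sketch of the basic enumeration problem studied here counts the partitions of an integer n by dynamic programming over the largest allowed part. This is a textbook method, included only as an illustration.

```python
def partition_count(n: int) -> int:
    """Count the integer partitions of n by dynamic programming:
    after allowing parts up to k, p[m] holds the number of ways to write m."""
    p = [1] + [0] * n          # only the empty partition makes 0
    for k in range(1, n + 1):  # allow parts of size k
        for m in range(k, n + 1):
            p[m] += p[m - k]
    return p[n]

print([partition_count(n) for n in range(1, 11)])
# [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```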
Graph theory
Graphs are fundamental objects in combinatorics. Considerations of graph theory range from enumeration (e.g., the number of graphs on n vertices with k edges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph G and two numbers x and y, does the Tutte polynomial T_G(x, y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems.
Design theory
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as in Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system, which play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
Finite geometry
Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry).
Order theory
Order theory is the study of partially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras.
Matroid theory
Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics.
Extremal combinatorics
Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of set systems; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory.
The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is the complete bipartite graph K_{n,n}. Often it is too hard even to find the extremal answer f(n) exactly and one can only give an asymptotic estimate.
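The claim about triangle-free graphs can be verified by brute force for very small cases. The sketch below checks every graph on six labelled vertices and confirms that the maximum number of edges without a triangle is 9, the edge count of K_{3,3}; the exhaustive search over all 2^15 edge subsets is purely illustrative and does not scale.

```python
from itertools import combinations

vertices = range(6)
all_edges = list(combinations(vertices, 2))   # 15 possible edges
triples = list(combinations(vertices, 3))     # 20 possible triangles

best = 0
for mask in range(1 << len(all_edges)):       # every graph on 6 labelled vertices
    edges = {e for i, e in enumerate(all_edges) if mask >> i & 1}
    has_triangle = any(
        (a, b) in edges and (a, c) in edges and (b, c) in edges
        for a, b, c in triples
    )
    if not has_triangle:
        best = max(best, len(edges))

print(best)  # 9, the number of edges of the complete bipartite graph K_{3,3}
```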
Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle.
Probabilistic combinatorics
In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.
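For the question about the average number of triangles, one common reading of "random graph" is the Erdos-Renyi model G(n, p), in which the expectation is C(n, 3) p^3, since each of the C(n, 3) vertex triples forms a triangle with probability p^3. The Monte Carlo sketch below compares this exact value with an empirical average; the parameters are arbitrary illustrative choices.

```python
import random
from itertools import combinations
from math import comb

def triangle_count(n: int, p: float) -> int:
    """Sample an Erdos-Renyi graph G(n, p) and count its triangles."""
    edges = {e for e in combinations(range(n), 2) if random.random() < p}
    return sum(
        (a, b) in edges and (a, c) in edges and (b, c) in edges
        for a, b, c in combinations(range(n), 3)
    )

n, p, trials = 12, 0.3, 2000
empirical = sum(triangle_count(n, p) for _ in range(trials)) / trials
exact = comb(n, 3) * p ** 3
print(empirical, exact)  # both should be close to 5.94
```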
Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics.
Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group and representation theory, lattice theory and commutative algebra are common.
Combinatorics on words
Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics. While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field.
Geometric combinatorics
Geometric combinatorics is related to convex and discrete geometry. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is a historical name for discrete geometry.
It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics.
Topological combinatorics
Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology which is an older name for algebraic topology.
Arithmetic combinatorics
Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems.
Infinitary combinatorics
Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals.
Gian-Carlo Rota used the name continuous combinatorics to describe geometric probability, since there are many analogies between counting and measure.
Related fields
Combinatorial optimization
Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory.
Coding theory
Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory.
Discrete and computational geometry
Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.
Combinatorics and dynamical systems
Combinatorial aspects of dynamical systems is another emerging field. Here dynamical systems can be defined on combinatorial objects. See for example
graph dynamical system.
Combinatorics and physics
There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.
See also
Combinatorial biology
Combinatorial chemistry
Combinatorial data analysis
Combinatorial game theory
Combinatorial group theory
Discrete mathematics
List of combinatorics topics
Phylogenetics
Polynomial method in combinatorics
Notes
References
Björner, Anders; and Stanley, Richard P.; (2010); A Combinatorial Miscellany
Bóna, Miklós; (2011); A Walk Through Combinatorics (3rd ed.).
Graham, Ronald L.; Groetschel, Martin; and Lovász, László; eds. (1996); Handbook of Combinatorics, Volumes 1 and 2. Amsterdam, NL, and Cambridge, MA: Elsevier (North-Holland) and MIT Press.
Lindner, Charles C.; and Rodger, Christopher A.; eds. (1997); Design Theory, CRC-Press. .
Stanley, Richard P. (1997, 1999); Enumerative Combinatorics, Volumes 1 and 2, Cambridge University Press.
van Lint, Jacobus H.; and Wilson, Richard M.; (2001); A Course in Combinatorics, 2nd ed., Cambridge University Press.
External links
Combinatorial Analysis – an article in Encyclopædia Britannica Eleventh Edition
Combinatorics, a MathWorld article with many references.
Combinatorics, from a MathPages.com portal.
The Hyperbook of Combinatorics, a collection of math articles links.
The Two Cultures of Mathematics by W.T. Gowers, article on problem solving vs theory building.
"Glossary of Terms in Combinatorics"
List of Combinatorics Software and Databases | Combinatorics | [
"Mathematics"
] | 3,813 | [
"Discrete mathematics",
"Combinatorics"
] |
5,176 | https://en.wikipedia.org/wiki/Calculus | Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable.
Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus is widely used in science, engineering, biology, and even has applications in social science and other branches of math.
Etymology
In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis.
In Latin, the word calculus means "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin.
In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
History
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
Ancient precursors
Egypt
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (), but the formulae are simple instructions, with no indication as to how they were obtained.
Greece
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus () developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
During the Hellenistic period, this method was further developed by Archimedes (BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus. In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines.
China
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval
Middle East
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (AD) derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
India
Bhāskara II () was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if x ≈ y, then sin(y) − sin(x) ≈ (y − x)cos(y). This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus, but according to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".
Modern
Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.
A significant work was a treatise, inspired by Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670.
The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century. The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
Principles
Limits and infinitesimals
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dy and dx were taken to be infinitesimal, and the derivative dy/dx was their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
Differential calculus
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.
In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x². The "derivative" now takes the function f(x), defined by the expression "x²", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as will turn out.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative (the doubling function from above).
If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and: m = (change in y) / (change in x) = Δy / Δx.
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is m = (f(a + h) − f(a)) / h.
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero: f′(a) = lim (h → 0) (f(a + h) − f(a)) / h.
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function. Then f′(3) = lim (h → 0) ((3 + h)² − 3²) / h = lim (h → 0) (9 + 6h + h² − 9) / h = lim (h → 0) (6 + h) = 6.
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
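The limit can also be seen numerically. The following short Python sketch, added here only as an illustration, evaluates the difference quotient of the squaring function at 3 for shrinking values of h; the quotients approach the slope 6 found above.

# Difference quotients of f(x) = x^2 at a = 3 for shrinking h.
def f(x):
    return x * x

a = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    slope = (f(a + h) - f(a)) / h   # slope of the secant line through (3, 9) and (3 + h, f(3 + h))
    print(h, slope)                 # approximately 7.0, 6.1, 6.01, 6.001 -- approaching 6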
Leibniz notation
A common notation, introduced by Leibniz, for the derivative in the example above is y = x², dy/dx = 2x.
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example: d/dx (x²) = 2x.
In this usage, the dx in the denominator is read as "with respect to x". Another example of correct notation could be: g(t) = t² + 2t + 4, d/dt g(t) = 2t + 2.
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
Integral calculus
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.
A motivating example is the distance traveled in a given time. If the speed is constant, only multiplication is needed: distance = speed × time.
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If v(t) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between v(t) and the t-axis, between t = a and t = b.
To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.
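A minimal numerical sketch of this procedure in Python, added for illustration; the speed function v(t) = 3t² and the interval from 0 to 2 are assumed values chosen so that the exact distance is 8.

# Left-endpoint Riemann sums approximating distance = area under v(t) on [0, 2].
def v(t):
    return 3 * t * t                       # assumed speed function; exact area is 8

a, b = 0.0, 2.0
for n in [10, 100, 1000]:                  # number of equal segments
    dt = (b - a) / n                       # width of each segment
    distance = sum(v(a + i * dt) * dt for i in range(n))
    print(n, distance)                     # approaches 8 as the segment width shrinks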
The symbol of integration is ∫, an elongated S chosen to suggest summation. The definite integral is written as: ∫_a^b f(x) dx
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width Δx becomes the infinitesimally small dx.
The indefinite integral, or antiderivative, is written: ∫ f(x) dx.
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by: ∫ 2x dx = x² + C.
The unspecified constant present in the indefinite integral or antiderivative is known as the constant of integration.
Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then ∫_a^b f(x) dx = F(b) − F(a).
Furthermore, for every x in the interval (a, b), d/dx ∫_a^x f(t) dt = f(x).
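As an illustrative check (not part of the original text), the following Python sketch compares a Riemann-sum approximation of the definite integral of the assumed integrand f(x) = 2x over [0, 3] with F(3) − F(0) for the antiderivative F(x) = x².

# Numerical check of the fundamental theorem for f(x) = 2x and F(x) = x^2 on [0, 3].
def f(x):
    return 2 * x

def F(x):
    return x * x

a, b, n = 0.0, 3.0, 100000
dx = (b - a) / n
riemann_sum = sum(f(a + i * dx) * dx for i in range(n))   # approximates the definite integral
print(riemann_sum)       # about 8.99991, close to 9
print(F(b) - F(a))       # exactly 9.0, as the theorem predicts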
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.
Applications
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
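As one concrete illustration of root finding with calculus, here is a minimal Python sketch of Newton's method mentioned above; the equation x² − 2 = 0 and the starting guess are assumed purely for the example.

# Newton's method: repeatedly replace x by x - f(x)/f'(x) to approach a root of f.
def f(x):
    return x * x - 2          # positive root is the square root of 2

def f_prime(x):
    return 2 * x              # derivative of f

x = 1.0                       # assumed initial guess
for _ in range(6):
    x = x - f(x) / f_prime(x) # follow the tangent line down to where it crosses zero
print(x)                      # approximately 1.4142135623730951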
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and in studying radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.
See also
Glossary of calculus
List of calculus topics
List of derivatives and integrals in alternative calculi
List of differentiation identities
Publications in calculus
Table of integrals
References
Further reading
Uses synthetic differential geometry and nilpotent infinitesimals.
Keisler, H.J. (2000). Elementary Calculus: An Approach Using Infinitesimals. Retrieved 29 August 2010 from http://www.math.wisc.edu/~keisler/calc.html
External links
Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
The Role of Calculus in College Mathematics from ERICDigests.org
OpenCourseWare Calculus from the Massachusetts Institute of Technology
Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel.
Calculus training materials at imomath.com
The Excursion of Calculus, 1772 | Calculus | ["Mathematics"] | 6,351 | ["Calculus"] |
5,213 | https://en.wikipedia.org/wiki/Computing | Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and the development of both hardware and software. Computing has scientific, engineering, mathematical, technological, and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering.
The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers.
History
The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate) with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, which is thought to have been invented in Babylon between 2700 and 2300 BC. Abaci, of a more modern design, are still used as calculation tools today.
The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations.
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, the Manchester Transistor Computer. However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications.
In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. The MOSFET made it possible to build high-density integrated circuits, leading to what is known as the computer revolution or microcomputer revolution.
Computer
A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions is converted to machine instructions according to the CPU type.
The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.
Computer hardware
Computer hardware includes the physical parts of a computer, including the central processing unit, memory, and input/output. Computational logic and computer architecture are key topics in the field of computer hardware.
Computer software
Computer software, or just software, is a collection of computer programs and related data, which provides instructions to a computer. Software refers to one or more computer programs and data held in the storage of the computer. It is a set of programs, procedures, algorithms, as well as its documentation concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible.
Software is also sometimes used in a more narrow sense, meaning application software only.
System software
System software, or systems software, is computer software designed to operate and control computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software. System software and middleware manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user, unlike application software.
Application software
Application software, also known as an application or an app, is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. The system software manages the hardware and serves the application, which in turn serves the user.
Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps, such as Microsoft Office, are developed in multiple versions for several different platforms; others have narrower requirements and are generally referred to by the platform they run on, for example, a geography application for Windows, an Android application for education, or a Linux game. An application that runs only on one platform and increases the desirability of that platform due to its popularity is known as a killer application.
Computer network
A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow the sharing of resources and information. When at least one process in one device is able to send or receive data to or from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. One well-known communications protocol is Ethernet, a hardware and link layer standard that is ubiquitous in local area networks. Another common protocol is the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, host-to-host data transfer, and application-specific data transmission formats.
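As a small, hedged illustration of network programming on top of the Internet Protocol Suite, the following Python sketch exchanges one message over a TCP connection on the loopback interface; the port number and payload are arbitrary choices made for the example.

# One request/reply exchange over TCP on the local machine.
import socket, threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))          # arbitrary port chosen for the example
server.listen(1)

def serve_one():
    conn, _ = server.accept()              # wait for the client connection
    with conn:
        data = conn.recv(1024)             # bytes sent by the client
        conn.sendall(b"got: " + data)      # reply over the same connection

t = threading.Thread(target=serve_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 50007))
    client.sendall(b"hello")               # application-level payload
    print(client.recv(1024))               # prints b'got: hello'

t.join()
server.close()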
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
Internet
The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users. This includes millions of private, public, academic, business, and government networks, ranging in scope from local to global. These networks are linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email.
Computer programming
Computer programming is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. Programming is used to invoke some desired behavior (customization) from the machine.
Writing high-quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. The highest-quality software is thus often developed by a team of domain experts, each a specialist in some area of development. However, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. It is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application.
Computer programmer
A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with Web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming.
Computer industry
The computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components, and providing information technology services, including system administration and maintenance.
The software industry includes businesses engaged in development, maintenance, and publication of software. The industry also includes software services, such as training, documentation, and consulting.
Sub-disciplines of computing
Computer engineering
Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, rather than just software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering includes not only the design of hardware within its own domain, but also the interactions between hardware and the context in which it operates.
Software engineering
Software engineering is the application of a systematic, disciplined, and quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches. That is, the application of engineering to software. It is the act of using insights to conceive, model and scale a solution to a problem. The first reference to the term is the 1968 NATO Software Engineering Conference, and was intended to provoke thought regarding the perceived software crisis at the time. Software development, a widely used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of Software Engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard in ISO/IEC TR 19759:2015.
Computer science
Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems.
Its subfields can be divided into practical techniques for its implementation and application in computer systems, and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming investigates the use of programming languages and complex systems. The field of human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans.
Cybersecurity
The field of cybersecurity pertains to the protection of computer systems and networks. This includes information and data privacy, preventing disruption of IT services and prevention of theft of and damage to hardware, software, and data.
Data science
Data science is a field that uses scientific and computing tools to extract information and insights from data, driven by the increasing volume and availability of data. Data mining, big data, statistics, machine learning and deep learning are all interwoven with data science.
Information systems
Information systems (IS) is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data. The ACM's Computing Careers resource provides a description of the field.
The study of IS bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society while IS emphasizes functionality over design.
Information technology
Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data, often in the context of a business or other enterprise. The term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce, and computer services.
Research and emerging technologies
DNA-based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. Potential infrastructure for future technologies includes DNA origami on photolithography and quantum antennae for transferring information between ion traps. By 2011, researchers had entangled 14 qubits. Fast digital circuits, including those based on Josephson junctions and rapid single flux quantum technology, are becoming more nearly realizable with the discovery of nanoscale superconductors.
Fiber-optic and photonic (optical) devices, which already have been used to transport data over long distances, are starting to be used by data centers, along with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects. IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted CMOS-integrated nanophotonics (CINP). One benefit of optical interconnects is that motherboards, which formerly required a certain kind of system on a chip (SoC), can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs.
Another field of research is spintronics. Spintronics can provide computing power and storage, without heat buildup. Some research is being done on hybrid chips, which combine photonics and spintronics. There is also research ongoing on combining plasmonics, photonics, and electronics.
Cloud computing
Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for interaction between the owner of these resources and the end user. It is typically offered as a service, making it an example of Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on the functionality offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling. It allows individual users or small businesses to benefit from economies of scale.
One area of interest in this field is its potential to support energy efficiency. Allowing thousands of instances of computation to occur on one single machine instead of thousands of individual machines could help save energy. It could also ease the transition to renewable energy sources, since it would suffice to power one server farm with renewable energy, rather than millions of homes and offices.
However, this centralized computing model poses several challenges, especially in security and privacy. Current legislation does not sufficiently protect users from companies mishandling their data on company servers. This suggests potential for further legislative regulations on cloud computing and tech companies.
Quantum computing
Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both states of one and zero, simultaneously. Thus, the value of a qubit is not simply 1 or 0, but is indeterminate until it is measured. This trait of qubits is known as superposition; together with quantum entanglement between qubits, it is the core idea of quantum computing that allows quantum computers to do large-scale computations. Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations.
See also
Artificial intelligence
Computational science
Computational thinking
Computer algebra
Confidential computing
Creative computing
Data-centric computing
Electronic data processing
Enthusiast computing
Index of history of computing articles
Instruction set architecture
Lehmer sieve
Liquid computing
List of computer term etymologies
Mobile computing
Outline of computers
Outline of computing
Scientific computing
Spatial computing
Ubiquitous computing
Unconventional computing
Urban computing
Virtual reality
References
External links
FOLDOC: the Free On-Line Dictionary Of Computing | Computing | ["Technology"] | 3,530 | ["nan"] |
5,218 | https://en.wikipedia.org/wiki/Central%20processing%20unit | A central processing unit (CPU), also called a central processor, main processor, or just processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs).
The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution (of instructions) by directing the coordinated operations of the ALU, registers, and other components. Modern CPUs devote a lot of semiconductor area to caches and instruction-level parallelism to increase performance and to CPU modes to support operating systems and virtualization.
Most modern CPUs are implemented on integrated circuit (IC) microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are called multi-core processors. The individual physical CPUs, called processor cores, can also be multithreaded to support CPU-level multithreading.
An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC).
History
Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". The "central processing unit" term has been in use since as early as 1955. Since the term "CPU" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer had been already present in the design of John Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed a paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC was not the first stored-program computer; the Manchester Baby, which was a small-scale experimental stored-program computer, ran its first program on 21 June 1948 and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949.
Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers, and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys.
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard-architecture processors.
Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers—such as the slower but earlier Harvard Mark I—failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
Transistor CPUs
The design complexity of CPUs increased as various technologies facilitated the building of smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements, like vacuum tubes and relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
In 1964, IBM introduced its IBM System/360 computer architecture that was used in a series of computers capable of running the same programs with different speeds and performances. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram (often called "microcode"), which still sees widespread use in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets—the PDP-8.
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to the increased reliability and dramatically increased speed of the switching elements, which were almost exclusively transistors by this time, CPU clock rates in the tens of megahertz were easily obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data (SIMD) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc and Fujitsu Ltd.
Small-scale integration CPUs
During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip". At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based on these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs.
IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs, but was eventually implemented with LSI components once these became practical.
Large-scale integration CPUs
Lee Boysel published influential articles, including a 1967 "manifesto", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI). The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor (MOS) semiconductor manufacturing process (either PMOS logic, NMOS logic, or CMOS logic). However, some companies continued to build processors out of bipolar transistor–transistor logic (TTL) chips because bipolar junction transistors were faster than MOS chips up until the 1970s (a few companies such as Datapoint continued to build processors out of TTL chips until the early 1980s). In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power. Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s.
As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.
Microprocessors
Since microprocessors were first introduced they have almost completely overtaken all other central processing unit implementation methods. The first commercially available microprocessor, made in 1971, was the Intel 4004, and the first widely used microprocessor, made in 1974, was the Intel 8080. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs (denoted cores) can be combined in a single processing chip.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore's law, which had proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity until 2016.
While the complexity, size, construction and general form of CPUs have changed enormously since 1950, the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the use of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle.
After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called "jumps" and facilitate program behavior like loops, conditional program execution (through the use of a conditional jump), and existence of functions. In some processors, some other instructions change the state of bits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a "compare" instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow.
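A toy illustration of the fetch, decode and execute cycle, the program counter, and a flag-driven conditional jump, written as a hedged Python sketch; the tiny instruction set here is invented for the example and does not correspond to any real machine language.

# A toy CPU: each instruction is (opcode, operand); one accumulator and a zero flag.
program = [
    ("LOAD", 3),     # 0: acc = 3
    ("DEC",  None),  # 1: acc = acc - 1, set the zero flag if the result is 0
    ("JNZ",  1),     # 2: if the zero flag is clear, jump back to instruction 1
    ("HALT", None),  # 3: stop
]

pc, acc, zero_flag = 0, 0, False
while True:
    opcode, operand = program[pc]   # fetch the instruction at the program counter
    pc += 1                         # point at the next-in-sequence instruction
    if opcode == "LOAD":            # decode and execute
        acc = operand
    elif opcode == "DEC":
        acc -= 1
        zero_flag = (acc == 0)
    elif opcode == "JNZ":           # conditional jump: modify the program counter
        if not zero_flag:
            pc = operand
    elif opcode == "HALT":
        break

print(acc)                          # 0 after the loop counts down from 3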
Fetch
Fetch involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The instruction's location (address) in program memory is determined by the program counter (PC; called the "instruction pointer" in Intel x86 microprocessors), which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
Decode
The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU.
The way in which the instruction is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of bits (that is, a "field") within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value (called an immediate value), or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode.
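As a hedged sketch of how decoding splits an instruction word into fields, the following Python fragment extracts a 4-bit opcode, a 6-bit register field, and a 6-bit immediate from a 16-bit word; the field widths and layout are invented for the example rather than taken from any real instruction set architecture.

# Hypothetical 16-bit instruction layout: bits 15-12 opcode, 11-6 register, 5-0 immediate.
instruction = 0b0011_000101_001010      # opcode = 3, register = 5, immediate = 10

opcode    = (instruction >> 12) & 0xF   # top 4 bits select the operation
register  = (instruction >> 6) & 0x3F   # next 6 bits name a processor register
immediate = instruction & 0x3F          # low 6 bits hold an immediate value

print(opcode, register, immediate)      # 3 5 10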
In some CPU designs, the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions.
Execute
After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher capacity main memory.
For example, if an instruction that performs addition is to be executed, registers containing operands (numbers to be summed) are activated, as are the parts of the arithmetic logic unit (ALU) that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled (and disabled) to move the output (the sum of the operation) to storage (e.g., a register or memory). If the resulting sum is too large (i.e., it is larger than the ALU's output word size), an arithmetic overflow flag will be set, influencing the next operation.
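A minimal Python sketch of that addition step for an assumed 8-bit ALU; the word size and the flag names are illustrative choices, not details from the text.

# 8-bit addition producing a result word plus carry and signed-overflow flags.
def alu_add8(a, b):
    total = a + b
    result = total & 0xFF                # keep only the 8-bit output word
    carry = total > 0xFF                 # the sum did not fit in 8 unsigned bits
    # signed overflow: both operands share a sign bit that differs from the result's
    overflow = ((a ^ result) & (b ^ result) & 0x80) != 0
    return result, carry, overflow

print(alu_add8(100, 100))   # (200, False, True): 200 exceeds the signed 8-bit range
print(alu_add8(200, 100))   # (44, True, False): carry out of the unsigned 8-bit range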
Structure and implementation
Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an instruction set. Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of bits, known as the machine language opcode. While processing an instruction, the CPU decodes the opcode (via a binary decoder) into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation (for example, the numbers to be summed in the case of an addition operation). Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes.
The actual mathematical operation for each instruction is performed by a combinational logic circuit within the CPU's processor known as the arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Besides the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's floating-point unit (FPU).
Control unit
The control unit (CU) is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit and input and output devices how to respond to the instructions that have been sent to the processor.
It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
Arithmetic logic unit
The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated on (called operands), status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from internal CPU registers, external memory, or constants generated by the ALU itself.
When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose.
Modern CPUs typically contain more than one ALU to improve performance.
Address generation unit
The address generation unit (AGU), sometimes also called the address computation unit (ACU), is an execution unit inside the CPU that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements.
While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle.
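As a rough illustration of the arithmetic such a unit performs, the following sketch computes the in-memory position of array elements from a base address, an index and an element size; the base addresses and element sizes are hypothetical, and real AGUs implement this in dedicated hardware rather than software.

    # Illustrative address-generation arithmetic for array elements (hypothetical
    # base addresses and element sizes; real AGUs do this in dedicated circuitry).
    def element_address(base, index, element_size):
        return base + index * element_size

    def element_address_2d(base, row, col, num_cols, element_size):
        # row-major layout: offset = (row * num_cols + col) * element_size
        return base + (row * num_cols + col) * element_size

    print(hex(element_address(0x1000, 5, 4)))            # 0x1014
    print(hex(element_address_2d(0x1000, 2, 3, 10, 4)))  # 0x105c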
Capabilities of an AGU depend on a particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, which brings further performance improvements due to the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel.
Memory management unit (MMU)
Many microprocessors (in smartphones and in desktop, laptop, and server computers) have a memory management unit, which translates logical addresses into physical RAM addresses and provides memory protection and paging abilities, both useful for virtual memory. Simpler processors, especially microcontrollers, usually don't include an MMU.
Cache
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, L3, L4, etc.).
All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L2 cache and is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally on dynamic random-access memory (DRAM), rather than on static random-access memory (SRAM), on a separate die or chip. That was also the case historically with L1, while bigger chips have allowed integration of it and generally all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and is optimized differently.
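One common way to reason about such a hierarchy is the average memory access time, where each miss adds the latency of the next level. The sketch below assumes illustrative latencies and hit rates; the figures are not taken from any particular processor.

    # Illustrative average memory access time (AMAT) for a three-level hierarchy.
    # Latencies (in cycles) and hit rates below are assumptions for illustration only.
    l1_hit, l2_hit, l3_hit, dram = 4, 12, 40, 200      # access latencies in cycles
    l1_rate, l2_rate, l3_rate = 0.95, 0.80, 0.60       # hit rates at each level

    # each miss adds the latency of the next level down the hierarchy
    amat = l1_hit + (1 - l1_rate) * (l2_hit + (1 - l2_rate) * (l3_hit + (1 - l3_rate) * dram))
    print(round(amat, 1))   # 5.8 cycles, far closer to the L1 latency than to DRAM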
Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB) that is part of the memory management unit (MMU) that most CPUs have.
Caches are generally sized in powers of two: 2, 4, 8, 16, etc. KiB or MiB (for larger non-L1) sizes, although the IBM z13 has a 96 KiB L1 instruction cache.
Clock rate
Most CPUs are synchronous circuits, which means they employ a clock signal to pace their sequential operations. The clock signal is produced by an external oscillator circuit that generates a consistent number of pulses each second in the form of a periodic square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second.
To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate (move) through the CPU. By setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below).
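A minimal numerical sketch of this relationship, assuming a hypothetical worst-case propagation delay and timing margin, is:

    # Illustrative relationship between worst-case propagation delay and the
    # maximum usable clock frequency (numbers are assumed for illustration).
    worst_case_delay_ns = 0.8     # slowest signal path through the CPU, in nanoseconds
    safety_margin_ns = 0.2        # setup/hold margin

    clock_period_ns = worst_case_delay_ns + safety_margin_ns
    max_frequency_ghz = 1 / clock_period_ns   # a 1 ns period corresponds to 1 GHz
    print(max_frequency_ghz)                  # 1.0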
However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions.
One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable recent CPU design that uses extensive clock gating is the IBM PowerPC-based Xenon used in the Xbox 360; this reduces the power requirements of the Xbox 360.
Clockless CPUs
Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS.
Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.
Voltage regulator module
Many modern CPUs have a die-integrated power-managing module which regulates the on-demand voltage supply to the CPU circuitry, allowing it to balance performance and power consumption.
Integer range
Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal (base 10) numeral system values, and others have employed more unusual representations such as ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.
Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits (significant digits of a binary encoded integer) that the CPU can process in one operation, which is commonly called word size, bit width, data path width, integer precision, or integer size. A CPU's integer size determines the range of integer values on which it can directly operate. For example, an 8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 (2^8) discrete integer values.
Integer range can also affect the number of memory locations the CPU can directly address (an address is an integer value representing a specific memory location). For example, if a binary CPU uses 32 bits to represent a memory address then it can directly address 2^32 memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms (such as bank switching) that allow additional memory to be addressed.
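A short sketch of the value ranges and directly addressable memory implied by a few common word sizes (assuming byte-addressable memory and two's-complement signed integers) is:

    # Value ranges and directly addressable memory for a few common word sizes.
    for bits in (8, 16, 32, 64):
        unsigned_max = 2**bits - 1                                   # e.g. 255 for 8 bits
        signed_min, signed_max = -2**(bits - 1), 2**(bits - 1) - 1   # two's complement
        addressable_bytes = 2**bits                                  # if addresses are this wide
        print(bits, unsigned_max, signed_min, signed_max, addressable_bytes)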
CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power (and therefore generate more heat). As a result, smaller 4- or 8-bit microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes (such as 16, 32, 64, even 128-bit) are available. When higher performance is required, however, the benefits of a larger word size (larger data ranges and address spaces) may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost. For example, even though the IBM System/360 instruction set architecture was a 32-bit instruction set, the System/360 Model 30 and Model 40 had 8-bit data paths in the arithmetic logical unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands. Similarly, even though the Motorola 68000 series instruction set was a 32-bit instruction set, the Motorola 68000 and Motorola 68010 had 16-bit data paths in the arithmetic logical unit, so that a 32-bit add required two cycles.
To gain some of the advantages afforded by both lower and higher bit lengths, many instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM System/360 instruction set was primarily 32 bit, but supported 64-bit floating-point values to facilitate greater accuracy and range in floating-point numbers. The System/360 Model 65 had an 8-bit adder for decimal and fixed-point binary arithmetic and a 60-bit adder for floating-point arithmetic. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose use where a reasonable balance of integer and floating-point capability is required.
Parallelism
The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time, that is, less than one instruction per clock cycle (IPC < 1).
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is nearly always subscalar (less than one instruction per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques:
instruction-level parallelism (ILP), which seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the use of on-die execution resources);
task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously.
Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application.
Instruction-level parallelism
One of the simplest methods for increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is a technique known as instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be executed at a time by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired.
Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a portion of the pipeline if necessary. A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
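The effect of pipelining on throughput can be sketched with an idealized cycle count that ignores structural and control hazards; the stage count, instruction count and stall count below are assumptions for illustration only.

    # Idealized cycle counts for executing n instructions on a k-stage pipeline,
    # ignoring structural and control hazards (a simplification for illustration).
    def cycles_unpipelined(n, k):
        return n * k                     # each instruction occupies the CPU for k cycles

    def cycles_pipelined(n, k, stalls=0):
        return k + (n - 1) + stalls      # fill the pipeline once, then about 1 instruction/cycle

    n, k = 100, 5
    print(cycles_unpipelined(n, k))           # 500
    print(cycles_pipelined(n, k))             # 104, nearly one instruction per clock cycle
    print(cycles_pipelined(n, k, stalls=20))  # 124, data-dependency stalls reduce throughput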
Improvements in instruction pipelining led to further decreases in the idle time of CPU components. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units, such as load–store units, arithmetic–logic units, floating-point units and address generation units. In a superscalar pipeline, instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so, they are dispatched to execution units, resulting in their simultaneous execution. In general, the number of instructions that a superscalar CPU will complete in a cycle is dependent on the number of instructions it is able to dispatch simultaneously to execution units.
Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and requires significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, register renaming, out-of-order execution and transactional memory crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. Also, in the case of single instruction stream, multiple data stream, when a large amount of data of the same type has to be processed, modern processors can disable parts of the pipeline so that when a single instruction is executed many times, the CPU skips the fetch and decode phases and thus greatly increases performance on certain occasions, especially in highly monotonous program engines such as video creation software and photo processing.
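One simple form of the branch prediction mentioned above is a two-bit saturating counter kept per branch; the sketch below is a generic textbook scheme, not the predictor of any particular CPU.

    # Sketch of a two-bit saturating-counter branch predictor.  States 0-1 predict
    # "not taken", states 2-3 predict "taken"; each outcome nudges the counter by one.
    def predict(history):
        state, correct = 2, 0            # start in the weakly "taken" state
        for taken in history:
            if (state >= 2) == taken:    # did the prediction match the outcome?
                correct += 1
            state = min(3, state + 1) if taken else max(0, state - 1)
        return correct / len(history)

    # A loop branch that is taken many times and falls through once is predicted well,
    # because a single "not taken" outcome does not flip the saturated state.
    print(predict([True] * 9 + [False]))   # 0.9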
When a fraction of the CPU is superscalar, the part that is not suffers a performance penalty due to scheduling stalls. The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock cycle each, but its FPU could not. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture, P6, added superscalar abilities to its floating-point features.
Simple pipelining and superscalar design increase a CPU's ILP by allowing it to execute instructions at rates surpassing one instruction per clock cycle. Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or instruction set architecture (ISA). The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the CPU's work in boosting ILP and thereby reducing design complexity.
Task-level parallelism
Another strategy of achieving performance is to execute multiple threads or processes in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as multiple instruction stream, multiple data stream (MIMD).
One technology used for this purpose is multiprocessing (MP). The initial type of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single chip, the technology is known as chip-level multiprocessing (CMP) and the single chip as a multi-core processor.
It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). The approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP and thus supervisor software like operating systems have to undergo larger changes to support MT. One type of MT that was implemented is known as temporal multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU would then quickly context switch to another thread which is ready to run, with the switch often done in one CPU clock cycle, as in the UltraSPARC T1. Another type of MT is simultaneous multithreading, where instructions from multiple threads are executed in parallel within one CPU clock cycle.
For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques.
CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or process.
This reversal of emphasis is evidenced by the proliferation of dual and more core processor designs and notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PlayStation 3's 7-core Cell microprocessor.
Data parallelism
A less common but increasingly important paradigm of processors (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as single instruction stream, multiple data stream (SIMD) and single instruction stream, single data stream (SISD), respectively. The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks include multimedia applications (images, video and sound), as well as many types of scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data.
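The contrast between scalar and vector (SIMD-style) processing can be sketched as follows; NumPy is used here only to express "one operation applied to a whole array", and whether SIMD instructions are actually emitted depends on the platform.

    # Scalar-style loop versus a vectorized (SIMD-style) operation over an array.
    import numpy as np

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.ones(1_000_000, dtype=np.float32)

    scalar_sum = [a[i] + b[i] for i in range(10)]   # one element handled at a time
    vector_sum = a + b                              # whole arrays in a single expression
    print(scalar_sum[:3], vector_sum[:3])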
Most early vector processors, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose processors has become significant. Shortly after inclusion of floating-point units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose processors. Some of these early SIMD specifications – like HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX – were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating-point numbers. Progressively, developers refined and remade these early designs into some of the common modern SIMD specifications, which are usually associated with one instruction set architecture (ISA). Some notable modern examples include Intel's Streaming SIMD Extensions (SSE) and the PowerPC-related AltiVec (also known as VMX).
Hardware performance counter
Many modern architectures (including embedded ones) often include hardware performance counters (HPC), which enable low-level (instruction-level) collection, benchmarking, debugging or analysis of running software metrics. HPC may also be used to discover and analyze unusual or suspicious activity of the software, such as return-oriented programming (ROP) or sigreturn-oriented programming (SROP) exploits etc. This is usually done by software-security teams to assess and find malicious binary programs.
Many major vendors (such as IBM, Intel, AMD, and Arm) provide software interfaces (usually written in C/C++) that can be used to collect data from the CPU's registers in order to get metrics. Operating system vendors also provide software such as perf (Linux) to record, benchmark, or trace CPU events for running kernels and applications.
Hardware counters provide a low-overhead method for collecting comprehensive performance metrics related to a CPU's core elements (functional units, caches, main memory, etc.) – a significant advantage over software profilers. Additionally, they generally eliminate the need to modify the underlying source code of a program. Because hardware designs differ between architectures, the specific types and interpretations of hardware counters will also change.
Privileged modes
Most modern CPUs have privileged modes to support operating systems and virtualization.
Cloud computing can use virtualization to provide virtual central processing units (vCPUs) for separate users.
A host is the virtual equivalent of a physical machine, on which a virtual system is operating. When there are several physical machines operating in tandem and managed as a whole, the grouped computing and memory resources form a cluster. In some systems, it is possible to dynamically add and remove hosts from a cluster. Resources available at a host and cluster level can be partitioned into resource pools with fine granularity.
Performance
The performance or speed of a processor depends on, among many other factors, the clock rate (generally given in multiples of hertz) and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform.
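A minimal sketch of that relationship, with an assumed clock rate and IPC, is:

    # Instructions per second follow from clock rate and instructions per clock (IPC).
    # The numbers below are assumed for illustration only.
    clock_hz = 3.0e9        # 3 GHz
    ipc = 2.5               # average instructions completed per clock cycle
    ips = clock_hz * ipc
    print(f"{ips:.2e} instructions per second")   # 7.50e+09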
Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, various standardized tests, often called "benchmarks" for this purpose, such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications.
Processing performance of computers is increased by using multi-core processors, which essentially is plugging two or more individual processors (called cores in this sense) into one integrated circuit. Ideally, a dual core processor would be nearly twice as powerful as a single core processor. In practice, the performance gain is far smaller, only about 50%, due to imperfect software algorithms and implementation. Increasing the number of cores in a processor (i.e. dual-core, quad-core, etc.) increases the workload that can be handled. This means that the processor can now handle numerous asynchronous events, interrupts, etc. which can take a toll on the CPU when overwhelmed. These cores can be thought of as different floors in a processing plant, with each floor handling a different task. Sometimes, these cores will handle the same tasks as cores adjacent to them if a single core is not enough to handle the information. Multi-core CPUs enhance a computer's ability to run several tasks simultaneously by providing additional processing power. However, the increase in speed is not directly proportional to the number of cores added. This is because the cores need to interact through specific channels, and this inter-core communication consumes a portion of the available processing speed.
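The sub-linear scaling described above can be approximated with an Amdahl's-law-style estimate; the parallel fraction below is an assumption chosen to roughly match the dual-core figure mentioned above.

    # Amdahl's-law-style estimate of multi-core speedup, assuming only a fraction of
    # the work can be parallelized (fraction and core counts are illustrative).
    def speedup(parallel_fraction, cores):
        return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

    for cores in (2, 4, 8):
        print(cores, round(speedup(0.67, cores), 2))   # 1.5, 2.01, 2.42 (well below linear scaling)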
Due to specific capabilities of modern CPUs, such as simultaneous multithreading and uncore, which involve sharing of actual CPU resources while aiming at increased utilization, monitoring performance levels and hardware use gradually became a more complex task. As a response, some CPUs implement additional hardware logic that monitors actual use of various parts of a CPU and provides various counters accessible to software; an example is Intel's Performance Counter Monitor technology.
Overclocking
Overclocking is the process of increasing the clock speed of a CPU (and other components) to increase the performance of the CPU. Overclocking might increase CPU temperature and cause it to overheat, so most users do not overclock and leave the clock speed unchanged. Some versions of components (such as Intel's U version of its CPUs or Nvidia's OG GPUs) do not allow overclocking.
See also
Addressing mode
AMD Accelerated Processing Unit
Complex instruction set computer
Computer bus
Computer engineering
CPU core voltage
CPU socket
Data processing unit
Digital signal processor
Graphics processing unit
Comparison of instruction set architectures
Protection ring
Reduced instruction set computer
Stream processing
True Performance Index
Tensor Processing Unit
Wait state
Notes
References
External links
25 Microchips that shook the world – an article by the Institute of Electrical and Electronics Engineers.
Digital electronics
Electronic design
Electronic design automation | Central processing unit | ["Engineering"] | 10,449 | ["Electronic design", "Electronic engineering", "Design", "Digital electronics"] |
5,225 | https://en.wikipedia.org/wiki/Code | In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.
The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish.
One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
Theory
In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, this is a brief example. The mapping
C = {a ↦ 0, b ↦ 01, c ↦ 011}
is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab.
Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C: S → T* is a total function mapping each symbol from S to a sequence of symbols over T. The extension of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols.
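A small sketch of the example code above and its extension by concatenation, together with a decoder written specifically for this code (this particular code is not a prefix code, but it is still uniquely decodable), might look like:

    # The example code from above: a -> 0, b -> 01, c -> 011, with its extension
    # obtained by concatenating code words, and a decoder for this particular code.
    code = {"a": "0", "b": "01", "c": "011"}

    def encode(message):
        return "".join(code[symbol] for symbol in message)

    def decode(encoded):
        # In this code every code word starts with "0", so each new "0" marks
        # the beginning of the next code word when reading left to right.
        inverse = {v: k for k, v in code.items()}
        out, word = [], ""
        for bit in encoded:
            if bit == "0" and word:          # a new "0" starts the next code word
                out.append(inverse[word])
                word = ""
            word += bit
        out.append(inverse[word])
        return "".join(out)

    print(encode("acab"))     # 0011001
    print(decode("0011001"))  # acab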
Variable-length codes
In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words gives us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding.
A prefix code is a code with the "prefix property": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes. Prefix codes are widely referred to as "Huffman codes" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard.
Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code. Virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy Kraft's inequality.
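A sketch that checks the prefix property and evaluates the Kraft sum for illustrative binary codeword sets:

    # Checking the prefix property and Kraft's inequality for a binary code.
    # (The codeword sets below are illustrative.)
    def is_prefix_code(codewords):
        return not any(a != b and b.startswith(a) for a in codewords for b in codewords)

    def kraft_sum(codewords, radix=2):
        return sum(radix ** -len(w) for w in codewords)

    prefix_example = ["0", "10", "110", "111"]
    print(is_prefix_code(prefix_example), kraft_sum(prefix_example))   # True 1.0

    non_prefix = ["0", "01", "011"]      # the earlier example: "0" is a prefix of "01"
    print(is_prefix_code(non_prefix), kraft_sum(non_prefix))           # False 0.875 (still <= 1)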
Error-correcting codes
Codes may also be used to represent data in a way more resistant to errors in transmission or storage. This so-called error-correcting code works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hochquenghem, Turbo, Golay, algebraic geometry codes, low-density parity-check codes, and space–time codes.
Error detecting codes can be optimised to detect burst errors, or random errors.
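The simplest illustration of error correction through redundancy is a triple-repetition code with majority-vote decoding; it is far weaker than the codes listed above, but it shows the principle.

    # A triple-repetition code: repeat every bit three times and decode by majority
    # vote, which corrects any single bit error within a triple.
    def encode(bits):
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(coded):
        return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

    message = [1, 0, 1, 1]
    sent = encode(message)
    sent[4] ^= 1                     # flip one transmitted bit to simulate a channel error
    print(decode(sent) == message)   # True: the error is corrected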
Examples
Codes in communication used for brevity
A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively.
Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or AYYLU ("Not clearly coded, repeat more clearly."). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission.
Character encodings
Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters which it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets such as Chinese, Japanese and Korean must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes ("word length"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings. These use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet.
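The variable-width nature of UTF-8 can be observed directly; the sample characters below are arbitrary.

    # UTF-8 is a variable-width encoding: ASCII characters take one byte, while
    # characters from larger scripts take two to four bytes each.
    for ch in ["A", "é", "中", "𝄞"]:
        encoded = ch.encode("utf-8")
        print(ch, len(encoded), encoded.hex())
    # A takes 1 byte, é takes 2, 中 takes 3, and 𝄞 (a musical symbol) takes 4.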
Genetic code
Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence.
Gödel code
In mathematics, a Gödel code is the basis for the proof of Gödel's incompleteness theorem. Here, the idea is to map mathematical notation to a natural number (using a Gödel numbering).
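A toy Gödel numbering can be sketched by assigning numbers to symbols and encoding a sequence of symbols as a product of prime powers; the symbol table below is invented for illustration.

    # A toy Gödel numbering: map each symbol to a number, then encode a sequence of
    # symbols as a product of prime powers, which can be inverted by factoring.
    symbols = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}
    primes = [2, 3, 5, 7, 11, 13]

    def godel_number(formula):
        n = 1
        for position, symbol in enumerate(formula):
            n *= primes[position] ** symbols[symbol]
        return n

    print(godel_number("0=0"))   # 2**1 * 3**3 * 5**1 = 270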
Other
There are codes using colors, like traffic lights, the color code employed to mark the nominal value of the electrical resistors or that of the trashcans devoted to specific types of garbage (paper, glass, organic, etc.).
In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a (usual internet) retailer.
In military environments, specific sounds played on the cornet are used for different purposes: to mark some moments of the day, to command the infantry on the battlefield, etc.
Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes.
Musical scores are the most common way to encode music.
Specific games have their own code systems to record the matches, e.g. chess notation.
Cryptography
In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead.
Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.) to trivial (romance, games) can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is the pre-agreement on the meaning by both the sender and the receiver.
Other examples
Other examples of encoding include:
Encoding (in cognition) - a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into a subjectively meaningful experience.
A content format - a specific encoding format for converting a specific type of data to information.
Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See also Text Encoding Initiative.)
Semantics encoding of formal language A in formal language B is a method of representing all terms (e.g. programs or descriptions) of language A using language B.
Data compression transforms a signal into a code optimized for transmission or storage, generally done with a codec.
Neural encoding - the way in which information is represented in neurons.
Memory encoding - the process of converting sensations into memories.
Television encoding: NTSC, PAL and SECAM
Other examples of decoding include:
Decoding (computer science)
Decoding methods, methods in communication theory for decoding codewords sent over a noisy channel
Digital signal processing, the study of signals in a digital representation and the processing methods of these signals
Digital-to-analog converter, the use of analog circuit for decoding operations
Word decoding, the use of phonics to decipher print patterns and translate them into the sounds of language
Codes and acronyms
Acronyms and abbreviations can be considered codes, and in a sense, all languages and writing systems are codes for human thought.
International Air Transport Association airport codes are three-letter codes used to designate airports and used for bag tags. Station codes are similarly used on railways but are usually national, so the same code can be used for different stations if they are in different countries.
Occasionally, a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean "end of story", and has been used in other contexts to signify "the end".
See also
ADDML
Asemic writing
Cipher
Code (semiotics)
Cultural code
Equipment codes
Quantum error correction
Semiotics
Universal language
References
Further reading
Signal processing | Code | ["Technology", "Engineering"] | 2,391 | ["Telecommunications engineering", "Computer engineering", "Signal processing"] |
5,233 | https://en.wikipedia.org/wiki/Carl%20Linnaeus | Carl Linnaeus (23 May 1707 – 10 January 1778), also known after ennoblement in 1761 as Carl von Linné, was a Swedish biologist and physician who formalised binomial nomenclature, the modern system of naming organisms. He is known as the "father of modern taxonomy". Many of his writings were in Latin; his name is rendered in Latin as Carolus Linnæus and, after his 1761 ennoblement, as Carolus a Linné.
Linnaeus was the son of a curate and was born in Råshult, in the countryside of Småland, southern Sweden. He received most of his higher education at Uppsala University and began giving lectures in botany there in 1730. He lived abroad between 1735 and 1738, where he studied and also published the first edition of his Systema Naturae in the Netherlands. He then returned to Sweden where he became professor of medicine and botany at Uppsala. In the 1740s, he was sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and 1760s, he continued to collect and classify animals, plants, and minerals, while publishing several volumes. By the time of his death in 1778, he was one of the most acclaimed scientists in Europe.
Philosopher Jean-Jacques Rousseau sent him the message: "Tell him I know no greater man on Earth." Johann Wolfgang von Goethe wrote: "With the exception of William Shakespeare and Baruch Spinoza, I know no one among the no longer living who has influenced me more strongly." Swedish author August Strindberg wrote: "Linnaeus was in reality a poet who happened to become a naturalist." Linnaeus has been called Princeps botanicorum (Prince of Botanists) and "The Pliny of the North". He is also considered one of the founders of modern ecology.
In botany, the abbreviation L. is used to indicate Linnaeus as the authority for a species' name. In zoology, the abbreviation Linnaeus is generally used; the abbreviations L., Linnæus and Linné are also used. In older publications, the abbreviation "Linn." is found. Linnaeus's remains constitute the type specimen for the species Homo sapiens following the International Code of Zoological Nomenclature, since the sole specimen that he is known to have examined was himself.
Early life
Childhood
Linnaeus was born in the village of Råshult in Småland, Sweden, on 23 May 1707. He was the first child of Nicolaus (Nils) Ingemarsson (who later adopted the family name Linnaeus) and Christina Brodersonia. His siblings were Anna Maria Linnæa, Sofia Juliana Linnæa, Samuel Linnæus (who would eventually succeed their father as rector of Stenbrohult and write a manual on beekeeping), and Emerentia Linnæa. His father taught him Latin as a small child.
One of a long line of peasants and priests, Nils was an amateur botanist, a Lutheran minister, and the curate of the small village of Stenbrohult in Småland. Christina was the daughter of the rector of Stenbrohult, Samuel Brodersonius.
A year after Linnaeus's birth, his grandfather Samuel Brodersonius died, and his father Nils became the rector of Stenbrohult. The family moved into the rectory from the curate's house.
Even in his early years, Linnaeus seemed to have a liking for plants, flowers in particular. Whenever he was upset, he was given a flower, which immediately calmed him. Nils spent much time in his garden and often showed flowers to Linnaeus and told him their names. Soon Linnaeus was given his own patch of earth where he could grow plants.
Carl's father was the first in his ancestry to adopt a permanent surname. Before that, ancestors had used the patronymic naming system of Scandinavian countries: his father was named Ingemarsson after his father Ingemar Bengtsson. When Nils was admitted to the Lund University, he had to take on a family name. He adopted the Latinate name Linnæus after a giant linden tree (or lime tree), lind in Swedish, that grew on the family homestead. This name was spelled with the æ ligature. When Carl was born, he was named Carl Linnæus, with his father's family name. The son also always spelled it with the æ ligature, both in handwritten documents and in publications. Carl's patronymic would have been Nilsson, as in Carl Nilsson Linnæus.
Early education
Linnaeus's father began teaching him basic Latin, religion, and geography at an early age. When Linnaeus was seven, Nils decided to hire a tutor for him. The parents picked Johan Telander, a son of a local yeoman. Linnaeus did not like him, writing in his autobiography that Telander "was better calculated to extinguish a child's talents than develop them".
Two years after his tutoring had begun, he was sent to the Lower Grammar School at Växjö in 1717. Linnaeus rarely studied, often going to the countryside to look for plants. At some point, his father went to visit him and, after hearing critical assessments by his preceptors, he decided to put the youth as an apprentice to some honest cobbler. He reached the last year of the Lower School when he was fifteen, which was taught by the headmaster, Daniel Lannerus, who was interested in botany. Lannerus noticed Linnaeus's interest in botany and gave him the run of his garden.
He also introduced him to Johan Rothman, the state doctor of Småland and a teacher at Katedralskolan (a gymnasium) in Växjö. Also a botanist, Rothman broadened Linnaeus's interest in botany and helped him develop an interest in medicine. By the age of 17, Linnaeus had become well acquainted with the existing botanical literature. He remarks in his journal that he "read day and night, knowing like the back of my hand, Arvidh Månsson's Rydaholm Book of Herbs, Tillandz's Flora Åboensis, Palmberg's Serta Florea Suecana, Bromelii's Chloros Gothica and Rudbeckii's Hortus Upsaliensis".
Linnaeus entered the Växjö Katedralskola in 1724, where he studied mainly Greek, Hebrew, theology and mathematics, a curriculum designed for boys preparing for the priesthood. In the last year at the gymnasium, Linnaeus's father visited to ask the professors how his son's studies were progressing; to his dismay, most said that the boy would never become a scholar. Rothman believed otherwise, suggesting Linnaeus could have a future in medicine. The doctor offered to have Linnaeus live with his family in Växjö and to teach him physiology and botany. Nils accepted this offer.
University studies
Lund
Rothman showed Linnaeus that botany was a serious subject. He taught Linnaeus to classify plants according to Tournefort's system. Linnaeus was also taught about the sexual reproduction of plants, according to Sébastien Vaillant. In 1727, Linnaeus, age 21, enrolled in Lund University in Skåne. He was registered as Carolus Linnæus, the Latin form of his full name, which he also used later for his Latin publications.
Professor Kilian Stobæus, natural scientist, physician and historian, offered Linnaeus tutoring and lodging, as well as the use of his library, which included many books about botany. He also gave the student free admission to his lectures. In his spare time, Linnaeus explored the flora of Skåne, together with students sharing the same interests.
Uppsala
In August 1728, Linnaeus decided to attend Uppsala University on the advice of Rothman, who believed it would be a better choice if Linnaeus wanted to study both medicine and botany. Rothman based this recommendation on the two professors who taught at the medical faculty at Uppsala: Olof Rudbeck the Younger and Lars Roberg. Although Rudbeck and Roberg had undoubtedly been good professors, by then they were older and not so interested in teaching. Rudbeck no longer gave public lectures, and had others stand in for him. The botany, zoology, pharmacology and anatomy lectures were not in their best state. In Uppsala, Linnaeus met a new benefactor, Olof Celsius, who was a professor of theology and an amateur botanist. He received Linnaeus into his home and allowed him use of his library, which was one of the richest botanical libraries in Sweden.
In 1729, Linnaeus wrote a thesis, Praeludia Sponsaliorum Plantarum, on plant sexual reproduction. This attracted the attention of Rudbeck; in May 1730, he selected Linnaeus to give lectures at the University although the young man was only a second-year student. His lectures were popular, and Linnaeus often addressed an audience of 300 people. In June, Linnaeus moved from Celsius's house to Rudbeck's to become the tutor of the three youngest of his 24 children. His friendship with Celsius did not wane and they continued their botanical expeditions. Over that winter, Linnaeus began to doubt Tournefort's system of classification and decided to create one of his own. His plan was to divide the plants by the number of stamens and pistils. He began writing several books, which would later result in, for example, Genera Plantarum and Critica Botanica. He also produced a book on the plants grown in the Uppsala Botanical Garden, Adonis Uplandicus.
Rudbeck's former assistant, Nils Rosén, returned to the University in March 1731 with a degree in medicine. Rosén started giving anatomy lectures and tried to take over Linnaeus's botany lectures, but Rudbeck prevented that. Until December, Rosén tutored Linnaeus privately in medicine. In December, Linnaeus had a "disagreement" with Rudbeck's wife and had to move out of his mentor's house; his relationship with Rudbeck did not appear to suffer. That Christmas, Linnaeus returned home to Stenbrohult to visit his parents for the first time in about three years. His mother had disapproved of his failing to become a priest, but she was pleased to learn he was teaching at the University.
Expedition to Lapland
During a visit with his parents, Linnaeus told them about his plan to travel to Lapland; Rudbeck had made the journey in 1695, but the detailed results of his exploration were lost in a fire seven years afterwards. Linnaeus's hope was to find new plants, animals and possibly valuable minerals. He was also curious about the customs of the native Sami people, reindeer-herding nomads who wandered Scandinavia's vast tundras. In April 1732, Linnaeus was awarded a grant from the Royal Society of Sciences in Uppsala for his journey.
Linnaeus began his expedition from Uppsala on 12 May 1732, just before he turned 25. He travelled on foot and horse, bringing with him his journal, botanical and ornithological manuscripts and sheets of paper for pressing plants. Near Gävle he found great quantities of Campanula serpyllifolia, later known as Linnaea borealis, the twinflower that would become his favourite. He sometimes dismounted on the way to examine a flower or rock and was particularly interested in mosses and lichens, the latter a main part of the diet of the reindeer, a common and economically important animal in Lapland.
Linnaeus travelled clockwise around the coast of the Gulf of Bothnia, making major inland incursions from Umeå, Luleå and Tornio. He returned from his six-month-long expedition in October, having gathered and observed many plants, birds and rocks. Although Lapland was a region with limited biodiversity, Linnaeus described about 100 previously unidentified plants. These became the basis of his book Flora Lapponica. However, on the expedition to Lapland, Linnaeus used Latin names to describe organisms because he had not yet developed the binomial system.
In Flora Lapponica Linnaeus's ideas about nomenclature and classification were first used in a practical way, making this the first proto-modern Flora. The account covered 534 species, used the Linnaean classification system and included, for the described species, geographical distribution and taxonomic notes. It was Augustin Pyramus de Candolle who attributed Linnaeus with Flora Lapponica as the first example in the botanical genre of Flora writing. Botanical historian E. L. Greene described Flora Lapponica as "the most classic and delightful" of Linnaeus's works.
It was during this expedition that Linnaeus had a flash of insight regarding the classification of mammals. Upon observing the lower jawbone of a horse at the side of a road he was travelling, Linnaeus remarked: "If I only knew how many teeth and of what kind every animal had, how many teats and where they were placed, I should perhaps be able to work out a perfectly natural system for the arrangement of all quadrupeds."
In 1734, Linnaeus led a small group of students to Dalarna. Funded by the Governor of Dalarna, the expedition was to catalogue known natural resources and discover new ones, but also to gather intelligence on Norwegian mining activities at Røros.
Years in the Dutch Republic (1735–38)
Doctorate
His relations with Nils Rosén having worsened, Linnaeus accepted an invitation from Claes Sohlberg, son of a mining inspector, to spend the Christmas holiday in Falun, where Linnaeus was permitted to visit the mines.
In April 1735, at the suggestion of Sohlberg's father, Linnaeus and Sohlberg set out for the Dutch Republic, where Linnaeus intended to study medicine at the University of Harderwijk while tutoring Sohlberg in exchange for an annual salary. At the time, it was common for Swedes to pursue doctoral degrees in the Netherlands, then a highly revered place to study natural history.
On the way, the pair stopped in Hamburg, where they met the mayor, who proudly showed them a supposed wonder of nature in his possession: the taxidermied remains of a seven-headed hydra. Linnaeus quickly discovered the specimen was a fake, cobbled together from the jaws and paws of weasels and the skins of snakes. The provenance of the hydra suggested to Linnaeus that it had been manufactured by monks to represent the Beast of Revelation. Even at the risk of incurring the mayor's wrath, Linnaeus made his observations public, dashing the mayor's dreams of selling the hydra for an enormous sum. Linnaeus and Sohlberg were forced to flee from Hamburg.
Linnaeus began working towards his degree as soon as he reached Harderwijk, a university known for awarding degrees in as little as a week. He submitted a dissertation, written back in Sweden, entitled Dissertatio medica inauguralis in qua exhibetur hypothesis nova de febrium intermittentium causa, in which he laid out his hypothesis that malaria arose only in areas with clay-rich soils. Although he failed to identify the true source of disease transmission, (i.e., the Anopheles mosquito), he did correctly predict that Artemisia annua (wormwood) would become a source of antimalarial medications.
Within two weeks he had completed his oral and practical examinations and was awarded a doctoral degree.
That summer Linnaeus reunited with Peter Artedi, a friend from Uppsala with whom he had once made a pact that should either of the two predecease the other, the survivor would finish the decedent's work. Ten weeks later, Artedi drowned in the canals of Amsterdam, leaving behind an unfinished manuscript on the classification of fish.
Publishing of
One of the first scientists Linnaeus met in the Netherlands was Johan Frederik Gronovius, to whom Linnaeus showed one of the several manuscripts he had brought with him from Sweden. The manuscript described a new system for classifying plants. When Gronovius saw it, he was very impressed, and offered to help pay for the printing. With an additional monetary contribution by the Scottish doctor Isaac Lawson, the manuscript was published as Systema Naturae (1735).
Linnaeus became acquainted with one of the most respected physicians and botanists in the Netherlands, Herman Boerhaave, who tried to convince Linnaeus to make a career there. Boerhaave offered him a journey to South Africa and America, but Linnaeus declined, stating he would not stand the heat. Instead, Boerhaave convinced Linnaeus that he should visit the botanist Johannes Burman. After his visit, Burman, impressed with his guest's knowledge, decided Linnaeus should stay with him during the winter. During his stay, Linnaeus helped Burman with his . Burman also helped Linnaeus with the books on which he was working: and .
George Clifford, Philip Miller, and Johann Jacob Dillenius
In August 1735, during Linnaeus's stay with Burman, he met George Clifford III, a director of the Dutch East India Company and the owner of a rich botanical garden at the estate of Hartekamp in Heemstede. Clifford was very impressed with Linnaeus's ability to classify plants, and invited him to become his physician and superintendent of his garden. Linnaeus had already agreed to stay with Burman over the winter, and could thus not accept immediately. However, Clifford offered to compensate Burman by offering him a copy of Sir Hans Sloane's Natural History of Jamaica, a rare book, if he let Linnaeus stay with him, and Burman accepted. On 24 September 1735, Linnaeus moved to Hartekamp to become personal physician to Clifford, and curator of Clifford's herbarium. He was paid 1,000 florins a year, with free board and lodging. Though the agreement covered only that winter, Linnaeus in practice stayed there until 1738. It was here that he wrote the book Hortus Cliffortianus, in the preface of which he described his experience as "the happiest time of my life". (A portion of Hartekamp was declared a public garden in April 1956 by the Heemstede local authority, and was named "Linnaeushof". It eventually became, as it is claimed, the biggest playground in Europe.)
In July 1736, Linnaeus travelled to England, at Clifford's expense. He went to London to visit Sir Hans Sloane, a collector of natural history, and to see his cabinet, as well as to visit the Chelsea Physic Garden and its keeper, Philip Miller. He taught Miller about his new system of subdividing plants, as described in Systema Naturae. At first, Miller was reluctant to use the new binomial nomenclature, preferring instead the classifications of Joseph Pitton de Tournefort and John Ray. Nevertheless, Linnaeus applauded Miller's Gardeners Dictionary. The conservative Miller actually retained in his dictionary a number of pre-Linnaean binomial signifiers discarded by Linnaeus but which have been retained by modern botanists. He only fully changed to the Linnaean system in the edition of The Gardeners Dictionary of 1768. Miller ultimately was impressed, and from then on started to arrange the garden according to Linnaeus's system.
Linnaeus also travelled to Oxford University to visit the botanist Johann Jacob Dillenius. He failed to make Dillenius publicly fully accept his new classification system, though the two men remained in correspondence for many years afterwards. Linnaeus dedicated his Critica Botanica to him, as "opus botanicum quo absolutius mundus non-vidit". Linnaeus would later name a genus of tropical tree Dillenia in his honour. He then returned to Hartekamp, bringing with him many specimens of rare plants. The next year, 1737, he published Genera Plantarum, in which he described 935 genera of plants, and shortly thereafter he supplemented it with Corollarium Generum Plantarum, with another sixty (sexaginta) genera.
His work at Hartekamp led to another book, Hortus Cliffortianus, a catalogue of the botanical holdings in the herbarium and botanical garden of Hartekamp. He wrote it in nine months (completed in July 1737), but it was not published until 1738. It contains the first use of the name Nepenthes, which Linnaeus used to describe a genus of pitcher plants.
Linnaeus stayed with Clifford at Hartekamp until 18 October 1737 (new style), when he left the house to return to Sweden. Illness and the kindness of Dutch friends obliged him to stay some months longer in Holland. In May 1738, he set out for Sweden again. On the way home, he stayed in Paris for about a month, visiting botanists such as Antoine de Jussieu. After his return, Linnaeus never again left Sweden.
Return to Sweden
When Linnaeus returned to Sweden on 28 June 1738, he went to Falun, where he entered into an engagement to Sara Elisabeth Moræa. Three months later, he moved to Stockholm to find employment as a physician, and thus to make it possible to support a family. Once again, Linnaeus found a patron; he became acquainted with Count Carl Gustav Tessin, who helped him get work as a physician at the Admiralty. During this time in Stockholm, Linnaeus helped found the Royal Swedish Academy of Sciences; he became the first Praeses of the academy by the drawing of lots.
Because his finances had improved and were now sufficient to support a family, he received permission to marry his fiancée, Sara Elisabeth Moræa. Their wedding was held 26 June 1739. Seventeen months later, Sara gave birth to their first son, Carl. Two years later, a daughter, Elisabeth Christina, was born, and the subsequent year Sara gave birth to Sara Magdalena, who died when 15 days old. Sara and Linnaeus would later have four other children: Lovisa, Sara Christina, Johannes and Sophia.
In May 1741, Linnaeus was appointed Professor of Medicine at Uppsala University, first with responsibility for medicine-related matters. Soon, he changed place with the other Professor of Medicine, Nils Rosén, and thus was responsible for the Botanical Garden (which he would thoroughly reconstruct and expand), botany and natural history, instead. In October that same year, his wife and nine-month-old son followed him to live in Uppsala.
Öland and Gotland
Ten days after he was appointed professor, he undertook an expedition to the island provinces of Öland and Gotland with six students from the university to look for plants useful in medicine. They stayed on Öland until 21 June, then sailed to Visby in Gotland. Linnaeus and the students stayed on Gotland for about a month, and then returned to Uppsala. During this expedition, they found 100 previously unrecorded plants. The observations from the expedition were later published in , written in Swedish. Like , it contained both zoological and botanical observations, as well as observations concerning the culture in Öland and Gotland.
During the summer of 1745, Linnaeus published two more books: Flora Svecica and Fauna Svecica. Flora Svecica was a strictly botanical book, while Fauna Svecica was zoological. Anders Celsius had created the temperature scale named after him in 1742. Celsius's scale was originally inverted compared to the way it is used today, with water boiling at 0 °C and freezing at 100 °C. Linnaeus was the one who inverted the scale to its present usage, in 1745.
Västergötland
In the summer of 1746, Linnaeus was once again commissioned by the Government to carry out an expedition, this time to the Swedish province of Västergötland. He set out from Uppsala on 12 June and returned on 11 August. On the expedition his primary companion was Erik Gustaf Lidbeck, a student who had accompanied him on his previous journey. Linnaeus described his findings from the expedition in the book , published the next year. After he returned from the journey, the Government decided Linnaeus should take on another expedition to the southernmost province Scania. This journey was postponed, as Linnaeus felt too busy.
In 1747, Linnaeus was given the title archiater, or chief physician, by the Swedish king Adolf Frederick—a mark of great respect. The same year he was elected member of the Academy of Sciences in Berlin.
Scania
In the spring of 1749, Linnaeus could finally journey to Scania, again commissioned by the government. With him he brought his student Olof Söderberg. On the way to Scania, he made his last visit to his brothers and sisters in Stenbrohult since his father had died the previous year. The expedition was similar to the previous journeys in most aspects, but this time he was also ordered to find the best place to grow walnut and Swedish whitebeam trees; these trees were used by the military to make rifles. While there, they also visited the Ramlösa mineral spa, where he remarked on the quality of its ferruginous water. The journey was successful, and Linnaeus's observations were published the next year in .
Rector of Uppsala University
In 1750, Linnaeus became rector of Uppsala University, starting a period where natural sciences were esteemed. Perhaps the most important contribution he made during his time at Uppsala was to teach; many of his students travelled to various places in the world to collect botanical samples. Linnaeus called the best of these students his "apostles". His lectures were normally very popular and were often held in the Botanical Garden. He tried to teach the students to think for themselves and not trust anybody, not even him. Even more popular than the lectures were the botanical excursions made every Saturday during summer, where Linnaeus and his students explored the flora and fauna in the vicinity of Uppsala.
Philosophia Botanica
Linnaeus published Philosophia Botanica in 1751. The book contained a complete survey of the taxonomy system he had been using in his earlier works. It also contained information on how to keep a journal on travels and how to maintain a botanical garden.
Nutrix Noverca
During Linnaeus's time it was normal for upper class women to have wet nurses for their babies. Linnaeus joined an ongoing campaign to end this practice in Sweden and promote breast-feeding by mothers. In 1752 Linnaeus published a thesis along with Frederick Lindberg, a physician student, based on their experiences. In the tradition of the period, this dissertation was essentially an idea of the presiding reviewer (praeses) expounded upon by the student. Linnaeus's dissertation was translated into French by J. E. Gilibert in 1770 as . Linnaeus suggested that children might absorb the personality of their wet nurse through the milk. He admired the child care practices of the Lapps and pointed out how healthy their babies were compared to those of Europeans who employed wet nurses. He compared the behaviour of wild animals and pointed out how none of them denied their newborns their breastmilk. It is thought that his activism played a role in his choice of the term Mammalia for the class of organisms.
Species Plantarum
Linnaeus published Species Plantarum, the work which is now internationally accepted as the starting point of modern botanical nomenclature, in 1753. The first volume was issued on 24 May, the second volume followed on 16 August of the same year. The book contained 1,200 pages and was published in two volumes; it described over 7,300 species. The same year the king dubbed him knight of the Order of the Polar Star, the first civilian in Sweden to become a knight in this order. He was then seldom seen not wearing the order's insignia.
Ennoblement
Linnaeus felt Uppsala was too noisy and unhealthy, so he bought two farms in 1758: Hammarby and Sävja. The next year, he bought a neighbouring farm, Edeby. He spent the summers with his family at Hammarby; initially it only had a small one-storey house, but in 1762 a new, larger main building was added. In Hammarby, Linnaeus made a garden where he could grow plants that could not be grown in the Botanical Garden in Uppsala. He began constructing a museum on a hill behind Hammarby in 1766, where he moved his library and collection of plants. A fire that destroyed about one third of Uppsala and had threatened his residence there necessitated the move.
Since the initial release of Systema Naturae in 1735, the book had been expanded and reprinted several times; the tenth edition was released in 1758. This edition established itself as the starting point for zoological nomenclature, the equivalent of Species Plantarum for botany.
The Swedish King Adolf Frederick granted Linnaeus nobility in 1757, but he was not ennobled until 1761. With his ennoblement, he took the name Carl von Linné (Latinised as ), 'Linné' being a shortened and gallicised version of 'Linnæus', and the German nobiliary particle 'von' signifying his ennoblement. The noble family's coat of arms prominently features a twinflower, one of Linnaeus's favourite plants; it was given the scientific name Linnaea borealis in his honour by Gronovius. The shield in the coat of arms is divided into thirds: red, black and green for the three kingdoms of nature (animal, mineral and vegetable) in Linnaean classification; in the centre is an egg "to denote Nature, which is continued and perpetuated in ovo." At the bottom is a phrase in Latin, borrowed from the Aeneid, which reads "Famam extendere factis": we extend our fame by our deeds. Linnaeus inscribed this personal motto in books that were given to him by friends.
After his ennoblement, Linnaeus continued teaching and writing. In total, he presided at 186 PhD ceremonies, with many of the dissertations written by himself. His reputation had spread over the world, and he corresponded with many different people. For example, Catherine II of Russia sent him seeds from her country. He also corresponded with Giovanni Antonio Scopoli, "the Linnaeus of the Austrian Empire", who was a doctor and a botanist in Idrija, Duchy of Carniola (nowadays Slovenia). Scopoli communicated all of his research, findings, and descriptions (for example of the olm and the dormouse, two little animals hitherto unknown to Linnaeus). Linnaeus greatly respected Scopoli and showed great interest in his work. He named a solanaceous genus, Scopolia, the source of scopolamine, after him, but because of the great distance between them, they never met.
Final years
Linnaeus was relieved of his duties in the Royal Swedish Academy of Sciences in 1763, but continued his work there as usual for more than ten years after. In 1769 he was elected to the American Philosophical Society for his work. He stepped down as rector at Uppsala University in December 1772, mostly due to his declining health.
Linnaeus's last years were troubled by illness. He had had a disease called the Uppsala fever in 1764, but survived due to the care of Rosén. He developed sciatica in 1773, and the next year, he had a stroke which partially paralysed him. He had a second stroke in 1776, losing the use of his right side and leaving him bereft of his memory; while still able to admire his own writings, he could not recognise himself as their author.
In December 1777, he had another stroke which greatly weakened him, and eventually led to his death on 10 January 1778 in Hammarby. Despite his desire to be buried in Hammarby, he was buried in Uppsala Cathedral on 22 January.
His library and collections were left to his widow Sara and their children. Joseph Banks, an eminent botanist, wished to purchase the collection, but his son Carl refused the offer and instead moved the collection to Uppsala. In 1783 Carl died and Sara inherited the collection, having outlived both her husband and son. She tried to sell it to Banks, but he was no longer interested; instead an acquaintance of his agreed to buy the collection. The acquaintance was a 24-year-old medical student, James Edward Smith, who bought the whole collection: 14,000 plants, 3,198 insects, 1,564 shells, about 3,000 letters and 1,600 books. Smith founded the Linnean Society of London five years later.
The von Linné name ended with his son Carl, who never married. His other son, Johannes, had died aged 3. There are over two hundred descendants of Linnaeus through two of his daughters.
Apostles
During Linnaeus's time as Professor and Rector of Uppsala University, he taught many devoted students, 17 of whom he called "apostles". They were the most promising, most committed students, and all of them made botanical expeditions to various places in the world, often with his help. The amount of this help varied; sometimes he used his influence as Rector to grant his apostles a scholarship or a place on an expedition. To most of the apostles he gave instructions of what to look for on their journeys. Abroad, the apostles collected and organised new plants, animals and minerals according to Linnaeus's system. Most of them also gave some of their collection to Linnaeus when their journey was finished. Thanks to these students, the Linnaean system of taxonomy spread through the world without Linnaeus ever having to travel outside Sweden after his return from Holland. The British botanist William T. Stearn notes that, without Linnaeus's new system, it would not have been possible for the apostles to collect and organise so many new specimens. Many of the apostles died during their expeditions.
Early expeditions
Christopher Tärnström, the first apostle and a 43-year-old pastor with a wife and children, made his journey in 1746. He boarded a Swedish East India Company ship headed for China. Tärnström never reached his destination, dying of a tropical fever on Côn Sơn Island the same year. Tärnström's widow blamed Linnaeus for making her children fatherless, causing Linnaeus to prefer sending out younger, unmarried students after Tärnström. Six other apostles later died on their expeditions, including Pehr Forsskål and Pehr Löfling.
Two years after Tärnström's expedition, Finnish-born Pehr Kalm set out as the second apostle to North America. There he spent two-and-a-half years studying the flora and fauna of Pennsylvania, New York, New Jersey and Canada. Linnaeus was overjoyed when Kalm returned, bringing back with him many pressed flowers and seeds. At least 90 of the 700 North American species described in Species Plantarum had been brought back by Kalm.
Cook expeditions and Japan
Daniel Solander was living in Linnaeus's house during his time as a student in Uppsala. Linnaeus was very fond of him, promising Solander his eldest daughter's hand in marriage. On Linnaeus's recommendation, Solander travelled to England in 1760, where he met the English botanist Joseph Banks. With Banks, Solander joined James Cook on his expedition to Oceania on the Endeavour in 1768–71. Solander was not the only apostle to journey with James Cook; Anders Sparrman followed on the Resolution in 1772–75 bound for, among other places, Oceania and South America. Sparrman made many other expeditions, one of them to South Africa.
Perhaps the most famous and successful apostle was Carl Peter Thunberg, who embarked on a nine-year expedition in 1770. He stayed in South Africa for three years, then travelled to Japan. All foreigners in Japan were forced to stay on the island of Dejima outside Nagasaki, so it was thus hard for Thunberg to study the flora. He did, however, manage to persuade some of the translators to bring him different plants, and he also found plants in the gardens of Dejima. He returned to Sweden in 1779, one year after Linnaeus's death.
Major publications
Systema Naturae
The first edition of Systema Naturae was printed in the Netherlands in 1735. It was a twelve-page work. By the time it reached its 10th edition in 1758, it classified 4,400 species of animals and 7,700 species of plants. People from all over the world sent their specimens to Linnaeus to be included. By the time he started work on the 12th edition, Linnaeus needed a new invention—the index card—to track classifications.
In Systema Naturae, the unwieldy descriptive names mostly used at the time were supplemented with concise and now familiar "binomials", composed of the generic name followed by a specific epithet, as in Physalis angulata. These binomials could serve as a label to refer to the species. Higher taxa were constructed and arranged in a simple and orderly manner. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers (see Gaspard Bauhin and Johann Bauhin) almost 200 years earlier, Linnaeus was the first to use it consistently throughout the work, including in monospecific genera, and may be said to have popularised it within the scientific community.
After the decline in Linnaeus's health in the early 1770s, publication of editions of Systema Naturae went in two different directions. Another Swedish scientist, Johan Andreas Murray, issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793 under the editorship of Johann Friedrich Gmelin. It was through the Systema Vegetabilium that Linnaeus's work became widely known in England, following its translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785).
Orbis eruditi judicium de Caroli Linnaei MD scriptis
('Opinion of the learned world on the writings of Carl Linnaeus, Doctor') Published in 1740, this small octavo-sized pamphlet was presented to the State Library of New South Wales by the Linnean Society of NSW in 2018. This is considered among the rarest of all the writings of Linnaeus, and crucial to his career, securing him his appointment to a professorship of medicine at Uppsala University. From this position he laid the groundwork for his radical new theory of classifying and naming organisms for which he was considered the founder of modern taxonomy.
Species Plantarum was first published in 1753, as a two-volume work. Its prime importance is perhaps that it is the primary starting point of plant nomenclature as it exists today.
Genera Plantarum was first published in 1737, delineating plant genera. Around 10 editions were published, not all of them by Linnaeus himself; the most important is the 1754 fifth edition. In it Linnaeus divided the plant Kingdom into 24 classes. One, Cryptogamia, included all the plants with concealed reproductive parts (algae, fungi, mosses, liverworts and ferns).
Philosophia Botanica (1751) was a summary of Linnaeus's thinking on plant classification and nomenclature, and an elaboration of the work he had previously published in Fundamenta Botanica (1736) and Critica Botanica (1737). Other publications forming part of his plan to reform the foundations of botany include his Classes Plantarum and Bibliotheca Botanica: all were printed in Holland (as were Genera Plantarum (1737) and Systema Naturae (1735)), the Philosophia being simultaneously released in Stockholm.
Collections
At the end of his lifetime the Linnean collection in Uppsala was considered one of the finest collections of natural history objects in Sweden. Next to his own collection he had also built up a museum for the University of Uppsala, which was supplied with material donated by Carl Gyllenborg (in 1744–1745), crown-prince Adolf Fredrik (in 1745), Erik Petreus (in 1746), Claes Grill (in 1746), Magnus Lagerström (in 1748 and 1750) and Jonas Alströmer (in 1749). The relation between the museum and the private collection was not formalised and the steady flow of material from Linnean pupils was incorporated into the private collection rather than into the museum. Linnaeus felt his work reflected the harmony of nature and he said in 1754 "the earth is then nothing else but a museum of the all-wise creator's masterpieces, divided into three chambers". He had turned his own estate into a microcosm of that 'world museum'.
In April 1766 parts of the town were destroyed by a fire and the Linnean private collection was subsequently moved to a barn outside the town, and shortly afterwards to a single-room stone building close to his country house at Hammarby near Uppsala. This resulted in a physical separation between the two collections; the museum collection remained in the botanical garden of the university. Some material which needed special care (alcohol specimens) or ample storage space was moved from the private collection to the museum.
In Hammarby the Linnean private collections suffered seriously from damp and the depredations of mice and insects. Carl von Linné's son (Carl Linnaeus) inherited the collections in 1778 and retained them until his own death in 1783. Shortly after Carl von Linné's death his son confirmed that mice had caused "horrible damage" to the plants and that moths and mould had also caused considerable damage. He tried to rescue them from the neglect they had suffered during his father's later years, and also added further specimens. This last activity, however, reduced rather than augmented the scientific value of the original material.
In 1784 the young medical student James Edward Smith purchased the entire specimen collection, library, manuscripts, and correspondence of Carl Linnaeus from his widow and daughter and transferred the collections to London. Not all material in Linné's private collection was transported to England. Thirty-three fish specimens preserved in alcohol were not sent and were later lost.
In London Smith tended to neglect the zoological parts of the collection; he added some specimens and also gave some specimens away. Over the following centuries the Linnean collection in London suffered enormously at the hands of scientists who studied the collection, and in the process disturbed the original arrangement and labels, added specimens that did not belong to the original series and withdrew precious original type material.
Much material which had been intensively studied by Linné in his scientific career belonged to the collection of Queen Lovisa Ulrika (1720–1782) (in the Linnean publications referred to as "Museum Ludovicae Ulricae" or "M. L. U."). This collection was donated by her grandson King Gustav IV Adolf (1778–1837) to the museum in Uppsala in 1804. Another important collection in this respect was that of her husband King Adolf Fredrik (1710–1771) (in the Linnean sources known as "Museum Adolphi Friderici" or "Mus. Ad. Fr."), the wet parts (alcohol collection) of which were later donated to the Royal Swedish Academy of Sciences, and is today housed in the Swedish Museum of Natural History at Stockholm. The dry material was transferred to Uppsala.
System of taxonomy
The establishment of universally accepted conventions for the naming of organisms was Linnaeus's main contribution to taxonomy—his work marks the starting point of consistent use of binomial nomenclature. During the 18th century expansion of natural history knowledge, Linnaeus also developed what became known as the Linnaean taxonomy: the system of scientific classification now widely used in the biological sciences. An earlier zoologist, Rumphius (1627–1702), had more or less approximated the Linnaean system and his material contributed to the later development of the binomial scientific classification by Linnaeus.
The Linnaean system classified nature within a nested hierarchy, starting with three kingdoms. Kingdoms were divided into classes and they, in turn, into orders, and thence into genera (singular: genus), which were divided into species (singular: species). Below the rank of species he sometimes recognised taxa of a lower (unnamed) rank; these have since acquired standardised names such as variety in botany and subspecies in zoology. Modern taxonomy includes a rank of family between order and genus and a rank of phylum between kingdom and class that were not present in Linnaeus's original system.
Linnaeus's groupings were based upon shared physical characteristics, and not based upon differences. Of his higher groupings, only those for animals are still in use, and the groupings themselves have been significantly changed since their conception, as have the principles behind them. Nevertheless, Linnaeus is credited with establishing the idea of a hierarchical structure of classification which is based upon observable characteristics and intended to reflect natural relationships. While the underlying details concerning what are considered to be scientifically valid "observable characteristics" have changed with expanding knowledge (for example, DNA sequencing, unavailable in Linnaeus's time, has proven to be a tool of considerable utility for classifying living organisms and establishing their evolutionary relationships), the fundamental principle remains sound.
Human taxonomy
Linnaeus's system of taxonomy was especially noted as the first to include humans (Homo) taxonomically grouped with apes (Simia), under the header of Anthropomorpha.
The German biologist Ernst Haeckel, speaking in 1907, noted this as the "most important sign of Linnaeus's genius".
Linnaeus classified humans among the primates beginning with the first edition of Systema Naturae. During his time at Hartekamp, he had the opportunity to examine several monkeys and noted similarities between them and man. He pointed out that both species basically have the same anatomy; except for speech, he found no other differences. Thus he placed man and monkeys under the same category, Anthropomorpha, meaning "manlike". This classification received criticism from other biologists such as Johan Gottschalk Wallerius, Jacob Theodor Klein and Johann Georg Gmelin on the ground that it is illogical to describe man as human-like. In a letter to Gmelin from 1747, Linnaeus replied:
The theological concerns were twofold: first, putting man at the same level as monkeys or apes would lower the spiritually higher position that man was assumed to have in the great chain of being, and second, because the Bible says man was created in the image of God (theomorphism), if monkeys/apes and humans were not distinctly and separately designed, that would mean monkeys and apes were created in the image of God as well. This was something many could not accept. The conflict between world views that was caused by asserting man was a type of animal would simmer for a century until the much greater, and still ongoing, creation–evolution controversy began in earnest with the publication of On the Origin of Species by Charles Darwin in 1859.
After such criticism, Linnaeus felt he needed to explain himself more clearly. The 10th edition of Systema Naturae introduced new terms, including Mammalia and Primates, the latter of which would replace Anthropomorpha as well as giving humans the full binomial Homo sapiens. The new classification received less criticism, but many natural historians still believed he had demoted humans from their former place of ruling over nature and not being a part of it. Linnaeus believed that man biologically belongs to the animal kingdom and had to be included in it. In his book , he said, "One should not vent one's wrath on animals, Theology decree that man has a soul and that the animals are mere 'automata mechanica,' but I believe they would be better advised that animals have a soul and that the difference is of nobility."
Linnaeus added a second species to the genus Homo in Systema Naturae based on a figure and description by Jacobus Bontius from a 1658 publication: Homo troglodytes ("caveman") and published a third in 1771: Homo lar. Swedish historian Gunnar Broberg states that the new human species Linnaeus described were actually simians or native people clad in skins to frighten colonial settlers, whose appearance had been exaggerated in accounts to Linnaeus. For Homo troglodytes Linnaeus asked the Swedish East India Company to search for one, but they did not find any signs of its existence. Homo lar has since been reclassified as Hylobates lar, the lar gibbon.
In the first edition of Systema Naturae, Linnaeus subdivided the human species into four varieties: "Europæus albesc[ens]" (whitish European), "Americanus rubesc[ens]" (reddish American), "Asiaticus fuscus" (tawny Asian) and "Africanus nigr[iculus]" (blackish African).
In the tenth edition of Systema Naturae he further detailed phenotypical characteristics for each variety, based on the concept of the four temperaments from classical antiquity, and changed the description of Asians' skin tone to "luridus" (yellow). While Linnaeus believed that these varieties resulted from environmental differences between the four known continents, the Linnean Society acknowledges that his categorization's focus on skin color and later inclusion of cultural and behavioral traits cemented colonial stereotypes and provided the foundations for scientific racism. Additionally, Linnaeus created a wastebasket taxon "monstrosus" for "wild and monstrous humans, unknown groups, and more or less abnormal people".
In 1959, W. T. Stearn designated Linnaeus to be the lectotype of H. sapiens.
Influences and economic beliefs
Linnaeus's applied science was inspired not only by the instrumental utilitarianism general to the early Enlightenment, but also by his adherence to the older economic doctrine of Cameralism. Additionally, Linnaeus was a state interventionist. He supported tariffs, levies, export bounties, quotas, embargoes, navigation acts, subsidised investment capital, ceilings on wages, cash grants, state-licensed producer monopolies, and cartels.
Commemoration
Anniversaries of Linnaeus's birth, especially in centennial years, have been marked by major celebrations. Linnaeus has appeared on numerous Swedish postage stamps and banknotes. There are numerous statues of Linnaeus in countries around the world. The Linnean Society of London has awarded the Linnean Medal for excellence in botany or zoology since 1888. Following approval by the Riksdag of Sweden, Växjö University and Kalmar College merged on 1 January 2010 to become Linnaeus University. Other things named after Linnaeus include the twinflower genus Linnaea, Linnaeosicyos (a monotypic genus in the family Cucurbitaceae), the crater Linné on the Earth's moon, a street in Cambridge, Massachusetts, and the cobalt sulfide mineral Linnaeite.
Commentary
Linnaeus wrote a description of himself in his autobiography Egenhändiga anteckningar af Carl Linnæus om sig sjelf : med anmärkningar och tillägg, which was published by his student Adam Afzelius in 1823:
Andrew Dickson White wrote in A History of the Warfare of Science with Theology in Christendom (1896):
An analysis applying the mathematical PageRank algorithm to 24 multilingual Wikipedia editions in 2014, published in PLOS ONE in 2015, ranked Carl Linnaeus as the top historical figure, above Jesus, Aristotle, Napoleon, and Adolf Hitler (in that order).
In the 21st century, Linnæus's taxonomy of human "races" has been problematised and discussed. Some critics claim that Linnæus was one of the forebears of the modern pseudoscientific notion of scientific racism, while others hold the view that while his classification was stereotyped, it did not imply that certain human "races" were superior to others.
Standard author abbreviation
Selected publications by Linnaeus
Linnaeus, Carl (1746). Fauna svecica. Sistens Animalia Sveciae Regni: Quadrupedia, Aves, Amphibia, Pisces, Insecta, Vermes, distributae per classes & ordines, genera & species. C. Wishoff & G.J. Wishoff, Lugduni Batavorum.
see also Species Plantarum
See also
Linnaeus's flower clock
Johann Bartsch, colleague
Centuria Insectorum
References
Notes
Citations
Sources
Further reading
External links
Biographies
Biography at the Department of Systematic Botany, University of Uppsala
Biography at The Linnean Society of London
Biography from the University of California Museum of Paleontology
A four-minute biographical video from the London Natural History Museum on YouTube
Biography from Taxonomic Literature, 2nd Edition. 1976–2009.
Resources
The Linnean Society of London
The Linnaeus Apostles
The Linnean Collections
The Linnean Correspondence
Linnaeus's Disciples and Apostles
The Linnaean Dissertations
Linnean Herbarium
The Linnaeus Tercentenary
Works by Carl von Linné at the Biodiversity Heritage Library
Digital edition: "Critica Botanica" by the University and State Library Düsseldorf
Digital edition: "Classes plantarum seu systemata plantarum" by the University and State Library Düsseldorf
Oratio de telluris habitabilis incremento (1744) – full digital facsimile from Linda Hall Library
The 15 March 2007 issue of Nature featured a picture of Linnaeus on the cover with the heading "Linnaeus's Legacy" and devoted a substantial portion to items related to Linnaeus and Linnaean taxonomy.
1707 births
1778 deaths
18th-century lexicographers
18th-century male writers
18th-century Swedish botanists
Linne, Carl von
18th-century Swedish physicians
18th-century Swedish writers
18th-century Swedish zoologists
18th-century writers in Latin
Academic staff of Uppsala University
Age of Liberty people
Botanical nomenclature
Botanists active in Europe
Botanists with author abbreviations
Burials at Uppsala Cathedral
Fellows of the Royal Society
Historical definitions of race
Knights of the Order of the Polar Star
Members of the American Philosophical Society
Members of the French Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Royal Swedish Academy of Sciences
People from Älmhult Municipality
Pteridologists
Swedish arachnologists
Swedish autobiographers
Swedish biologists
Swedish bryologists
Swedish entomologists
Swedish expatriates in the Dutch Republic
Swedish Lutherans
Swedish mammalogists
Swedish mycologists
Swedish ornithologists
Swedish phycologists
Swedish taxonomists
Taxon authorities of Hypericum species
Terminologists
University of Harderwijk alumni
Uppsala University alumni | Carl Linnaeus | [
"Biology"
] | 11,003 | [
"Botanical nomenclature",
"Botanical terminology",
"Biological nomenclature"
] |
5,244 | https://en.wikipedia.org/wiki/Cipher | In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning onto another: words and phrases can be coded as letters or numbers, with the correspondence looked up directly in a codebook, so codes primarily functioned to save time. Ciphers, by contrast, are algorithmic: the given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
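As a rough illustration of this codebook idea, the following sketch (in Python, purely for illustration; the single entry is the example from the text, and a real codebook would list thousands of such pairs) shows how encoding and decoding with a code reduce to direct table lookups rather than an algorithmic transformation:

```python
# Toy codebook: an arbitrary code group stands for a whole phrase.
# Encoding and decoding are plain lookups, with no algorithm or key involved.
CODEBOOK = {"UQJHSE": "Proceed to the following coordinates."}

# The reverse table is used to encode a plaintext phrase into its code group.
ENCODING_TABLE = {phrase: code for code, phrase in CODEBOOK.items()}

print(ENCODING_TABLE["Proceed to the following coordinates."])  # UQJHSE
print(CODEBOOK["UQJHSE"])  # Proceed to the following coordinates.
```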
The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
Most modern ciphers can be categorized in several ways:
By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality.
Etymology
Originating from the Arabic word for zero صفر (ṣifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.
The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
Versus codes
In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams.
Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
Types
There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
Historical
The Caesar Cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifted each letter of the alphabet three places along, wrapping the letters at the end of the alphabet back around to the front, to write to Marcus Tullius Cicero in approximately 50 BC.
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.
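A minimal sketch of the monoalphabetic substitution above, written in Python for illustration. Only the three letters that appear in the "GOOD DOG" example are mapped; a real key would assign a substitute to every letter of the alphabet:

```python
# Partial substitution key taken from the example in the text:
# "L" substitutes for "O", "P" for "G", and "X" for "D".
KEY = {"G": "P", "O": "L", "D": "X"}

def encipher(plaintext: str, key: dict) -> str:
    """Replace each letter by its substitute; characters not in the key pass through."""
    return "".join(key.get(ch, ch) for ch in plaintext)

def decipher(ciphertext: str, key: dict) -> str:
    """Invert the substitution table and apply it."""
    inverse = {v: k for k, v in key.items()}
    return "".join(inverse.get(ch, ch) for ch in ciphertext)

print(encipher("GOOD DOG", KEY))   # PLLX XLP
print(decipher("PLLX XLP", KEY))   # GOOD DOG
```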
In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.
Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad, but these have other disadvantages.
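The repeating-key idea behind a Vigenère-style polyalphabetic substitution can be sketched as follows (Python, for illustration only; the key word "LEMON" is an arbitrary choice and does not reproduce the "PLSX TWF" example, which would require a different key):

```python
from itertools import cycle

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter by the alphabet position of the next key letter,
    cycling through the key; non-letters are passed through unchanged."""
    out, key_letters = [], cycle(key.upper())
    for ch in text.upper():
        if ch in ALPHABET:
            shift = ALPHABET.index(next(key_letters))
            if decrypt:
                shift = -shift
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return "".join(out)

ciphertext = vigenere("GOOD DOG", "LEMON")
print(ciphertext)                                   # RSAR QZK
print(vigenere(ciphertext, "LEMON", decrypt=True))  # GOOD DOG
```

Because the substitution alphabet changes with every letter, the repeated "O" in "GOOD DOG" is enciphered differently each time, which is exactly what defeats simple frequency analysis of a single substitution alphabet.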
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
Modern
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By type of key used ciphers are divided into:
symmetric key algorithms (Private-key cryptography), where one same key is used for encryption and decryption, and
asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (the Advanced Encryption Standard) aimed to overcome flaws in the design of DES (the Data Encryption Standard), and AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.
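The shared-key property can be illustrated with a deliberately simple construction (Python, for illustration only): a repeating-key XOR, in which applying the same operation with the same key both encrypts and decrypts. This is not how DES or AES work internally, and it is not secure if the key is reused; it only demonstrates what "symmetric" means:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR every byte of the data with the key, repeating the key as needed.
    Because XOR is its own inverse, the same call with the same key decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = os.urandom(16)                     # agreed in advance, kept secret
ciphertext = xor_cipher(b"attack at dawn", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)  # same key recovers the message
assert plaintext == b"attack at dawn"
```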
Ciphers can be distinguished into two types by the type of input data:
block ciphers, which encrypt block of data of fixed size, and
stream ciphers, which encrypt continuous streams of data.
Key size and vulnerability
In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count:
Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of exhaustive search for a key (i.e., "brute force" attack) substantially.
Key size, i.e., the size of key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search to the point where it becomes impractical to crack encryption directly.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and a desired difficulty level, and then decide the key length accordingly.
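A back-of-the-envelope sketch (Python; the attacker speed below is an arbitrary assumption, not a measurement of any real adversary) of how the cost of exhaustive key search grows with key size:

```python
# Assumed aggregate speed of a hypothetical attacker, chosen only to make the
# arithmetic concrete; real attack costs depend on hardware and the cipher.
CHECKS_PER_SECOND = 1e12

def expected_brute_force_years(key_bits: int) -> float:
    """Expected time in years to find a key by trying half of all 2**key_bits keys."""
    keys_to_try = 2 ** key_bits / 2
    seconds = keys_to_try / CHECKS_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

for bits in (40, 56, 128, 256):
    print(f"{bits:3d}-bit key: about {expected_brute_force_years(bits):.3g} years")
```

Each additional key bit doubles the search space, which is why modest increases in key length can push exhaustive search beyond any plausible computational budget.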
Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad.
See also
Autokey cipher
Cover-coding
Encryption software
List of ciphertexts
Steganography
Telegraph code
Notes
References
External links
Kish cypher
Cryptography | Cipher | [
"Mathematics",
"Engineering"
] | 2,171 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
5,259 | https://en.wikipedia.org/wiki/Common%20descent | Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth.
Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species:
History
The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection.
In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent.
In 1794, Charles Darwin's grandfather, Erasmus Darwin asked:
[W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end?
Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms:
Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed.
But he precedes that remark by, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." And in the subsequent edition, he asserts rather, "We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification".
Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent."
In 2008, biologist T. Ryan Gregory noted that:
No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin's time and considers it among the most reliably established and fundamentally important facts in all of science.
Evidence
Common biochemistry
All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
Common genetic code
The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent.
The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer.
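As an informal illustration of what a "translation table" means here, the sketch below (Python; only a handful of the 64 codons of the standard genetic code are included, written as DNA triplets) reads a sequence three bases at a time and looks each codon up. The redundant entries also hint at the point made in the next subsection, that several codons can specify the same amino acid:

```python
# A small excerpt of the standard genetic code, written as DNA codons.
# Note the redundancy: several different codons map to the same amino acid.
CODON_TABLE = {
    "ATG": "Met",                               # also the usual start codon
    "TTT": "Phe", "TTC": "Phe",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read the sequence three bases at a time and look up each codon."""
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        residue = CODON_TABLE.get(codon, "?")   # "?" marks codons outside this excerpt
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

print(translate("ATGTTTGGCTAA"))   # ['Met', 'Phe', 'Gly']
```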
Selectively neutral similarities
Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry.
Other similarities
Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities resulted from the laws of physics and chemistry rather than from universal common descent, and therefore reflect convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also.
Phylogenetic trees
Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life."
Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent.
Objections
Gene exchange clouds phylogenetic analysis
Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garble rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have.
Convergent evolution
If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's "formal test" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that "real universally conserved proteins are homologous."
RNA world
The possibility is mentioned, above, that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the origin of life. To understand the origin of life, it has been proposed that DNA based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection. During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single origin of life event from which all life descended.
See also
The Ancestor's Tale
Urmetazoan
Bibliography
Notes
References
External links
29+ Evidences for Macroevolution: The Scientific Case for Common Descent from the TalkOrigins Archive.
The Tree of Life Web Project
Evolutionary biology
Descent
Most recent common ancestors | Common descent | ["Biology"] | 2,358 | ["Evolutionary biology"] |
5,267 | https://en.wikipedia.org/wiki/Constellation | A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.
The first constellations were likely defined in prehistory. People used them to relate stories of their beliefs, experiences, creation, and mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.
Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.
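As a minimal illustration of that convention, the Python sketch below assembles a Flamsteed-style designation from a number and the Latin genitive of the constellation name. The genitive forms listed are the standard ones; the lookup table and helper function are only illustrative.

GENITIVE = {
    "Cygnus": "Cygni",
    "Orion": "Orionis",
    "Ursa Major": "Ursae Majoris",
}

def flamsteed_designation(number, constellation):
    """Combine a Flamsteed number with the constellation's Latin genitive."""
    return f"{number} {GENITIVE[constellation]}"

print(flamsteed_designation(61, "Cygnus"))  # prints "61 Cygni"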
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the Teapot within the constellation Sagittarius and the Big Dipper within the constellation Ursa Major.
Terminology
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. In the 20th century, the International Astronomical Union (IAU) recognized 88 constellations.
A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic (or zodiac) ranging between 23.5° north and 23.5° south.
Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring common proper motions of individual stars by accurate astrometry and their radial velocities by astronomical spectroscopy.
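As a rough sketch of that idea, the Python snippet below linearly extrapolates a star's equatorial coordinates from its catalogued proper motion. The input values in the example are approximate figures for a fast-moving star such as Barnard's Star; a real prediction would also account for radial velocity, parallax, and full spherical geometry.

import math

MAS_TO_DEG = 1.0 / 3_600_000.0  # milliarcseconds to degrees

def future_position(ra_deg, dec_deg, pm_ra_cosdec, pm_dec, years):
    """Linear extrapolation of position from proper motion (in mas/yr).
    pm_ra_cosdec is assumed to include the cos(dec) factor, as modern
    catalogues list it; the formula is unreliable near the celestial poles."""
    new_dec = dec_deg + pm_dec * years * MAS_TO_DEG
    new_ra = ra_deg + (pm_ra_cosdec * years * MAS_TO_DEG) / math.cos(math.radians(dec_deg))
    return new_ra, new_dec

# roughly Barnard's Star, extrapolated 100,000 years into the future
print(future_position(269.45, 4.69, -800.0, 10300.0, 100_000))

Over such timescales the accumulated shifts of many individual stars are what distort the familiar constellation outlines.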
The 88 constellations recognized by the IAU as well as those by cultures throughout history are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient, Near Eastern and Mediterranean mythologies. Some of these stories seem to relate to the appearance of the constellations, e.g. the assassination of Orion by Scorpius, their constellations appearing at opposite times of year.
Observation
Constellation positions change throughout the year because night on Earth falls at gradually different portions of its orbit around the Sun. As Earth rotates toward the east, the celestial sphere appears to rotate west, with stars circling counterclockwise around the northern pole star and clockwise around the southern pole star.
Because of Earth's 23.5° axial tilt, the zodiac is distributed equally across hemispheres (along the ecliptic), approximating a great circle. Zodiacal constellations of the northern sky are Pisces, Aries, Taurus, Gemini, Cancer, and Leo. In the southern sky are Virgo, Libra, Scorpius, Sagittarius, Capricornus, and Aquarius. The zodiac appears directly overhead from latitudes of 23.5° north to 23.5° south, depending on the time of year. In summer, the ecliptic appears higher up in the daytime and lower at night, while in winter the reverse is true, for both hemispheres.
Because the Solar System is tilted about 60° with respect to the galactic plane, the plane of the Milky Way appears inclined 60° from the ecliptic, running between Taurus and Gemini (north) and between Scorpius and Sagittarius (south, near which the Galactic Center can be found). The galaxy appears to pass through Aquila (near the celestial equator) and northern constellations Cygnus, Cassiopeia, Perseus, Auriga, and Orion (near Betelgeuse), as well as Monoceros (near the celestial equator), and southern constellations Puppis, Vela, Carina, Crux, Centaurus, Triangulum Australe, and Ara.
Northern hemisphere
Polaris, being the North Star, is the approximate center of the northern celestial hemisphere. It is part of Ursa Minor, constituting the end of the Little Dipper's handle.
From latitudes of around 35° north, in January, Ursa Major (containing the Big Dipper) appears to the northeast, while Cassiopeia is to the northwest. To the west are Pisces (above the horizon) and Aries. To the southwest, Cetus is near the horizon. High in the south are Orion and Taurus. To the southeast, above the horizon, is Canis Major. Appearing above and to the east of Orion is Gemini; also in the east (and progressively closer to the horizon) are Cancer and Leo. In addition to Taurus, Perseus and Auriga appear overhead.
From the same latitude, in July, Cassiopeia (low in the sky) and Cepheus appear to the northeast. Ursa Major is now in the northwest. Boötes is high up in the west. Virgo is to the west, with Libra southwest and Scorpius south. Sagittarius and Capricorn are southeast. Cygnus (containing the Northern Cross) is to the east. Hercules is high in the sky along with Corona Borealis.
Southern hemisphere
January constellations include Pictor and Reticulum (near Hydrus and Mensa, respectively).
In July, Ara (adjacent to Triangulum Australe) and Scorpius can be seen.
Constellations near the pole star include Chamaeleon, Apus and Triangulum Australe (near Centaurus), Pavo, Hydrus, and Mensa.
Sigma Octantis is the closest star approximating a southern pole star, but is faint in the night sky. Thus, the pole can be triangulated using the constellation Crux as well as the stars Alpha and Beta Centauri (about 30° counterclockwise from Crux) of the constellation Centaurus (arching over Crux).
History of the early constellations
Lascaux Caves, southern France
It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
Mesopotamia
Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. These Mesopotamian constellations later appeared in many of the classical Greek constellations.
Ancient Near East
The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.
The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.
Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish ("the bier") actually corresponds to Ursa Major. The term Mazzaroth, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations.
Classical antiquity
There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
In the Ptolemaic Kingdom, native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.
Ancient China
Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.
Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). Maps prepared during this period on more scientific lines were considered more reliable.
A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it is done accurately based on observations, and it shows the 1054 supernova in Taurus.
Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations in the southern sky, a region that had not been charted by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Early modern astronomy
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.
Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged to. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially accepted designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina.
The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
Origin of the southern constellations
The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756.
Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
88 modern constellations
A list of 88 constellations was produced for the IAU in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a small refracting telescope.
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the IAU formally accepted the 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo, or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
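The size of that skew can be estimated with the standard low-precision annual precession rates. The Python sketch below is an approximation only (it is not the rigorous IAU precession model and breaks down near the celestial poles); it shifts a coordinate defined at one epoch forward by a given number of years.

import math

M = 46.1244 / 3600.0   # general precession in right ascension, degrees per year (approximate)
N = 20.0431 / 3600.0   # precession in declination, degrees per year (approximate)

def precess_approx(ra_deg, dec_deg, years):
    """Low-precision linear precession of equatorial coordinates."""
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)
    delta_ra = (M + N * math.sin(ra) * math.tan(dec)) * years
    delta_dec = N * math.cos(ra) * years
    return ra_deg + delta_ra, dec_deg + delta_dec

# a hypothetical boundary point fixed at B1875.0, carried forward 125 years to about J2000
print(precess_approx(83.0, 22.0, 125))

In this example the shift in right ascension is on the order of a degree or two over the interval, which is why boundaries drawn as straight lines at B1875.0 no longer appear exactly vertical or horizontal on a J2000 chart.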
Symbols
The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
Dark cloud constellations
The Great Rift, a series of dark patches in the Milky Way, is most visible in the southern sky. Some cultures have discerned shapes in these patches. Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars.
List of dark cloud constellations
Great Rift (astronomy)
Cygnus Rift
Serpens–Aquila Rift
Dark Horse (astronomy)
Rho Ophiuchi cloud complex
Emu in the sky
See also
Celestial cartography
Constellation family
Former constellations
Lists of stars by constellation
Constellations listed by Johannes Hevelius
Constellations listed by Lacaille
Constellations listed by Petrus Plancius
Constellations listed by Ptolemy
References
Footnotes
Citations
Further reading
Mythology, lore, history, and archaeoastronomy
Allen, Richard Hinckley. (1899) Star-Names And Their Meanings, G. E. Stechert, New York, hardcover; reprint 1963 as Star Names: Their Lore and Meaning, Dover Publications, Inc., Mineola, NY, softcover.
Olcott, William Tyler. (1911); Star Lore of All Ages, G. P. Putnam's Sons, New York, hardcover; reprint 2004 as Star Lore: Myths, Legends, and Facts, Dover Publications, Inc., Mineola, NY, softcover.
Kelley, David H. and Milone, Eugene F. (2004) Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy, Springer, hardcover.
Ridpath, Ian. (2018) Star Tales 2nd ed., Lutterworth Press, softcover.
Staal, Julius D. W. (1988) The New Patterns in the Sky: Myths and Legends of the Stars, McDonald & Woodward Publishing Co., hardcover, softcover.
Atlases and celestial maps
Becvar, Antonin. Atlas Coeli. Published as Atlas of the Heavens, Sky Publishing Corporation, Cambridge, MA, with coordinate grid transparency overlay.
Becvar, Antonin. (1962) Atlas Borealis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1972 and 1978 reprint, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
National Geographic Society. (1957, 1970, 2001, 2007) The Heavens (1970), Cartographic Division of the National Geographic Society (NGS), Washington, DC, two-sided large map chart depicting the constellations of the heavens; as a special supplement to the August 1970 issue of National Geographic. Forerunner map as A Map of The Heavens, as a special supplement to the December 1957 issue. Current version 2001 (Tirion), with 2007 reprint.
Norton, Arthur Philip. (1910) Norton's Star Atlas, 20th Edition 2003 as Norton's Star Atlas and Reference Handbook, edited by Ian Ridpath, Pi Press, hardcover.
Sinnott, Roger W. and Perryman, Michael A.C. (1997) Millennium Star Atlas, Epoch 2000.0, Sky Publishing Corporation, Cambridge, MA, and European Space Agency (ESA), ESTEC, Noordwijk, The Netherlands. Subtitle: "An All-Sky Atlas Comprising One Million Stars to Visual Magnitude Eleven from the Hipparcos and Tycho Catalogues and Ten Thousand Nonstellar Objects". 3 volumes, hardcover. Vol. 1, 0–8 Hours (Right Ascension), hardcover; Vol. 2, 8–16 Hours, hardcover; Vol. 3, 16–24 Hours, hardcover. Softcover version available. Supplemental separate purchasable coordinate grid transparent overlays.
Tirion, Wil; et al. (1987) Uranometria 2000.0, Willmann-Bell, Inc., Richmond, VA, 3 volumes, hardcover. Vol. 1 (1987): "The Northern Hemisphere to −6°", by Wil Tirion, Barry Rappaport, and George Lovi, hardcover, printed boards. Vol. 2 (1988): "The Southern Hemisphere to +6°", by Wil Tirion, Barry Rappaport and George Lovi, hardcover, printed boards. Vol. 3 (1993) as a separate added work: The Deep Sky Field Guide to Uranometria 2000.0, by Murray Cragin, James Lucyk, and Barry Rappaport, hardcover, printed boards. 2nd Edition 2001 as collective set of 3 volumes – Vol. 1: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 2: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 3: Uranometria 2000.0 Deep Sky Field Guide by Murray Cragin and Emil Bonanno, hardcover, printed boards.
Tirion, Wil and Sinnott, Roger W. (1998) Sky Atlas 2000.0, various editions. 2nd Deluxe Edition, Cambridge University Press, Cambridge, England.
Catalogs
Becvar, Antonin. (1959) Atlas Coeli II Katalog 1950.0, Praha, 1960 Prague. Published 1964 as Atlas of the Heavens – II Catalogue 1950.0, Sky Publishing Corporation, Cambridge, MA
Hirshfeld, Alan and Sinnott, Roger W. (1982) Sky Catalogue 2000.0, Cambridge University Press and Sky Publishing Corporation, 1st Edition, 2 volumes. Vol. 1: "Stars to Magnitude 8.0", hardcover and softcover. Vol. 2 (1985): "Double Stars, Variable Stars, and Nonstellar Objects", hardcover and softcover. 2nd Edition (1991), with additional third author François Ochsenbein, 2 volumes; Vol. 1 hardcover and softcover; Vol. 2 (1999) softcover (0-933346-38-7), a reprint of the 1985 edition.
Yale University Observatory. (1908, et al.) Catalogue of Bright Stars, New Haven, CT. Referred to commonly as the "Bright Star Catalogue". Various editions with various authors historically, the longest-term revising author being (Ellen) Dorrit Hoffleit. 1st Edition 1908. 2nd Edition 1940 by Frank Schlesinger and Louise F. Jenkins. 3rd Edition (1964), 4th Edition, 5th Edition (1991), and 6th Edition (pending posthumous) by Hoffleit.
External links
IAU: The Constellations, including high quality maps.
Atlascoelestis, di Felice Stoppa.
Celestia free 3D realtime space-simulation (OpenGL)
Stellarium realtime sky rendering program (OpenGL)
Strasbourg Astronomical Data Center Files on official IAU constellation boundaries
Studies of Occidental Constellations and Star Names to the Classical Period: An Annotated Bibliography
Table of Constellations
Online Text: Hyginus, Astronomica translated by Mary Grant Greco-Roman constellation myths
Neave Planetarium Adobe Flash interactive web browser planetarium and stardome with realistic movement of stars and the planets.
Audio – Cain/Gay (2009) Astronomy Cast Constellations
The Greek Star-Map short essay by Gavin White
Bucur D. The network signature of constellation line figures. PLOS ONE 17(7): e0272270 (2022). A comparative analysis on the structure of constellation line figures across 56 sky cultures.
Constellations
Celestial cartography
Concepts in astronomy | Constellation | ["Physics", "Astronomy"] | 5,807 | ["Celestial cartography", "History of astronomy", "Concepts in astronomy", "Works about astronomy", "Constellations", "Sky regions"] |
5,278 | https://en.wikipedia.org/wiki/Copyright | A copyright is a type of intellectual property that gives its owner the exclusive legal right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States and fair dealings doctrine in the United Kingdom.
Some jurisdictions require "fixing" copyrighted works in a tangible form. A copyright is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights normally include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent.
Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright, while others recognize copyright in any completed work without formal registration. When the copyright of a work expires, it enters the public domain.
History
Background
The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. It became associated with common law and was later also rooted in civil law systems. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text.
Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high and significantly supplemented the incomes of many academics.
Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German-language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success.
Conception
The concept of copyright first developed in England. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.
The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). The Copyright Act 1814 extended more rights for authors but did not protect British publications from being reprinted in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries who signed the agreement, although the US did not join the Berne Convention until 1989.
In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors' published works, authority was granted to the states to protect authors' unpublished works. The most recent major overhaul of copyright in the US, the Copyright Act of 1976, extended federal copyright to works as soon as they are created and "fixed", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to "life of the author plus 50 years". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially.
Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.
Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.
National copyrights
Often seen as the first real copyright law, the 1709 British Statute of Anne gave authors, and the publishers to whom they chose to license their works, the right to publish the author's creations for a fixed period, after which the copyright expired. It was "An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or the Purchasers of such Copies, during the Times therein mentioned."
The act also alluded to individual rights of the artist. It began:
A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs. Yet scholars like Lawrence Lessig have argued that copyright terms have been extended beyond the scope imagined by the Framers. Lessig refers to the Copyright Clause as the "Progress Clause" to emphasize the social dimension of intellectual property rights.
The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.
Continental law
In many jurisdictions of the European continent, legal concepts comparable to copyright existed from the 16th century on, but under Napoleonic rule they changed into another legal concept: authors' rights or creators' rights laws, from the French droits d'auteur and the German Urheberrecht. In many modern publications the terms copyright and authors' rights are mixed or used as translations of one another, but in a juridical sense the legal concepts differ in essential ways. Authors' rights are, generally speaking, absolute property rights of an author of original work that exist from the start and do not have to be applied for. The law automatically connects an original work, as intellectual property, to its creator. Although the concepts have become mingled globally over the years, due to international treaties and contracts, distinct differences between jurisdictions continue to exist.
Creators' rights law was enacted rather late in German-speaking states, and the economic historian Eckhard Höffner argues that the absence of enforceable copyright laws in these states in the early 19th century encouraged the publishing of low-priced paperbacks for the masses. This was profitable for authors, led to a proliferation of books and enhanced knowledge, and was ultimately an important factor in the ascendancy of Germany as a power during that century. After the introduction of creators' rights, German publishers started to follow English customs, issuing only expensive book editions for wealthy customers.
Empirical evidence derived from the exogenous differential introduction of author's right (Italian: diritto d’autore) in Napoleonic Italy shows that "basic copyrights increased both the number and the quality of operas, measured by their popularity and durability".
International copyright treaties
The 1886 Berne Convention first established recognition of authors' rights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, protective rights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" these protective rights in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all intellectual property rights in the work, and to any derivative works, unless and until the author explicitly disclaims them or until the rights expire. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors in any country that has signed the convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention allows developing countries to issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the convention. This special provision was added at the time of the 1971 revision of the convention because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.
The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.
The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application.
In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. This organization was later succeeded by the World Intellectual Property Organization (WIPO), which launched the 1996 WIPO Performances and Phonograms Treaty and the 1996 WIPO Copyright Treaty (both in force from 2002); these treaties enacted greater restrictions on the use of technology to copy works in the nations that ratified them. The Trans-Pacific Partnership includes intellectual property provisions relating to copyright.
Copyright laws and authors' right laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union require their member states to comply with them. All member states of the World Trade Organization are obliged to establish minimum levels of copyright protection. Nevertheless, important differences between the national regimes continue to exist.
Obtaining protection
Ownership
The original holder of the copyright may be the employer of the author rather than the author themself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer as a "work for hire". Typically, the first owner of a copyright is the person who created the work, i.e. the author. But when more than one person creates the work, a case of joint authorship can be made provided some criteria are met.
Eligible works
Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.
Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough not to be judged copies of Disney's.
Originality
Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.
Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.
Registration
In all countries where the Berne Convention standards apply, copyright is automatic and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)
A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.
Fixing
The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states:
Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance".
Note this provision of US law:
Copyright notice
Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle; Unicode U+00A9), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle; Unicode U+2117), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved, which indicates that the copyright holder reserves all rights for their own use, was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it; whether such material is watermarked, signed, or carries any other indication of copyright is another matter.
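A minimal Python sketch of assembling a notice in the format just described follows; the holder names are invented for illustration, and nothing here is a statement of current legal requirements.

def copyright_notice(year, holder, sound_recording=False):
    """Symbol (© for most works, ℗ for sound recordings) + year of first publication + holder."""
    symbol = "\u2117" if sound_recording else "\u00A9"  # ℗ (U+2117) or © (U+00A9)
    return f"{symbol} {year} {holder}"

print(copyright_notice(1988, "Example Press"))          # © 1988 Example Press
print(copyright_notice(1988, "Example Records", True))  # ℗ 1988 Example Records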
In 1989 the United States enacted the Berne Convention Implementation Act, amending the Copyright Act of 1976 to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful.
Publisher's copyright
In the UK, the publisher of a work automatically owns the copyright in the "typographical arrangement of a published work", i.e. its layout and general appearance as a published work. This copyright lasts for 25 years after the end of the year in which the edition containing that arrangement was first published.
Enforcement
Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing)
In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.
Self-enforcement measures
With older technology like paintings, books, phonographs, and film, it is generally not feasible for consumers to make copies on their own, so producers can simply require payment when transferring physical possession of the storage medium. The equivalent for digital online content is a paywall.
The introduction of the photocopier, cassette tape, and videotape made it easier for consumers to copy materials like books and music, but each time a copy was made, it lost some fidelity. Digital media like text, audio, video, and software (even when stored on physical media like compact discs and DVDs) can be copied losslessly, and shared on the Internet, creating a much bigger threat to producer revenue. Some have used digital rights management technology to restrict non-playback access through encryption and other means. Digital watermarks can be used to trace copies, deterring infringement with a more credible threat of legal consequences. Copy protection is used for both digital and pre-Internet electronic media.
Copyright infringement
For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement.
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content accessed on YouTube does not necessarily hurt sales and instead has the potential to increase them.
According to the IP Commission Report the annual cost of intellectual property infringement to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud.
Rights granted
According to the World Intellectual Property Organization, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights, or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.
Economic rights
With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:
reproduction of the work in various forms, such as printed publications or sound recordings;
distribution of copies of the work;
public performance of the work;
broadcasting or other communication of the work to the public;
translation of the work into other languages; and
adaptation of the work, such as turning a novel into a screenplay.
Moral rights
Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:
the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and
the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author's honour or reputation (sometimes called the right of integrity).
These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors' economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. In recent debates at the US Copyright Office on whether moral rights should be incorporated into the framework of US copyright law, the Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. It further concluded that there is no need for the creation of a blanket moral rights statute at this time, although there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.
In the copyright law of the United States, several exclusive rights are granted to the holder of a copyright, as are listed below:
protection of the work;
to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed, etc.
to produce copies or reproductions of the work and to sell those copies; (including, typically, electronic copies)
to import or export the work;
to create derivative works; (works that adapt the original work)
to perform or display the work publicly;
to sell or cede these rights to others;
to transmit or display by radio, video or internet.
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This aspect of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit them to not use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.
UK copyright law gives creators both economic rights and moral rights. 'Copying' someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas 'mutilating' it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, generally known as the right of attribution, and the right not to have one's work subjected to 'derogatory treatment', that is, the right of integrity.
Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyright Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both economic and moral rights under different provisions of its Indian Copyright Act of 1957.
Duration
Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.
The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-war extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.
In the United States, all books and other works, except for sound recordings, published before 1929 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.
But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.
In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was the subject of substantial criticism following allegations that the bill was strongly promoted by corporations which had valuable copyrights which otherwise would have expired.
Limitations and exceptions
In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.
Idea–expression dichotomy and the merger doctrine
The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
The first-sale doctrine and exhaustion of rights
Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.
Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. While this doctrine permits the transfer of the particular legitimate copy involved, it does not permit making or distributing additional copies.
In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved textbooks that had been manufactured abroad with the publisher-plaintiff's permission; the defendant, without permission from the publisher, imported the textbooks and resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.
In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.
Fair use and fair dealing
Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:
the purpose and character of one's use;
the nature of the copyrighted work;
what amount and proportion of the whole work was taken;
the effect of the use upon the potential market for or value of the copyrighted work.
In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however, in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer.
In the United States the AHRA (Audio Home Recording Act of 1992, codified as Chapter 10 of Title 17 of the United States Code) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders.
Later acts amended US copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution. In Lenz v. Universal Music Corp., the United States Court of Appeals for the Ninth Circuit affirmed the lower court decision, holding that "fair use is 'authorized by the law' and a copyright holder must consider the existence of fair use before sending a takedown notification" under the Digital Millennium Copyright Act.
EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are:
photographic reproductions on paper or any similar medium of works (excluding sheet music) provided that the rightholders receive fair compensation;
reproduction made by libraries, educational establishments, museums or archives, which are non-commercial;
archival reproductions of broadcasts;
uses for the benefit of people with a disability;
for demonstration or repair of equipment;
for non-commercial research or private study;
when used in parody.
Accessible copies
It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder.
Religious Service Exemption
In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright.
Useful articles
In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections.
Transfer, assignment and licensing
A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to benefit, from production and marketing capabilities far beyond their own. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.
A transfer or license may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus, exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.
Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.
Free licenses
Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including by order of longevity the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licenses are the GNU General Public License, BSD licenses and some Creative Commons licenses.
Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.
Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. Approximately 130 million individuals have received such licenses.
Criticism
Some sources are critical of particular aspects of the copyright system. This is known as a debate over copynorms. Particularly to the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, the copyright aspects of hyperlinking and framing.
Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig. Lessig coined the term permission culture to describe a worst-case system. The documentaries Good Copy Bad Copy and RiP!: A Remix Manifesto discuss copyright. Some suggest an alternative compensation system. In Europe, consumers have pushed back against the rising costs of music, film and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.
Public domain
Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying public domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright.
See also
Adelphi Charter
Artificial scarcity
Authors' rights and related rights, roughly equivalent concepts in civil law countries
Conflict of laws
Copyfraud
Copyleft
Copyright abolition
Copyright Alliance
Copyright alternatives
Copyright for Creativity
Copyright in architecture in the United States
Copyright on the content of patents and in the context of patent prosecution
Criticism of copyright
Criticism of intellectual property
Directive on Copyright in the Digital Single Market (European Union)
Copyright infringement
Copyright Remedy Clarification Act (CRCA)
Digital rights management
Digital watermarking
Entertainment law
Freedom of panorama
Information literacies
Intellectual property protection of typefaces
List of Copyright Acts
List of copyright case law
Literary property
Model release
Paracopyright
Philosophy of copyright
Photography and the law
Pirate Party
Printing patent, a precursor to copyright
Private copying levy
Production music
Rent-seeking
Reproduction fees
Samizdat
Software copyright
Threshold pledge system
World Book and Copyright Day
References
Further reading
Armstrong, E. (1990). Before copyright: the French book-privilege system, 1498-1526. Cambridge University Press.
Atkinson, Juliette (2012). "'Alexander the Great': Dumas's Conquest of Early-Victorian England". Papers of the Bibliographical Society of America 106 (4): 417–47.
Ellis, Sara R. Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic, 78 Tenn. L. Rev. 163 (2010), available at Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic.
Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002.
Johns, A. (2009). Piracy: the intellectual property wars from Gutenberg to Gates. University of Chicago Press.
Lehman, Bruce: Intellectual Property and the National Information Infrastructure (Report of the Working Group on Intellectual Property Rights, 1995)
Lindsey, Marc: Copyright Law on Campus. Washington State University Press, 2003.
Loewenstein, J. (2002). The author's due: printing and the prehistory of copyright. The University of Chicago Press.
Mazzone, Jason. Copyfraud. SSRN
McDonagh, Luke. Is Creative use of Musical Works without a licence acceptable under Copyright? International Review of Intellectual Property and Competition Law (IIC) 4 (2012) 401–426, available at SSRN
Rife, by Martine Courant. Convention, Copyright, and Digital Writing (Southern Illinois University Press; 2013) 222 pages; Examines legal, pedagogical, and other aspects of online authorship.
Rose, M. (1995). Authors and Owners: The Invention of Copyright. Harvard University Press.
Shipley, David E. "Thin But Not Anorexic: Copyright Protection for Compilations and Other Fact Works" UGA Legal Studies Research Paper No. 08-001; Journal of Intellectual Property Law, Vol. 15, No. 1, 2007.
Silverthorne, Sean. "Music Downloads: Pirates – or Customers?" Harvard Business School Working Knowledge, 2004.
Sorce Keller, Marcello. "Originality, Authenticity and Copyright", Sonus, VII(2007), no. 2, pp. 77–85.
External links
WIPOLex from WIPO; global database of treaties and statutes relating to intellectual property
Copyright Berne Convention: Country List – a list of the 164 members of the Berne Convention for the protection of literary and artistic works
"Copyright and State Sovereign Immunity", August 2021, U.S. Copyright Office
"The Multi-Billion-Dollar Piracy Industry with Tom Galvin of Digital Citizens Alliance", 27 August 2021 by David Newhoff, The Illusion of More podcast
Education
Copyright Cortex
A Bibliography on the Origins of Copyright and Droit d'Auteur
MIT OpenCourseWare 6.912 Introduction to Copyright Law – free self-study course with video lectures, as offered during the January 2006 Independent Activities Period (IAP)
US
Copyright Law of the United States Documents, US Government
Compendium of Copyright Practices (3rd ed.) United States Copyright Office
Copyright from UCB Libraries GovPubs
Early Copyright Records From the Rare Book and Special Collections Division at the Library of Congress
UK
Copyright: Detailed information at the UK Intellectual Property Office
Fact sheet P-01: UK copyright law (Issued April 2000, amended 25 November 2020) at the UK Copyright Service
Data management
Intellectual property law
Monopoly (economics)
Product management
Public records
Intangible assets | Copyright | ["Technology"] | 10,299 | ["Data management", "Data"] |
5,285 | https://en.wikipedia.org/wiki/STS-51-F | STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985.
While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, which was an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light.
During launch, Challenger experienced multiple sensor failures in its Engine 1 Center SSME engine, which led to it shutting down and the shuttle had to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude.
Crew
As with previous Spacelab missions, the crew was divided between two 12-hour shifts. Acton, Bridges and Henize made up the "Red Team" while Bartoe, England and Musgrave comprised the "Blue Team"; commander Fullerton could take either shift when needed. Challenger carried two Extravehicular Mobility Units (EMU) in the event of an emergency spacewalk, which would have been performed by England and Musgrave.
Crew seat assignments
Launch
STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 17:00 EDT, after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink.
At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperature sensor displayed readings near the redline for engine shutdown. Booster Systems Engineer Jenny M. Howard acted quickly to recommend that the crew inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that may have resulted in the loss of crew and vehicle (LOCV).
The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower orbital altitude than had been planned for the mission.
Mission summary
STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft. Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite mission replanning necessitated by Challenger's abort-to-orbit trajectory, the Spacelab mission was declared a success.
The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission.
Spacelab Infrared Telescope
The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a helium-cooled infrared telescope, observing light at wavelengths between 1.7 and 118 μm. It was thought that heat emissions from the Shuttle would corrupt long-wavelength data, but the instrument still returned useful astronomical data. Another problem was that a piece of mylar insulation broke loose and floated into the line-of-sight of the telescope. The IRT collected infrared data on 60% of the galactic plane (see also List of largest infrared telescopes). A later space mission that experienced a stray-light problem from debris was the Gaia astrometry spacecraft, launched in 2013 by ESA; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield.
Other payloads
The Plasma Diagnostics Package (PDP), which had been previously flown on STS-3, made its return on the mission, and was part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission.
In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans from Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to American president Ronald Reagan that Coke should not be the first cola in space. The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F.
Blue Team tested Coke, and Red Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity.
In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated with the observations made from Spacelab 2.
Landing
Challenger landed at Edwards Air Force Base, California, on August 6, 1985, at 12:45:26 p.m. PDT. The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on August 11, 1985.
Mission insignia
The mission insignia was designed by Houston, Texas, artist Skip Bradley. Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they were in relative to the Sun during the flight. The nineteen stars indicate that the mission is the 19th shuttle flight.
Legacy
One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy, and this led to later infrared telescopes being designed as free-flying spacecraft rather than being operated from the Shuttle orbiter.
See also
List of human spaceflights
List of Space Shuttle missions
Salyut 7 (a space station of the Soviet Union also in orbit at this time)
Soyuz T-13 (a mission to salvage that space station in the summer of 1985)
References
External links
NASA mission summary
Press Kit
STS-51F Video Highlights
Space Coke can
Carbonated Drinks in Space
YouTube: STS-51F launch, abort and landing
July 12 launch attempt
Space Shuttle Missions Summary
Space Shuttle missions
Edwards Air Force Base
1985 in spaceflight
1985 in the United States
Crewed space observatories
Spacecraft launched in 1985
Spacecraft which reentered in 1985 | STS-51-F | ["Astronomy"] | 1,853 | ["Space telescopes", "Crewed space observatories"] |
5,295 | https://en.wikipedia.org/wiki/Character%20encoding | Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using computers. The numerical values that make up a character encoding are known as code points and collectively comprise a code space, a code page, or character map.
Early character encodings that originated with optical or electrical telegraphy and in early computers could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. Over time, character encodings capable of representing more characters were created, such as ASCII, the ISO/IEC 8859 encodings, various computer vendor encodings, and Unicode encodings such as UTF-8 and UTF-16.
The most popular character encoding on the World Wide Web is UTF-8, which is used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.
History
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).
Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has replaced most earlier character encodings, but the path of code development to the present is fairly well known.
The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues.
Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine.
When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM used several binary-coded decimal (BCD) six-bit character encoding schemes, starting as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. These BCD encodings extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping them easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. IBM's BCD encodings were the precursors of their Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters.
In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler seven-bit code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Eight-bit extended ASCII encodings, such as various vendor extensions and the ISO/IEC 8859 series, supported all ASCII characters as well as additional non-ASCII characters.
In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count.
The compromise solution that was eventually found was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.
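UTF-8 is the best-known example of this approach: the high bits of the first code unit signal how many continuation units follow. The sketch below is only an illustration, assuming Python 3; the helper name utf8_bytes is hypothetical, and the built-in codec is used only to check the result.

def utf8_bytes(cp):
    # Simplified UTF-8 byte layout (omits surrogate and range checks).
    if cp < 0x80:                     # 1 unit:  0xxxxxxx
        return bytes([cp])
    elif cp < 0x800:                  # 2 units: 110xxxxx 10xxxxxx
        return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
    elif cp < 0x10000:                # 3 units: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
    else:                             # 4 units: 11110xxx plus three continuation bytes
        return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

assert utf8_bytes(0x20AC) == "\u20AC".encode("utf-8")   # U+20AC (euro sign) encodes as the three bytes E2 82 AC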
Terminology
Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.
A character is a minimal unit of text that has semantic value.
A character set is a collection of elements used to represent text. For example, the Latin alphabet and Greek alphabet are both character sets.
A coded character set is a character set mapped to a set of unique numbers. For historical reasons, this is also often referred to as a code page.
A character repertoire is the set of characters that can be represented by a particular coded character set. The repertoire may be closed, meaning that no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series); or it may be open, allowing additions (as is the case with Unicode and to a limited extent Windows code pages).
A code point is a value or position of a character in a coded character set.
A code space is the range of numerical values spanned by a coded character set.
A code unit is the minimum bit combination that can represent a character in a character encoding (in computer science terms, it is the word size of the character encoding). For example, common code units include 7-bit, 8-bit, 16-bit, and 32-bit. In some encodings, some characters are encoded using multiple code units; such an encoding is referred to as a variable-width encoding.
Code pages
"Code page" is a historical name for a coded character set.
Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437).
Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general.
The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".
Code units
The code unit size is equivalent to the bit measurement for the particular encoding:
A code unit in ASCII consists of 7 bits;
A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits;
A code unit in UTF-16 consists of 16 bits;
A code unit in UTF-32 consists of 32 bits.
Code points
A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:
UTF-8: code points map to a sequence of one, two, three or four code units.
UTF-16: code units are 16 bits long, twice the size of 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs" (a worked example follows this list).
UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.
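The surrogate-pair arithmetic used by UTF-16 for supplementary code points can be written out in a few lines. This is a sketch assuming Python 3; U+10400 is the supplementary character that appears in the worked example later in this article.

cp = 0x10400                                   # a supplementary code point
offset = cp - 0x10000                          # 20-bit offset: 0x00400
high = 0xD800 + (offset >> 10)                 # high (lead) surrogate: 0xD801
low = 0xDC00 + (offset & 0x3FF)                # low (trail) surrogate: 0xDC00
print(hex(high), hex(low))                     # 0xd801 0xdc00
print("\U00010400".encode("utf-16-be").hex())  # 'd801dc00': the same pair from the built-in codec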
Characters
Exactly what constitutes a character varies between character encodings.
For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems.
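For instance, the letter "é" can be stored either as the precomposed character U+00E9 or as "e" followed by the combining acute accent U+0301; the two spellings compare as different code point sequences until they are normalized. The snippet below is a sketch assuming Python 3 and its standard unicodedata module.

import unicodedata

precomposed = "\u00E9"        # é as a single precomposed character
combining = "e\u0301"         # e followed by COMBINING ACUTE ACCENT
print(precomposed == combining)                                # False: different code point sequences
print(unicodedata.normalize("NFC", combining) == precomposed)  # True: composed (precomposed) form
print(unicodedata.normalize("NFD", precomposed) == combining)  # True: decomposed form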
Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.
Unicode encoding model
Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process:
An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.
A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points.
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.
A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU).
Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.
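The difference between the simple schemes UTF-16BE and UTF-16LE and the compound scheme UTF-16, which prepends a byte order mark, is visible directly in the encoded octets. The following is a sketch assuming Python 3; the output of the plain "utf-16" codec depends on the platform's native byte order, so the last comment assumes a little-endian machine.

text = "A"                              # U+0041
print(text.encode("utf-16-be").hex())   # '0041': big-endian code unit, no byte order mark
print(text.encode("utf-16-le").hex())   # '4100': little-endian code unit, no byte order mark
print(text.encode("utf-16").hex())      # 'fffe4100': byte order mark followed by a little-endian unit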
Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.
The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.
Unicode code points
In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided in 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly-used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
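As a small illustration, assuming Python 3, the plane of a code point is its value divided (integer division) by 0x10000, and the conventional U+ notation is simply the value printed in hexadecimal with at least four digits.

for ch in ("A", "€", "𐐀"):
    cp = ord(ch)
    plane = cp // 0x10000
    print(f"U+{cp:04X} is in plane {plane}")
# U+0041 is in plane 0   (Basic Multilingual Plane)
# U+20AC is in plane 0   (Basic Multilingual Plane)
# U+10400 is in plane 1  (a supplementary character)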
Example
Consider a string of the letters "ab̲c𐐀", that is, a string containing a Unicode combining character as well as a supplementary character. This string has several Unicode representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements:
Four composed characters:
, , ,
Five graphemes:
, , , ,
Five Unicode code points:
, , , ,
Five UTF-32 code units (32-bit integer values):
, , , ,
Six UTF-16 code units (16-bit integers)
, , , , ,
Nine UTF-8 code units (8-bit values, or bytes)
, , , , , , , ,
Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.
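The counts above can be reproduced directly. This is a sketch assuming Python 3; the string literal uses U+0332 COMBINING LOW LINE for the underlined "b", which is an assumption about the exact combining character intended.

s = "ab\u0332c\U00010400"               # "ab̲c𐐀"
print(len(s))                           # 5 code points
print(len(s.encode("utf-32-be")) // 4)  # 5 UTF-32 code units
print(len(s.encode("utf-16-be")) // 2)  # 6 UTF-16 code units (the last character needs a surrogate pair)
print(len(s.encode("utf-8")))           # 9 UTF-8 code units (bytes)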
Transcoding
As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below.
Cross-platform:
Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu.
iconv – a program and standardized API to convert encodings
luit – a program that converts encoding of input and output to programs running interactively
International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C.
Windows:
Encoding.Convert – .NET API
MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI
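As a minimal illustration of transcoding with a general-purpose language rather than a dedicated tool, a file can be re-encoded by decoding it with its source encoding and writing it back out with the target one. This is a sketch assuming Python 3; the file names are hypothetical.

# Re-encode a text file from Windows-1252 to UTF-8.
with open("legacy.txt", encoding="cp1252") as src, \
        open("converted.txt", "w", encoding="utf-8") as dst:
    dst.write(src.read())

The same conversion can be performed with the iconv program listed above, using its -f (from encoding) and -t (to encoding) options.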
Common character encodings
The most used character encoding on the web is UTF-8, used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.
ISO 646
ASCII
EBCDIC
ISO 8859:
ISO 8859-1 Western Europe
ISO 8859-2 Western and Central Europe
ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto)
ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp)
ISO 8859-5 Cyrillic alphabet
ISO 8859-6 Arabic
ISO 8859-7 Greek
ISO 8859-8 Hebrew
ISO 8859-9 Western Europe with amended Turkish character set
ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set
ISO 8859-11 Thai
ISO 8859-13 Baltic languages plus Polish
ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh)
ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1
ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic)
CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872
MS-Windows character sets:
Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian)
Windows-1251 for Cyrillic alphabets
Windows-1252 for Western languages
Windows-1253 for Greek
Windows-1254 for Turkish
Windows-1255 for Hebrew
Windows-1256 for Arabic
Windows-1257 for Baltic languages
Windows-1258 for Vietnamese
Mac OS Roman
KOI8-R, KOI8-U, KOI7
MIK
ISCII
TSCII
VISCII
JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms.
Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS)
EUC-JP
ISO-2022-JP
JIS X 0213 is an extended version of JIS X 0208.
Shift_JIS-2004
EUC-JIS-2004
ISO-2022-JP-2004
Chinese Guobiao
GB 2312
GBK (Microsoft Code page 936)
GB 18030
Taiwan Big5 (a more famous variant is Microsoft Code page 950)
Hong Kong HKSCS
Korean
KS X 1001 is a Korean double-byte character encoding standard
EUC-KR
ISO-2022-KR
Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane')
UTF-8
UTF-16
UTF-32
ANSEL or ISO/IEC 6937
See also
Percent-encoding
Alt code
Character encodings in HTML
:Category:Character encoding – articles related to character encoding in general
:Category:Character sets – articles detailing specific character encodings
Hexadecimal representations
Mojibake – character set mismap
Mojikyō – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure
Presentation layer
TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters.
Universal Character Set characters
Charset sniffing – used in some applications when character encoding metadata is not available
References
Further reading
External links
Character sets registered by Internet Assigned Numbers Authority (IANA)
Characters and encodings, by Jukka Korpela
Unicode Technical Report #17: Character Encoding Model
Decimal, Hexadecimal Character Codes in HTML Unicode – Encoding converter
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky (Oct 10, 2003)
Encoding | Character encoding | [
"Technology"
] | 4,729 | [
"Natural language and computing",
"Character encoding"
] |
5,299 | https://en.wikipedia.org/wiki/Carbon | Carbon is a chemical element; it has symbol C and atomic number 6. It is nonmetallic and tetravalent—meaning that its atoms are able to form up to four covalent bonds because its valence shell holds four electrons. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, 12C and 13C being stable, while 14C is a radionuclide, decaying with a half-life of 5,700 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth, enables this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
Characteristics
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at about 10.8 MPa and 4,600 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2.
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
C + H2O → CO + H2.
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a remarkable range of physical extremes.
Allotropes
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0 Pa, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond for T < 400 K, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of flat hexagonal cells only, some of the cells of which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond.
In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds.
In the vapor phase, some of the carbon is in the form of highly reactive diatomic carbon, dicarbon (C2). When excited, this gas glows green.
Occurrence
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is about 5.972 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
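As a rough back-of-the-envelope check, assuming the commonly quoted Earth mass of about 5.972 × 10²⁴ kg:

```python
# Sketch: check the solid-earth carbon estimate quoted above.
earth_mass_kg = 5.972e24          # assumed mass of the Earth
carbon_fraction = 730e-6          # 730 ppm by mass, as quoted above

carbon_gt = earth_mass_kg * carbon_fraction / 1e12   # 1 Gt = 1e12 kg
print(round(carbon_gt / 1e6), "million gigatonnes")  # ≈ 4360
```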
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven reserves of natural gas contain about 105 gigatonnes of carbon, but studies estimate that "unconventional" deposits such as shale gas represent roughly another 540 gigatonnes of carbon.
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon at 500, 2,500, or 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, about one part per trillion of atmospheric carbon dioxide contains carbon-14.
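Written in the same notation as the other reactions in this article, this neutron capture is:
n + 14N → 14C + p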
Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids can be used in hypothetical space-based carbon mining, which may be possible in the future, but is currently technologically impossible.
Isotopes
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (12C) forms 98.93% of the carbon on Earth, while carbon-13 (13C) forms the remaining 1.07%. The concentration of 12C is further increased in biological materials because biochemical reactions discriminate against 13C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope 13C.
Carbon-14 (14C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β− emission. Because of its relatively short half-life of about 5,700 years, 14C is virtually absent in ancient rocks. The amount of 14C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
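To illustrate how such dating works (a minimal sketch, assuming simple exponential decay and the roughly 5,700-year half-life quoted above):

```python
# Sketch: estimate a sample's age from the fraction of its original 14C remaining.
import math

HALF_LIFE_YEARS = 5_700   # half-life of carbon-14, as quoted above

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years, given N/N0, the fraction of the original 14C left."""
    return -HALF_LIFE_YEARS * math.log(remaining_fraction) / math.log(2)

print(round(radiocarbon_age(0.5)))    # 5700  (exactly one half-life)
print(round(radiocarbon_age(0.01)))   # 37870 — near the ~40,000-year practical limit
```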
There are 15 known isotopes of carbon and the shortest-lived of these is 8C, which decays through proton emission and has a half-life of 3.5 × 10⁻²¹ s. The exotic 19C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation in stars
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernova, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, 12CO, 13CO, and C18O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Carbon cycle
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
Compounds
Organic compounds
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen, it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965–1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Inorganic compounds
Commonly carbon-containing compounds which are associated with minerals or which do not contain bonds to the other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN−) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP−), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C4−) or acetylides (C22−) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η2-alkene compounds (for example, Zeise's salt), and η3-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]2− unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]2+ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η5-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to a η5-C5Me5− fragment through all five of the carbons of the ring.
Note that in the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
History and etymology
The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, that was discovered in 1985 includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous.
Production
Graphite
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to science, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
Diamond
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken in order to prevent larger diamonds from being destroyed in this process and subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Applications
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a moulding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall, and metal.
Diamonds
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity. There is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Precautions
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to Drosophila.
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN), and carbon monoxide; and such essentials to life as glucose and protein.
See also
Carbon chauvinism
Carbon detonation
Carbon footprint
Carbon star
Carbon planet
Gas carbon
Low-carbon economy
Timeline of carbon nanotubes
References
Bibliography
External links
Carbon at The Periodic Table of Videos (University of Nottingham)
Carbon on Britannica
Extensive Carbon page at asu.edu (archived 18 June 2010)
Electrochemical uses of carbon (archived 9 November 2001)
Carbon—Super Stuff. Animation with sound and interactive 3D-models. (archived 9 November 2012)
Allotropes of carbon
Chemical elements with hexagonal planar structure
Chemical elements
Native element minerals
Polyatomic nonmetals
Reactive nonmetals
Reducing agents | Carbon | [
"Physics",
"Chemistry",
"Materials_science"
] | 9,418 | [
"Allotropes of carbon",
"Chemical elements",
"Redox",
"Allotropes",
"Nonmetals",
"Reducing agents",
"Polyatomic nonmetals",
"Reactive nonmetals",
"Atoms",
"Matter"
] |
5,300 | https://en.wikipedia.org/wiki/Computer%20data%20storage | Computer data storage or digital data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
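As a minimal sketch (assuming only the Python standard library), any text becomes a sequence of bytes, and hence bits, once a character encoding is chosen:

```python
# Sketch: a string becomes bytes (and bits) once a character encoding is chosen.
text = "storage"
data = text.encode("ascii")       # one byte per character for ASCII text

print(len(data), "bytes =", len(data) * 8, "bits")   # 7 bytes = 56 bits
print(format(data[0], "08b"))                        # 01110011, the bits of 's'
```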
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur in low probabilities due to random bit value flipping, or "physical bit fatigue", loss of the physical bit in the storage of its ability to maintain a distinguishable value (0 or 1), or due to errors in inter or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
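A minimal sketch of this idea (assuming zlib.crc32 as a convenient standard-library CRC, not necessarily the exact polynomial a given device uses) stores a checksum with the data and recomputes it later to detect corruption:

```python
# Sketch: detect corruption by comparing a stored CRC against a recomputed one.
import zlib

payload = bytearray(b"important record")
stored_crc = zlib.crc32(payload)      # computed when the data is written

payload[3] ^= 0x01                    # simulate a single flipped bit in storage
if zlib.crc32(payload) != stored_crc:
    print("error detected; retry the read or recover from redundancy")
```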
Data compression methods make it possible in many cases (such as in a database) to represent a string of bits by a shorter bit string ("compress") and to reconstruct the original string ("decompress") when needed. This uses substantially less storage (tens of percent less) for many types of data at the cost of more computation (compressing and decompressing when needed). Analysis of the trade-off between the storage cost saving and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
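A brief sketch of that trade-off (illustrative only; zlib is just one of many possible codecs):

```python
# Sketch: trade computation for storage by compressing redundant data.
import zlib

original = b"ABABABAB" * 1_000            # highly repetitive data compresses well
compressed = zlib.compress(original)

print(len(original), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == original   # lossless: original is recoverable
```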
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Hierarchy of storage
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually fast but temporary semiconductor read-write memory, typically DRAM (dynamic RAM) or other such devices. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it's not needed by running software. Spare memory can be utilized as RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
The primary storage, including ROM, EEPROM, NOR flash, and RAM, are usually byte-addressable.
Secondary storage
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on an HDD reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
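As a rough sketch of the page-replacement idea described above, the following Python snippet models a tiny primary memory that evicts its least-recently-used page to a simulated swap area. The class name, page capacity, and access pattern are invented for illustration and do not correspond to any real operating system's policy.

```python
from collections import OrderedDict

class TinyVirtualMemory:
    """Toy model: a few page slots of 'RAM' backed by a 'swap' area on disk."""

    def __init__(self, ram_slots=3):
        self.ram = OrderedDict()   # page_id -> data, ordered from least to most recently used
        self.swap = {}             # pages evicted to simulated secondary storage
        self.ram_slots = ram_slots

    def access(self, page_id):
        if page_id in self.ram:                            # hit: refresh recency
            self.ram.move_to_end(page_id)
            return self.ram[page_id]
        data = self.swap.pop(page_id, f"page-{page_id}")   # page fault: fetch from swap
        if len(self.ram) >= self.ram_slots:                # RAM full: evict the least-recently-used page
            victim, victim_data = self.ram.popitem(last=False)
            self.swap[victim] = victim_data
        self.ram[page_id] = data
        return data

vm = TinyVirtualMemory(ram_slots=3)
for page in [1, 2, 3, 1, 4, 2]:         # pages 2 and then 3 get pushed out to swap along the way
    vm.access(page)
print(sorted(vm.ram), sorted(vm.swap))  # [1, 2, 4] [3]
```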
Secondary storage, including HDDs, optical disc drives (ODDs) and SSDs, is usually block-addressable.
Tertiary storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
Online storage is immediately available for I/O.
Nearline storage is not immediately available, but can be made online quickly without human intervention.
Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Volatility
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Mutability
Read/write storage or mutable storage: Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
Slow write, fast read storage: Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD.
Write once storage: Write once read many (WORM) allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R.
Read only storage: Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM.
Accessibility
Random access: Any location in storage can be accessed at any moment in approximately the same amount of time. Such a characteristic is well suited for primary and secondary storage. Most semiconductor memories, flash memories and hard disk drives provide random access, though semiconductor and flash memories have much lower latency than hard disk drives, as no mechanical parts need to be moved.
Sequential access: The accessing of pieces of information is in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. Such a characteristic is typical of off-line storage.
Addressability
Location-addressable: Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient but burdensome for humans.
File addressable: Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable: Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content addressable memory is often used in a computer's CPU cache.
Capacity
Raw capacity: The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Memory storage density: The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
Performance
Latency: The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage, and second for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory) and, in the case of sequential access storage, minimum, maximum and average latency.
Throughput: The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also, accessing media sequentially, as opposed to randomly, typically yields maximum throughput. (A worked example combining latency and throughput follows below.)
Granularity: The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.
Reliability: The probability of spontaneous bit value change under various conditions, or overall failure rate.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
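A back-of-the-envelope model may help connect the latency and throughput figures above; the device numbers below are illustrative orders of magnitude, not measurements of any particular product.

```python
def transfer_time(latency_s, throughput_mb_per_s, size_mb):
    """Crude model: total time = access latency + size / sustained throughput."""
    return latency_s + size_mb / throughput_mb_per_s

# Hypothetical figures: an HDD (~10 ms access, ~150 MB/s) versus DRAM (~100 ns, ~10,000 MB/s)
for name, latency, throughput in [("HDD", 10e-3, 150), ("DRAM", 100e-9, 10_000)]:
    print(f"{name}: {transfer_time(latency, throughput, 1):.6f} s for 1 MB")
# For a 1 MB block the HDD spends more time positioning than transferring,
# which is why sequential and block access matter so much for disks.
```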
Energy use
Storage devices that reduce fan usage or automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by 90 percent.
2.5-inch hard disk drives often consume less power than larger ones. Low capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.
Security
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Vulnerability and reliability
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Error detection
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
Storage media
The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:
Magnetic disk;
Floppy disk, used for off-line storage;
Hard disk drive, used for secondary storage.
Magnetic tape, used for tertiary and off-line storage;
Carousel memory (magnetic rolls).
In early computers, magnetic storage was also used as:
Primary storage in the form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory;
Tertiary (e.g. NCR CRAM) or off-line storage in the form of magnetic cards;
Magnetic tape was then often used for secondary storage.
Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, its life span is limited by mechanical parts.
Optical
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use:
CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage;
Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.
Paper
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Other storage media or substrates
Vacuum-tube memory A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable, and the Selectron tube was expensive.
Electro-acoustic memory Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage.
Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic, onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information.
Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory stores information in polymer that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2).
Magnetic photoconductors store magnetic information, which can be modified by low-light illumination.
DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017 scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA.
Related technologies
Redundancy
While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), a storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring (replication) – A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously with some overhead and possibly some delays. The upside is the possible concurrent reading of the same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational and is being utilized to generate a new copy on another device (usually available operational in a pool of stand-by devices for this purpose).
Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of devices to fail and be replaced with the content restored (Device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage, when compared with n=2, at the cost of more processing during both regular operation (with often reduced performance) and defective device replacement.
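The following sketch illustrates, in a highly simplified way, how a RAID-5-style group can survive the loss of a single device by keeping an XOR parity block alongside the data blocks; real RAID implementations add striping, parity rotation, and much more, so this is only a toy model.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # three hypothetical data 'disks'
parity = xor_blocks(data)               # the fourth 'disk' holds the parity block

# Simulate the failure of the second disk and rebuild it from the survivors plus parity
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)        # b'BBBB'
```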
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
Network connectivity
Secondary or tertiary storage may be connected to a computer using computer networks. This concept does not pertain to primary storage, which is shared between multiple processors to a lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach. The retronym was coined recently, together with NAS and SAN.
Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at block-addressing (raw) level, leaving it to attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.
Robotic storage
Large quantities of individual magnetic tapes, and optical or magneto-optical discs, may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, by analogy. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy: long-unused files are automatically migrated from fast hard disk storage to libraries or jukeboxes, and retrieved back to disk if they are needed again.
See also
Primary storage topics
Aperture (computer memory)
Dynamic random-access memory (DRAM)
Memory latency
Mass storage
Memory cell (disambiguation)
Memory management
Memory leak
Virtual memory
Memory protection
Page address register
Stable storage
Static random-access memory (SRAM)
Secondary, tertiary and off-line storage topics
Cloud storage
Hybrid cloud storage
Data deduplication
Data proliferation
Data storage tag used for capturing research data
Disk utility
File system
List of file formats
Global filesystem
Flash memory
Geoplexing
Information repository
Noise-predictive maximum-likelihood detection
Object(-based) storage
Removable media
Solid-state drive
Spindle
Virtual tape library
Wait state
Write buffer
Write protection
Cold data
Data storage conferences
Storage Networking World
Storage World Conference
Notes
References
Further reading
Memory & storage, Computer history museum
Computer architecture | Computer data storage | [
"Technology",
"Engineering"
] | 7,047 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
5,306 | https://en.wikipedia.org/wiki/Chemical%20equilibrium | In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
Historical introduction
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
α A + β B ⇌ σ S + τ T
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:
rate of forward reaction = k+ {A}^α {B}^β
rate of backward reaction = k− {S}^σ {T}^τ
where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:
k+ {A}^α {B}^β = k− {S}^σ {T}^τ
and the ratio of the rate constants is also a constant, now known as an equilibrium constant:
Keq = k+ / k− = {S}^σ {T}^τ / ({A}^α {B}^β)
By convention, the products form the numerator.
However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,
a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:
If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
J. W. Gibbs suggested in 1873 that equilibrium is attained when the "available energy" (now known as Gibbs free energy or Gibbs energy) of the system is at its minimum value, assuming the reaction is carried out at a constant temperature and pressure. What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
ΔrG⊖ = −RT ln Keq
where R is the universal gas constant and T the temperature.
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
Kc = [S]^σ [T]^τ / ([A]^α [B]^β),
where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and encountered in high-school chemistry courses.
Thermodynamics
At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
(∂G/∂ξ)T,p = 0: equilibrium
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products:
α μA + β μB = σ μS + τ μT
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent.
μA = μA⊖ + RT ln{A}
(where μA⊖ is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
dG = V dp − S dT + Σi μi dNi.
Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient () and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as
(∂G/∂ξ)T,p = ΔrG
which is the Gibbs free energy change for the reaction. This results in:
ΔrG = Σi νi μi.
By substituting the chemical potentials:
μi = μi⊖ + RT ln{Ai},
the relationship becomes:
ΔrG = Σi νi μi⊖ + RT Σi νi ln{Ai}
Σi νi μi⊖ = ΔrG⊖,
which is the standard Gibbs energy change for the reaction that can be calculated using thermodynamical tables.
The reaction quotient is defined as:
Qr = {S}^σ {T}^τ / ({A}^α {B}^β)
Therefore,
ΔrG = ΔrG⊖ + RT ln Qr
At equilibrium:
ΔrG = 0
leading to:
0 = ΔrG⊖ + RT ln Keq
and
ΔrG⊖ = −RT ln Keq
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
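As a quick numerical illustration of the last relation, rearranged to Keq = exp(−ΔrG⊖/RT); the Gibbs energy value used here is arbitrary, chosen only for the example.

```python
import math

R = 8.314            # gas constant, J mol^-1 K^-1
T = 298.15           # temperature, K
dG_standard = -20e3  # hypothetical standard Gibbs energy change, J mol^-1

K = math.exp(-dG_standard / (R * T))
print(f"K = {K:.3g}")   # about 3.2e3: a negative standard Gibbs energy change favours products
```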
Addition of reactants or products
For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.
If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq and then
If activity of a reagent i increases, the reaction quotient decreases. Then Qr < Keq and ΔrG < 0: the reaction will shift to the right (i.e. in the forward direction, and thus more products will form).
If activity of a product j increases, then Qr > Keq and ΔrG > 0: the reaction will shift to the left (i.e. in the reverse direction, and thus less products will form).
Note that activities and equilibrium constants are dimensionless numbers.
Treatment of activity
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc and an activity coefficient quotient, Γ.
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation, specific ion interaction theory or Pitzer equations, may be used. However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
Concentration quotients
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by
I = ½ Σi ci zi²
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
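The definition of ionic strength translates directly into code; the concentrations below are invented for illustration.

```python
def ionic_strength(species):
    """species: iterable of (concentration in mol/L, ionic charge) pairs."""
    return 0.5 * sum(c * z**2 for c, z in species)

# Invented mixture: 0.10 M Na+, 0.10 M NO3-, 0.001 M Ca2+, 0.002 M Cl-
print(ionic_strength([(0.10, +1), (0.10, -1), (0.001, +2), (0.002, -1)]))   # 0.103
```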
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.
Metastable mixtures
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
2 SO2 + O2 ⇌ 2 SO3
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
Pure substances
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
.
A particular case is the self-ionization of water
2 H2O ⇌ H3O+ + OH−
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
Kw = {H3O+} {OH−}
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:
2 CO ⇌ CO2 + C
for which the equation (without solid carbon) is written as:
Multiple equilibria
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated.
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is product of the stepwise constants.
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants.
β1 and β2 are examples of association constants. Clearly and ; and
For multiple equilibrium systems, also see: theory of Response reactions.
Effect of temperature
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
d ln K / dT = ΔH⊖ / (RT²)
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
d ln K / d(1/T) = −ΔH⊖ / R
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
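Assuming ΔH⊖ is approximately constant over the temperature range, the van 't Hoff equation integrates to ln(K2/K1) = −(ΔH⊖/R)(1/T2 − 1/T1), which is easy to evaluate; the reaction data used below are hypothetical.

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1

def k_at_temperature(K1, T1, T2, dH_standard):
    """Integrated van 't Hoff equation, assuming a temperature-independent reaction enthalpy."""
    return K1 * math.exp(-dH_standard / R * (1.0 / T2 - 1.0 / T1))

# Hypothetical exothermic reaction: dH = -50 kJ/mol, K = 100 at 298 K
print(k_at_temperature(100.0, 298.0, 350.0, -50e3))   # roughly 5: K falls on heating, as expected
```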
Effect of electric and magnetic fields
The effect of electric field on equilibrium has been studied by Manfred Eigen among others.
Types of equilibrium
Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging to the same phase, whereas heterogeneous equilibrium comes into play for reactants and products in different phases.
In the gas phase: rocket engines
Industrial synthesis, such as that of ammonia in the Haber–Bosch process, takes place through a succession of equilibrium steps including adsorption processes
Atmospheric chemistry
Seawater and other natural waters: chemical oceanography
Distribution between two phases
log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug
Liquid–liquid extraction, Ion exchange, Chromatography
Solubility product
Uptake and release of oxygen by hemoglobin in blood
Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis
Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium
Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide
In certain oscillating reactions, the approach to equilibrium is not asymptotic but takes the form of a damped oscillation.
The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations.
When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle.
In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
Composition of a mixture
When the only equilibrium is the formation of a 1:1 adduct, there are many ways that the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
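For a monoprotic weak acid HA with total concentration C and acid dissociation constant Ka, the ICE-table bookkeeping reduces to the quadratic x² = Ka(C − x) with x = [H+]; a small sketch using a typical textbook Ka for acetic acid:

```python
import math

def weak_acid_pH(C, Ka):
    """Positive root of x**2 = Ka*(C - x), where x = [H+]."""
    x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
    return -math.log10(x)

print(round(weak_acid_pH(0.10, 1.8e-5), 2))   # about 2.88 for a 0.10 M acetic-acid-like solution
```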
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions.
Minimize the Gibbs energy of the system.
Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass.
Mass-balance equations
In general, the calculations are rather complicated. For instance, in the case of a dibasic acid, H2A, dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]2 and [OH] = Kw[H]−1
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants.
General expressions applicable to all systems with two reagents, A and B would be
It is easy to see how this can be extended to three or more reagents.
Polybasic acids
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
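A minimal sketch of that procedure for a dibasic acid, using invented cumulative association constants β1 and β2: for each chosen [H], the free [A] follows from the mass balance in A, and the species concentrations follow from the equilibrium expressions.

```python
# Hypothetical cumulative association constants: beta1 = [HA]/([A][H]), beta2 = [H2A]/([A][H]^2)
beta1, beta2 = 1.0e9, 1.0e13
TA = 0.010   # total concentration of A, mol/L

def speciation(pH):
    H = 10.0 ** (-pH)
    A = TA / (1 + beta1 * H + beta2 * H**2)    # mass balance in A solved for the free concentration
    return {"A": A, "HA": beta1 * A * H, "H2A": beta2 * A * H**2}

for pH in (2, 4, 6, 9, 11):
    conc = speciation(pH)
    dominant = max(conc, key=conc.get)
    print(pH, dominant, f"{100 * conc[dominant] / TA:.0f}%")
# H2A dominates at low pH, HA in between, and A at high pH, as in the aluminium diagram described below.
```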
The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq): it gives the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
Solution and precipitation
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are aluminium hydroxide complexes such as Al(OH)2+, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)4−, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
Minimization of Gibbs energy
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:
where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:
where μj⊖ is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:
where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:
where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λj to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
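In practice the constrained minimisation can be handed to a generic solver instead of writing out the Lagrange conditions by hand. The sketch below minimises the ideal-gas Gibbs energy of an N2O4/NO2 mixture at 298 K and 1 bar subject to nitrogen-atom conservation; the standard Gibbs energies of formation are approximate textbook values, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize   # SciPy is assumed to be available

R, T, P = 8.314, 298.15, 1.0          # J mol^-1 K^-1, K, bar (standard pressure taken as 1 bar)
mu0 = np.array([97.8e3, 51.3e3])      # approximate standard Gibbs energies of formation: N2O4, NO2 (J/mol)

def gibbs_over_RT(n):
    """Ideal-gas Gibbs energy of an {N2O4, NO2} mixture, divided by RT."""
    n_tot = n.sum()
    return float(np.sum(n * (mu0 / (R * T) + np.log(n * P / n_tot))))

# Element constraint: nitrogen atoms are conserved (2 per N2O4, 1 per NO2); starting from
# 1 mol of N2O4 gives 2 mol of N.  The oxygen constraint is redundant for these two species.
nitrogen = {"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - 2.0}

result = minimize(gibbs_over_RT, x0=np.array([0.5, 1.0]),
                  bounds=[(1e-12, None)] * 2, constraints=[nitrogen])
print(result.x.round(3))   # roughly [0.814, 0.373]: about 19% of the N2O4 dissociates
```

The amounts found this way are roughly consistent with the commonly quoted Kp of about 0.15 for this dissociation at room temperature, which is the answer the equilibrium-constant route would give.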
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:
where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:
Multiplying the first equilibrium condition by νj and using the above equation yields:
As above, defining ΔG
where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials.
See also
Acidosis
Alkalosis
Arterial blood gas
Benesi–Hildebrand method
Determination of equilibrium constants
Equilibrium constant
Henderson–Hasselbalch equation
Mass-action ratio
Michaelis–Menten kinetics
pCO2
pH
pKa
Redox equilibria
Steady state (chemistry)
Thermodynamic databases for pure substances
Non-random two-liquid model (NRTL model) – Phase equilibrium calculations
UNIQUAC model – Phase equilibrium calculations
References
Further reading
Mainly concerned with gas-phase equilibria.
External links
Analytical chemistry
Physical chemistry | Chemical equilibrium | [
"Physics",
"Chemistry"
] | 5,214 | [
"Equilibrium chemistry",
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
5,308 | https://en.wikipedia.org/wiki/Combination | In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by or , is equal to the binomial coefficient
which can be written using factorials as n!/(k!(n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) · k!, or equivalently C(n, k) = P(n, k)/k!. The set of all k-combinations of a set S is often denoted by \binom{S}{k}.
A combination is a selection of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
Number of k-combinations
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by , or by a variation such as , , , or even (the last form is standard in French, Romanian, Russian, and Chinese texts). The same number however occurs in many other mathematical contexts, where it is denoted by (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define for all natural numbers k at once by the relation
(1 + X)^n = Σ_{k≥0} C(n, k) X^k,
from which it is clear that C(n, 0) = C(n, n) = 1, and further C(n, k) = 0 for k > n.
To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables Xs labeled by the elements s of S, and expand the product over all elements of S:
Π_{s∈S} (1 + Xs);
it has 2^n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables Xs. Now setting all of the Xs equal to the unlabeled variable X, so that the product becomes (1 + X)^n, the term for each k-combination from S becomes X^k, so that the coefficient of that power in the result equals the number of such k-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to , one can use (in addition to the basic cases already given) the recursion relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
for 0 < k < n, which follows from (1 + X)^n = (1 + X)^(n−1) (1 + X); this leads to the construction of Pascal's triangle.
For determining an individual binomial coefficient, it is more practical to use the formula
C(n, k) = n (n − 1) (n − 2) ⋯ (n − k + 1) / k!
The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.
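A direct transcription of this formula: multiply out the k-permutation numerator and divide by k! (Python's math.comb computes the same quantity natively, which makes a convenient cross-check).

```python
import math

def choose(n, k):
    """Number of k-combinations, via the k-permutations-over-k! formula."""
    if k < 0 or k > n:
        return 0
    numerator = math.prod(range(n, n - k, -1))   # n (n-1) ... (n-k+1): the k-permutations of n
    return numerator // math.factorial(k)        # divide out the k! orderings of each selection

assert choose(52, 5) == math.comb(52, 5) == 2_598_960
print(choose(4, 2))   # 6
```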
When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
C(n, k) = C(n, n − k)
for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
C(n, k) = n! / (k! (n − k)!)
where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula.
The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
Together with the basic cases C(n, 0) = 1 = C(n, n), these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.
Example of counting combinations
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
C(52, 5) = (52 × 51 × 50 × 49 × 48) / (5 × 4 × 3 × 2 × 1) = 2,598,960.
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
Another alternative computation, equivalent to the first, is based on writing
which gives
When evaluated in the following order, 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.
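That evaluation order can be checked in code; each intermediate value is itself a binomial coefficient, so every division is exact.

```python
result = 1
for i in range(1, 6):          # evaluates 52/1 * 51/2 * 50/3 * 49/4 * 48/5 from left to right
    result *= 52 - i + 1
    assert result % i == 0     # the division never leaves a remainder
    result //= i
    print(i, result)           # 52, 1326, 22100, 270725, 2598960 — each value is C(52, i)
```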
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
Enumerating k-combinations
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
There are many ways to enumerate k-combinations. One way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination. Then, repeatedly move to the next allowed k-combination by incrementing the smallest index number for which this would not create two equal index numbers, at the same time resetting all smaller index numbers to their initial values.
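A Python sketch of that procedure, assuming zero-based index numbers (the generator name k_combinations is chosen for this example); it produces the combinations in the order by largest element described above.

```python
# Enumerate all k-combinations of {0, ..., n-1} as sorted index tuples:
# increment the smallest index that can move without colliding with the
# next one, and reset all smaller indices to their initial values.
def k_combinations(n, k):
    if k > n:
        return
    indices = list(range(k))            # first combination: {0, ..., k-1}
    while True:
        yield tuple(indices)
        for i in range(k):
            limit = indices[i + 1] if i + 1 < k else n
            if indices[i] + 1 < limit:
                indices[i] += 1
                indices[:i] = range(i)  # reset the smaller indices
                break
        else:
            return                      # no index can move: enumeration done

print(list(k_combinations(4, 2)))
# [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
```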
Number of combinations with repetition
A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let xᵢ denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation: x₁ + x₂ + ⋯ + xₙ = k.
If S has n elements, the number of such k-multisubsets is denoted by
a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients: it equals C(n + k − 1, k).
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by x₁ stars, a separator (a bar), then x₂ more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n − 1 (since a separation into n parts needs n − 1 separators). Thus, a string of k + n − 1 (or n + k − 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of k + n − 1 positions to place stars and filling the remaining positions with bars. For example, the solution (x₁, x₂, x₃, x₄) = (3, 2, 0, 5) of the equation x₁ + x₂ + x₃ + x₄ = 10 (n = 4 and k = 10) can be represented by the string * * * | * * | | * * * * *.
The number of such strings is the number of ways to place 10 stars in 13 positions, C(13, 10) = 286, which is the number of 10-multisubsets of a set with 4 elements.
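A minimal Python sketch of the stars-and-bars count, assuming Python 3.8+ for math.comb (the function name multichoose is illustrative):

```python
from math import comb

# Number of k-multisubsets of an n-element set ("n multichoose k"),
# counted via stars and bars as C(n + k - 1, k).
def multichoose(n, k):
    return comb(n + k - 1, k)

print(multichoose(4, 10))  # 286, the stars-and-bars count above
print(multichoose(4, 3))   # 20, the donut example below
```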
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for n, k > 0, the number of k-multisubsets of an n-element set equals the number of (n − 1)-multisubsets of a (k + 1)-element set.
This identity follows from interchanging the stars and bars in the above representation.
Example of counting multisubsets
For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as C(4 + 3 − 1, 3) = C(6, 3) = 20.
This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions of the equation and the last column gives the stars and bars representation of the solutions.
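For a quick cross-check of the count (though not of the table layout itself), the Python standard library can enumerate the same multisubsets; combinations_with_replacement is an existing itertools function.

```python
from itertools import combinations_with_replacement

# List all 3-multisubsets of S = {1, 2, 3, 4}; there should be 20 of them,
# matching the multichoose count in the text.
multisubsets = list(combinations_with_replacement([1, 2, 3, 4], 3))
print(len(multisubsets))   # 20
print(multisubsets[:5])    # [(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 1, 4), (1, 2, 2)]
```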
Number of k-combinations for all k
The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2ⁿ. In terms of combinations, C(n, 0) + C(n, 1) + ⋯ + C(n, n) = 2ⁿ, which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2ⁿ − 1, where each digit position is an item from the set of n.
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:
Representing these subsets (in the same order) as base 2 numerals:
0 – 000
1 – 001
2 – 010
3 – 011
4 – 100
5 – 101
6 – 110
7 – 111
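A small Python sketch of this correspondence, assuming the convention that bit j of the counter decides whether element j is included (other digit conventions produce the same subsets in a different order):

```python
# Enumerate all subsets of a set of n elements by counting from 0 to 2**n - 1
# in binary; bit j of the counter says whether element j is included.
def all_subsets(items):
    n = len(items)
    for mask in range(2 ** n):
        yield [items[j] for j in range(n) if mask >> j & 1]

for subset in all_subsets([1, 2, 3]):
    print(subset)
# prints [], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]
```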
Probability: sampling a random combination
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of (k − number of elements already chosen) / (n − number of elements already visited) (see Reservoir sampling). Another is to pick a random non-negative integer less than C(n, k) and convert it into a combination using the combinatorial number system.
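A sketch of the one-pass selection scheme just described (strictly speaking this is selection sampling rather than reservoir sampling proper; the function name random_combination is illustrative): each element is kept with probability (number still needed)/(number of elements left), which yields every k-combination with equal probability.

```python
import random

# Select a uniformly random k-combination from a population of size n in a
# single pass, keeping each element with probability needed / remaining.
def random_combination(population, k):
    n = len(population)
    chosen = []
    for i, item in enumerate(population):
        needed = k - len(chosen)
        remaining = n - i
        if random.randrange(remaining) < needed:
            chosen.append(item)
    return chosen

print(random_combination(list(range(52)), 5))   # e.g. [3, 17, 22, 40, 51]
```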
Number of ways to put objects into bins
A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put n objects into m bins is given by the multinomial coefficient n! / (k₁! k₂! ⋯ kₘ!),
where n is the number of items, m is the number of bins, and kᵢ is the number of items that go into bin i.
One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers 1, ..., k₁ into the first bin in order, the objects with numbers k₁ + 1, ..., k₁ + k₂ into the second bin in order, and so on. There are n! distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bin's contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of k₁! k₂! ⋯ kₘ! distinct numberings, and the number of equivalence classes is n! / (k₁! k₂! ⋯ kₘ!).
The binomial coefficient is the special case where k items go into the chosen bin and the remaining n − k items go into the unchosen bin: C(n, k) = n! / (k! (n − k)!).
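A minimal Python sketch of the multinomial count, with the binomial coefficient as the two-bin special case (the function name multinomial is chosen for this example):

```python
from math import factorial

# Number of ways to put n labelled objects into bins of the given sizes,
# i.e. the multinomial coefficient n! / (k_1! k_2! ... k_m!).
def multinomial(*bin_sizes):
    n = sum(bin_sizes)
    result = factorial(n)
    for k in bin_sizes:
        result //= factorial(k)
    return result

print(multinomial(5, 47))     # 2598960 = C(52, 5), the binomial special case
print(multinomial(2, 3, 4))   # 1260 ways to split 9 objects into bins of 2, 3 and 4
```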
See also
Binomial coefficient
Combinatorics
Block design
Kneser graph
List of permutation topics
Multiset
Probability
Notes
References
Erwin Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, INC, 1999.
External links
Topcoder tutorial on combinatorics
Many Common types of permutation and combination math problems, with detailed solutions
The Unknown Formula For combinations when choices can be repeated and order does not matter
The dice roll with a given sum problem An application of the combinations with repetition to rolling multiple dice
Combinatorics | Combination | [
"Mathematics"
] | 2,928 | [
"Discrete mathematics",
"Combinatorics"
] |
5,309 | https://en.wikipedia.org/wiki/Software | Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications.
The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.
Software can generally be categorized into two main types:
operating systems, which manage hardware resources and provide services for applications
application software, which performs specific tasks for users
The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.
The process of developing software involves several stages. The stages include software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.
History
The first use of the word software is credited to mathematician John Wilder Tukey in 1958.
The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers. Initially, hardware resources were more expensive than human resources. As programs became more complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and allowed the underlying algorithms to be expressed directly in the code. Early languages include Fortran, Lisp, and COBOL.
Types
There are two main types of software:
Operating systems are "the layer of software that manages a computer's resources for its users and their applications". There are three main purposes that an operating system fulfills:
Allocating resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory.
Providing an interface that abstracts the details of accessing hardware details (like physical memory) to make things easier for programmers.
Offering common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten.
Application software runs on top of the operating system and uses the computer's resources to perform a task. There are many different types of application software because the range of tasks that can be performed with modern computers is so large. Applications account for most software and require the environment provided by an operating system, and often other applications, in order to function.
Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the software to customers, often in exchange for a subscription fee. By 2023, SaaS products—which are usually delivered via a web application—had become the primary method that companies deliver applications.
Software development and maintenance
Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.
Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time. New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.
Completing a software project involves various forms of expertise, not just in software programmers but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.
Quality and security
Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can be very serious including death. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.
The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation. Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware.
Encoding and execution
Programming languages
Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse. Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.
Compilation, interpretation, and execution
The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which makes interpreted programs 10 to 100 times slower to execute than those written in compiled programming languages.
Legal issues
Liability
Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.
Licenses
Source code is protected by copyright law that vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are often treated as a trade secret and concealed by such methods as non-disclosure agreements. Software copyright has been recognized since the mid-1970s and is vested in the company that makes the software, not the employees or contractors who wrote it. The use of most software is governed by an agreement (software license) between the copyright holder and the user. Proprietary software is usually sold under a restrictive license that limits copying and reuse (often enforced with tools such as digital rights management (DRM)). Open-source licenses, in contrast, allow free use and redistribution of software with few conditions. Most open-source licenses used for software require that modifications be released under the same license, which can create complications when open-source software is reused in proprietary projects.
Patents
Patents give an inventor an exclusive, time-limited license for a novel product or process. Ideas about what software could accomplish are not protected by law and concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Court of Appeals for the Federal Circuit decided that business processes could be patented. Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.
Impact
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". It has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software.
References
Sources | Software | [
"Technology",
"Engineering"
] | 2,387 | [
"Software engineering",
"Computer science",
"Software",
"nan"
] |
5,311 | https://en.wikipedia.org/wiki/Computer%20programming | Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
History
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843 when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Babbage himself had written a program for the Analytical Engine in 1837.
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine language
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
Compiler languages
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler-related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed; in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
Source code entry
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming
Quality requirements
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:
Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness, and completeness of a program's user interface.
Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
Using automated tests and fitness functions can help to maintain some of the aforementioned attributes.
Readability of source code
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing, and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:
Different indent styles (whitespace)
Comments
Decomposition
Naming conventions for objects (such as variables, classes, functions, procedures, etc.)
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.
Algorithmic complexity
The academic field and the engineering practice of computer programming are concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using Big O notation, which expresses resource use—such as execution time or memory consumption—in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
Methodologies
The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic programming languages.
Measuring language usage
It is very difficult to determine what are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).
Debugging
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler can make it crash when parsing some large source file, a simplification of the test case that results in only few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for bugs to appear. Scripting and breakpointing are also part of this process.
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
Programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
The details look different in different languages, but a few basic instructions appear in just about every language:
Input: Gather data from the keyboard, a file, or some other device.
Output: Display data on the screen or send data to a file or other device.
Arithmetic: Perform basic arithmetical operations like addition and multiplication.
Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
Repetition: Perform some action repeatedly, usually with some variation.
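A few lines of Python that touch each of these five kinds of instruction (the program itself is a made-up example for illustration, not taken from Downey's book):

```python
# Input, output, arithmetic, conditional execution, and repetition
# in one short program.
numbers = input("Enter integers separated by spaces: ")   # input
total = 0
for text in numbers.split():                              # repetition
    total = total + int(text)                             # arithmetic
if total > 100:                                           # conditional execution
    print("Large total:", total)                          # output
else:
    print("Total:", total)
```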
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
Learning to program
Learning to program has a long history related to professional standards and practices, academic initiatives and curriculum, and commercial books and materials for students, self-taught learners, hobbyists, and others who desire to create or customize software for personal use. Since the 1960s, learning to program has taken on the characteristics of a popular movement, with the rise of academic disciplines, inspirational leaders, collective identities, and strategies to grow the movement and institutionalize change. Through these social ideals and educational agendas, learning to code has become important not just for scientists and engineers, but for millions of citizens who have come to believe that creating software is beneficial to society and its members.
Context
In 1957, there were approximately 15,000 computer programmers employed in the U.S., a figure that accounted for 80% of the world’s active developers. In 2014, there were approximately 18.5 million programmers in the world, of which 11 million can be considered professional and 7.5 million students or hobbyists. Before the rise of the commercial Internet in the mid-1990s, most programmers learned about software construction through books, magazines, user groups, and informal instruction methods, with academic coursework and corporate training playing important roles for professional workers.
The first book containing specific instructions about how to program a computer may have been Maurice Wilkes, David Wheeler, and Stanley Gill's Preparation of Programs for an Electronic Digital Computer (1951). The book offered a selection of common subroutines for handling basic operations on the EDSAC, one of the world’s first stored-program computers.
When high-level languages arrived, they were introduced by numerous books and materials that explained language keywords, managing program flow, working with data, and other concepts. These languages included FLOW-MATIC, COBOL, FORTRAN, ALGOL, Pascal, BASIC, and C. An example of an early programming primer from these years is Marshal H. Wrubel's A Primer of Programming for Digital Computers (1959), which included step-by-step instructions for filling out coding sheets, creating punched cards, and using the keywords in IBM’s early FORTRAN system. Daniel McCracken's A Guide to FORTRAN Programming (1961) presented FORTRAN to a larger audience, including students and office workers.
In 1961, Alan Perlis suggested that all university freshmen at the Carnegie Institute of Technology take a course in computer programming. His advice was published in the popular technical journal Computers and Automation, which became a regular source of information for professional programmers.
Programmers soon had a range of learning texts at their disposal. Programmer’s references listed keywords and functions related to a language, often in alphabetical order, as well as technical information about compilers and related systems. An early example was IBM’s Programmers’ Reference Manual: the FORTRAN Automatic Coding System for the IBM 704 EDPM (1956).
Over time, the genre of programmer’s guides emerged, which presented the features of a language in tutorial or step by step format. Many early primers started with a program known as “Hello, World”, which presented the shortest program a developer could create in a given system. Programmer’s guides then went on to discuss core topics like declaring variables, data types, formulas, flow control, user-defined functions, manipulating data, and other topics.
Early and influential programmer’s guides included John G. Kemeny and Thomas E. Kurtz’s BASIC Programming (1967), Kathleen Jensen and Niklaus Wirth’s The Pascal User Manual and Report (1971), and Brian Kernighan and Dennis Ritchie’s The C Programming Language (1978). Similar books for popular audiences (but with a much lighter tone) included Bob Albrecht’s My Computer Loves Me When I Speak BASIC (1972), Al Kelley and Ira Pohl’s A Book on C (1984), and Dan Gookin's C for Dummies (1994).
Beyond language-specific primers, there were numerous books and academic journals that introduced professional programming practices. Many were designed for university courses in computer science, software engineering, or related disciplines. Donald Knuth’s The Art of Computer Programming (1968 and later), presented hundreds of computational algorithms and their analysis. The Elements of Programming Style (1974), by Brian W. Kernighan and P. J. Plauger, concerned itself with programming style, the idea that programs should be written not only to satisfy the compiler but human readers. Jon Bentley’s Programming Pearls (1986) offered practical advice about the art and craft of programming in professional and academic contexts. Texts specifically designed for students included Doug Cooper and Michael Clancy's Oh Pascal! (1982), Alfred Aho’s Data Structures and Algorithms (1983), and Daniel Watt's Learning with Logo (1983).
Technical Publishers
As personal computers became mass-market products, thousands of trade books and magazines sought to teach professional, hobbyist, and casual users to write computer programs. A sample of these learning resources includes BASIC Computer Games, Microcomputer Edition (1978), by David Ahl; Programming the Z80 (1979), by Rodnay Zaks; Programmer’s CP/M Handbook (1983), by Andy Johnson-Laird; C Primer Plus (1984), by Mitchell Waite and The Waite Group; The Peter Norton Programmer’s Guide to the IBM PC (1985), by Peter Norton; Advanced MS-DOS (1986), by Ray Duncan; Learn BASIC Now (1989), by Michael Halvorson and David Rygymr; Programming Windows (1992 and later), by Charles Petzold; Code Complete: A Practical Handbook for Software Construction (1993), by Steve McConnell; and Tricks of the Game-Programming Gurus (1994), by André LaMothe.
The PC software industry spurred the creation of numerous book publishers that offered programming primers and tutorials, as well as books for advanced software developers. These publishers included Addison-Wesley, IDG, Macmillan Inc., McGraw-Hill, Microsoft Press, O’Reilly Media, Prentice Hall, Sybex, Ventana Press, Waite Group Press, Wiley (publisher), Wrox Press, and Ziff-Davis.
Computer magazines and journals also provided learning content for professional and hobbyist programmers. A partial list of these resources includes Amiga World, Byte (magazine), Communications of the ACM, Computer (magazine), Compute!, Computer Language (magazine), Computers and Electronics, Dr. Dobb’s Journal, IEEE Software, Macworld, PC Magazine, PC/Computing, and UnixWorld.
Digital Learning / Online Resources
Between 2000 and 2010, computer book and magazine publishers declined significantly as providers of programming instruction, as programmers moved to Internet resources to expand their access to information. This shift brought forward new digital products and mechanisms to learn programming skills. During the transition, digital books from publishers transferred information that had traditionally been delivered in print to new and expanding audiences.
Important Internet resources for learning to code included blogs, wikis, videos, online databases, subscription sites, and custom websites focused on coding skills. New commercial resources included YouTube videos, Lynda.com tutorials (later LinkedIn Learning), Khan Academy, Codecademy, GitHub, and numerous coding bootcamps.
Most software development systems and game engines included rich online help resources, including integrated development environments (IDEs), context-sensitive help, APIs, and other digital resources. Commercial software development kits (SDKs) also provided a collection of software development tools and documentation in one installable package.
Commercial and non-profit organizations published learning websites for developers, created blogs, and established newsfeeds and social media resources about programming. Corporations like Apple, Microsoft, Oracle, Google, and Amazon built corporate websites providing support for programmers, including resources like the Microsoft Developer Network (MSDN). Contemporary movements like Hour of Code (Code.org) show how learning to program has become associated with digital learning strategies, education agendas, and corporate philanthropy.
Programmers
Computer programmers are those who write computer software. Their jobs usually involve:
Prototyping
Coding
Debugging
Documentation
Integration
Maintenance
Requirements analysis
Software architecture
Software testing
Specification
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.
See also
Code smell
Computer networking
Competitive programming
Programming best practices
Systems programming
References
Sources
Further reading
A.K. Hartmann, Practical Guide to Computer Simulations, Singapore: World Scientific (2009)
A. Hunt, D. Thomas, and W. Cunningham, The Pragmatic Programmer. From Journeyman to Master, Amsterdam: Addison-Wesley Longman (1999)
Brian W. Kernighan, The Practice of Programming, Pearson (1999)
Weinberg, Gerald M., The Psychology of Computer Programming, New York: Van Nostrand Reinhold (1971)
Edsger W. Dijkstra, A Discipline of Programming, Prentice-Hall (1976)
O.-J. Dahl, E.W.Dijkstra, C.A.R. Hoare, Structured Programming, Academic Press (1972)
David Gries, The Science of Programming, Springer-Verlag (1981)
External links
Programming | Computer programming | [
"Technology",
"Engineering"
] | 4,922 | [
"Software engineering",
"Computer programming",
"Computers"
] |
5,320 | https://en.wikipedia.org/wiki/Carbon%20nanotube | A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometre range (nanoscale). They are one of the allotropes of carbon. Two broad classes of carbon nanotubes are recognized:
Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometres, about 100,000 times smaller than the width of a human hair. They can be idealised as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder.
Multi-walled carbon nanotubes (MWCNTs) consist of multiple single-wall carbon nanotubes nested in a tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT.
Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibres), nanotechnology (including nanomedicine), and other applications of materials science.
The predicted properties for SWCNTs were tantalising, but a path to synthesising them was lacking until 1993, when Iijima and Ichihashi at NEC, and Bethune and others at IBM independently discovered that co-vaporising carbon and transition metals such as iron and cobalt could specifically catalyse SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterise and find applications for SWCNTs.
History
The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.
In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50-nanometre diameter tubes made of carbon in the Journal of Physical Chemistry Of Russia. This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned in their Carbon editorial:
In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs today are strongly related to the VPGCF developed by Endo. In fact, they call it the "Endo-process", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: a circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."
Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods; and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high-temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
In 2020, during an archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists.
Structure of SWCNTs
Basic details
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.
In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of one type consists entirely of closed paths of that type, connected to each other.
The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.
Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with the same class as A1, the vector from A1 to A2 can be written as a linear combination w = n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.
Types
The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube) and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.
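The equivalence of pairs under 60-degree rotations, and the choice of the canonical type with n > 0 and m ≥ 0, can be illustrated with a short Python sketch (not from the article; it assumes the rotation rule in the (u, v) basis stated above):

```python
def rotations(n, m):
    """List the six pairs equivalent to (n, m) under 60-degree rotations of the strip."""
    out = []
    for _ in range(6):
        out.append((n, m))
        n, m = -m, n + m   # one 60-degree rotation expressed in the (u, v) basis
    return out

def canonical_type(n, m):
    """Return the unique equivalent pair with n > 0 and m >= 0 (the tube's type)."""
    for p, q in rotations(n, m):
        if p > 0 and q >= 0:
            return (p, q)
    raise ValueError("only (0, 0) has no canonical type")

print(rotations(1, 2))        # [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]
print(canonical_type(-2, 3))  # (1, 2)
print(canonical_type(0, 5))   # (5, 0): the pairs (0, k) and (k, 0) name the same zigzag tube
```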
Chirality and mirror symmetry
A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube.
Circumference and diameter
From n and m one can also compute the circumference c, which is the length of the vector w, which turns out to be:
c = a·√(n² + nm + m²) ≈ 246·√(n² + nm + m²)
in picometres, where a ≈ 246 pm is the lattice constant of graphene (the common length of u and v). The diameter of the tube is then c/π, that is
d ≈ 78.3·√(n² + nm + m²)
also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.)
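As a rough numerical check, the sketch below evaluates these approximations (assuming the ideal graphene lattice constant of about 246 pm; the function names are illustrative):

```python
import math

A_PM = 246.0   # assumed graphene lattice constant |u| = |v|, in picometres

def circumference_pm(n, m):
    """Length of w = n*u + m*v for basis vectors of length A_PM at 60 degrees."""
    return A_PM * math.sqrt(n * n + n * m + m * m)

def diameter_pm(n, m):
    return circumference_pm(n, m) / math.pi

print(round(diameter_pm(2, 2)))   # ~271 pm, close to the 0.3 nm quoted for the (2,2) tube
print(round(diameter_pm(5, 1)))   # ~436 pm, close to the ~0.43 nm thinnest freestanding SWCNT
```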
The tilt angle α between u and w and the circumference c are related to the type indices n and m by:
α = arg(2n + m, m√3), c = a·√(n² + nm + m²)
where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas:
n = (c/a)·(cos α − sin α/√3), m = (2c sin α)/(a√3)
which must evaluate to integers.
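A hedged sketch of the conversion in both directions (derived from w = n·u + m·v with the same assumed lattice constant; the exact form may differ between sign and angle conventions):

```python
import math

A_PM = 246.0   # assumed graphene lattice constant, picometres

def type_to_c_alpha(n, m):
    c = A_PM * math.sqrt(n * n + n * m + m * m)
    alpha = math.degrees(math.atan2(m * math.sqrt(3.0), 2 * n + m))
    return c, alpha

def c_alpha_to_type(c, alpha_deg):
    a = math.radians(alpha_deg)
    m = 2.0 * c * math.sin(a) / (A_PM * math.sqrt(3.0))
    n = c * math.cos(a) / A_PM - m / 2.0
    return round(n), round(m)   # should land on integers for a physical tube

c, alpha = type_to_c_alpha(6, 4)
print(round(c), round(alpha, 1))   # circumference in pm and tilt angle in degrees
print(c_alpha_to_type(c, alpha))   # (6, 4)
```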
Physical limits
Narrowest examples
If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.
The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. The nanotube type was assigned using a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.
The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.
Length
The observation of the longest carbon nanotubes grown so far, around 0.5 metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.
Density
The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with the co-catalysts cobalt and molybdenum at 450 °C, a lower than typical growth temperature. The tubes averaged a height of 380 nm and a mass density of 1.6 g·cm⁻³. The material showed ohmic conductivity (lowest resistance ~22 kΩ).
Variants
There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are being used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization (ISO) typically uses "single-walled carbon nanotube (SWCNT)" or "multi-walled carbon nanotube (MWCNT)" in its documents.
Multi-walled
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
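A quick consistency check of the Russian Doll example above (illustrative only; it assumes the ideal lattice constant of about 0.246 nm and the diameter formula given earlier):

```python
import math

A_NM = 0.246   # assumed graphene lattice constant, nanometres

def diameter_nm(n, m):
    return A_NM * math.sqrt(n * n + n * m + m * m) / math.pi

# The (0,8) shell inside a (0,17) shell should leave a gap of roughly one
# interlayer spacing on each side.
gap_nm = (diameter_nm(0, 17) - diameter_nm(0, 8)) / 2.0
print(round(gap_nm * 10, 2))   # ~3.52 angstroms, close to the ~3.4 angstrom interlayer distance
```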
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen.
The telescopic motion ability of inner shells, allowing them to act as low-friction, low-wear nanobearings and nanosprings, may make them a desirable material in nanoelectromechanical systems (NEMS). The retraction force that arises during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
Junctions and crosslinking
Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.
Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.
Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.
Other morphologies
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.
A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.
Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo-style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.
Properties
Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.
Mechanical
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp² bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 GPa. (For illustration, this translates into the ability to endure the tension of a weight equivalent to about 6,400 kg on a cable with a cross-section of 1 mm².) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, their specific strength of up to 48,000 kN·m/kg is the best of known materials, compared to high-carbon steel's 154 kN·m/kg.
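A back-of-the-envelope check of the specific-strength figures (an illustrative calculation; the steel strength and both densities are assumed round numbers):

```python
# Specific strength = tensile strength / density.
cnt_strength_pa = 63e9   # ~63 GPa measured for a multiwalled CNT
cnt_density = 1.3e3      # kg/m^3, lower end of the 1.3-1.4 g/cm^3 range

print(round(cnt_strength_pa / cnt_density / 1e3))   # ~48000 kN*m/kg, as quoted above

# Assumed values for high-carbon steel: ~1.2 GPa strength, ~7.8 g/cm^3 density.
print(round(1.2e9 / 7.8e3 / 1e3))                   # ~154 kN*m/kg
```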
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
On the other hand, there is evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure the radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. These measurements indicated radial Young's moduli on the order of only a few GPa, far below the axial (lengthwise) modulus of roughly 1 TPa (with values up to an experimentally measured 1.8 TPa reported for nanotubes near 2.4 μm in length), confirming that the tubes are soft in the radial direction.
Electrical
Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor.
Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
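The band-structure rule above can be written as a tiny classifier (a sketch that ignores the small-diameter curvature exceptions discussed below):

```python
def electronic_character(n, m):
    """Classify an (n, m) tube by the rule quoted above (curvature effects ignored)."""
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "quasi-metallic (very small band gap)"
    return "semiconducting"

for t in [(10, 10), (9, 0), (6, 4), (9, 1)]:
    print(t, electronic_character(*t))
# (10, 10) metallic; (9, 0) quasi-metallic; (6, 4) and (9, 1) semiconducting
```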
Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10⁹ A/cm², which is more than 1,000 times greater than that of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome, however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.
Because of their nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e²/h is the conductance of a single ballistic quantum channel.
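Plugging in the standard physical constants gives a feel for these numbers (an illustrative calculation, not data from the article):

```python
E = 1.602176634e-19   # elementary charge, coulombs
H = 6.62607015e-34    # Planck constant, joule-seconds

G0 = 2 * E * E / H          # conductance quantum, ~7.75e-5 siemens
print(G0, 2 * G0)           # maximum SWCNT conductance 2*G0 ~ 1.55e-4 S
print(round(1 / (2 * G0)))  # i.e. a minimum resistance of ~6453 ohms
```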
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.
Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.
In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electric current. When the structures were immersed in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," a result that represents a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.
Electromechanical
Semiconducting carbon nanotubes exhibit a piezoresistive effect when mechanical force is applied. The structural deformation causes a change in the band gap, which affects the conductance. This property has the potential to be used in strain sensors.
Optical
Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties.
Carbon nanotube optical properties have been explored for use in applications such as light-emitting diodes (LEDs) and photodetectors, and devices based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of fine-tuning it through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
Thermal
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m−1·K−1 so far. Networks composed of nanotubes demonstrate different values of thermal conductivity, from the level of thermal insulation with the thermal conductivity of 0.1 W·m−1·K−1 to such high values. That is dependent on the amount of contribution to the thermal resistance of the system caused by the presence of impurities, misalignments and other factors. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to the scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Antibacterial
Recently, carbon nanotubes have been shown to have antibacterial properties. They disrupt normal bacterial function by causing physical/mechanical damage, facilitating oxidative stress or lipid extraction, inhibiting bacterial metabolism, and isolating functional sites via wrapping with CNM-containing nanomaterials.
Synthesis
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge and laser ablation are batch processes, chemical vapor deposition can be used for either batch or continuous processes, and HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT-fiber factories around the world. One problem of CVD processes is the high variability in the nanotubes' characteristics. Advances in catalysis and continuous growth in the HiPCO process are making CNTs more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts provide a nucleation site for the nanotubes to grow, while cheaper iron-based catalysts such as ferrocene can be used for the CVD process.
Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties.
When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands, with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and thus the diameter of the nanotubes grown. The amount of time the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNTs per mm²) while increasing the catalyst diameter.
The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.
Purification
As-synthesized carbon nanotubes typically contain impurities and most importantly different chiralities of carbon nanotubes. Therefore, multiple methods have been developed to purify them including polymer-assisted, density gradient ultracentrifugation (DGU), chromatography and aqueous two-phase extraction (ATPE). These methods have been reviewed in multiple articles.
Certain polymers selectively disperse or wrap CNTs of a particular chirality, metallic character or diameter. For example, poly(phenylenevinylenes) disperse CNTs of specific diameters (0.75–0.84 nm), and polyfluorenes are highly selective for semiconducting CNTs. The method mainly involves two steps: sonicating the mixture of CNTs and polymer in a solvent, then centrifuging it; the supernatant contains the desired CNTs.
Density gradient ultracentrifugation is a method based on the density differences of CNTs, so that different components are layered in centrifuge tubes under centrifugal force. Chromatography-based methods include size exclusion (SEC), ion-exchange (IEX) and gel chromatography. In SEC, CNTs are separated by size using a stationary phase with different pore sizes. In IEX, separation is achieved based on differential adsorption and desorption onto chemically functionalized resins packed in an IEX column, so understanding the interaction between CNT mixtures and resins is important. The first reported IEX separation was of DNA-wrapped SWCNTs. Gel chromatography is based on the partition of CNTs between the stationary and mobile phases; semiconducting CNTs have been found to be more strongly attracted to the gel than metallic CNTs. While it shows potential, the current application is limited to the separation of semiconducting (n,m) species.
ATPE uses two water-soluble polymers such as polyethylene glycol (PEG) and dextran. When mixed, two immiscible aqueous phases form spontaneously, and each of the two phases shows a different affinity to CNTs. Partition depends on the solvation energy difference between two similar phases of microscale volumes. By changing the separation system or temperatures, and adding strong oxidants, reductants, or salts, the partition of CNTs species into the two phases can be adjusted.
Despite the progress that has been made to separate and purify CNTs, many challenges remain, such as the growth of chirality-controlled CNTs, so that no further purification is needed, or large-scale purification.
Advantages of monochiral CNTs
Monochiral CNTs have the advantage of containing few or no impurities and of having well-defined, non-congested optical spectra. This allows, for example, the creation of CNT-based biosensors with higher sensitivity and selectivity. Monochiral SWCNTs are necessary for multiplexed and ratiometric sensing schemes, and they offer enhanced sensitivity and biocompatibility.
Functionalization
Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.
Chemical routes such as covalent functionalization have been studied extensively, which involves the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to set the carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.
Functionalization can improve the characteristically poor dispersibility of CNTs in many solvents, such as water, a consequence of their strong intermolecular π–π interactions. This can enhance the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications.
Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluids).
The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes.
In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so-called Fluocar materials) with grafted (halo)fluoroalkyl functionality.
Modeling
Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The appropriate size of the micromechanics model depends strongly on the mechanical property being studied. The concept of a representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model to replicate the actual behavior of the CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While ideal models are computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. To make the modeling more realistic, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single-wall carbon nanotubes.
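For orientation, the crudest micromechanics estimate, simple rule-of-mixtures bounds rather than the RVE finite-element models described above, can be sketched as follows (all numbers are illustrative assumptions):

```python
# Rule-of-mixtures bounds for the axial modulus of a CNT/polymer composite.
E_cnt = 1000.0   # GPa, assumed axial modulus of the nanotube reinforcement
E_matrix = 3.0   # GPa, assumed polymer matrix modulus
vf = 0.05        # assumed nanotube volume fraction

voigt = vf * E_cnt + (1 - vf) * E_matrix            # upper bound: aligned, well-bonded fibers
reuss = 1.0 / (vf / E_cnt + (1 - vf) / E_matrix)    # lower bound: series loading
print(round(voigt, 1), round(reuss, 2))             # ~52.9 GPa vs ~3.16 GPa
```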
Metrology
There are many metrology standards and reference materials available for carbon nanotubes.
For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.
NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.
Safety and health
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increases in the length and diameter of CNTs are correlated with increased toxicity and pathological alterations in the lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may be instead due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects have not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers. NIOSH has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m³ for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNTs caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.
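For readers unfamiliar with the metric, an 8-hour TWA is the time-weighted mean concentration over the work shift; the sketch below uses made-up sampling intervals purely for illustration:

```python
# Hypothetical, made-up samples: (duration in hours, background-corrected
# elemental carbon concentration in micrograms per cubic metre).
samples = [
    (2.0, 0.4),
    (4.0, 1.2),
    (2.0, 0.6),
]

twa = sum(t * c for t, c in samples) / 8.0   # normalize over the 8-hour shift
print(round(twa, 2))   # 0.85 ug/m3, below the 1 ug/m3 REL mentioned above
```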
As of October 2016, single-wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 100 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single-wall carbon nanotubes manufactured by OCSiAl, which submitted the application.
Applications
Carbon nanotubes are currently used in multiple industrial and consumer applications. These include battery components; polymer composites, where they improve the mechanical, thermal and electrical properties of the bulk product; and highly absorptive black paint. Many other applications are under development, including field effect transistors for electronics, high-strength fabrics, biosensors for biomedical and agricultural applications, and many others.
Biomedical Applications
Because of their relatively large surface area, CNTs are capable of interacting with a wide variety of therapeutic and diagnostic agents (drugs, genes, vaccines, antibodies, biosensors, etc.). This can be utilized to assist in drug delivery directly into cells. In addition, CNTs have recently been used as reinforcements in implants and scaffolds due to their suitable reaction area, high elastic modulus, and load transfer capability.
CNTs have been shown to increase the effectiveness of bioactive coatings for the attachment, proliferation, and differentiation of osteoblasts, and have been used as a bone substitution material.
CNTs may be used as reinforcing materials for chitosan-containing coatings used on implants and medical scaffolds.
Biosensing
SWCNTs have nanoscale dimensions that fit to the size of biological species. Due to this size compatibility and their large surface-to-volume ratio, they are sensitive to changes in their chemical environment. Through covalent and non-covalent surface functionalization, SWCNTs can be precisely tailored for selective molecular interactions with a target analyte. The SWCNT represents the transduction unit that converts the interaction into a signal change (optical or electrical). Due to continuous progress in the development of detection strategies, there are numerous examples of the use of SWCNTs as highly sensitive nanosensors (even down to the single molecule level) for a variety of important biomolecules. Examples include the detection of reactive oxygen and nitrogen species, neurotransmitters, other small molecules, lipids, proteins, sugars, DNA/RNA, enzymes as well as bacteria.
The signal change manifests itself in an increase or decrease in the current (electrical) or in a change in the intensity or wavelength of the fluorescence emission (optical). Depending on the type of application, both electrical or optical signal transmission can be advantageous. For sensitive measurement of electronic changes, field-effect transistors (FET) are often used in which the flow of charges within the SWCNTs is measured. The FET structures allow easy on-chip integration and can be parallelized to detect multiple target analytes simultaneously. However, such sensors are more invasive for in vivo applications, as the entire device has to be inserted into the body. Optical detection with semiconducting SWCNTs is based on the radiative recombination of excitons in the near-infrared (NIR) by prior optical (fluorescence) or electrical excitation (electroluminescence). The emission in the NIR enables detection in the biological transparency window, where optical sensor applications benefit from reduced scattering and autofluorescence of biological samples and consequently a high signal-to-noise ratio. Compared to optical sensors in the UV or visible range, the penetration depth in biological tissue is also increased. In addition to the advantage of a contactless readout SWCNTs have excellent photostability, which enables long-term sensor applications. Furthermore, the nanoscale size of SWCNTs allows dense coating of surfaces which enables chemical imaging, e.g. of cellular release processes with high spatial and temporal resolution. Detection of several target analytes is possible by the spatial arrangement of different SWCNT sensors in arrays or by hyperspectral detection based on monochiral SWCNT sensors that emit at different emission wavelengths. For fluorescence applications, however, optical filters to distinguish between excitation and emission and a NIR-sensitive detector must be used. Standard silicon detectors can also be used if monochiral SWCNTs (extractable by special purification processes) emitting closer to the visible range (800 – 900 nm) are used. In order to avoid susceptibility of optical sensors to fluctuating ambient light, internal references such as SWCNTs that are modified to be non-responsive or stable NIR emitters can be used. An alternative is to measure fluorescence lifetimes instead of fluorescence intensities. Overall, SWCNTs therefore have great potential as building blocks for various biosensors.
To render SWCNTs suitable for biosensing, their surface needs to be modified to ensure colloidal stability and provide a handle for biological recognition. Therefore, biosensing and surface modifications (functionalization) are closely related.
Potential future applications include biomedical and environmental applications such as monitoring plant health in agriculture, standoff process control in bioreactors, research/diagnostics of neuronal communication and numerous diseases such as coagulation disorders, diabetes, cancer, microbial and viral infections, testing the efficacy of pharmaceuticals or infection monitoring using smart implants. In industry, SWCNTs are already used as sensors in the detection of gases and odors in the form of an electronic nose or in enzyme screening.
Other current applications
Easton-Bell Sports, Inc. have been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components – including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars.
Amroy Europe Oy manufactures Hybtonite carbon nano-epoxy resins where carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards.
Surrey NanoSystems synthesizes carbon nanotubes to create Vantablack, an ultra-absorptive black paint.
"Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures.
Tips for atomic force microscope probes.
Applications under development
Applications of nanotubes in development in academia and industry include:
Medical devices: Using single wall carbon nanotubes in medical devices results in no skin contamination, high flexibility, and softness, which are crucial for healthcare applications.
Wearable electronics and 5G/6G communication: Electrodes with single wall carbon nanotubes (SWCNTs) exhibit excellent electrochemical properties and flexibility.
Bitumen and asphalt: The world's first test section of road pavement with single wall carbon nanotubes (SWCNTs) showed a 67% increase in resistance to cracks and ruts, increasing the lifespan of the materials.
Nanocomposites for aviation, automotive, and renewable energy markets: Modifying resin with just 0.02% single wall carbon nanotubes (SWCNTs) increases electrical conductivity by 276% without compromising the mechanical properties of fiber-reinforced polymers, also improving flexural properties and delaying thermal degradation.
Additive manufacturing: single wall carbon nanotubes (SWCNTs) are mixed with a suitable printing medium or used as a filler material in the printing process, creating complex structures with enhanced mechanical and electrical properties.
Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors.
Using carbon nanotubes as a scaffold for diverse microfabrication techniques.
Energy dissipation in self-organized nanostructures under the influence of an electric field.
Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases.
Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012.
The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures. This technology is hoped to greatly reduce the risk of an in-flight failure caused by structural degradation of aircraft.
Zyvex Technologies has also built a 54' maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles.
IMEC is using carbon nanotubes for pellicles in semiconductor lithography.
In tissue engineering, carbon nanotubes have been used as scaffolding for bone growth.
Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or Damascus steel.
IBM expected carbon nanotube transistors to be used on Integrated Circuits by 2020.
SWCNTs have found use in long lasting, faster charged lithium ion batteries; polyamide car parts for e-painting; automotive primers for cost benefits and better aesthetics of topcoats; ESD floors; electrically conductive lining coatings for tanks and pipes; rubber parts with improved heat and oil aging stability; conductive gelcoats for ATEX requirements and tooling conductive gelcoats for increased safety and efficiency; and heating fiber coatings for infrastructure elements.
Potential/Future applications
The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.
CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Unlike today's Cu interconnects, which are plagued by electromigration reliability concerns, isolated (single- and multi-wall) CNTs can carry current densities in excess of 1,000 MA/cm² without electromigration damage.
Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.
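A toy truth-table model of why the p-type and n-type halves behave as a NOT gate (purely illustrative; it is not a device simulation):

```python
def swnt_not_gate(input_high):
    # The oxygen-exposed half acts as a p-FET (conducts when the input is low),
    # the protected half as an n-FET (conducts when the input is high).
    p_fet_on = not input_high
    n_fet_on = input_high
    # The output is pulled high only when the p-FET conducts and the n-FET does not.
    return p_fet_on and not n_fet_on

for v in (False, True):
    print(int(v), "->", int(swnt_not_gate(v)))   # 0 -> 1, 1 -> 0
```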
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of CNTs. The fibers have been measured to have a resistivity only one order of magnitude higher than that of metallic conductors. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.
See also
Buckypaper
Carbide-derived carbon
Carbon nanocone
Carbon nanofibers
Carbon nanoscrolls
Carbon nanotube computer
Carbon nanotubes in photovoltaics
Colossal carbon tube
Diamond nanothread
Filamentous carbon
Molecular modelling
Nanoflower
Nano-I-beam
Ninithi (nanotube modelling software)
Optical properties of carbon nanotubes
Organic semiconductor
References
This article incorporates public domain text from the National Institute of Environmental Health Sciences (NIEHS) as quoted.
External links
Nanocarbon: From Graphene to Buckyballs. Interactive 3D models of cyclohexane, benzene, graphene, graphite, chiral & non-chiral nanotubes, and C60 Buckyballs – WeCanFigureThisOut.org.
C60 and Carbon Nanotubes a short video explaining how nanotubes can be made from modified graphite sheets and the three different types of nanotubes that are formed
Learning module for Bandstructure of Carbon Nanotubes and Nanoribbons
Selection of free-download articles on carbon nanotubes
WOLFRAM Demonstrations Project: Electronic Band Structure of a Single-Walled Carbon Nanotube by the Zone-Folding Method
WOLFRAM Demonstrations Project: Electronic Structure of a Single-Walled Carbon Nanotube in Tight-Binding Wannier Representation
Allotropes of carbon
Transparent electrodes
Refractory materials
Space elevator
Discovery and invention controversies
Nanomaterials | Carbon nanotube | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Technology"
] | 13,240 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Allotropes of carbon",
"Allotropes",
"Refractory materials",
"Space elevator",
"Materials",
"Nanotechnology",
"Nanomaterials",
"Matter"
] |
5,323 | https://en.wikipedia.org/wiki/Computer%20science | Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
History
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Etymology and scope
Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM,
in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Philosophy
Epistemology of computer science
Despite the word science in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975 that computer science is an empirical discipline. It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and that programs can be reasoned about deductively through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.
Paradigms of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
Fields
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
Theoretical computer science
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies.
Theory of computation
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
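As an informal illustration of the asymmetry behind the P = NP? question (the problem instance and names below are illustrative, not drawn from this article), the following Python sketch contrasts checking a proposed solution to a subset-sum instance, which takes polynomial time, with searching for one by brute force, which takes exponential time in the worst case for this naive approach.

```python
from itertools import combinations

def verify_certificate(numbers, target, subset):
    """Check a proposed solution in polynomial time: is `subset` drawn from
    `numbers` and does it sum to `target`?"""
    remaining = list(numbers)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

def brute_force_search(numbers, target):
    """Exhaustive search over all 2^n subsets, i.e. exponential time."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify_certificate(nums, 9, (4, 5)))   # True: checking a certificate is fast
print(brute_force_search(nums, 9))           # (4, 5): finding one may take 2^n tries
```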
Information and coding theory
Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.
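As a minimal worked example of Shannon's quantification of information (the function name and distributions are illustrative only), the sketch below computes the entropy of a discrete distribution in bits.

```python
from math import log2

def shannon_entropy(probabilities):
    """H(X) = -sum p_i * log2(p_i), the average information per symbol in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin is more predictable
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols
```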
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
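As a hedged illustration of these ideas (names and data are illustrative), the sketch below implements a simple repetition code, one of the most basic error-correcting codes: each bit is transmitted three times and decoded by majority vote, so any single-bit error per block is corrected.

```python
def encode_repetition(bits, n=3):
    """Repetition code: send each bit n times so single-bit errors can be corrected."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Decode by majority vote within each block of n received bits."""
    return [int(sum(received[i:i + n]) > n // 2) for i in range(0, len(received), n)]

message = [1, 0, 1, 1]
codeword = encode_repetition(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
codeword[4] = 1                                # flip one bit to simulate channel noise
print(decode_repetition(codeword) == message)  # True: the error is corrected
```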
Data structures and algorithms
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
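For instance, binary search over a sorted list is a classic pairing of a data structure (the sorted array) with an algorithm whose running time is logarithmic rather than linear. The sketch below is illustrative only.

```python
def binary_search(sorted_items, target):
    """Return the index of `target` in a sorted list, or -1 if absent.
    Runs in O(log n) time versus O(n) for a linear scan."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13, 17, 19]
print(binary_search(data, 11))   # 4
print(binary_search(data, 6))    # -1
```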
Programming language theory and formal methods
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
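As a small, hedged illustration of the kind of object programming language theory studies, the sketch below defines an abstract syntax for a tiny arithmetic language and a recursive interpreter that gives it a semantics. The representation chosen (nested tuples) is an arbitrary illustrative choice.

```python
# Abstract syntax: an expression is a number, or a tuple (operator, left, right).
def evaluate(expr):
    """A recursive interpreter for a tiny arithmetic language."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    if op == "+":
        return a + b
    if op == "*":
        return a * b
    raise ValueError(f"unknown operator: {op}")

# (1 + 2) * 4 represented as a syntax tree
print(evaluate(("*", ("+", 1, 2), 4)))   # 12
```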
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
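Actual formal methods establish such properties by machine-checked proof rather than by running the program; purely as a loose illustration of the style of reasoning involved (the function and example values are illustrative), the sketch below states a precondition, a loop invariant, and a postcondition as executable assertions.

```python
def integer_square_root(n):
    """Largest r with r*r <= n, for n >= 0.
    The assert statements record the precondition, a loop invariant,
    and the postcondition that a formal proof would establish."""
    assert n >= 0                            # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                    # loop invariant
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition
    return r

print(integer_square_root(10))   # 3
print(integer_square_root(16))   # 4
```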
Applied computer science
Computer graphics and visualization
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
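As a minimal, illustrative example of manipulating image data (assuming pixels represented as RGB triples and the common ITU-R BT.601 luminance weights), the sketch below converts a row of pixels to grayscale.

```python
def to_grayscale(pixels):
    """Convert a list of (R, G, B) pixels to grayscale intensities using
    standard luminance weights (ITU-R BT.601)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

image_row = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
print(to_grayscale(image_row))   # [76, 150, 29, 128]
```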
Image and sound processing
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications, and information engineering, and has applications in medical image computing and speech synthesis, among others. What is the lower bound on the complexity of fast Fourier transform algorithms? is one of the unsolved problems in theoretical computer science.
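To make the object of that question concrete, the sketch below computes a discrete Fourier transform directly from its definition in O(n^2) operations; FFT algorithms produce the same result in O(n log n). The example signal is illustrative.

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform: O(n^2) operations.
    Fast Fourier transform algorithms compute the same result in O(n log n)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

signal = [0, 1, 0, -1]   # one cycle of a sampled sine wave
print([round(abs(c), 6) for c in dft(signal)])   # [0.0, 2.0, 0.0, 2.0]
```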
Computational science, finance and engineering
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
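A minimal sketch of this kind of simulation, assuming a simple exponential-decay model integrated with Euler's method (the parameter values are illustrative):

```python
def simulate_decay(quantity, rate, dt, steps):
    """Euler's method for dN/dt = -rate * N, a minimal example of the
    numerical simulation techniques used in scientific computing."""
    history = [quantity]
    for _ in range(steps):
        quantity += dt * (-rate * quantity)
        history.append(quantity)
    return history

trajectory = simulate_decay(quantity=1000.0, rate=0.5, dt=0.1, steps=20)
print(round(trajectory[-1], 2))   # ~358.49, close to the exact 1000*exp(-0.5*2) ~ 367.88
```

Smaller time steps bring the numerical trajectory closer to the exact solution, a basic trade-off between accuracy and computational cost in scientific computing.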
Human–computer interaction
Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship between emotions, social behavior and brain activity with computers.
Software engineering
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but also with its internal arrangement and maintenance. Examples of topics studied include software testing, systems engineering, technical debt, and software development processes.
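As a small, hedged illustration of one such topic, software testing, the sketch below pairs a utility function with unit tests written using Python's standard unittest module; the function and test names are illustrative.

```python
import unittest

def slugify(title):
    """Turn a title into a URL-friendly slug, e.g. 'Hello World!' -> 'hello-world'."""
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

class SlugifyTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Software   Engineering "), "software-engineering")

if __name__ == "__main__":
    unittest.main()
```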
Artificial intelligence
Artificial intelligence (AI) aims to synthesize goal-oriented processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer systems
Computer architecture and microarchitecture
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959.
Concurrent, parallel and distributed computing
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the parallel random access machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
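As a brief, illustrative sketch (the workload and chunk size are arbitrary), the example below divides a computation into tasks that a thread pool executes concurrently and then combines the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    """Work unit executed concurrently; real workloads would do I/O or heavy computation."""
    return sum(chunk)

numbers = list(range(1, 1001))
chunks = [numbers[i:i + 250] for i in range(0, len(numbers), 250)]

# Four tasks run concurrently; their partial results are then combined.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum_chunk, chunks))

print(sum(partial_sums))   # 500500, the same answer as a sequential sum
```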
Computer networks
This branch of computer science is concerned with the design, implementation, and management of the networks that connect computers worldwide.
Computer security and cryptography
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
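As a minimal illustration of two of these primitives using Python's standard library (the message and key below are placeholders, not a real protocol), the sketch computes a SHA-256 digest and authenticates a message with an HMAC based on a shared symmetric key.

```python
import hashlib
import hmac

message = b"transfer 100 credits to alice"

# A cryptographic hash: any change to the message changes the digest completely.
digest = hashlib.sha256(message).hexdigest()
print(digest)

# An HMAC authenticates the message with a shared secret (symmetric) key.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))  # True
```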
Databases and data mining
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
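A minimal illustration of a database management system, a data model, and a query language, using Python's built-in SQLite bindings (the table and data are illustrative):

```python
import sqlite3

# An in-memory relational database; a real system would persist data to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("SICP", 1985), ("TAOCP", 1968), ("CLRS", 1990)])

# A query language (here SQL) describes what data to retrieve, not how to search for it.
for title, year in conn.execute("SELECT title, year FROM books WHERE year < 1990 ORDER BY year"):
    print(title, year)   # TAOCP 1968, then SICP 1985
conn.close()
```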
Discoveries
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:
Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything".
All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything".
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:
move left one location;
move right one location;
read symbol at current location;
print 0 at current location;
print 1 at current location.
Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones:
sequence: first do this, then do that;
selection: IF such-and-such is the case, THEN do this, ELSE do that;
repetition: WHILE such-and-such is the case, DO this.
The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming). The sketch below combines both insights in a few lines of code.
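The following is a loose, illustrative combination of the two insights: a tiny machine simulator whose only primitive actions are reading the current cell, printing 0 or 1, and moving one location left or right, driven by sequence, selection, and repetition. The rule format and the example machine are arbitrary choices, not a standard notation.

```python
def run_turing_machine(tape, rules, state, head=0, max_steps=1000):
    """Repetition: keep applying rules until the machine enters the 'halt' state.
    Each step uses only the basic actions named above: read the symbol at the
    current location, print 0 or 1 there, and move one location left or right."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, 0)                    # read symbol at current location
        write, move, state = rules[(state, symbol)]    # selection: branch on state and symbol
        cells[head] = write                            # print 0 or 1 at current location
        head += 1 if move == "R" else -1               # move right or left one location
    return [cells[i] for i in sorted(cells)]

# A machine performing unary increment: scan right over 1s, turn the first 0 into a 1.
increment_rules = {
    ("scan", 1): (1, "R", "scan"),   # repetition: keep moving right
    ("scan", 0): (1, "R", "halt"),   # selection: on reading 0, print 1 and stop
}
print(run_turing_machine([1, 1, 1, 0, 0], increment_rules, state="scan"))  # [1, 1, 1, 1, 0]
```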
Programming paradigms
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include the following (a short comparison in code appears below):
Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another.
Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs.
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
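As an illustrative comparison only (the task, summing the squares of the even numbers up to ten, is arbitrary), the sketch below expresses the same computation in an imperative, a functional, and an object-oriented style in Python, which supports all three.

```python
# Imperative style: statements mutate state step by step.
total = 0
for n in range(1, 11):
    if n % 2 == 0:
        total += n * n

# Functional style: the same value as an expression, without mutation.
functional_total = sum(n * n for n in range(1, 11) if n % 2 == 0)

# Object-oriented style: data (the limit) and behaviour bundled in an object.
class SquareSummer:
    def __init__(self, limit):
        self.limit = limit

    def sum_even_squares(self):
        return sum(n * n for n in range(1, self.limit + 1) if n % 2 == 0)

print(total, functional_total, SquareSummer(10).sum_even_squares())   # 220 220 220
```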
Research
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
See also
Computer science education
Glossary of computer science
List of computer scientists
List of computer science awards
List of pioneers in computer science
Outline of computer science
Notes
References
Further reading
Peter J. Denning. Is computer science science?, Communications of the ACM, April 2005.
Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.
External links
DBLP Computer Science Bibliography
Association for Computing Machinery
Institute of Electrical and Electronics Engineers
Formal sciences | Computer science | [ "Technology" ] | 5,195 | [ "Computer science" ] |
5,326 | https://en.wikipedia.org/wiki/Creationism | Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation, and is often pseudoscientific. In its broadest sense, creationism includes various religious views, which differ in their acceptance or rejection of modern scientific concepts such as evolution that describe the origin and development of natural phenomena.
The term creationism most often refers to belief in special creation: the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.
Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation."
Biblical basis
The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology.
Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design.
Types
To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with "creationists" set against "evolutionists", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. Other versions of a taxonomy of creationists were produced, and comparisons made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to show a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution Vs. Creationism: An Introduction, and the NCSE website was rewritten on the basis of the book version.
The main general types are listed below.
Young Earth creationism
Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe has a similar age as the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines.
The Christian organizations Answers in Genesis (AiG), Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas, United States, and AiG's Creation Museum and Ark Encounter in Kentucky, United States, were opened to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom.
Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.
Old Earth creationism
Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable.
Old Earth creationism itself comes in at least three types:
Gap creationism
Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was "without form and void." This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative, (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.
Some gap creationists expand the basic version of creationism by proposing a "primordial creation" of biological life within the "gap" of time. This is thought to be "the world that then was" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this "world that then was," which may also be associated with Lucifer's rebellion.
Day-age creationism
Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day.
The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the "days" each lasted an age). According to this view, the sequence and duration of the creation "days" may be paralleled to the scientific consensus for the age of the earth and the universe.
Progressive creationism
Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and "fully formed."
The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism.
Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views.
Philosophic and scientific creationism
Creation science
Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in "created kinds" or "baramins" due to mutations.
Neo-creationism
Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.
One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory.
Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible.
Intelligent design
Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism."
ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes.
In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions.
Geocentrism
In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system) is a description of the cosmos where Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations such as ancient Greece, whose astronomers assumed that the Sun, Moon, stars, and naked-eye planets circled Earth; noteworthy examples include the systems of Aristotle (see Aristotelian physics) and Ptolemy.
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, where the Sun and Moon are said to stop in the sky, and where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview.
Most contemporary creationist organizations reject such perspectives.
Omphalos hypothesis
The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older.
The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, etc..., and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable.
Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator.
Theistic evolution
Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes," a position also described by the American Scientific Affiliation.
Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection.
Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about the Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; this position is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution.
It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory.
From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws.
In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne (the Vatican's chief astronomer between 1978 and 2006): "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God."
While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural.
Religious views
There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism.
Bahá'í Faith
In the creation myth taught by Bahá'u'lláh, the Bahá'í Faith founder, the universe has "neither beginning nor ending," and the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences in the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but held that the capacity to form human intelligence was always in existence.
Buddhism
Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning.
Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa consistently critiqued Creator God views put forth by Hindu thinkers.
Christianity
Most Christians around the world accept evolution as the most likely explanation for the origins of species, and do not take a literal view of the Genesis creation narrative. The United States is an exception, where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor, because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe.
Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, not providing any creation models, but instead focusing on the symbolism in beliefs of the time of authoring Genesis and the cultural environment.
Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require); it appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God."
In the US, Evangelical Christians have continued to believe in a literal Genesis. , members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life.
Jehovah's Witnesses assert that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length. They view this belief as an alternative to Creationism rather than a variation of Creationism.
The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. They sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects.
Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.
Hinduism
Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism, according to Hindu creationists the universe may even be older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth.
In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many "first men", each known as a Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72 million year period known as a manvantara, each ending with the destruction of mankind followed by a sandhya (period of non-activity) before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length. 1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exist for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa.
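For readers checking the figures above, they are consistent with the standard Hindu units of time if one assumes a maha-yuga of 4.32 million years, 71 maha-yugas per manvantara, 1,000 maha-yugas per kalpa, and a life of Brahma of 100 years of 360 days, each day and night spanning two kalpas; these unit definitions are assumptions drawn from standard accounts of Hindu units of time, not stated in this passage.

```latex
\begin{align*}
1~\text{manvantara} &= 71 \times 4.32~\text{million years} = 306.72~\text{million years}\\
1~\text{kalpa (day of Brahma)} &= 1000 \times 4.32~\text{million years} = 4.32~\text{billion years}\\
1~\text{maha-kalpa (life of Brahma)} &= 100 \times 360 \times 2~\text{kalpas} = 311.04~\text{trillion years}
\end{align*}
```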
Islam
The creation myths in the Quran are more vague and allow for a wider range of interpretations similar to those in other Abrahamic religions.
Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam.
Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for[...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another.
The Ottoman-Lebanese Sunni scholar Hussein al-Jisr declared that there is no contradiction between evolution and the Islamic scriptures. He stated that "there is no evidence in the Quran to suggest whether all species, each of which exists by the grace of God, were created all at once or gradually", and referred to the story of creation in Sūrat al-Anbiyā. In Kemalist Turkey, important scholars strove to accommodate the theory of evolution in Islamic scripture during the first decades of the Turkish Republic; their approach to the theory defended Islamic belief in the face of scientific theories of their times.
The Saudi Arabian government, on the other hand, began funding and promoting denial of evolution in the 1970s in accordance to its Salafi-Wahhabi interpretation of Islam. This stance garnered criticism from the governments and academics of mainline Muslim countries such as Turkey, Pakistan, Lebanon, and Iran, where evolution was initially taught and promoted. Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents.
Judaism
For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah, and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for 'world' in the Hebrew language, olam, means 'hidden' (ne'elam). Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created over the six days of Creation in a form that reflects progression to its currently observed state, with the understanding that physical ways to verify this may eventually be identified. This approach has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University.
According to Rabbi Aryeh Kaplan, kabbalistic sources written well before the age of the universe was first determined scientifically are in close concord with modern scientific estimates of that age; his argument is based on the Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated, writing long before Neanderthals had been scientifically described. Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work.
Some contemporary writers, such as Rabbi Gedalyah Nadel, have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago but was deliberately made to look as if it were five billion years old, for example by being created with ready-made fossils; the best-known exponent of this approach was Rabbi Menachem Mendel Schneerson. Still others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.
Prevalence
Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.
Australia
A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God".
A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.
Brazil
A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. When asked, 89% of respondents said that creationism should be taught in schools, and 75% said that the teaching of creationism should replace the teaching of evolution in schools.
Canada
A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked: "Where did human beings come from – did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?"
In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% of Canadians said that it should not be part of the school curriculum, and 23% of Canadians were undecided.
In 2023, a Research Co. poll found that 21% of Canadians "believe God created human beings in their present form within the last 10,000 years". The poll also found that "More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province."
Europe
In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Evolution is taught as the prevailing scientific theory in most schools, and most people accept it. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people.
In the UK, a 2006 poll on the "origin and development of life" asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory, and 13% did not know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism."
In Italy, Education Minister Letizia Moratti wanted to remove evolution from the secondary school curriculum; after one week of massive protests, she reversed her opinion.
There continue to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe released a draft report titled The dangers of creationism in education on June 8, 2007, reinforced by a further proposal to ban it in schools dated October 4, 2007.
Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. "After a deluge of protest from scientists, teachers and opposition parties" says the BBC report, Čolić's deputy made the statement, "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government."
Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed.
A June 2015 – July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic.
South Africa
A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
South Korea
In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously.
United States
A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States inclined to the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings.
According to a 2014 Gallup poll, about 42% of Americans believe that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Another 31% believe that "human beings have developed over millions of years from less advanced forms of life, but God guided this process," and 19% believe that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process."
Belief in creationism is inversely correlated with education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: "By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'"
A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God.
According to a study published in Science, between 1985 and 2005 the number of adult North Americans who accept evolution declined from 45% to 40%, the number of adults who reject evolution declined from 48% to 39%, and the number of people who were unsure increased from 7% to 21%. Besides the US, the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%).
According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999).
In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era.
Education controversies
In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to "Teach the Controversy" in science classes have conflated science with religion.
People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory. The main findings were:
In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases that the alternatives to scientific reasoning offered by creationists are opposed by the consensus of the scientific community.
Criticism
Christian criticism
Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an "endeavor designed to demonstrate that religion and science can be compatible."
In his 2002 article "Intelligent Design as a Theological Problem", George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. Johnson's claim that he is speaking "of a God who acted openly and left his fingerprints on all the evidence."). Murphy argues that this view of God is incompatible with the Christian understanding of God as "the one revealed in the cross and resurrection of Christ." The basis of this theology is Isaiah 45:15, "Verily thou art a God that hidest thyself, O God of Israel, the Saviour."
Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or "empty" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5-8:
Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross.
Murphy concludes that, "Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation." For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.
The Jesuit priest George Coyne has stated that it is "unfortunate that, especially here in America, creationism has come to mean...some literal interpretation of Genesis." He argues that "...Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God."
Teaching of creationism
Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was "a kind of category mistake, as if the Bible were a theory like other theories." He also said: "My worry is creationism can end up reducing the doctrine of creation rather than enhancing it." The views of the Episcopal Church, a major American-based branch of the Anglican Communion, on teaching creationism resemble those of Williams.
The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K‐12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." However, they, as well as other "worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others."
Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article "The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?", in which they write: "Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States."
Scientific criticism
Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results.
Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected.
Organizations
See also
Biblical inerrancy
Biogenesis
Dangers of creationism in education
Evolution of complexity
Flying Spaghetti Monster
History of creationism
Religious cosmology
Notes
References
Citations
Works cited
"Presented as a Paleontological Society short course at the annual meeting of the Geological Society of America, Denver, Colorado, October 24, 1999."
Further reading
External links
"Creationism" at the Stanford Encyclopedia of Philosophy by Michael Ruse
"How Creationism Works" at HowStuffWorks by Julia Layton
"TIMELINE: Evolution, Creationism and Intelligent Design"Focuses on major historical and recent events in the scientific and political debate
by Warren D. Allmon, Director of the Museum of the Earth
"What is creationism?" at talk.origins by Mark Isaak
"The Creation/Evolution Continuum" by Eugenie Scott
"15 Answers to Creationist Nonsense" by John Rennie, editor in chief of Scientific American magazine
"Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (July 5, 2021).
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016) | Creationism | ["Biology"] | 10,858 | ["Creationism", "Biology theories", "Obsolete biology theories"] |
5,346 | https://en.wikipedia.org/wiki/Colloid | A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi, who called colloids pseudosolutions, and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.
Classification
Colloids can be classified as follows:
Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.
Hydrocolloids
Hydrocolloids are certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Becoming effectively "soluble", they change the rheology of water by raising its viscosity and/or inducing gelation. They may also interact with other chemicals, in some cases synergistically, in others antagonistically. Because of these attributes, hydrocolloids are very useful in many areas of technology, from foods through pharmaceuticals to personal care and industrial applications, where they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control, and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if the water is removed after solubilization, as in the formation of films for breath strips or sausage casings, or of wound-dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids, each with differences in structure, function, and utility, that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber.
The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.
Components
Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) or gelatin. They are normally combined with some type of sealant, e.g. polyurethane, to adhere to the skin.
Compared with solution
A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid consists of multiple phases, it has very different properties compared to a fully mixed, continuous solution.
Interaction between particles
The following forces play an important role in the interaction of colloid particles:
Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles.
Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases are factors affecting this interaction.
van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby, and the temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, which is always present (unless the refractive indices of the dispersed and continuous phases are matched), short-range, and attractive.
Steric forces: A repulsive steric force typically occurring due to adsorbed polymers coating a colloid's surface.
Depletion forces: An attractive entropic force arising from an osmotic pressure imbalance when colloids are suspended in a medium of much smaller particles or polymers called depletants.
Sedimentation velocity
The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement.
The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force:
$m_b g = 6 \pi \eta r v$
where
$m_b$ is the Archimedean weight of the colloidal particles,
$\eta$ is the viscosity of the suspension medium,
$r$ is the radius of the colloidal particle,
and $v$ is the sedimentation or creaming velocity ($g$ is the acceleration due to gravity).
The mass of the colloidal particle is found using:
$m_b = V \Delta\rho$
where
$V$ is the volume of the colloidal particle, calculated using the volume of a sphere $V = \tfrac{4}{3}\pi r^3$,
and $\Delta\rho$ is the difference in mass density between the colloidal particle and the suspension medium.
By rearranging, the sedimentation or creaming velocity is:
$v = \dfrac{2\, r^{2}\, \Delta\rho\, g}{9\, \eta}$
There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.
The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.
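As a rough numerical illustration of the rearranged expression above, here is a minimal sketch; the particle radius, density difference, and viscosity are assumed, typical values and are not figures given in the article.

```python
import math  # not strictly needed here, kept for clarity if the formula is extended

# Illustrative, assumed values (not from the article):
r = 0.5e-6          # particle radius in metres (1 um diameter)
delta_rho = 1200.0  # density difference (particle minus water), kg/m^3 (~silica-like)
eta = 1.0e-3        # viscosity of water near 20 C, Pa*s
g = 9.81            # gravitational acceleration, m/s^2

# Stokes sedimentation velocity: v = 2 r^2 delta_rho g / (9 eta)
v = 2 * r**2 * delta_rho * g / (9 * eta)
print(f"sedimentation velocity ~ {v:.2e} m/s "
      f"(~{v * 1e6 * 3600:.0f} um per hour)")
```

With these assumed values a particle of 1 μm diameter settles a few millimetres per hour, which illustrates why particles much larger than about 1 μm settle out and fall outside the colloidal regime, as noted above.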
Preparation
There are two principal ways to prepare colloids:
Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing).
Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold.
Stabilization
The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.
A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.
If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension.
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte.
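As a rough sketch of how the DLVO picture is applied in practice, the total pair interaction can be modelled as a screened electrostatic double-layer repulsion plus van der Waals attraction between two equal spheres and compared with the thermal energy kT discussed above. This is a simplified, textbook-style form, and every parameter value below is an assumption for illustration, not data from the article.

```python
import math

kB, T = 1.380649e-23, 298.15      # Boltzmann constant (J/K) and temperature (K)
kT = kB * T

# Assumed, illustrative parameters for two equal spheres in water (not from the article):
r = 100e-9                        # particle radius, m
A = 1e-20                         # Hamaker constant, J (order of magnitude for oxides in water)
eps = 78.5 * 8.854e-12            # permittivity of water, F/m
psi0 = 0.025                      # surface potential, V (roughly 25 mV)
kappa = 1.0 / 10e-9               # inverse Debye length, 1/m (10 nm screening length)

def dlvo_energy(h):
    """Simplified DLVO pair energy (J) between two equal spheres at surface separation h (m)."""
    v_vdw = -A * r / (12 * h)                                        # van der Waals attraction
    v_edl = 2 * math.pi * eps * r * psi0**2 * math.exp(-kappa * h)   # double-layer repulsion
    return v_vdw + v_edl

for h_nm in (1, 2, 5, 10, 20):
    print(f"h = {h_nm:2d} nm: V/kT = {dlvo_energy(h_nm * 1e-9) / kT:+.0f}")
```

With these assumed numbers the repulsive barrier amounts to tens of kT, so the particles rarely approach closely enough for the attraction to win; shrinking the Debye length, for example by adding salt as discussed under Destabilization below, lowers the barrier and eventually allows aggregation.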
Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from coming within the range of the attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents.
A combination of the two mechanisms is also possible (electrosteric stabilization).
A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.
Destabilization
Destabilization can be accomplished by different methods:
Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by adding salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles; a rough numerical illustration of this screening effect appears after this list. It can also be accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows aggregation due to van der Waals forces. Minor changes in pH can produce significant alterations to the zeta potential, and when the magnitude of the zeta potential lies below a certain threshold, typically around ±5 mV, rapid coagulation or aggregation tends to occur.
Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer.
Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects.
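To make the effect of added salt on the Debye screening length concrete, here is a minimal sketch for a symmetric 1:1 electrolyte such as NaCl in water at 25 °C; the approximate formula (Debye length in nanometres equal to about 0.304 divided by the square root of the molar ionic strength) is a standard textbook relation, and the salt concentrations are illustrative choices rather than values given in the article.

```python
import math

# Debye length for a symmetric 1:1 electrolyte in water at 25 C (nm):
# kappa^-1 ~ 0.304 / sqrt(I), with I the ionic strength in mol/L.
def debye_length_nm(ionic_strength_molar):
    return 0.304 / math.sqrt(ionic_strength_molar)

# Illustrative salt concentrations (mol/L):
for c in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"{c:8.0e} M NaCl -> Debye length ~ {debye_length_nm(c):5.1f} nm")
```

Compressing the double layer from tens of nanometres down to around a nanometre in this way lets the short-range van der Waals attraction dominate, which is why adding salt promotes aggregation.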
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
Monitoring stability
The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.
Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.
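A minimal sketch of the size estimate behind dynamic light scattering uses the Stokes-Einstein relation, which links the diffusion coefficient of a sphere to its hydrodynamic radius. The temperature and viscosity below assume water at 25 °C, and the "measured" diffusion coefficient is an invented, illustrative number rather than data from the article.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15          # temperature, K (25 C)
eta = 0.89e-3       # viscosity of water at 25 C, Pa*s

# Stokes-Einstein: D = kB*T / (6*pi*eta*r)  ->  r = kB*T / (6*pi*eta*D)
def hydrodynamic_radius(D):
    """Hydrodynamic radius (m) from a diffusion coefficient D (m^2/s)."""
    return kB * T / (6 * math.pi * eta * D)

D_measured = 4.0e-12   # example diffusion coefficient, m^2/s (illustrative)
print(f"hydrodynamic radius ~ {hydrodynamic_radius(D_measured) * 1e9:.0f} nm")
```

If repeated measurements show the apparent radius growing well beyond the colloidal size range, aggregation is indicated, as described above.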
Accelerating methods for shelf life prediction
The kinetic process of destabilisation can be rather long (up to several months or even years for some products), so formulators often need accelerating methods to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing the temperature to accelerate destabilisation (while staying below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants, and more generally the interaction forces inside the system. Storing a dispersion at high temperature makes it possible both to simulate real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer) and to accelerate destabilisation processes by up to 200 times.
Mechanical acceleration, including vibration, centrifugation, and agitation, is sometimes used. It subjects the product to different forces that push the particles or droplets against one another, hence helping film drainage. Some emulsions would never coalesce in normal gravity, but do so under artificial gravity. Segregation of different populations of particles has also been highlighted when using centrifugation and vibration.
As a model system for atoms
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
Crystals
A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids.
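As a rough numerical illustration of the Bragg condition for such colloidal crystals, the first-order reflected wavelength at normal incidence can be estimated as twice the interplanar spacing times an effective refractive index. The lattice spacing and effective index below are assumed, representative values for a silica opal, not figures from the article.

```python
import math

# First-order Bragg reflection from a colloidal crystal: lambda = 2 * n_eff * d * sin(theta).
# Assumed, representative values (not from the article):
d = 200e-9                 # interplanar spacing, m
n_eff = 1.35               # effective refractive index of silica spheres plus voids
theta = math.radians(90)   # angle between the beam and the diffracting planes (normal incidence)

wavelength = 2 * n_eff * d * math.sin(theta)
print(f"reflected wavelength ~ {wavelength * 1e9:.0f} nm (visible, green)")
```

This is only a first-order estimate, but it shows why sub-micrometre spacings comparable to the wavelength of visible light produce the vivid colors described above.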
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
In biology
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
In the environment
Colloidal particles can also serve as a transport vector for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, i.e., pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected of enabling the long-range transport of plutonium on the Nevada Nuclear Test Site and have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membranes. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.
Intravenous therapy
Colloid solutions used in intravenous therapy belong to a major group of volume expanders and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood and therefore should theoretically preferentially increase the intravascular volume, whereas other volume expanders, called crystalloids, also increase the interstitial and intracellular volumes. However, there is still controversy about the actual difference in efficacy produced by this distinction, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids are generally much cheaper than colloids.
References
Chemical mixtures
Colloidal chemistry
Condensed matter physics
Soft matter
Dosage forms | Colloid | ["Physics", "Chemistry", "Materials_science", "Engineering"] | 4,133 | ["Colloidal chemistry", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter"] |
5,363 | https://en.wikipedia.org/wiki/Video%20game | A video game, sometimes further qualified as a computer game, is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations). Some video games also allow microphone and webcam inputs for in-game chatting and livestreaming.
Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer games (which includes LAN games, online games, and browser games). More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.
The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971, which took inspiration from the earlier 1962 computer game Spacewar!. In 1972 came the now-iconic video game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or "indie games") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service.
Today, video game development requires numerous skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. The global video game market has estimated annual revenues across hardware, software, and services roughly three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.
Origins
Early video games use interactive electronic devices with various display formats. The earliest example is from 1947: a patent for a "cathode-ray tube amusement device" was filed on 25 January 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948 as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1962. Each game has a different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other.
These inventions laid the foundation for modern video games. In 1966, while working at Sanders Associates, Ralph H. Baer devised a system to play a basic table tennis game on a television screen. With the company's approval, Baer created the prototype known as the "Brown Box". Sanders patented Baer's innovations and licensed them to Magnavox, which commercialized the technology as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled "Father of Video Games" for their contributions.
Terminology
The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer, audio speaker, or similar device. This also distinguished from many handheld electronic games like Merlin which commonly used LED lights for indicators but did not use these in combination for imaging purposes.
"Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases, it is used interchangeably with "video game". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers. Other terms used include digital game, for example, by the Australian Bureau of Statistics. However, the term "computer game" can also be used to more specifically refer to games played primarily on personal computers or other types of flexible hardware systems (also known as PC game), as a way to distinguish them from console games, arcade games, or mobile games. Other terms such as "television game", "telegame", or "TV game" had been used in the 1970s and early 1980s, particularly for home gaming consoles that rely on connection to a television set. However, these terms were also used interchangeably with "video game" in the 1970s, primarily due to "video" and "television" being synonymous. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", "TV geemu", or "terebi geemu". The term "TV game" is still commonly used into the 21st century. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output.
The term "video game" emerged around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term was already in use by mid-1972, appearing first in a letter dated July 10, 1972, in which Bushnell uses the term "video game" twice. Per video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those in the field. Ed Adlum, who ran Cashbox's coin-operated section until 1972 and later founded RePlay Magazine, covering the coin-op amusement field, in 1975, used the term in an article in March 1973. In a September 1982 issue of RePlay, Adlum is credited with first naming these games "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns, and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "What the hell... video game!"
Definition
While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what are the essential factors of a video game that separate the medium from other forms of entertainment.
The introduction of interactive films in the 1980s, with games like Dragon's Lair, brought games that featured full-motion video played off a form of media but offered only limited user interaction. This required a means to distinguish these games from more traditional board games that also happen to use external media, such as the Clue VCR Mystery Game, which required players to watch VCR clips between turns. To distinguish between the two, video games are considered to require some interactivity that affects the visual display.
Most video games feature some type of victory or winning condition, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer, brought the idea of games without any such winning condition, raising the question of whether these were actually games. They are still commonly classified as video games because they provide a game world that the player can interact with by some means.
The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple, which dealt with video games offered on Apple's iOS App Store. Among the concerns raised were games like Fortnite Creative and Roblox, which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition for a video game, established for her ruling that "At a bare minimum, video games appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".
Video game terminology
The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", the player will reach the "game over" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These also may be in the form of a password that can be written down and re-entered at the title screen.
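As a purely illustrative sketch, not taken from the article and not representative of any particular game or engine, the loop of levels, lives, and game-over conditions described above can be expressed in a few lines of Python; all names and the success probability are invented.

```python
import random

def play_level(level):
    """Hypothetical stand-in for actually playing a level: succeeds 60% of the time."""
    return "cleared" if random.random() < 0.6 else "failed"

def play_session(levels, starting_lives=3):
    """Illustrative flow of the conventions described above: levels, lives, game over."""
    lives = starting_lives
    for level in levels:
        while True:
            if play_level(level) == "cleared":
                break               # advance to the next level
            lives -= 1              # taking fatal damage costs one life
            if lives == 0:
                return "game over"  # no lives left: show the game-over screen
    return "finale reached"         # all levels (and their bosses) cleared

print(play_session(["level 1", "level 2", "boss"]))
```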
Product flaws include software bugs, which can manifest as glitches that may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game, can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games; both might make the game easier, give the player additional power-ups, or change the appearance of the game.
Components
To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and displays the results to a video output display.
Platform
Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. These platforms may include multiple brands held by platform holders, such as Nintendo or Sony, seeking to gain larger market shares. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform or brand is used by platform holders as a competitive edge in the video game market. However, games may be developed for platforms other than those originally intended, in which case they are described as ports or conversions. These may also be remasters, where most of the original game's source code is reused and art assets, models, and game levels are updated for modern systems, or remakes, where, in addition to asset improvements, the original game is significantly reworked, possibly from scratch.
The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators.
PC games
PC games involve a player interacting with a personal computer (PC) connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, this openness gives developers certain advantages, such as reduced software cost, increased flexibility, increased innovation, emulation, creation of modifications or mods, open hosting for online gaming (in which a person plays a video game with people who are in a different household) and others. A gaming computer is a PC or laptop intended specifically for gaming, typically using high-performance, high-cost components. In addition to personal computer gaming, there also exist games that work on mainframe computers and other similarly shared systems, with users logging in remotely to use the computer.
Home console
A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually consoles only run games developed for them, or games from other platforms made by the same company, but never games developed by their direct competitors, even if the same game is available on different platforms. A console often comes with a specific game controller. Major console platforms include Xbox, PlayStation, and Nintendo.
Handheld console
A handheld game console is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers and buttons, joystick or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms, and share almost the same characteristics. Handheld hardware usually is less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to be used to play many different games. The handheld console has waned in the 2010s as mobile device gaming has become a more dominant factor.
Arcade video game
An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and an audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game. While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could play a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them.
Browser game
A browser game takes advantage of the standardization of web browser technologies across multiple devices, providing a cross-platform environment. These games may be identified based on the website on which they appear, such as with Miniclip games. Others are named based on the programming platform used to develop them, such as Java and Flash games.
Mobile game
With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may use unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information and camera devices to support augmented reality gameplay.
Cloud gaming
Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, mobile phone or even a dedicated hardware device, connected to a display and with good Internet connectivity to the hardware systems run by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers.
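The predictive methods used by streaming services vary and are largely proprietary; one well-known family of latency-masking techniques from networked games is client-side prediction with server reconciliation, sketched below under assumed names as one plausible way a thin client could hide round-trip delay. It is not a description of any specific cloud gaming platform:

```python
import collections

class PredictiveClient:
    """Minimal sketch of client-side input prediction with server reconciliation."""

    def __init__(self):
        self.position = 0.0
        self.pending = collections.deque()  # inputs sent but not yet acknowledged
        self.sequence = 0

    def apply_input(self, move):
        # Predict the result locally so the display responds immediately.
        self.sequence += 1
        self.position += move
        self.pending.append((self.sequence, move))
        return self.sequence  # this sequence number would be sent to the remote server

    def on_server_state(self, acked_sequence, server_position):
        # Adopt the authoritative server state, then replay unacknowledged inputs.
        while self.pending and self.pending[0][0] <= acked_sequence:
            self.pending.popleft()
        self.position = server_position
        for _, move in self.pending:
            self.position += move
```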
Virtual reality
Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse a player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands so as to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit.
Emulation
An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining the game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings.
Backward compatibility
Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically directly through hardware and built-in software within the platform. The PlayStation 2 popularized the trend of having the capability of playing past-generation games from the PlayStation 1 simply by inserting the original game media into the newer console, while Nintendo's Wii could play GameCube titles as well in the same manner.
Game media
Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading times and later updates.
Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these are often unofficial, developed by players through reverse engineering of the game, but other games provide official support for modding.
Input device
Video games can use several types of input devices to translate human actions into a game. Most common are the use of game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems alongside keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the need for a controller, and on other systems such as virtual reality, are used to enhance immersion into the game.
Display and output
By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements, text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays.
The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game.
Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game.
Classifications
Video games are frequently classified by a number of factors related to how one plays them.
Genre
A video game, like most other forms of media, may be categorized into genres. However, unlike film or television which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror.
Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as the first-person shooter and third-person shooter within the shooter game genre. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game.
Mode
A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time.
A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
Types
Most video games are intended for entertainment purposes. Different game types include:
Core games
Core or hard-core games refer to the typical perception of video games, developed for entertainment purposes. These games typically require a fair amount of time to learn and master, in contrast to casual games, and thus are most appealing to gamers rather than a broader audience. Most of the AAA video game industry is based around the delivery of core games.
Casual games
In contrast to core games, casual games are designed for ease of accessibility, simple to understand gameplay and quick to grasp rule sets, and are aimed at a mass-market audience. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often are from genres with low intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. More recently, starting in the late 2010s, hyper-casual games have emerged, which use even more simplistic rules for short but infinitely replayable games, such as Flappy Bird.
Educational games
Education software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games that tend to focus on the entertainment value and rote learning but are unlikely to engage in critical thinking, and educational video games that are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as those that feature open worlds or virtual sandboxes like Minecraft, or those that offer critical thinking skills through puzzle video games like SpaceChem.
Serious games
Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. Educational games are a form of serious games, but other types of games include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), simulator games that recreate real-world activities such as piloting aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra).
Art games
Although video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore through the game world and scenarios. Most art games are indie games in nature, designed based on personal experiences or stories through a single developer or small team. Examples of art games include Passage, Flower, and That Dragon, Cancer.
Content rating
Video games can be subject to national and international content rating requirements. Like with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it is represented, and sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of.
The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include:
Entertainment Software Rating Board (ESRB) that oversees games released in the United States. ESRB ratings are voluntary and rated along a scale of E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video game ratings in the U.S. subsequently led to the landmark Supreme Court case, Brown v. Entertainment Merchants Association in 2011, which ruled video games were a protected form of art, a key victory for the video game industry.
Pan European Game Information (PEGI) covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system rates content based on minimum recommended ages, which include 3+, 7+, 12+, 16+, and 18+.
Australian Classification Board (ACB) oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to classify a game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games.
Computer Entertainment Rating Organization (CERO) rates games for Japan. Their ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over).
Unterhaltungssoftware Selbstkontrolle (USK) rates games for Germany. Their ratings include 0, 6, 12, 16, and 18.
Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content rating systems across different regions, so that a publisher would only need to complete the content rating review for one provider and use the IARC process to affirm the content rating for all other regions.
Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus disallow the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional substitutes. This ruling was relaxed in 2018 to allow for such imagery for "social adequacy" purposes that applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.
Development
Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years, this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.
In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).
Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, which often work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates.
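At the core of most game engines is a game loop that repeatedly gathers input, advances the simulation (including any physics step), and renders a frame. The following engine-agnostic sketch uses a fixed simulation timestep and hypothetical callback names; it illustrates the general pattern rather than the API of any actual engine:

```python
import time

def run_game_loop(handle_input, update_simulation, render_frame, timestep=1 / 60):
    """Fixed-timestep game loop: poll input, update (e.g. physics), then render."""
    previous = time.monotonic()
    accumulator = 0.0
    running = True
    while running:
        now = time.monotonic()
        accumulator += now - previous
        previous = now

        running = handle_input()          # poll controllers, keyboard, etc.
        while accumulator >= timestep:    # advance the simulation in fixed steps
            update_simulation(timestep)   # game logic and physics integration
            accumulator -= timestep
        render_frame()                    # draw the current state to the display
```

Real engines layer rendering backends, asset pipelines, audio, and middleware on top of a loop of roughly this shape.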
With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games, and the release of unfinished products.
While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and are often experiments in gameplay and art style. Indie game development is aided by the larger availability of digital distribution, including the newer mobile gaming market, and readily-available and low-cost development tools for these platforms.
Game theory and studies
Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.
Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.
While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.
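The racing example can be made concrete with a toy simulation in which each car follows only a local rule (keep a safe gap to whatever is directly ahead), yet a queue forms behind the obstacle. The rule and parameters below are illustrative assumptions, not code from any actual game:

```python
def step(positions, obstacle, safe_gap=2.0, speed=1.0):
    """Advance each car one tick; cars only react to what is directly ahead."""
    new_positions = []
    for i, pos in enumerate(positions):
        ahead = obstacle if i == 0 else new_positions[i - 1]
        gap = ahead - pos
        # Local avoidance rule: keep a safe gap to the car (or obstacle) in front.
        advance = min(speed, max(0.0, gap - safe_gap))
        new_positions.append(pos + advance)
    return new_positions

# Cars approach a stationary obstacle; a queue (traffic jam) emerges even though
# no rule about "jams" was ever written.
cars = [0.0, -3.0, -6.0, -9.0]
for _ in range(20):
    cars = step(cars, obstacle=10.0)
print(cars)  # the cars bunch up behind the obstacle, spaced roughly safe_gap apart
```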
Intellectual property for video games
Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well.
Though local copyright regulations vary in the degree of protection, video games qualify as copyrighted audiovisual works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall under the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game.
Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooding of the arcade and dedicated home console markets around 1978. Cloning is also a major issue in countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court has enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets.
Industry
History
The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, with industry revenues dropping sharply between 1983 and 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today.
The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s this led to the industry centralizing around low-risk, triple-A games and studios with large development budgets. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, experimental independent game development as an alternative to triple-A games in the late 2000s, a sector which has continued to grow as a significant portion of the video game industry.
Industry roles
Video games have a large network effect that draws on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include:
Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games and will be involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performance. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. A range of other publisher-developer relationships exist between these points.
Distributors: Publishers often are able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space.
Retailers: Physical storefronts, which include large online retailers, department and electronic stores, and specialty video game stores, sell games, consoles, and other accessories to consumers. This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and the e-commerce revolution, retailers have been performing worse than in the past.
Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games. For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies overlap with component suppliers for consoles. A range of third-party manufacturers also exist to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices.
Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release.
Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have had a significant impact on game sales, and have turned to using influencers alongside traditional journalism as a means to build up attention to their game before release.
Esports: Esports are a major component of several multiplayer games, with numerous professional leagues established since the 2000s and large viewership numbers, particularly out of southeast Asia since the 2010s.
Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry such as E3.
Gamers: Proactive hobbyists who are players and consumers of video games. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or on user reviews and engage with them to work to improve their products in addition to other feedback from other parts of the industry. Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors.
Major regional markets
The industry itself grew out from both the United States and Japan in the 1970s and 1980s before drawing larger worldwide contributions. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field.
Game sales
According to the market research firm Newzoo, the global video game industry drew estimated revenues of over a hundred billion US dollars in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%.
Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPG games and real-time strategy games. Computer games are also popular in China.
Effects on society
Culture
Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.
Art
Since the mid-2000s there has been debate whether video games qualify as art, primarily on the grounds that the form's interactivity interferes with the artistic intent of the work and that games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016.
Video games often inspire sequels and other video games within the same franchise, but have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, showed signs of the film industry having found an approach to adapt video games for the large screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.
Since the 2000s, there has also been a growing appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully-scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together.
Further, video games can serve as a virtual environment under full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian.
Separately, video games are also frequently used as part of the promotion and marketing for other media, such as films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, an early trendsetting example being Batman: Arkham Asylum.
Beneficial uses
Besides their entertainment value, appropriately-designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking.
Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video gaming training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video games types.
Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The study of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. It found that those who played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds.
A study of gamers' attitudes towards gaming which was reported on in 2018 found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures".
Controversies
Video games have caused controversy since the 1970s. Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which some claimed the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can help reinforce the desire to continue playing through that compulsion loop and potentially lead to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep their products in check to avoid over-excessive violence, particularly for games aimed at younger children. The potential addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games.
Numerous other controversies around video games and their industry have arisen over the years. Among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat which led to the formation of the ESRB ratings system, numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007, the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009 which allowed the player to shoot a number of innocent non-player characters at an airport, and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery.
Collecting and preservation
Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collectors' items for the early days of the industry, with some rare publications having sold for large sums. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry.
There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including an exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong.
The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
See also
Lists of video games
List of accessories to video games by system
Outline of video games
Notes
References
Bibliography
Further reading
The Ultimate History of Video Games, Volume 2: Nintendo, Sony, Microsoft, and the Billion-Dollar Battle to Shape Modern Gaming by Steven L. Kent, Crown, 2021,
External links
Video games bibliography by the French video game research association Ludoscience
The Virtual Museum of Computing (VMoC) (archived 10 October 2014)
Games and sports introduced in 1947
Digital media
Articles containing video clips | Video game | [
"Technology"
] | 13,017 | [
"Multimedia",
"Digital media"
] |
5,371 | https://en.wikipedia.org/wiki/Concrete | Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures to a solid over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, but sometimes with other hydraulic (water-resistant) cements, such as a calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
Etymology
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
History
Ancient times
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400 to 1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
Mayan concrete at the ruins of Uxmal (AD 850–925) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction.
Classical era
The Romans used concrete extensively from 300 BC to AD 476. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete. However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time. The use of hot mixing and the presence of lime clasts are thought to give the concrete a self-healing ability, where cracks that form become filled with calcite that prevents the crack from spreading.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853.
The first concrete reinforced bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds, which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
Cement kilns are extremely large, complex, and inherently dusty industrial installations. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels. The five major compounds of calcium silicates and aluminates comprising Portland cement range from 5 to 50% by weight.
Curing
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. The hydration of cement involves many concurrent reactions. The process involves polymerization, the interlinking of the silicates and aluminate components as well as their bonding to sand and gravel particles to form a solid mass. One illustrative conversion is the hydration of tricalcium silicate:
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO·SiO2·H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO·2 SiO2·4 H2O (gel) + 3 Ca(OH)2 + heat
(The balanced equation is approximate, as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary.)
The hydration (curing) of cement is irreversible.
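As a rough numerical check on the balanced equation above, the chemically bound water can be estimated from molar masses. The short Python sketch below is illustrative only; atomic masses are rounded and the C-S-H composition is idealized, as noted.

```python
# Rough stoichiometric check of: 2 Ca3SiO5 + 7 H2O -> C-S-H gel + 3 Ca(OH)2
# Atomic masses are rounded; the C-S-H composition is idealized (see note above).
ATOMIC_MASS = {"Ca": 40.08, "Si": 28.09, "O": 16.00, "H": 1.008}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

m_c3s = molar_mass({"Ca": 3, "Si": 1, "O": 5})   # tricalcium silicate, Ca3SiO5
m_h2o = molar_mass({"H": 2, "O": 1})             # water

# 7 mol of water reacts with 2 mol of Ca3SiO5 in the balanced equation.
water_per_cement = (7 * m_h2o) / (2 * m_c3s)
print(f"Ca3SiO5 molar mass: {m_c3s:.1f} g/mol")
print(f"Chemically bound water: {water_per_cement:.2f} g per g of Ca3SiO5")
# Roughly 0.28 g/g, well below typical water-to-cement ratios used in practice,
# which also provide workability rather than only reaction water.
```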
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash, are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate that is all of nearly one size (a uniform grading) leaves the largest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
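One classical way of quantifying this effect, not discussed in the article itself, is the Fuller–Thompson target gradation P(d) = (d/d_max)^n with n ≈ 0.45; a combined grading close to this curve tends to minimize voids and hence the binder demand. The Python sketch below is illustrative, and the maximum aggregate size, sieve series and exponent are assumptions.

```python
# Illustrative only: Fuller-Thompson target gradation P(d) = (d / d_max) ** n.
# A mix whose combined grading approaches this curve tends to minimize voids,
# and therefore the amount of cement paste needed to fill them.
D_MAX = 20.0   # assumed maximum aggregate size in mm
N_EXP = 0.45   # commonly quoted exponent; an assumption, not from the article

def percent_passing(sieve_mm: float, d_max: float = D_MAX, n: float = N_EXP) -> float:
    """Target cumulative percentage passing a sieve of the given opening."""
    return 100.0 * (sieve_mm / d_max) ** n

for sieve in (0.25, 0.5, 1, 2, 4, 8, 16, 20):
    print(f"{sieve:>5} mm sieve: {percent_passing(sieve):5.1f} % passing")
```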
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See below.) The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5% (a simple numerical illustration of this tradeoff follows the list of admixtures below). If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. The reaction takes place when exposed to water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with a crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength. They increase the workability of the concrete and lower the water content needed by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical retarders include sugar, sodium gluconate, citric acid, and tartaric acid.
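As referenced in the entry on air-entraining agents above, the quoted rule of thumb (about a 5% loss of compressive strength per 1% of entrained air) can be applied as a simple estimate. The Python sketch below treats the rule linearly, which is a simplification for illustration rather than a design method.

```python
# Approximate effect of entrained air on compressive strength, using the
# rule of thumb quoted above of roughly a 5 % strength loss per 1 % of
# entrained air. The linear form below is a simplification for illustration.
LOSS_PER_PERCENT_AIR = 0.05  # 5 % loss per 1 % air (rule of thumb)

def strength_with_air(base_strength_mpa: float, air_percent: float) -> float:
    """Estimated compressive strength after entraining the given % of air."""
    factor = max(0.0, 1.0 - LOSS_PER_PERCENT_AIR * air_percent)
    return base_strength_mpa * factor

for air in (0, 2, 4, 6):
    print(f"{air} % air: {strength_with_air(30.0, air):.1f} MPa (from a 30.0 MPa base mix)")
```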
Mineral admixtures and blended cements
These are inorganic materials with pozzolanic or latent hydraulic properties. These very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are increasingly relevant for minimizing the impacts of cement use, which is notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes, the latter being relevant for the circular economy aspects of the construction industry, whose growing demand has ever greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production, it is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to the high tensile strength and high electrical conductivity.
Carbon products have been added to make concrete electrically conductive, for deicing purposes.
New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can reduce landfill waste and the amount of sand needed in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
Concrete plants come in two main types, ready-mix plants and central mix plants. A ready-mix plant blends all of the solid ingredients, while a central mix does the same but adds water. A central-mix plant offers more precise control of the concrete quality. Central mix plants must be close to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large hoppers for storage of various ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms. The forms are containers that define the desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
Interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given as ratios by volume of cement, sand and coarse aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cure strength.
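To make the idea of a nominal volumetric mix more tangible, the sketch below converts the 1:2:4 cement:sand:aggregate proportions mentioned above into approximate batch masses for one cubic metre. The bulk densities and the dry-volume factor are illustrative assumptions, not values taken from any standard.

```python
# Illustrative conversion of a nominal 1:2:4 (cement : sand : coarse aggregate)
# volumetric mix into approximate batch masses for one cubic metre of concrete.
# Bulk densities and the dry-volume factor are assumptions for illustration only.
BULK_DENSITY = {"cement": 1440, "sand": 1600, "aggregate": 1500}  # kg/m3 (assumed)
DRY_VOLUME_FACTOR = 1.54  # assumed ratio of loose dry volume to compacted volume

def nominal_batch(parts: dict, concrete_volume_m3: float = 1.0) -> dict:
    """Approximate mass of each ingredient for the requested concrete volume."""
    total_parts = sum(parts.values())
    dry_volume = concrete_volume_m3 * DRY_VOLUME_FACTOR
    return {
        name: dry_volume * (p / total_parts) * BULK_DENSITY[name]
        for name, p in parts.items()
    }

mix_1_2_4 = {"cement": 1, "sand": 2, "aggregate": 4}
for name, mass in nominal_batch(mix_1_2_4).items():
    print(f"{name:>9}: {mass:6.0f} kg per m3 of concrete (approximate)")
```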
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete.
Research has shown that the mixing of cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water-to-cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.
Sample analysis—workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (300 mm). A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.
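A measured slump is often reported against consistence classes. The Python sketch below classifies a slump value using boundaries commonly quoted for the EN 206 slump classes; these boundaries are included as an assumption for illustration and should be checked against the standard itself.

```python
# Classify a measured slump (in mm) into consistence classes. The class
# boundaries below follow commonly quoted EN 206 slump classes, but are
# included here as an illustrative assumption rather than an authoritative copy.
SLUMP_CLASSES = [            # (class name, lower bound mm, upper bound mm)
    ("S1", 10, 40),
    ("S2", 50, 90),
    ("S3", 100, 150),
    ("S4", 160, 210),
    ("S5", 220, float("inf")),
]

def slump_class(slump_mm: float) -> str:
    """Return the consistence class for a slump value, or a note if between classes."""
    for name, lo, hi in SLUMP_CLASSES:
        if lo <= slump_mm <= hi:
            return name
    return "outside / between tabulated classes"

for slump in (20, 75, 120, 180, 230):
    print(f"slump {slump:3d} mm -> {slump_class(slump)}")
```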
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when it has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete changes (increases) for up to three years; it depends on the cross-sectional dimensions of the elements and the conditions under which the structure is used. The addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
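The development of strength with age can be illustrated with the relation used in Eurocode 2 and the CEB-FIP Model Code, in which the fraction of the 28-day strength reached at age t days is exp(s·(1 − √(28/t))). This model is borrowed here purely for illustration and is not taken from this article; the cement-class coefficient s = 0.25 is an assumption.

```python
import math

# Illustrative strength-development curve, using the relation from
# Eurocode 2 / CEB-FIP Model Code: beta_cc(t) = exp(s * (1 - sqrt(28 / t))),
# where t is the age in days and s depends on the cement class.
# s = 0.25 (normal-hardening cement) is an assumption for this sketch.
S_CEMENT = 0.25

def strength_fraction(age_days: float, s: float = S_CEMENT) -> float:
    """Fraction of the 28-day compressive strength reached at the given age."""
    return math.exp(s * (1.0 - math.sqrt(28.0 / age_days)))

for t in (3, 7, 14, 28, 90, 365):
    print(f"age {t:3d} days: {100 * strength_fraction(t):5.1f} % of 28-day strength")
```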
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause spalling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. The adjacent picture shows one of many ways to achieve this, ponding—submerging setting concrete in water and wrapping in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation in the surface of cracks, adding compression strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The polymer binder is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for specialized applications, such as drains.
Plant fibers
Plant fibers and particles can be used in a concrete mix or as a reinforcement. These materials can increase ductility, but the lignocellulosic particles hydrolyze during concrete curing as a result of the alkaline environment and elevated temperatures. This process, which is difficult to measure, can affect the properties of the resulting concrete.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve resistance to sulfate, chloride and alkali-silica reaction through pore refinement. They are also generally cost-effective in comparison to other aggregates, suitable for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remains one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light is a form of polymer modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Recycled Aggregate Concrete (RAC)
Recycled aggregate concretes are standard concrete mixes with the addition or substitution of natural aggregates with recycled aggregates sourced from construction and demolition wastes, disused pre-cast concretes or masonry. In most cases, recycled aggregate concrete results in higher water absorption levels by capillary action and permeation, which are the prominent determiners of the strength and durability of the resulting concrete. The increase in water absorption levels is mainly caused by the porous adhered mortar that exists in the recycled aggregates. Accordingly, recycled concrete aggregates that have been washed to reduce the quantity of mortar adhered to aggregates show lower water absorption levels compared to untreated recycled aggregates.
The quality of the recycled aggregate concrete is determined by several factors, including the size, the number of replacement cycles, and the moisture levels of the recycled aggregates. When the recycled concrete aggregates are crushed into coarser fractions, the mixed concrete shows better permeability levels, resulting in an overall increase in strength. In contrast, recycled masonry aggregates provide better qualities when crushed into finer fractions. With each generation of recycled concrete, the resulting compressive strength decreases.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
The strength of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is used, while higher-strength mixes are readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects, and still higher strengths are specified for particular building elements. For example, the lower floor columns of high-rise concrete buildings may use very high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads; very high strengths have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 are produced by the cement manufacturing process, arising from (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. Energy requirements for transportation of ready-mix concrete are also low, because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.
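Combining the embodied-energy range quoted above with a typical density for normal-weight concrete (about 2,400 kg/m3, a value assumed here for illustration) gives a rough per-cubic-metre figure, as in the sketch below.

```python
# Rough embodied-energy estimate per cubic metre of concrete, combining the
# 1 to 1.5 MJ/kg range quoted above with an assumed typical density of
# normal-weight concrete (about 2400 kg/m3).
EMBODIED_ENERGY_MJ_PER_KG = (1.0, 1.5)   # range from the text above
DENSITY_KG_PER_M3 = 2400                 # assumed typical value

for e in EMBODIED_ENERGY_MJ_PER_KG:
    total_gj = e * DENSITY_KG_PER_M3 / 1000.0
    print(f"{e} MJ/kg -> about {total_gj:.1f} GJ per cubic metre of concrete")
```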
Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey).
Construction
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced
The use of reinforcement in the form of iron was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but weaker in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion, which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast
Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of Laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with National Standards.
Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete.
Many highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used forms of concrete in modern construction. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~ 4.5 °C), and
The temperature stays below 50 °F (10 °C) for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
Concrete should reach a minimum strength before being exposed to extreme cold; CSA A23.1 specifies a compressive strength of 7.0 MPa to be considered safe for exposure to freezing.
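The ACI and CSA criteria above can be read as simple boolean checks over recent and forecast temperatures. The Python sketch below encodes them as described in this section; the treatment of the CSA forecast clause as a single forecast minimum is a simplification for illustration.

```python
# Boolean checks for "cold weather placement" using the ACI 306 and CSA A23.1
# criteria as described above. Temperatures are in deg C; representing the CSA
# "may fall below 5 degC within 24 hours" clause as a single forecast minimum
# is a simplification for illustration.
from statistics import mean

def aci_cold_weather(daily_hourly_temps_c: list[list[float]]) -> bool:
    """ACI 306: for more than three successive days, the daily average is
    below about 4.5 degC and the temperature stays below 10 degC for more
    than half of each 24-hour period."""
    if len(daily_hourly_temps_c) <= 3:
        return False
    for day in daily_hourly_temps_c:
        hours_below_10 = sum(1 for t in day if t < 10.0)
        if mean(day) >= 4.5 or hours_below_10 <= len(day) / 2:
            return False
    return True

def csa_cold_weather(current_temp_c: float, forecast_min_24h_c: float) -> bool:
    """CSA A23.1: air temperature is at or below 5 degC, or is likely to
    fall below 5 degC within 24 hours of placing."""
    return current_temp_c <= 5.0 or forecast_min_24h_c < 5.0

cold_day = [2.0] * 18 + [8.0] * 6              # mostly below 10 degC, mean 3.5 degC
print(aci_cold_weather([cold_day] * 4))         # True: four successive cold days
print(csa_cold_weather(current_temp_c=6.0, forecast_min_24h_c=3.0))  # True
```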
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
A tremie is a vertical, or near-vertical, pipe with a hopper at the top used to pour concrete underwater in a way that avoids washout of cement from the mix due to turbulent water contact with the concrete while it is flowing. This produces a more reliable strength of the product. Toggle bag placement is generally used for placing small quantities and for repairs: wet concrete is loaded into a reusable canvas bag and squeezed out at the required place by the diver. Care must be taken to avoid washout of the cement and fines.
Bagwork is the manual placement by divers of woven cloth bags containing dry mix, followed by piercing the bags with steel rebar pins to tie the bags together after every two or three layers, and to create a path for hydration to induce curing, which can typically take about 6 to 12 hours for initial hardening and full hardening by the next day. Bagwork concrete will generally reach full strength within 28 days. Each bag must be pierced by at least one, and preferably up to four, pins. Bagwork is a simple and convenient method of underwater concrete placement which does not require pumps, plant, or formwork, and which can minimise environmental effects from dispersing cement in the water. Prefilled bags are available, which are sealed to prevent premature hydration if stored in suitable dry conditions. The bags may be biodegradable.
An alternative method of forming a concrete mass underwater is to fill the forms with coarse aggregate and then completely fill the voids from the bottom by displacing the water with pumped grout.
Roads
Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive in initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Because rainwater no longer has to be discarded through drains, less electricity is needed (more pumping is otherwise needed in the water-distribution system), and no rainwater gets polluted, as it no longer mixes with polluted water; rather, it is immediately absorbed by the ground.
Tube forest
Cement molded into a forest of tubular structures can be 5.6 times more resistant to cracking/failure than standard concrete. The approach mimics mammalian cortical bone that features elliptical, hollow osteons suspended in an organic matrix, connected by relatively weak "cement lines". Cement lines provide a preferable in-plane crack path. This design fails via a "stepwise toughening mechanism". Cracks are contained within the tube, reducing spreading, by dissipating energy at each tube/step.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of breathable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime. That deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Cement
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
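The quoted range of roughly 100–200 kg of CO2 per tonne of concrete can be reproduced from the cement content. In the sketch below, the assumed cement mass fractions (on the order of 10–15%) are illustrative, combined with the approximately one tonne of CO2 per tonne of cement stated above.

```python
# Rough check of the quoted 100-200 kg CO2 per tonne of concrete, combining
# the "about one tonne of CO2 per tonne of cement" figure above with assumed
# cement contents (typical concrete is on the order of 10-15 % cement by mass).
CO2_PER_TONNE_CEMENT_KG = 1000.0   # from the text above (order of magnitude)

def co2_per_tonne_concrete(cement_mass_fraction: float) -> float:
    """Estimated kg of CO2 per tonne of concrete for a given cement fraction."""
    return cement_mass_fraction * CO2_PER_TONNE_CEMENT_KG

for fraction in (0.10, 0.125, 0.15):
    print(f"{fraction:.1%} cement: ~{co2_per_tonne_concrete(fraction):.0f} kg CO2 per tonne of concrete")
```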
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Climate change mitigation
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often replacement of some clinker for large amounts of slag or fly ash was investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.
The embodied carbon of a precast concrete facade can be reduced by 50% when using fiber-reinforced high-performance concrete in place of typical reinforced concrete cladding. Studies have been conducted on the commercialization of low-carbon concretes. The life cycle assessment (LCA) of low-carbon concrete has been investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. The global warming potential (GWP) decreased by 1.1 kg CO2 eq/m3 for GGBS and by 17.3 kg CO2 eq/m3 for FA for each 10% increase in the mineral admixture replacement ratio. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.
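The reported sensitivities can be applied to estimate the effect of a given replacement ratio. The sketch below extrapolates them linearly, which is an assumption of this illustration rather than a claim of the cited study.

```python
# Linear extrapolation of the reported GWP reductions: 1.1 kg CO2 eq/m3 per
# 10 % GGBS replacement and 17.3 kg CO2 eq/m3 per 10 % fly-ash replacement.
# Treating these sensitivities as linear over the whole range is an assumption.
GWP_REDUCTION_PER_10PCT = {"GGBS": 1.1, "fly ash": 17.3}  # kg CO2 eq/m3

def gwp_reduction(material: str, replacement_percent: float) -> float:
    """Estimated reduction in GWP (kg CO2 eq/m3) for a replacement ratio in %."""
    return GWP_REDUCTION_PER_10PCT[material] * (replacement_percent / 10.0)

for material in GWP_REDUCTION_PER_10PCT:
    for pct in (20, 40):
        print(f"{material}, {pct} % replacement: "
              f"-{gwp_reduction(material, pct):.1f} kg CO2 eq/m3")
```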
Climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
End-of-life: degradation and waste
Recycling
There have been concerns about the recycling of painted concrete due to possible lead content. Studies have indicated that recycled concrete exhibits lower strength and durability compared to concrete produced using natural aggregates. This deficiency can be addressed by incorporating supplementary materials such as fly ash into the mixture.
World records
The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of .
The Polavaram dam works in Andhra Pradesh on 6 January 2019 entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm that required the site to be covered with tarpaulins so work could continue, was achieved in 1992 by the joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered approximately below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry.
See also
Eurocode 2: Design of concrete structures
References
Further reading
External links
Advantage and Disadvantage of Concrete
Release of ultrafine particles from three simulated building processes
Concrete: The Quest for Greener Alternatives
Building materials
Composite materials
Heterogeneous chemical mixtures
Masonry
Pavements
Roofing materials
Sculpture materials
Articles containing video clips | Concrete | [
"Physics",
"Chemistry",
"Engineering"
] | 13,053 | [
"Structural engineering",
"Masonry",
"Building engineering",
"Composite materials",
"Architecture",
"Construction",
"Materials",
"Chemical mixtures",
"Heterogeneous chemical mixtures",
"Concrete",
"Matter",
"Building materials"
] |
5,373 | https://en.wikipedia.org/wiki/Coitus%20interruptus | Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is an act of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina prior to ejaculation so that the ejaculate (semen) may be directed away in an effort to avoid insemination.
This method was used by an estimated 38 million couples worldwide in 1991. Coitus interruptus does not protect against sexually transmitted infections (STIs).
History
Perhaps the oldest description of the use of the withdrawal method to avoid pregnancy is the story of Onan in the Torah and the Bible. This text is believed to have been written over 2,500 years ago. Societies in the ancient civilizations of Greece and Rome preferred small families and are known to have practiced a variety of birth control methods. There are references that have led historians to believe withdrawal was sometimes used as birth control. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets).
After the decline of the Roman Empire in the 5th century AD, contraceptive practices fell out of use in Europe; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If withdrawal was used during the Roman Empire, knowledge of the art may have been lost during its decline.
From the 18th century until the development of modern methods, withdrawal was one of the most popular methods of birth-control in Europe, North America, and elsewhere.
Effects
As with many methods of birth control, reliable effect is achieved only by correct and consistent use. Observed failure rates of withdrawal vary depending on the population being studied: American studies have found actual failure rates of 15–28% per year. One US study, based on self-reported data from the 2006–2010 cycle of the National Survey of Family Growth, found significant differences in failure rate based on parity status. Women with no previous births had a 12-month failure rate of only 8.4%, which increased to 20.4% for those with one prior birth and to 27.7% for those with two or more.
An analysis of Demographic and Health Surveys in 43 developing countries between 1990 and 2013 found a median 12-month failure rate across subregions of 13.4%, with a range of 7.8–17.1%. Individual countries within the subregions were even more varied. A large scale study of women in England and Scotland during 1968–1974 to determine the efficacy of various contraceptive methods found a failure rate of 6.7 per 100 woman-years of use. This was a “typical use” failure rate, including user failure to use the method correctly. In comparison, the combined oral contraceptive pill has an actual use failure rate of 2–8%, while intrauterine devices (IUDs) have an actual use failure rate of 0.1–0.8%. Condoms have an actual use failure rate of 10–18%. However, some authors suggest that actual effectiveness of withdrawal could be similar to the effectiveness of condoms; this area needs further research. (See Comparison of birth control methods.)
For couples that use coitus interruptus consistently and correctly at every act of intercourse, the failure rate is 4% per year. This rate is derived from an educated guess based on a modest chance of sperm in the pre-ejaculate. In comparison, the pill has a perfect-use failure rate of 0.3%, IUDs a rate of 0.1–0.6%, and internal condoms a rate of 2%.
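To see what an annual failure rate implies over longer use, the following sketch compounds a constant per-year rate, assuming independence between years. This is a simplification (the quoted rates are first-year rates and real-world rates change with experience); the 4% and 22% inputs are merely representative of the perfect-use and typical-use ranges given above.

```python
# Cumulative probability of at least one pregnancy over several years,
# assuming a constant annual failure rate and independent years.
# A simplification for illustration; the quoted rates are first-year rates.

def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one failure after `years` years of use."""
    return 1.0 - (1.0 - annual_rate) ** years

if __name__ == "__main__":
    # 4% perfect use; 22% chosen from the 15-28% typical-use range above.
    for label, rate in (("perfect use", 0.04), ("typical use", 0.22)):
        for years in (1, 5, 10):
            print(f"{label}, {years} yr: {cumulative_failure(rate, years):.0%}")
```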
It has been suggested that the pre-ejaculate ("Cowper's fluid") emitted by the penis prior to ejaculation may contain spermatozoa (sperm cells), which would compromise the effectiveness of the method. However, several small studies have failed to find any viable sperm in the fluid. While no large conclusive studies have been done, it is believed by some that the cause of method (correct-use) failure is the pre-ejaculate fluid picking up sperm from a previous ejaculation. For this reason, it is recommended that the male partner urinate between ejaculations, to clear the urethra of sperm, and wash any ejaculate from objects that might come near the woman's vulva (such as hands and penis).
However, recent research suggests that this might not be accurate. A contrary, yet non-generalizable study that found mixed evidence, including individual cases of a high sperm concentration, was published in March 2011. A noted limitation to these previous studies' findings is that pre-ejaculate samples were analyzed after the critical two-minute point. That is, looking for motile sperm in small amounts of pre-ejaculate via microscope after two minutes – when the sample has most likely dried – makes examination and evaluation "extremely difficult". Thus, in March 2011 a team of researchers assembled 27 male volunteers and analyzed their pre-ejaculate samples within two minutes after producing them. The researchers found that 11 of the 27 men (41%) produced pre-ejaculatory samples that contained sperm, and 10 of these samples (37%) contained a "fair amount" of motile sperm (in other words, as few as 1 million to as many as 35 million). This study therefore recommends, in order to minimize unintended pregnancy and disease transmission, the use of condoms from the first moment of genital contact. As a point of reference, a study showed that, of couples who conceived within a year of trying, only 2.5% included a male partner with a total sperm count (per ejaculate) of 23 million sperm or less. However, across a wide range of observed values, total sperm count (as with other identified semen and sperm characteristics) has weak power to predict which couples are at risk of pregnancy. Regardless, this study introduced the concept that some men may consistently have sperm in their pre-ejaculate, due to a "leakage," while others may not.
Similarly, another robust study performed in 2016 found motile sperm in the pre-ejaculate of 16.7% (7/42) of healthy men. What is more, this study attempted to exclude contamination of sperm from the ejaculate by drying the pre-ejaculate specimens to reveal a fern-like pattern characteristic of true pre-ejaculate. All pre-ejaculate specimens were examined within an hour of production and then dried; all were found to be true pre-ejaculate. It is widely believed that urinating after an ejaculation will flush the urethra of remaining sperm. However, some of the subjects in the March 2011 study who produced sperm in their pre-ejaculate did urinate (sometimes more than once) before producing their sample. Therefore, some males can release pre-ejaculate fluid containing sperm without a previous ejaculation.
Advantages
The advantage of coitus interruptus is that it can be used by people who have objections to, or do not have access to, other forms of contraception. Some people prefer it so they can avoid possible adverse effects of hormonal contraceptives, or so that they can have a full experience and be able to "feel" their partner. Other reasons for the popularity of this method are that it is anecdotally said to increase male sexual deftness, has no direct monetary cost, requires no artificial devices, has no physical side effects, can be practiced without a prescription or medical consultation, and provides no barriers to stimulation.
Disadvantages
Compared to the other common reversible methods of contraception such as IUDs, hormonal contraceptives, and male condoms, coitus interruptus is less effective at preventing pregnancy. As a result, it is also less cost-effective than many more effective methods: although the method itself has no direct cost, users have a greater chance of incurring the risks and expenses of either child-birth or abortion. Only models that assume all couples practice perfect use of the method find cost savings associated with the choice of withdrawal as a birth control method.
The method is largely ineffective in the prevention of sexually transmitted infections (STIs), like HIV, since pre-ejaculate may carry viral particles or bacteria which may infect the partner if this fluid comes in contact with mucous membranes. However, a reduction in the volume of bodily fluids exchanged during intercourse may reduce the likelihood of disease transmission compared to using no method due to the smaller number of pathogens present.
Prevalence
Based on data from surveys conducted during the late 1990s, 3% of women of childbearing age worldwide rely on withdrawal as their primary method of contraception. Regional popularity of the method varies widely, from a low of 1% in Africa to 16% in Western Asia.
In the United States, according to the National Survey of Family Growth (NSFG) in 2014, 8.1% of reproductive-aged women reported using withdrawal as a primary contraceptive method. This was a significant increase from 2012, when 4.8% of women reported the use of withdrawal as their most effective method. When withdrawal is used in addition to, or in rotation with, another contraceptive method, the figures rise further: in 2002, 5% of women reported withdrawal as their sole method but 11% reported any use of withdrawal, and among adolescents in 2006–2008 the corresponding figures were 7.1% and 14.6%.
When women were asked if withdrawal was used at least once in the past month, reported use of withdrawal increased from 13% as a sole method to 33% for any use in the past month. These increases are even more pronounced for adolescents 15 to 19 years old and young women 20 to 24 years old. Similarly, the NSFG reports that 9.8% of unmarried men who had had sexual intercourse in the last three months in 2002 used withdrawal, which increased to 14.5% in 2006–2010 and to 18.8% in 2011–2015. The use of withdrawal varied by the unmarried man's age and cohabiting status, but not by ethnicity or race. The use of withdrawal decreased significantly with increasing age, ranging from 26.2% among men aged 15–19 to 12% among men aged 35–44. The use of withdrawal was significantly higher for never-married men (23.0%) compared with formerly married (16.3%) and cohabiting (13.0%) men.
For 1998, about 18% of married men in Turkey reported using withdrawal as a contraceptive method.
See also
Coitus reservatus
Coitus saxonicus
Masturbation
References
External links
Contraception and abortion in Islam
Withdrawal
Methods of birth control
Contraception for males
Latin words and phrases | Coitus interruptus | [
"Biology"
] | 2,277 | [
"Methods of birth control",
"Medical technology"
] |
5,376 | https://en.wikipedia.org/wiki/Cladistics | Cladistics ( ; from Ancient Greek 'branch') is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to precisely delineate, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) To keep only valid clades, upon finding that the group is paraphyletic in this way, either the excluded groups should be included in the clade, or the group should be abolished.
Branches down to the divergence to the next significant (e.g. extant) sister group are considered stem groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings has been found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); but the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
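For readers unfamiliar with the parsimony criterion mentioned above, the sketch below scores one character on a fixed tree using the Fitch algorithm, which counts the minimum number of state changes the tree requires. The taxa, the character states and the tree shape are hypothetical, and real analyses score many characters over many candidate trees.

```python
# Minimal Fitch parsimony scoring for a single character on a fixed,
# fully bifurcating tree. Taxa, states and tree shape are hypothetical.

def fitch(tree, states):
    """Return (state_set, changes) for a nested-tuple tree.

    `tree` is either a leaf name (str) or a pair (left, right);
    `states` maps each leaf name to its character state.
    """
    if isinstance(tree, str):                       # leaf node
        return {states[tree]}, 0
    left_set, left_cost = fitch(tree[0], states)
    right_set, right_cost = fitch(tree[1], states)
    common = left_set & right_set
    if common:                                      # no extra change needed
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

if __name__ == "__main__":
    # Hypothetical cladogram ((A,B),(C,D)) and one binary character.
    tree = (("A", "B"), ("C", "D"))
    states = {"A": "0", "B": "1", "C": "1", "D": "1"}
    _, changes = fitch(tree, states)
    print("minimum number of state changes:", changes)  # prints 1
```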
Until recently, for example, cladograms like the following have generally been accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds:
If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds, at the branch near the lived earlier than the last common ancestor of lizards and birds, near the . Most molecular evidence, however, produces cladograms more like this:
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct.
The cladogram to the right represents the current universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
Lemurs and tarsiers may have looked closely related to humans, in the sense of being close on the evolutionary tree to humans. However, from the perspective of a tarsier, humans and lemurs would have looked close, in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups.
Terminology for character states
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds.
An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours.
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established.
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below.
Criticism
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
Issues
Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
Extinction status
An otherwise extinct group with any extant descendants, is not considered (literally) extinct, and for instance does not have a date of extinction.
Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually, and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two. Only then one can theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take a much more extended time than one is usually aware of. In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity. A more detailed account will give details about fractions of introgressions between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.
Horizontal gene transfer
Horizontal gene transfer is the movement of genetic information between different organisms, which can have immediate or delayed effects on the recipient host. There are several processes in nature which can cause horizontal gene transfer. This typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves by determining the phylogeny of the individual genes using cladistics.
Naming stability
If mutual relationships are unclear, there are many possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged within them. Naming changes are the direct result of changes in the recognition of mutual relationships, which is often still in flux, especially for extinct species. Hanging on to older naming and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. E.g. Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus, Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which then may come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and level. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may only have lived for a short time does not affect that assessment in cladistics.
In disciplines other than biology
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale studies use cladistic methods to reconstruct the protoversion of many myths. Mythological phylogenies constructed with mythemes clearly support low horizontal transmissions (borrowings), historical (sometimes Palaeolithic) diffusions and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification.
See also
Bioinformatics
Biomathematics
Coalescent theory
Common descent
Glossary of scientific naming
Language family
Patrocladogram
Phylogenetic network
Scientific classification
Stratocladistics
Subclade
Systematics
Three-taxon analysis
Tree model
Tree structure
Notes and references
Bibliography
Available free online at Gallica (No direct URL). This is the paper credited by for the first use of the term 'clade'.
responding to .
Translated from manuscript in German eventually published in 1982 (Phylogenetische Systematik, Verlag Paul Parey, Berlin).
d'Huy, Julien (2012b), "Le motif de Pygmalion : origine afrasienne et diffusion en Afrique". Sahara, 23: 49-59 .
d'Huy, Julien (2013a), "Polyphemus (Aa. Th. 1137)." "A phylogenetic reconstruction of a prehistoric tale". Nouvelle Mythologie Comparée / New Comparative Mythology 1,
d'Huy, Julien (2013c) "Les mythes évolueraient par ponctuations". Mythologie française, 252, 2013c: 8–12.
d'Huy, Julien (2013d) "A Cosmic Hunt in the Berber sky : a phylogenetic reconstruction of Palaeolithic mythology". Les Cahiers de l'AARS, 15, 2013d: 93–106.
Reissued 1997 in paperback. Includes a reprint of Mayr's 1974 anti-cladistics paper at pp. 433–476, "Cladistic analysis or cladistic classification." This is the paper to which is a response.
.
Tehrani, Jamshid J., 2013, "The Phylogeny of Little Red Riding Hood", PLOS ONE, 13 November.
External links
OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design)
Willi Hennig Society
Cladistics (scholarly journal of the Willi Hennig Society)
Phylogenetics
Evolutionary biology
Zoology
Philosophy of biology | Cladistics | [
"Biology"
] | 4,553 | [
"Evolutionary biology",
"Taxonomy (biology)",
"Bioinformatics",
"Zoology",
"Phylogenetics"
] |
5,377 | https://en.wikipedia.org/wiki/Calendar | A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.
Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.
Etymology
The term calendar is taken from , the term for the first day of the month in the Roman calendar, related to the verb 'to call out', referring to the "calling" of the new moon when it was first seen. Latin meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as and from there in Middle English as by the 13th century (the spelling calendar is early modern).
History
The course of the Sun and the Moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year.
The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.
During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.
A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar.
A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.
Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar.
The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.
The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke–Henry Permanent Calendar. Such ideas are promoted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.
Systems
A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years.
The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction.
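As a small illustration of such a reference-date system (a sketch only, using Python's standard library), Unix time reduces date arithmetic to integer addition and subtraction:

```python
# Date arithmetic in a pure day-count system is just addition/subtraction.
# Unix time counts seconds since the reference date 1970-01-01 00:00 UTC.

from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

moment = datetime(2000, 1, 1, tzinfo=timezone.utc)
unix_seconds = int((moment - EPOCH).total_seconds())
print(unix_seconds)            # 946684800

# "90 days later" is a single addition on the count:
later = EPOCH + timedelta(seconds=unix_seconds + 90 * 86400)
print(later.date())            # 2000-03-31
```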
Other calendars have one (or multiple) larger units of time.
Calendars that contain one level of cycles:
week and weekday – this system (without year, the week number keeps on increasing) is not very common
year and ordinal date within the year, e.g., the ISO 8601 ordinal date system
Calendars with two levels of cycles:
year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar
year, week, and weekday – e.g., the ISO week date
Cycles can be synchronized with periodic phenomena:
Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar.
Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar.
Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar.
The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month).
Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements.
Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.
Solar
Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.
The Egyptians appear to have been the first to develop a solar calendar, using as a fixed point the annual sunrise reappearance of the Dog Star—Sirius, or Sothis—in the eastern sky, which coincided with the annual flooding of the Nile River. They built a calendar with 365 days, divided into 12 months of 30 days each, with 5 extra days at the end of the year. However, they did not include the extra bit of time in each year, and this caused their calendar to slowly become inaccurate.
Lunar
Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar.
Alexander Marshack, in a controversial reading, believed that marks on a bone baton () represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.
Lunisolar
A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendar are Hindu calendar and Buddhist calendar that are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.
Subdivisions
Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.
Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length.
Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.
Other types
Arithmetical and astronomical
An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult.
An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar.
Other variants
The early Roman calendar, created during the reign of Romulus, lumped the 61 days of the winter period together as simply "winter". Over time, this period became January and February; through further changes over time (including the creation of the Julian calendar) this calendar became the modern Gregorian calendar, introduced in 1582.
Usage
The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season.
Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.
Gregorian
The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. The widely used solar aspect is a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days).
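A short sketch of the leap-day rule described above follows; the function below is the standard Gregorian rule, and the 400-year count reproduces the 365.2425-day average year.

```python
# Gregorian rule: every 4th year is a leap year, except century years
# not divisible by 400 -- 97 leap days per 400 years.

def is_gregorian_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_gregorian_leap(y) for y in range(1, 401))
print(leap_days)              # 97
print(365 + leap_days / 400)  # 365.2425
```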
The Gregorian calendar was introduced in 1582 as a refinement to the Julian calendar, that had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923.
The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).
Religious
The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days.
While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes.
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season.
The Eastern Orthodox Church uses two liturgical calendars: the Julian calendar (often called the Old Calendar) and the Revised Julian calendar (often called the New Calendar). The Revised Julian calendar is nearly the same as the Gregorian calendar, differing only in which years divisible by 100 are leap years: those leaving a remainder of 200 or 600 when divided by 900, e.g. 2000 and 2400, which are also leap years in the Gregorian calendar.
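The remainder rule above can be written out directly; the sketch below compares it with the Gregorian rule and shows that the first century year on which the two calendars disagree is 2800.

```python
# Revised Julian (New Calendar) rule: century years are leap years only
# when they leave a remainder of 200 or 600 on division by 900.

def is_revised_julian_leap(year: int) -> bool:
    if year % 100 == 0:
        return year % 900 in (200, 600)
    return year % 4 == 0

def is_gregorian_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (2000, 2100, 2400, 2800):
    print(y, is_revised_julian_leap(y), is_gregorian_leap(y))
# 2000 and 2400 agree (leap); 2800 is a leap year in the Gregorian
# calendar but not in the Revised Julian calendar.
```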
The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years.
Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states.
The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar.
Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).
The Hebrew calendar is used by Jews worldwide for religious and cultural affairs. It also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques).
Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí Calendar, also known as the Badi Calendar was first established by the Bab in the Kitab-i-Asma. The Baháʼí Calendar is also purely a solar calendar and comprises 19 months each having nineteen days.
National
The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes.
The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.
Fiscal
A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival.
In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar.
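Python's standard library exposes the ISO week date directly, which can be used to illustrate the rule that week 1 is the week containing 4 January (a sketch only, not drawn from any cited standard text):

```python
# ISO week date: weeks run Monday-Sunday and week 1 is the week
# containing 4 January. Python's date.isocalendar() implements this.

from datetime import date

for d in (date(2021, 1, 1), date(2021, 1, 4)):
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, "-> ISO year", iso_year, "week", iso_week, "weekday", iso_weekday)

# 2021-01-01 (a Friday) belongs to week 53 of ISO year 2020, because
# week 1 of 2021 is the week containing Monday, 4 January 2021.
```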
Formats
The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc.
In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word.
In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain.
It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary.
When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.
Software
Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list.
Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).
See also
General Roman Calendar
List of calendars
Advent calendar
Calendar reform
Calendrical calculation
Docket (court)
History of calendars
Horology
List of international common standards
List of unofficial observances by date
Real-time clock (RTC), which underlies the Calendar software on modern computers.
Unit of time
References
Citations
Sources
Further reading
External links
Calendar converter, including all major civil, religious and technical calendars.
Units of time | Calendar | [
"Physics",
"Mathematics"
] | 4,303 | [
"Calendars",
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
5,378 | https://en.wikipedia.org/wiki/Physical%20cosmology | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began in 1915 with the development of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
Subject history
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed in order to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance from Earth. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
Energy of the cosmos
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies per nucleon. The net process results in a later energy release, that is, one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy.
Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
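These different dilution rates can be summarized by how each energy density scales with the cosmic scale factor a(t); the following are the standard scaling relations, stated here for reference:

\[
\rho_{\text{matter}} \propto a^{-3}, \qquad \rho_{\text{radiation}} \propto a^{-4}, \qquad \rho_{\Lambda} = \text{constant},
\]

so radiation dominates at early times, matter at intermediate times, and the cosmological constant at late times.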
History of the universe
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
Equations of motion
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
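For reference, the expansion in such a model is governed by the Friedmann equation for the scale factor a(t); in its standard form, with energy density ρ, spatial curvature k, and cosmological constant Λ,

\[
H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\]

where the Λ term eventually dominates as ρ is diluted, producing the accelerated expansion described above.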
Particle physics in cosmology
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale is roughly equal to the age of the universe at each point in time.
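As a rough worked example of this expansion timescale, the following sketch computes 1/H for a representative present-day value of the Hubble constant (the value 67.7 km/s/Mpc is an assumed illustrative figure, not a number quoted in this article):

    # Rough estimate of the expansion timescale 1/H for an assumed present-day H0.
    KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
    SECONDS_PER_YEAR = 3.156e7

    H0_km_s_Mpc = 67.7           # assumed representative value of the Hubble constant
    H0_per_second = H0_km_s_Mpc / KM_PER_MPC
    hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
    print(f"1/H0 is roughly {hubble_time_years:.2e} years")   # about 1.4e10 years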
Timeline of the Big Bang
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Areas of study
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
Very early universe
The early, hot universe appears to be well explained by the Big Bang from roughly 10−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
Big Bang Theory
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
Standard model of Big Bang cosmology
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
Cosmic microwave background
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
Formation and evolution of large-scale structure
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.
Dark matter
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
Dark energy
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
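In the standard density-parameter notation (Ω for each component, introduced here only to restate the figures above), flatness makes this bookkeeping explicit:

\[
\Omega_{\text{total}} \approx 1 \quad\Longrightarrow\quad \Omega_{\Lambda} \approx 1 - \Omega_{\text{dm}} - \Omega_{\text{b}} \approx 1 - 0.23 - 0.04 = 0.73 .
\]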
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant (CC) which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe.
Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky.
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones.
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.
Gravitational waves
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Other areas of inquiry
Cosmologists also study:
Whether primordial black holes were formed in our universe, and what happened to them.
Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies.
The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe.
Biophysical cosmology: a type of physical cosmology that studies life as an inherent part of the physical universe, stressing that life is inherent to the universe and therefore frequent.
See also
Accretion
Hubble's law
Illustris project
List of cosmologists
Physical ontology
Quantum cosmology
String cosmology
Universal Rotation Curve
References
Further reading
Popular
Textbooks
Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book.
Modern introduction to cosmology covering the homogeneous and inhomogeneous universe as well as inflation and the CMB.
An introductory text, released slightly before the WMAP results.
For undergraduates; mathematically gentle with a strong historical focus.
An introductory astronomy text.
The classic reference for researchers.
Cosmology without general relativity.
An introduction to cosmology with a thorough discussion of inflation.
Discusses the formation of large-scale structures in detail.
An introduction including more on general relativity and quantum field theory than most.
Strong historical focus.
The classic work on large-scale structure and correlation functions.
A standard reference for the mathematical formalism.
External links
From groups
Cambridge Cosmology – from Cambridge University (public home page)
Cosmology 101 – from the NASA WMAP group
Center for Cosmological Physics. University of Chicago, Chicago, Illinois
Origins, Nova Online – Provided by PBS
From individuals
Gale, George, "Cosmology: Methodological Debates in the 1930s and 1940s", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.)
Madore, Barry F., "Level 5 : A Knowledgebase for Extragalactic Astronomy and Cosmology". Caltech and Carnegie. Pasadena, California.
Tyler, Pat, and Newman, Phil, "Beyond Einstein". Laboratory for High Energy Astrophysics (LHEA) NASA Goddard Space Flight Center.
Wright, Ned. "Cosmology tutorial and FAQ". Division of Astronomy & Astrophysics, UCLA.
Philosophy of physics
Philosophy of time
Astronomical sub-disciplines
Astrophysics | Physical cosmology | [
"Physics",
"Astronomy"
] | 5,692 | [
"Philosophy of physics",
"Astronomical sub-disciplines",
"Applied and interdisciplinary physics",
"Physical quantities",
"Time",
"Theoretical physics",
"Astrophysics",
"Philosophy of time",
"Spacetime",
"Physical cosmology"
] |
5,382 | https://en.wikipedia.org/wiki/Cosmic%20inflation | In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the very early universe. Following the inflationary period, the universe continued to expand, but at a slower rate. The re-acceleration of this slowing expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).
Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Starobinsky, Guth, and Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
The detailed particle physics mechanism responsible for inflation is unknown. A number of inflation model predictions have been confirmed by observation; for example temperature anisotropies observed by the COBE satellite in 1992 exhibit nearly scale-invariant spectra as predicted by the inflationary paradigm and WMAP results also show strong evidence for inflation. However, some scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.
In 2002, three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the Dirac Prize "for development of the concept of inflation in cosmology". In 2012, Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.
Overview
Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their increasing separation.
Inflation may have provided this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or "special", initial conditions at the Big Bang.
Theory
An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon, which is believed to be 46 billion light years in all directions from Earth. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: Its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They could not have learned it by getting signals, because they were not previously in communication with our past light cone.
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communication. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.
Space expands
In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).
In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric:
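One commonly used form is the static-patch de Sitter metric with cosmological constant Λ (this particular coordinate choice, and units with c = 1, are illustrative assumptions):

\[
ds^2 = -\left(1 - \frac{\Lambda r^2}{3}\right) dt^2 + \frac{dr^2}{1 - \frac{\Lambda r^2}{3}} + r^2\, d\Omega^2 ,
\]

with a horizon at the fixed radius \( r = \sqrt{3/\Lambda} \).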
This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ.
Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.
Few inhomogeneities remain
Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes.
The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for not testable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) declines as the inverse of the volume: when linear dimensions double, the energy density declines by a factor of eight; the radiation energy density declines even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.
Reheating
Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from K down to K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends, the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.
Motivations
Inflation tries to resolve several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.
Horizon problem
The horizon problem is the problem of determining why the universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chase Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy.
Flatness problem
The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large-scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).
Therefore, regardless of the shape of the universe, the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). Observations of the cosmic microwave background have demonstrated that the Universe is flat to within a few percent.
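The fine-tuning can be made explicit with the standard relation between the deviation from critical density and the expansion (a reference sketch in the usual notation, with curvature k, scale factor a, and Hubble parameter H):

\[
|\Omega - 1| = \frac{|k|}{a^2 H^2} \;\propto\; \begin{cases} a^{2} & \text{(radiation domination)} \\ a & \text{(matter domination)} \end{cases}
\]

so any departure from flatness grows as the Universe expands and must therefore have been exceedingly small at early times.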
Magnetic-monopole problem
Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe), the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field.
Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe.
A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written,
"Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!"
History
Precursors
In the early days of general relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.
In 1965, Erast Gliner proposed a unique assumption regarding the early Universe's pressure in the context of the Einstein–Friedmann equations. According to his idea, the pressure was negatively proportional to the energy density. This relationship between pressure and energy density served as the initial theoretical prediction of dark energy.
In the early 1970s, Yakov Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Vladimir Belinski and Isaak Khalatnikov to analyze the chaotic BKL singularity in general relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.
False vacuum
In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.
The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum.
The Causal Universe of Brout Englert and Gunzig
In 1978 and 1979, Robert Brout, François Englert and Edgard Gunzig suggested that the universe could originate from a fluctuation of Minkowski space which would be followed by a period in which the geometry would resemble De Sitter space.
This initial period would then evolve into the standard expanding universe. They noted that their proposal makes the universe causal, as there are neither particle nor event horizons in their model.
Starobinsky inflation
In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used a curvature-squared action, which corresponds to a plateau-shaped potential in the Einstein frame; this fixes the inflationary observables (standard forms are sketched below).
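The expressions usually quoted for this model are the following, given as a reference sketch in standard notation, with M the curvature scale, M_Pl the reduced Planck mass, and N the number of e-folds:

\[
S = \frac{M_{\mathrm{Pl}}^2}{2} \int d^4x \, \sqrt{-g}\left( R + \frac{R^2}{6 M^2} \right), \qquad
V(\phi) = \frac{3}{4} M^2 M_{\mathrm{Pl}}^2 \left( 1 - e^{-\sqrt{2/3}\,\phi / M_{\mathrm{Pl}}} \right)^{2},
\]
\[
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2}.
\]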
Monopole problem
In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980, Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.
Early inflationary models
Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Katsuhiko Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981, Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.
Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.
Slow-roll inflation
The bubble collision problem was solved by Andrei Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.
Effects of asymmetries
Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Alan Guth and So-Young Pi; and James Bardeen, Paul Steinhardt and Michael Turner.
Observational status
Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within percent, and that it is homogeneous and isotropic to one part in 100,000.
Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly scale-invariant Gaussian random field, is very specific and has only a few free parameters. Two of them are the amplitude of the spectrum and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe).
Another free parameter is the tensor-to-scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1.
Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index, n_s, is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s is between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that n_s = 0.968 ± 0.006, and a tensor-to-scalar ratio that is less than 0.11. These are considered an important confirmation of the theory of inflation.
Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.
Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias.
An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (~ GeV) is correct. In March 2014, the BICEP2 team announced a detection of B-mode CMB polarization, which it presented as a confirmation of inflation. The team announced that the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported; and, on 30 January 2015, even less confidence yet was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation.
Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere.
Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great.
Theoretical status
In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field. One problem with this identification is the current tension with experimental data at the electroweak scale. Other models of inflation relied on the properties of Grand Unified Theories.
Fine-tuning problem
One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass.
New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory.
Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy.
However, in his model, the inflaton field necessarily takes values larger than one Planck unit: For this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation.
This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models.
While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.
Brandenberger commented on fine-tuning in another situation.
The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around GeV or times the Planck energy. The natural scale is naïvely the Planck scale so this small value could be seen as another form of fine-tuning (called a hierarchy problem): The energy density given by the scalar potential is down by compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
Eternal inflation
In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.
All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.
Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic.
Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards, expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.
In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating do not. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end, or, applying boundary conditions that a local observer exists to observe it, that inflation will end as late as possible.
Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason.
Initial conditions
Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever.
Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally.
Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, whether this solves the initial conditions problem is disputed; some have argued that it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low-entropy initial conditions, which would be highly unlikely. On this view, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.
Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle–Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.
Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable.
Hybrid inflation
Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable for the second field to decay into a much lower energy state.
In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.
Relation to dark energy
Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, roughly 27 orders of magnitude less than the scale of inflation.
Inflation and string cosmology
The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac–Born–Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.
Inflation and loop quantum gravity
When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back.
Alternatives and adjuncts
Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.
Big bounce
The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein–Cartan–Sciama–Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
Ekpyrotic and cyclic models
The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.
String gas cosmology
String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. The original model did not "solve the entropy and flatness problems of standard cosmology", although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario.
Varying c
Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
Criticisms
Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding,
"we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology."
As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved:
"There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. ... For, if the thermalization is actually doing anything ... then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after."
The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that
"inflation isn't falsifiable, it's falsified. ... BICEP did a wonderful service by bringing all the inflation-ists out of their shell, and giving them a black eye."
A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them:
"Not only is bad inflation more likely than good inflation, but no inflation is more likely than either ... Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation ... Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol power!"
Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite.
Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Linde, saying that
"cosmic inflation is on a stronger footing than ever before".
See also
Notes
References
Sources
External links
Pedagogic, step-by-step derivation by the author of Student Friendly Quantum Field Theory of the basic cosmic inflation model. Requires knowledge of quantum field theory and general relativity. Cosmic inflation.
Was Cosmic Inflation The 'Bang' Of The Big Bang?, by Alan Guth, 1997
update 2004 by Andrew Liddle
The Growth of Inflation Symmetry, December 2004
Guth's logbook showing the original idea
WMAP Bolsters Case for Cosmic Inflation, March 2006
NASA March 2006 WMAP press release
Max Tegmark. Our Mathematical Universe (2014), "Chapter 5: Inflation"
Concepts in astronomy
Astronomical events
Physical cosmological concepts
1980 in science | Cosmic inflation | [
"Physics",
"Astronomy"
] | 9,666 | [
"Physical cosmological concepts",
"Concepts in astronomy",
"Astronomical events",
"Concepts in astrophysics"
] |
5,385 | https://en.wikipedia.org/wiki/Candela | The candela (symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminous efficiency function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured.
The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower.
Definition
The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is:
The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, Kcd, to be 683 when expressed in the unit lm W−1, which is equal to cd sr W−1, or cd sr kg−1 m−2 s3, where the kilogram, metre and second are defined in terms of h, c and ΔνCs.
Explanation
The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by Iv(λ) = 683 lm/W · ȳ(λ) · Ie(λ),
where Iv(λ) is the luminous intensity, Ie(λ) is the radiant intensity and ȳ(λ) is the photopic luminous efficiency function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity.
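As a rough numerical illustration of this weighting (not part of any SI definition), the short sketch below applies the formula at a few wavelengths; the ȳ values are approximate samples of the photopic efficiency function rather than the full CIE table, and the function name is invented for the example.

```python
# Applying the photopic weighting I_v = 683 lm/W * ybar(lambda) * I_e.
# The ybar values are rough lookups for illustration, not the full CIE table.
LUMINOUS_EFFICACY_MAX = 683.0   # lm/W near 555 nm

YBAR = {450: 0.038, 555: 1.000, 650: 0.107}   # approximate photopic efficiency

def luminous_intensity(radiant_intensity_w_per_sr, wavelength_nm):
    """Luminous intensity (cd) of monochromatic light at the given wavelength."""
    return LUMINOUS_EFFICACY_MAX * YBAR[wavelength_nm] * radiant_intensity_w_per_sr

print(luminous_intensity(1 / 683, 555))   # ~1.0 cd by definition
print(luminous_intensity(1 / 683, 650))   # ~0.107 cd: the eye is less sensitive to red
```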
Examples
A common candle emits light with roughly 1 cd luminous intensity.
A 25 W compact fluorescent light bulb puts out around 1700 lumens; if that light is radiated equally in all directions (i.e. over 4π steradians), it will have an intensity of about 135 cd (1700 lm / 4π sr ≈ 135 cd).
Focused into a 20° beam (0.095 steradians), the same light bulb would have an intensity of around 18,000 cd or 18 kcd within the beam.
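The arithmetic behind both examples can be checked with a short script; the variable names are illustrative and the results are rounded as in the text above.

```python
import math

# Checking the two examples above: intensity = flux / solid angle.
flux_lm = 1700.0                          # 25 W compact fluorescent, ~1700 lm

# Radiated equally in all directions: solid angle is 4*pi steradians.
print(round(flux_lm / (4 * math.pi)))     # ~135 cd

# Focused into a 20-degree cone: Omega = 2*pi*(1 - cos(theta/2)).
theta = math.radians(20)
omega = 2 * math.pi * (1 - math.cos(theta / 2))
print(round(omega, 3))                    # ~0.095 sr
print(round(flux_lm / omega))             # ~17,800 cd, i.e. roughly 18 kcd
```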
History
Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp.
A better standard for luminous intensity was needed. In 1884, Jules Violle had proposed a standard based on the light emitted by 1 cm2 of platinum at its melting point (or freezing point). The resulting unit of intensity, called the "violle", was roughly equal to 60 English candlepower. Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity.
In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed.
In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a "new candle" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946:
The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre.
It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum:
The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre.
In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela:
The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminous efficiency function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminous efficiency function. An appendix to the SI Brochure makes it clear that the luminous efficiency function is not uniquely specified, but must be selected to fully define the candela.
The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition.
The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 revision of the SI, which redefined the SI base units in terms of fundamental physical constants.
SI photometric light units
Relationships between luminous intensity, luminous flux, and illuminance
If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by Φv = 2π Iv [1 − cos(A/2)],
where A is the radiation angle of the lamp, that is, the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps.
If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 4π lumens (approximately 12.566 lumens).
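A minimal sketch of this relation follows, assuming the cone geometry described above; passing a 360° "cone" recovers the isotropic case. The function name is illustrative.

```python
import math

# Luminous flux from intensity and cone angle: Phi_v = 2*pi*I_v*(1 - cos(A/2)).
def luminous_flux(intensity_cd: float, cone_angle_deg: float) -> float:
    half_angle = math.radians(cone_angle_deg) / 2
    return 2 * math.pi * intensity_cd * (1 - math.cos(half_angle))

print(round(luminous_flux(590, 40)))       # ~224 lm, as in the example above
print(round(luminous_flux(1, 360), 3))     # uniform 1 cd source: 4*pi ~ 12.566 lm
```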
For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m2 (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally uniform intensities, the contributions to illuminance from incoherent light being additive, it is mathematically estimated as follows. If ri is the position of the ith source of uniform intensity Iv,i, and n̂ is the unit vector normal to the illuminated elemental opaque area being measured, and provided that all light sources lie in the same half-space divided by the plane of this area, the illuminance at that area is Ev = Σi (Iv,i / |ri|²) (r̂i · n̂), where r̂i = ri / |ri| is the unit vector pointing from the area towards the ith source.
In the case of a single point light source of intensity Iv, at a distance r and normally incident, this reduces to Ev = Iv / r².
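A minimal sketch of this estimate, assuming ideal point sources with positions measured from the illuminated surface element; the function name and sample numbers are illustrative, not taken from any standard.

```python
import math

# Illuminance (lux) at a surface element from ideal point sources.
# `sources` is a list of (intensity_cd, position) pairs, with positions
# measured in metres from the surface element; `normal` is its unit normal.
def illuminance(sources, normal):
    total = 0.0
    for intensity_cd, pos in sources:
        r = math.sqrt(sum(c * c for c in pos))
        cos_incidence = sum(p * n for p, n in zip(pos, normal)) / r
        total += intensity_cd * cos_incidence / r**2   # inverse-square and cosine law
    return total

# Single 1350 cd source 3 m directly above the surface: 1350 / 3**2 = 150 lux.
print(illuminance([(1350, (0.0, 0.0, 3.0))], (0.0, 0.0, 1.0)))
```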
SI multiples
Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10−3 candela.
Notes
References
SI base units
Units of luminous intensity | Candela | [
"Mathematics"
] | 1,853 | [
"Quantity",
"Units of luminous intensity",
"Units of measurement"
] |
5,387 | https://en.wikipedia.org/wiki/Condensed%20matter%20physics | Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases, that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.
Etymology
According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.
References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies.
History
Classical physics
One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.
In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium respectively.
Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.
In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas."
Advent of quantum mechanics
Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model, making it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice.
The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.
In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered that a voltage developed across conductors which was transverse to both an electric current in the conductor and a magnetic field applied perpendicular to the current. This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time because the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for a theoretical explanation of the quantum Hall effect which was discovered half a century later.
Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension, though it is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.
Modern many-body physics
The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Soviet physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1956, John Bardeen, Leon Cooper and Robert Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.
The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.
The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded leading to the discovery of topological insulators.
In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, La2-xBaxCuO4, which is superconducting at temperatures as high as 39 kelvin. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.
In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.
Theoretical
Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.
Emergence
Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity.
Electronic theory of solids
The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.
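As a quantitative aside to the Wiedemann–Franz law mentioned above, the Sommerfeld free-electron treatment predicts a universal ratio κ/(σT) equal to the Lorenz number L = (π²/3)(kB/e)². The short sketch below simply evaluates that constant; it is not tied to any particular material or to the historical derivations.

```python
import math

# Lorenz number from the Sommerfeld free-electron model:
# kappa / (sigma * T) = L = (pi**2 / 3) * (k_B / e)**2.
K_B = 1.380649e-23          # Boltzmann constant, J/K (exact since 2019)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact since 2019)

lorenz = (math.pi**2 / 3) * (K_B / E_CHARGE) ** 2
print(f"{lorenz:.3e} W Ohm K^-2")   # ~2.443e-08, close to values measured for many metals
```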
Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of variety of solids.
Symmetry breaking
Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.
Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.
Phase transition
Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules.
In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transition is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.
Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties, such as correlation length, specific heat, and magnetic susceptibility, diverge, typically as power laws of the distance from the critical point. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.
The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transition for ferroelectrics and type I superconductors which involves long range microscopic interactions. For other types of systems that involves short range interactions near the critical point, a better theory is needed.
Near the critical point, fluctuations happen over a broad range of size scales while the behavior of the whole system is scale invariant. Renormalization group methods successively average out the shortest-wavelength fluctuations in stages while retaining their effects in the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. These methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.
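A toy illustration of this coarse-graining idea is real-space decimation of the one-dimensional Ising chain: tracing out every second spin maps the reduced coupling K = J/(kBT) to K′ = ½ ln cosh(2K). This is only a pedagogical sketch, not the momentum-shell renormalization group used in quantitative studies of critical phenomena.

```python
import math

# Toy coarse-graining: decimation of the 1D Ising chain.
# Tracing out every second spin maps the reduced coupling K = J/(k_B T)
# to K' = 0.5 * ln(cosh(2K)), averaging out the shortest-wavelength
# fluctuations at each step.
def decimate(coupling: float) -> float:
    return 0.5 * math.log(math.cosh(2 * coupling))

K = 1.0
for step in range(6):
    print(step, round(K, 4))
    K = decimate(K)
# K flows towards 0 under iteration, reflecting the absence of a
# finite-temperature phase transition in one dimension.
```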
Experimental
Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measuring transport via thermal and heat conduction.
Scattering
Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.
Neutrons can also probe atomic length scales and are used to study the scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.
External magnetic fields
In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. Quantum oscillations is another experimental method where high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimental testing of the various theoretical predictions such as the quantized magnetoelectric effect, image magnetic monopole, and the half-integer quantum Hall effect.
Magnetic resonance spectroscopy
The local structure, as well as the structure of the nearest neighbour atoms, can be investigated in condensed matter with magnetic resonance methods, such as electron paramagnetic resonance (EPR) and nuclear magnetic resonance (NMR), which are very sensitive to the details of the surrounding of nuclei and electrons by means of the hyperfine coupling. Both localized electrons and specific stable or unstable isotopes of the nuclei become the probe of these hyperfine interactions, which couple the electron or nuclear spin to the local electric and magnetic fields. These methods are suitable to study defects, diffusion, phase transitions and magnetic order. Common experimental methods include NMR, nuclear quadrupole resonance (NQR), implanted radioactive probes as in the case of muon spin spectroscopy (μSR), Mössbauer spectroscopy, β-NMR and perturbed angular correlation (PAC). PAC is especially ideal for the study of phase changes at extreme temperatures above 2000 °C due to the temperature independence of the method.
Cold atomic gases
Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.
In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.
Applications
Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, magnetic storage, liquid crystals, optical fibres and several phenomena studied in the context of nanotechnology. Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Such molecular machines were developed for example by Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines such as the molecular car, molecular windmill and many more.
In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches are proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons from fractional quantum Hall effect states.
Condensed matter physics also has important uses for biomedicine. For example, magnetic resonance imaging is widely used in medical imaging of soft tissue and other physiological features which cannot be viewed with traditional x-ray imaging.
See also
Notes
References
Further reading
Anderson, Philip W. (2018-03-09). Basic Notions Of Condensed Matter Physics. CRC Press.
Girvin, Steven M.; Yang, Kun (2019-02-28). Modern Condensed Matter Physics. Cambridge University Press.
Coleman, Piers (2015). Introduction to Many-Body Physics. Cambridge University Press.
P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics. Cambridge University Press; 1st edition.
Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory. Cambridge University Press.
Michael P. Marder (2010). Condensed Matter Physics, second edition. John Wiley and Sons.
Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics. Oxford University Press.
External links
Materials science | Condensed matter physics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,389 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Matter"
] |
5,390 | https://en.wikipedia.org/wiki/Conversion%20of%20units | Conversion of units is the conversion of the unit of measurement in which a quantity is expressed, typically through a multiplicative conversion factor that changes the unit without changing the quantity. This is also often loosely taken to include replacement of a quantity with a corresponding quantity that describes the same physical property.
Unit conversion is often easier within a metric system such as the SI than in others, due to the system's coherence and its metric prefixes that act as power-of-10 multipliers.
Overview
The definition and choice of units in which to express a quantity may depend on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:
the precision and accuracy of measurement and the associated uncertainty of measurement
the statistical confidence interval or tolerance interval of the initial measurement
the number of significant figures of the measurement
the intended use of the measurement, including the engineering tolerances
historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot.
For some purposes, conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the expressed quantity. An adaptive conversion may not produce an exactly equivalent expression. Nominal values are sometimes allowed and used.
Factor–label method
The factor–label method, also known as the unit–factor method or the unity bracket method, is a widely used technique for unit conversions that uses the rules of algebra.
The factor–label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below: 10 mi/h × (1,609.344 m / 1 mi) × (1 h / 3,600 s) = 4.4704 m/s.
Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being rearranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and 1 mile = 1,609.344 metres, "mile" will need to be the denominator in the conversion factor. Dividing both sides of that equation by 1 mile yields 1,609.344 m / 1 mi, which when simplified results in the dimensionless 1. Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this conversion factor and the one for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.
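As a minimal sketch (in Python; the variable names are illustrative rather than taken from the source), the chain of factors can be evaluated directly:

```python
# Factor-label conversion of 10 miles per hour to metres per second.
# Each factor is a ratio equal to 1 (e.g. 1609.344 m / 1 mile), chosen so
# that the unwanted units cancel in turn.
miles_per_hour = 10.0

metres_per_mile = 1609.344   # exact definition of the international mile
seconds_per_hour = 3600.0    # 60 min/h x 60 s/min

metres_per_second = miles_per_hour * metres_per_mile / seconds_per_hour
print(metres_per_second)     # ~4.4704, matching the value quoted above
```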
As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below:
NOx concentration = 10 parts per million by volume = 10 ppmv = 10 volumes/106 volumes
NOx molar mass = 46 kg/kmol = 46 g/mol
Flow rate of flue gas = 20 cubic metres per minute = 20 m3/min
The flue gas exits the furnace at 0 °C temperature and 101.325 kPa absolute pressure.
The molar volume of a gas at 0 °C temperature and 101.325 kPa is 22.414 m3/kmol.
After cancelling any dimensional units that appear both in the numerators and the denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to mass flow rate of 24.63 grams per hour.
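A short numerical sketch of the same chain (Python; the variable names are illustrative) reproduces the 24.63 g/h figure from the inputs listed above:

```python
# Convert a 10 ppmv NOx concentration in flue gas to a mass flow in g/h.
ppmv_NOx       = 10.0     # volumes of NOx per 10^6 volumes of flue gas
molar_mass_NOx = 46.0     # g/mol (numerically equal to 46 kg/kmol)
flue_gas_flow  = 20.0     # m^3/min at 0 degC and 101.325 kPa
molar_volume   = 22.414   # m^3/kmol at 0 degC and 101.325 kPa

flue_gas_per_hour = flue_gas_flow * 60.0                # m^3/h of flue gas
NOx_volume_per_h  = flue_gas_per_hour * ppmv_NOx / 1e6  # m^3/h of NOx
NOx_kmol_per_h    = NOx_volume_per_h / molar_volume     # kmol/h of NOx
NOx_g_per_h       = NOx_kmol_per_h * molar_mass_NOx * 1000.0  # (kg/kmol)*(g/kg)

print(round(NOx_g_per_h, 2))   # 24.63 g/h
```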
Checking equations that involve dimensions
The factor–label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.
For example, check the universal gas law equation of PV = nRT, when:
the pressure P is in pascals (Pa)
the volume V is in cubic metres (m3)
the amount of substance n is in moles (mol)
the universal gas constant R is 8.3145 Pa⋅m3/(mol⋅K)
the temperature T is in kelvins (K)
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal undiscovered or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation that built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It acquired its quantum-physical significance only in tandem with, or after, this mathematical dimensional adjustment – not earlier.
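One way to automate such a check is to track the exponents of the SI base dimensions carried by each quantity. The Python sketch below is an illustrative fragment, not a standard library routine; it confirms that both sides of PV = nRT reduce to the same dimensions:

```python
# Represent each quantity's units as exponents of the SI base dimensions
# (kg, m, s, mol, K) and check that both sides of P*V = n*R*T agree.
from collections import Counter

def combine(*terms):
    """Multiply quantities: add the exponents of their base dimensions."""
    total = Counter()
    for term in terms:
        total.update(term)
    return {dim: exp for dim, exp in total.items() if exp != 0}

pascal      = {"kg": 1, "m": -1, "s": -2}                     # Pa = kg.m^-1.s^-2
cubic_metre = {"m": 3}
mole        = {"mol": 1}
kelvin      = {"K": 1}
gas_const   = {"kg": 1, "m": 2, "s": -2, "mol": -1, "K": -1}  # Pa.m^3/(mol.K)

lhs = combine(pascal, cubic_metre)       # dimensions of P*V
rhs = combine(mole, gas_const, kelvin)   # dimensions of n*R*T
print(lhs == rhs)                        # True: both reduce to kg.m^2.s^-2 (energy)
```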
Limitations
The factor–label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0 (ratio scale in Stevens's typology). Most conversions fit this paradigm. An example for which it cannot be used is the conversion between the Celsius scale and the Kelvin scale (or the Fahrenheit scale). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (of the form y = ax + b, rather than a linear transform of the form y = ax) between them.
For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, which yields the same formula.
Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used:
T[C] = (T[F] − 32) × 5/9.
To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used:
T[F] = (T[C] × 9/5) + 32.
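Both affine formulas translate directly into code; a minimal Python sketch:

```python
# Affine (scale plus offset) conversion between Fahrenheit and Celsius,
# which a plain multiplicative conversion factor cannot express.
def fahrenheit_to_celsius(t_f):
    return (t_f - 32.0) * 5.0 / 9.0

def celsius_to_fahrenheit(t_c):
    return t_c * 9.0 / 5.0 + 32.0

print(fahrenheit_to_celsius(32.0))    # 0.0   (freezing point of water)
print(fahrenheit_to_celsius(212.0))   # 100.0 (boiling point of water)
print(celsius_to_fahrenheit(37.0))    # 98.6
```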
Example
Starting with the quantity expressed as a numerical value times a unit, Z = n_i × [Z]_i,
replace the original unit [Z]_i with its meaning in terms of the desired unit [Z]_j, e.g. if [Z]_i = c_ij × [Z]_j, then:
Z = n_i × (c_ij × [Z]_j) = (n_i × c_ij) × [Z]_j.
Now n_i and c_ij are both numerical values, so just calculate their product.
Or, which is just mathematically the same thing, multiply Z by unity; the product is still Z:
Z = n_i × [Z]_i × (c_ij × [Z]_j / [Z]_i) = (n_i × c_ij) × [Z]_j.
For example, you have an expression for a physical value Z involving the unit feet per second (ft/s) and you want it in terms of the unit miles per hour (mph): since 1 mile = 5,280 feet and 1 hour = 3,600 seconds, the conversion factor is 3,600/5,280 ≈ 0.6818, so Z ft/s = (Z × 0.6818) mph.
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre: since 1 litre = 1,000,000 microlitres and 100 kilometres = 100,000 metres, the conversion factor is 10, so Z litres per 100 kilometres equals (Z × 10) microlitres per metre.
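A small Python sketch of both conversions; the sample inputs (88 ft/s and 5 L/100 km) are assumptions chosen for illustration, not figures from the source:

```python
# Feet per second to miles per hour: the conversion factor is
# (3600 s/h) / (5280 ft/mi), about 0.6818 mph per (ft/s).
ft_per_mile = 5280.0
s_per_hour = 3600.0
fps_to_mph = s_per_hour / ft_per_mile
print(88.0 * fps_to_mph)        # 88 ft/s is 60 mph

# Litres per 100 km to microlitres per metre: the factor is
# (10^6 uL/L) / (10^5 m per 100 km) = 10 exactly.
uL_per_litre = 1.0e6
m_per_100km = 100.0 * 1000.0
l_per_100km_to_uL_per_m = uL_per_litre / m_per_100km
print(5.0 * l_per_100km_to_uL_per_m)   # 5 L/100 km = 50 uL/m
```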
Calculation involving non-SI Units
In cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the conversion factor, and then plugging in the numerical values of the given/known quantities.
For example, in the study of Bose–Einstein condensates, atomic mass is usually given in daltons, instead of kilograms, and chemical potential is often given in units of the Boltzmann constant times nanokelvin. The condensate's healing length is given by ξ = ħ/√(2mμ).
For a 23Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps:
Calculate the factor
Assume that m = 1 Da and μ = k_B × 1 nK; this gives ξ ≈ 15.57 µm,
which is our factor.
Calculate the numbers
Now, make use of the fact that ξ ∝ 1/√(mμ). With m = 23 Da and μ = k_B × 128 nK, ξ ≈ 15.57 µm / √(23 × 128) ≈ 0.287 µm.
This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values; for example, with the factor calculated above, it is very easy to see that the healing length of 174Yb with chemical potential 20.3 nK is
ξ ≈ 15.57 µm / √(174 × 20.3) ≈ 0.26 µm.
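The two-step scheme can be scripted. The sketch below assumes the standard Gross–Pitaevskii healing length ξ = ħ/√(2mμ) and CODATA values for the constants; the function name and sample inputs follow the worked example above:

```python
import math

hbar = 1.054571817e-34      # J*s   (reduced Planck constant)
k_B  = 1.380649e-23         # J/K   (Boltzmann constant)
Da   = 1.66053906660e-27    # kg    (one dalton)

# Step 1: the factor -- healing length for m = 1 Da and mu = k_B * 1 nK, in um.
factor_um = hbar / math.sqrt(2.0 * Da * k_B * 1e-9) * 1e6
print(round(factor_um, 2))          # about 15.57 um

# Step 2: plug in the numbers; the length scales as 1/sqrt(m[Da] * mu[nK]).
def healing_length_um(mass_Da, mu_nK):
    return factor_um / math.sqrt(mass_Da * mu_nK)

print(round(healing_length_um(23.0, 128.0), 3))   # 23Na at 128 nK: about 0.287 um
print(round(healing_length_um(174.0, 20.3), 3))   # 174Yb at 20.3 nK: about 0.262 um
```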
Software tools
There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications such as mathematical, scientific and technical applications.
There are many standalone applications that offer conversions among thousands of the various units. For example, the free software movement offers a command line utility GNU units for GNU and Windows. The Unified Code for Units of Measure is also a popular option.
See also
Conversion of units of temperature
Dimensional analysis
English units
Imperial units
International System of Units
List of conversion factors
List of metric units
Mesures usuelles
Metric prefix
Metric system
Metrication
Natural units
United States customary units
Unit of length
Units of measurement
Notes and references
Notes
External links
NIST Guide to SI Units Many conversion factors listed.
The Unified Code for Units of Measure
Units, Symbols, and Conversions XML Dictionary
"Instruction sur les poids et mesures républicaines – déduites de la grandeur de la terre, uniformes pour toute la République, et sur les calculs relatifs à leur division décimale"
Math Skills Review
A Discussion of Units
Short Guide to Unit Conversions
Canceling Units Lesson
Chapter 11: Behavior of Gases Chemistry: Concepts and Applications, Denton independent school District
Metrication
Conversion of units of measurement | Conversion of units | [
"Mathematics"
] | 2,139 | [
"Quantity",
"Conversion of units of measurement",
"Units of measurement"
] |
5,439 | https://en.wikipedia.org/wiki/Capricornus | Capricornus is one of the constellations of the zodiac. Its name is Latin for "horned goat" or "goat horn" or "having horns like a goat's", and it is commonly represented in the form of a sea goat: a mythical creature that is half goat, half fish.
Capricornus is one of the 88 modern constellations, and was also one of the 48 constellations listed by the 2nd century astronomer Claudius Ptolemy. Its old astronomical symbol is (♑︎). Under its modern boundaries it is bordered by Aquila, Sagittarius, Microscopium, Piscis Austrinus, and Aquarius. The constellation is located in an area of sky called the Sea or the Water, consisting of many water-related constellations such as Aquarius, Pisces and Eridanus. It is the smallest constellation in the zodiac.
Notable features
Stars
Capricornus is a faint constellation, with only one star above magnitude 3; its alpha star has a magnitude of only 3.6.
The brightest star in Capricornus is δ Capricorni, also called Deneb Algedi, with a magnitude of 2.9, located 39 light-years from Earth. Like several other stars such as Denebola and Deneb, it is named for the Arabic word for "tail or end" (deneb) and “young goat / kid” (al-gedi); its traditional name means "tail to head” or “back to the beginning", which could be related to the Ouroboros or Janus since the zodiac relates to January. Deneb Algedi is a Beta Lyrae variable star (a type of eclipsing binary). It ranges by about 0.2 magnitudes with a period of 24.5 hours.
The other bright stars in Capricornus range in magnitude from 3.1 to 5.1. α Capricorni is a multiple star. The primary (α2 Cap), 109 light-years from Earth, is a yellow-hued giant star of magnitude 3.6; the secondary (α1 Cap), 690 light-years from Earth, is a yellow-hued supergiant star of magnitude 4.3. The two stars are distinguishable by the naked eye, and both are themselves multiple stars. α1 Capricorni is accompanied by a star of magnitude 9.2; α2 Capricorni is accompanied by a star of magnitude 11.0; this faint star is itself a binary star with two components of magnitude 11. Also called Algedi or Giedi, the traditional names of α Capricorni come from the Arabic word for "the kid", which references the constellation's mythology.
β Capricorni is a double star also known as Dabih. It is a yellow-hued giant star of magnitude 3.1, 340 light-years from Earth. The secondary is a blue-white hued star of magnitude 6.1. The two stars are distinguishable in binoculars. β Capricorni's traditional name comes from the Arabic phrase for "the lucky stars of the slaughterer," a reference to ritual sacrifices performed by ancient Arabs at the heliacal rising of Capricornus. Another star visible to the naked eye is γ Capricorni, sometimes called Nashira ("bringing good tidings"); it is a white-hued giant star of magnitude 3.7, 139 light-years from Earth. π Capricorni is a double star with a blue-white hued primary of magnitude 5.1 and a white-hued secondary of magnitude 8.3. It is 670 light-years from Earth and the components are distinguishable in a small telescope.
Deep-sky objects
Several galaxies and star clusters are contained within Capricornus. Messier 30 is a globular cluster located 1 degree south of the galaxy group that contains NGC 7103. The constellation also harbors the wide spiral galaxy NGC 6907.
Messier 30 (NGC 7099) is a centrally-condensed globular cluster of magnitude 7.5. At a distance of 30,000 light-years, it has chains of stars extending to the north that are resolvable in small amateur telescopes.
One galaxy group located in Capricornus is HCG 87, a group of at least three galaxies located 400 million light-years from Earth (redshift 0.0296). It contains a large elliptical galaxy, a face-on spiral galaxy, and an edge-on spiral galaxy. The face-on spiral galaxy is experiencing abnormally high rates of star formation, indicating that it is interacting with one or both members of the group. Furthermore, the large elliptical galaxy and the edge-on spiral galaxy, both of which have active nuclei, are connected by a stream of stars and dust, indicating that they too are interacting. Astronomers predict that the three galaxies may merge millions of years in the future to form a giant elliptical galaxy.
History
The constellation was first attested in depictions on a cylinder-seal from around the 21st century BCE; it was explicitly recorded in the Babylonian star catalogues before 1000 BCE. In the Early Bronze Age the winter solstice occurred in the constellation, but due to the precession of the equinoxes, the December solstice now takes place in the constellation Sagittarius. The Sun is now in the constellation Capricorn (as distinct from the astrological sign) from late January through mid-February.
Although the solstice during the northern hemisphere's winter no longer takes place while the sun is in the constellation Capricornus, as it did until 130 BCE, the astrological sign called Capricorn is still used to denote the position of the solstice, and the latitude of the sun's most southerly position continues to be called the Tropic of Capricorn, a term which also applies to the line on the Earth at which the sun is directly overhead at local noon on the day of the December solstice.
The planet Neptune was discovered by German astronomer Johann Galle near Deneb Algedi (δ Capricorni) on 23 September 1846; Capricornus is best seen from Europe at around 4:00 in September (although, by modern constellation boundaries established in the early 20th century CE, Neptune lay within the confines of Aquarius at the time of its discovery).
Mythology
Despite its faintness, the constellation Capricornus has one of the oldest mythological associations, having been consistently represented as a hybrid of a goat and a fish since the Middle Bronze Age, when the Babylonians used "The Goat-Fish" as a symbol of their god Ea.
In Greek mythology, the constellation is sometimes identified as Amalthea, the goat that suckled the infant Zeus after his mother, Rhea, saved him from being devoured by his father, Cronos. Amalthea's broken horn was transformed into the cornucopia or "horn of plenty".
Capricornus is also sometimes identified as Pan, the god with a goat's horns and legs, who saved himself from the monster Typhon by giving himself a fish's tail and diving into a river.
Visualizations
Capricornus's brighter stars are found on a triangle whose vertices are α2 Capricorni (Giedi), δ Capricorni (Deneb Algiedi), and ω Capricorni. Ptolemy's method of connecting the stars of Capricornus has been influential. Capricornus is usually drawn as a goat with the tail of a fish.
H. A. Rey has suggested an alternative visualization, which graphically shows a goat. The goat's head is formed by the triangle of stars ι Cap, θ Cap, and ζ Cap. The goat's horn sticks out with stars γ Cap and δ Cap. Star δ Cap, at the tip of the horn, is of the third magnitude. The goat's tail consists of stars β Cap and α2 Cap: star β Cap being of the third magnitude. The goat's hind foot consists of stars ψ Cap and ω Cap. Both of these stars are of the fourth magnitude.
Equivalents
In Chinese astronomy, the constellation Capricornus lies within The Black Tortoise of the North.
The Nakh peoples called this constellation Roofing Towers.
In the Society Islands, the figure of Capricornus was called Rua-o-Mere, "Cavern of parental yearnings".
In Indian astronomy and Indian astrology, it is called Makara, the crocodile.
See also
Capricornus in Chinese astronomy
Hippocampus (mythology), the mythological sea horse
IC 1337, galaxy
Citations
References
External links
The Deep Photographic Guide to the Constellations: Capricornus
Ian Ridpath's Star Tales – Capricornus
Warburg Institute Iconographic Database (medieval and early modern images of Capricornus)
Constellations
Southern constellations
Constellations listed by Ptolemy | Capricornus | [
"Astronomy"
] | 1,879 | [
"Constellations listed by Ptolemy",
"Capricornus",
"Southern constellations",
"Constellations",
"Sky regions"
] |
5,510 | https://en.wikipedia.org/wiki/Clipperton%20Island | Clipperton Island ( ; ), also known as Clipperton Atoll and previously as Clipperton's Rock, is an uninhabited French coral atoll in the eastern Pacific Ocean. The only French territory in the North Pacific, Clipperton is from Paris, France; from Papeete, French Polynesia; and from Acapulco, Mexico.
Clipperton was documented by French merchant-explorers in 1711 and formally claimed as part of the French protectorate of Tahiti in 1858. Despite this, American guano miners began working the island in the early 1890s. As interest in the island grew, Mexico asserted a claim to the island based upon Spanish records from the 1520s that may have identified the island. Mexico established a small military colony on the island in 1905, but during the Mexican Revolution contact with the mainland became infrequent, most of the colonists died, and lighthouse keeper Victoriano Álvarez instituted a short, brutal reign as "king" of the island. Eleven survivors were rescued in 1917 and Clipperton was abandoned.
The dispute between Mexico and France over Clipperton was taken to binding international arbitration in 1909. Victor Emmanuel III, King of Italy, was chosen as arbitrator and decided in 1931 that the island was French territory. Despite the ruling, Clipperton remained largely uninhabited until 1944 when the U.S. Navy established a weather station on the island to support its war efforts in the Pacific. France protested and, as concerns about Japanese activity in the eastern Pacific waned, the U.S. abandoned the site in late 1945.
Since the end of World War II, Clipperton has primarily been the site for scientific expeditions to study the island's wildlife and marine life, including its significant masked and brown booby colonies. It has also hosted climate scientists and amateur radio DX-peditions. Plans to develop the island for trade and tourism have been considered, but none have been enacted and the island remains mostly uninhabited with periodic visits from the French navy.
Geography
The coral island is located at in the East Pacific, southwest of Mexico, west of Nicaragua, west of Costa Rica and northwest of the Galápagos Islands in Ecuador. The nearest land is Socorro Island, about to the northwest in the Revillagigedo Archipelago. The nearest French-owned island is Hiva Oa in the Marquesas Islands of French Polynesia.
Despite its proximity to North America, Clipperton is often considered one of the easternmost points of Oceania due to being part of the French Indo-Pacific, and to commonalities between its marine fauna and the marine fauna of Hawaii and Kiribati's Line Islands, with the island sitting along the migration path for animals in the Eastern Tropical Pacific region. The island is the only emerged part of the East Pacific Rise, as well as the only feature in the Clipperton fracture zone that breaks the ocean's surface, and it is one of the few islands in the Pacific that lacks an underwater archipelagic apron.
The atoll is low-lying and largely barren, with some scattered grasses, and a few clumps of coconut palms (Cocos nucifera). The land ring surrounding the lagoon measures in area with an average elevation of , although a small volcanic outcropping, referred to as Clipperton Rock, rises to on its southeast side. The surrounding reef hosts an abundance of corals and is partly exposed at low tide. In 2001 a geodetic marker was placed to evaluate if the land is rising or sinking.
Clipperton Rock is the remains of the island's now extinct volcano's rim; because it includes this rocky outcropping, Clipperton is not a true atoll and is sometimes referred to as a 'near-atoll'. The surrounding reef in combination with the weather makes landing on the island difficult and anchoring offshore hazardous for larger ships; in the 1940s American ships reported active problems in this regard.
Environment
The environment of Clipperton Island has been studied extensively with the first recordings and sample collection being done in the 1800s. Modern research on Clipperton is focused primarily on climate science and migratory wildlife.
The SURPACLIP oceanographic expedition, a joint undertaking by the National Autonomous University of Mexico and the University of New Caledonia Nouméa, made extensive studies of the island in 1997. In 2001, French National Centre for Scientific Research geographer Christian Jost extended the 1997 studies through the French Passion 2001 expedition, which focused on the evolution of Clipperton's ecosystem. In 2003, cinematographer Lance Milbrand stayed on the island for 41 days, recording the adventure for the National Geographic Explorer and plotting a GPS map of Clipperton for the National Geographic Society.
In 2005, a four-month scientific mission organised by Jean-Louis Étienne made a complete inventory of Clipperton's mineral, plant, and animal species; studied algae as deep as below sea level; and examined the effects of pollution. A 2008 expedition from the University of Washington's School of Oceanography collected sediment cores from the lagoon to study climate change over the past millennium.
Lagoon
Clipperton is a ring-shaped atoll that completely encloses a stagnant fresh water lagoon and measures in circumference and in area. The island is the only coral island in the eastern Pacific. The lagoon is devoid of fish, and is shallow over large parts except for some deep basins with depths of , including a spot known as 'the bottomless hole' with acidic water at its base. The water is described as being almost fresh at the surface and highly eutrophic. Seaweed beds cover approximately 45% of the lagoon's surface. The rim averages in width, reaching in the west, and narrowing to in the north-east, where sea waves occasionally spill over into the lagoon. Ten islets are present in the lagoon, six of which are covered with vegetation, including the Egg Islands.
The closure of the lagoon approximately 170 years ago and the prevention of seawater from entering the lagoon have formed a meromictic lake. The bottom of the shallow parts of the lake contains eroded coral heads from when the lagoon was last connected with the ocean. During visits in 1897 and 1898 the depth at the middle of the lagoon was recorded as being between two inches and two feet due to the dead coral. The surface of the lagoon has a high concentration of phytoplankton that varies slightly with the seasons. As a result, the water columns are stratified and do not mix, leaving the lagoon with an oxic and brackish upper water layer and a deep sulfuric anoxic saline layer. At a depth of approximately the water shifts, with salinity rising and both pH and oxygen quickly decreasing. The deepest levels of the lagoon record waters enriched with hydrogen sulfide, which prevent the growth of coral. Before the lagoon was closed off to seawater, coral and clams were able to survive in the area, as evidenced by fossilized specimens.
Studies of the water have found that microbial communities on the water's surface are similar to other water samples from around the world with deeper water samples showing a great diversity of both bacteria and archaea. In 2005, a group of French scientists discovered three dinoflagellate microalgae species in the lagoon: Peridiniopsis cristata, which was abundant; Durinskia baltica, which was known to exist previously in other locations, but was new to Clipperton; and Peridiniopsis cristata var. tubulifera, which is unique to the island. The lagoon also harbours millions of isopods, which are reported to deliver a painful sting.
While some sources have rated the lagoon water as non-potable, testimony from the crew of the tuna clipper M/V Monarch, stranded for 23 days in 1962 after their boat sank, indicates otherwise. Their report reveals that the lagoon water, while "muddy and dirty", was drinkable, despite not tasting very good. Several of the castaways drank it, with no apparent ill effects. Survivors of a Mexican military colony in 1917 (see below) indicated that they were dependent upon rain for their water supply, catching it in old boats. American servicemen on the island during World War II had to use evaporators to purify the lagoon's water. Aside from the lagoon and water caught from rain, no freshwater sources are known to exist.
Climate
The island has a tropical oceanic climate, with average temperatures of and highs up to . Annual rainfall is , and the humidity level is generally between 85 per cent and 95 per cent, with December to March being the drier months. The prevailing winds are the southeast trade winds. The rainy season occurs from May to October, and the region is subject to tropical cyclones from April to September, but such storms often pass to the northeast of Clipperton. In 1997 Clipperton was in the path of the newly formed Hurricane Felicia, and in 2015 in the path of Hurricane Sandra. In addition, Clipperton has been subjected to multiple tropical storms and depressions, including Tropical Storm Andres in 2003. Surrounding ocean waters are warm, pushed by equatorial and counter-equatorial currents, and have seen temperature increases due to global warming.
Flora and fauna
When Snodgrass and Heller visited in 1898, they reported that "no land plant is native to the island". Historical accounts from 1711, 1825, and 1839 show a low grassy or suffrutescent (partially woody) flora. During Marie-Hélène Sachet's visit in 1958, the vegetation was found to consist of a sparse cover of spiny grass and low thickets, a creeping plant (Ipomoea spp.), and stands of coconut palm. This low-lying herbaceous flora seems to be a pioneer in nature, and most of it is believed to be composed of recently introduced species. Sachet suspected that Heliotropium curassavicum, and possibly Portulaca oleracea, were native. Coconut palms and pigs introduced in the 1890s by guano miners were still present in the 1940s. The largest coconut grove is Bougainville Wood on the southwestern end of the island. On the northwest side of the atoll, the most abundant plant species are Cenchrus echinatus, Sida rhombifolia, and Corchorus aestuans. These plants compose a shrub cover up to in height, and are intermixed with Eclipta, Phyllanthus, and Solanum, as well as the taller Brassica juncea. The islets in the lagoon are primarily vegetated with Cyperaceae, Scrophulariaceae, and Ipomoea pes-caprae. A unique feature of Clipperton is that the vegetation is arranged in parallel rows of species, with dense rows of taller species alternating with lower, more open vegetation. This was assumed to be a result of the trench-digging method of phosphate mining used by guano hunters.
The only land animals known to exist are two species of reptiles (the Pacific stump-toed gecko and the copper-tailed skink), bright-orange land crabs known as Clipperton crabs (Johngarthia oceanica, prior to 2019 classified as Johngarthia planata), birds, and ship rats. The rats probably arrived when large fishing boats wrecked on the island in 1999 and 2000.
The pigs introduced in the 1890s reduced the crab population, which in turn allowed grassland to gradually cover about 80 per cent of the land surface. The elimination of these pigs in 1958, the result of a personal project by Kenneth E. Stager, caused most of this vegetation to disappear as the population of land crabs recovered. As a result, Clipperton is mostly a sandy desert with only 674 palms counted by Christian Jost during the Passion 2001 French mission and five islets in the lagoon with grass that the terrestrial crabs cannot reach. A 2005 report by the National Oceanic and Atmospheric Administration Southwest Fisheries Science Center indicated that the introduction of rats and their increased presence has led to a decline in both crab and bird populations, causing a corresponding increase in both vegetation and coconut palms. This report urgently recommended eradication of rats, which have been destroying bird nesting sites and the crab population, so that vegetation might be reduced, and the island might return to its 'pre-human' state.
In 1825, Benjamin Morrell reported finding green sea turtles nesting on Clipperton, but later expeditions have not found nesting turtles there, possibly due to disruption from guano extraction, as well as the introduction of pigs and rats. Sea turtles found on the island appear to have been injured due to fishing practices. Morrell also reported fur and elephant seals on the island in 1825, but they too have not been recorded by later expeditions.
Birds are common on the island; Morrell noted in 1825: "The whole island is literally covered with sea-birds, such as gulls, whale-birds, gannets, and the booby". Thirteen species of birds are known to breed on the island and 26 others have been observed as visitors. The island has been identified as an Important Bird Area by BirdLife International because of the large breeding colony of masked boobies, with 110,000 individual birds recorded. Observed bird species include white terns, masked boobies, sooty terns, brown boobies, brown noddies, black noddies, great frigatebirds, coots, martins (swallows), cuckoos, and yellow warblers. Ducks and moorhens have been reported in the lagoon.
The coral reef on the north side of the island includes colonies more than high. The 2018 Tara Pacific expedition located five colonies of Millepora platyphylla at depths of , the first of this fire coral species known in the region. Among the Porites spp. stony corals, some bleaching was observed, along with other indications of disease or stress, including parasitic worms and microalgae.
The reefs that surround Clipperton have some of the highest concentrations of endemic species found anywhere, with more than 115 species identified. Many species are recorded in the area, including five or six endemics, such as Clipperton angelfish (Holacanthus limbaughi), Clipperton grouper (Epinephelus clippertonensis), Clipperton damselfish (Stegastes baldwini) and Robertson's wrasse (Thalassoma robertsoni). Widespread species around the reefs include Pacific creolefish, blue-and-gold snapper, and various species of goatfish. In the water column, trevallies are predominant, including black jacks, bigeye trevally, and bluefin trevally. Also common around Clipperton are black triggerfish; several species of groupers, including leather bass and starry groupers; Mexican hogfish; whitecheek, convict, and striped-fin surgeonfish; yellow longnose and blacknosed butterflyfish; coral hawkfish; golden pufferfish; Moorish idols; parrotfish; and moray eels, especially speckled moray eels. The waters around the island are an important nursery for sharks, particularly the white tip shark. Galapagos sharks, reef sharks, whale sharks, and hammerhead sharks are also present around Clipperton.
Three expeditions to Clipperton have collected sponge specimens, including U.S. President Franklin Roosevelt's visit in 1938. Of the 190 specimens collected, 20 species were noted, including nine found only at Clipperton. One of the endemic sponges, collected during the 1938 visit, was named Callyspongia roosevelti in honor of Roosevelt.
In April 2009, Steven Robinson, a tropical fish dealer from Hayward, California, traveled to Clipperton to collect Clipperton angelfish. Upon his return to the United States, he described the 52 illegally collected fish to federal wildlife authorities as king angelfish, not the rarer Clipperton angelfish, which he intended to sell for $10,000. On 15 December 2011, Robinson was sentenced to 45 days of incarceration, one year of probation, and a $2,000 fine.
Environmental threats
During the night of 10 February 2010, the Sichem Osprey, a Maltese chemical tanker, ran aground en route from the Panama Canal to South Korea. The ship contained of xylene, of soybean oil, and of tallow. All 19 crew members were reported safe, and the vessel reported no leaks. The vessel was re-floated on 6 March and returned to service.
In mid-March 2012, the crew from the Clipperton Project noted the widespread presence of refuse, particularly on the northeast shore, and around the Clipperton Rock. Debris, including plastic bottles and containers, create a potentially harmful environment for the island's flora and fauna. This trash is common to only two beaches (northeast and southwest), and the rest of the island is fairly clean. Other refuse has been left after the occupations by Americans 1944–1945, French 1966–1969, and the 2008 scientific expedition. During a 2015 scientific and amateur radio expedition to Clipperton, the operating team discovered a package that contained of cocaine. It is suspected that the package washed up after being discarded at sea. In April 2023, the Passion 23 mission by France's and the surveillance frigate Germinal collected more than of plastic waste from the island's beaches along with a bale of cocaine.
The Sea Around Us Project estimates the Clipperton EEZ produces a harvest of of fish per year; however, because French naval patrols in the area are infrequent, this includes a significant amount of illegal fishing, along with lobster harvesting and shark finning, resulting in estimated losses for France of €0.42 per kilogram of fish caught.
As deep-sea mining of polymetallic nodules increases in the adjacent Clarion–Clipperton zone, similar mining activity within France's exclusive economic zone surrounding the atoll may have an impact on marine life around Clipperton. Polymetallic nodules were discovered in the Clipperton EEZ during the Passion 2015 expedition.
Politics and government
The island is an overseas state private property of France under the direct authority of the Minister of Overseas France. Although the island is French territory, it has no status within the European Union. Ownership of Clipperton Island was disputed in the 19th and early 20th centuries between France and Mexico, but was finally settled through arbitration in 1931; the Clipperton Island Case remains widely studied in international law textbooks.
In the late 1930s, as flying boats opened the Pacific to air travel, Clipperton Island was noted as a possible waypoint for a trans-Pacific route from the Americas to Asia via the Marquesas Islands in French Polynesia, bypassing Hawaii. However, France indicated no interest in developing commercial air traffic in the corridor.
After France ratified the United Nations Convention on the Law of the Sea (UNCLOS) in 1996, it reaffirmed the exclusive economic zone off Clipperton Island which had been established in 1976. After changes were made to the area nations were allowed to claim under the third convention of UNCLOS, France in 2018 expanded the outer limits of the territorial sea to and the exclusive economic zone off Clipperton Island to , encompassing of ocean.
On 21 February 2007, administration of Clipperton was transferred from the High Commissioner of the Republic in French Polynesia to the Minister of Overseas France.
In 2015, French MP Philippe Folliot set foot on Clipperton becoming the first elected official from France to do so. Folliot noted that visiting Clipperton was something he had wanted to do since he was nine years old. Following the visit, Folliot reported to the National Assembly on the pressing need to reaffirm French sovereignty over the atoll and its surrounding maritime claims. He also proposed establishing an international scientific research station on Clipperton and administrative reforms surrounding the oversight of the atoll.
In 2022, France passed legislation officially referring to the island as "La Passion–Clipperton".
History
Discovery and early claims
There are several claims to the first discovery of the island. The earliest recorded possible sighting is 24 January 1521 when Portuguese-born Spanish explorer Ferdinand Magellan discovered an island he named San Pablo after turning westward away from the American mainland during his circumnavigation of the globe. On 15 November 1528, Spaniard Álvaro de Saavedra Cerón discovered an island he called Isla Médanos in the region while on an expedition commissioned by his cousin, the Spanish conquistador Hernán Cortés, to find a route to the Philippines.
Although both San Pablo and Isla Médanos are considered to be possible sightings of Clipperton, the island was first charted by French merchant Michel Dubocage, commanding La Découverte, who arrived at the island on Good Friday, 3 April 1711; he was joined the following day by a fellow ship's captain commanding La Princesse. The island was given the name Île de la Passion ('Passion Island') as the date of rediscovery fell within Passiontide. They drew up the first map of the island and claimed it for France.
In August 1825, American sea captain Benjamin Morrell made the first recorded landing on Clipperton, exploring the island and making a detailed report of its vegetation.
The common name for the island comes from John Clipperton, an English pirate and privateer who fought the Spanish during the early 18th century, and who is said to have passed by the island. Some sources claim that he used it as a base for his raids on shipping.
19th century
Mexican claim 1821–1858
After its declaration of independence in 1821, Mexico took possession of the lands that had once belonged to Spain. As Spanish records noted the existence of the island as early as 1528, the territory was incorporated into Mexico. The Mexican constitution of 1917 explicitly includes the island, under its Spanish name, as Mexican territory. This would be amended on January 18, 1934, after the sovereignty dispute over the island was settled in favor of France.
French claim (1858)
In April 1858, French minister Eugène Rouher reached an agreement with a Mr. Lockhard of Le Havre to claim oceanic islands in the Pacific for the exploitation of guano deposits. On 17 November 1858, Emperor Napoleon III formally annexed Clipperton as part of the French protectorate of Tahiti. Sailing aboard Lockhart's ship Amiral, Ship-of-the-line Lieutenant Victor Le Coat de Kervéguen published a notice of this annexation in Hawaiian newspapers to further cement France's claim to the island.
Guano mining claims (1892–1905)
In 1892, a claim on the island was filed with the U.S. State Department under the U.S. Guano Islands Act by Frederick W. Permien of San Francisco on behalf of the Stonington Phosphate Company. In 1893, Permien transferred those rights to a new company, the Oceanic Phosphate Company. In response to the application, the State Department rejected the claim, noting France's prior claim on the island and that the claim was not bonded as was required by law. Additionally during this time there were concerns in Mexico that the British or Americans would lay claim to the island.
Despite the lack of U.S. approval of its claim, the Oceanic Phosphate Company began mining guano on the island in 1895. Although the company had plans for as many as 200 workers on the island, at its peak only 25 men were stationed there. The company shipped its guano to Honolulu and San Francisco where it sold for between US$10 and US$20 per ton. In 1897, the Oceanic Phosphate Company began negotiations with the British Pacific Islands Company to transfer its interest in Clipperton; this drew the attention of both French and Mexican officials.
On 24 November 1897, French naval authorities arrived on the Duguay Trouin and found three Americans working on the island. The French ordered the American flag to be lowered. At that time, U.S. authorities assured the French that they did not intend to assert American sovereignty over the island. A few weeks later, on 13 December 1897, Mexico sent the gunboat La Demócrata and a group of marines to assert its claim on the island, evicting the Americans, raising the Mexican flag, and drawing a protest from France. From 1898 to 1905, the Pacific Islands Company worked the Clipperton guano deposits under a concession agreement with Mexico. In 1898, Mexico made a US$1.5 million claim against the Oceanic Phosphate Company for the guano shipped from the island from 1895 to 1897.
20th century
Mexican colonization (1905–1917)
In 1905, the Mexican government renegotiated its agreement with the British Pacific Islands Company, establishing a military garrison on the island a year later and erecting a lighthouse under the orders of Mexican President Porfirio Díaz. Captain Ramón Arnaud was appointed governor of Clipperton. At first he was reluctant to accept the post, believing it amounted to exile from Mexico, but he relented after being told that Díaz had personally chosen him to protect Mexico's interests in the international conflict with France. It was also noted that because Arnaud spoke English, French, and Spanish, he would be well equipped to help protect Mexico's sovereignty over the territory. He arrived on Clipperton as governor later that year.
By 1914 around 100 men, women, and children lived on the island, resupplied every two months by a ship from Acapulco. With the escalation of fighting in the Mexican Revolution, regular resupply visits ceased, and the inhabitants were left to their own devices. On 28 February 1914, the schooner Nokomis wrecked on Clipperton; with a still seaworthy lifeboat, four members of the crew volunteered to row to Acapulco for help. The arrived months later to rescue the crew. While there, the captain offered to transport the survivors of the colony back to Acapulco; Arnaud refused as he believed a supply ship would soon arrive.
By 1917, all but one of the male inhabitants had died. Many had perished from scurvy, while others, including Arnaud, died during an attempt to sail after a passing ship to fetch help. Lighthouse keeper Victoriano Álvarez was the last man on the island, together with 15 women and children. Álvarez proclaimed himself 'king', and began a campaign of rape and murder, before being killed by Tirza Rendón, who was his favourite victim. Almost immediately after Álvarez's death, four women and seven children, the last survivors, were picked up by the U.S. Navy gunship on 18 July 1917.
Final arbitration of ownership (1931)
Throughout Mexico's occupation of Clipperton, France insisted on its ownership of the island, and lengthy diplomatic correspondence between the two countries led to a treaty on 2 March 1909, agreeing to seek binding international arbitration by Victor Emmanuel III of Italy, with each nation promising to abide by his determination. In 1931, Victor Emmanuel III issued his arbitral decision in the Clipperton Island Case, declaring Clipperton a French possession. Mexican President Pascual Ortiz Rubio, in response to public opinion that considered the Italian king biased towards France, consulted international experts on the validity of the decision, but ultimately Mexico accepted Victor Emmanuel's findings. The Mexican press at the time raised the issue of the Monroe Doctrine with the United States, stating that the French claim had preceded its issuance. France formally took possession of Clipperton on January 26, 1935.
U.S. presidential visit
President Franklin D. Roosevelt made a stop over at Clipperton in July 1938 aboard the as part of a fishing expedition to the Galápagos Islands and other points along the Central and South American coasts. At the island, Roosevelt and his party spent time fishing for sharks, and afterwards Dr. Waldo L. Schmitt of the Smithsonian Institution went ashore with some crew to gather scientific samples and make observations of the island.
Roosevelt had previously tried to visit Clipperton in July 1934 after transiting through the Panama Canal en route to Hawaii on the Houston; he had heard the area was good for fishing, but heavy seas prevented them from lowering a boat when they reached the island. On 19 July 1934, soon after the stop at Clipperton, the rigid airship Macon rendezvoused with the Houston, and one of the Macon's Curtiss F9C biplanes delivered mail to the president.
American occupation (1944–1945)
In April 1944, the took observations of Clipperton while en route to Hawaii. After an overflight of the island by planes from the and to ensure Clipperton was uninhabited, the departed San Francisco on 4 December 1944 with aerological specialists and personnel and was followed several days later by with provisions, heavy equipment, and equipment for construction of a U.S. Navy weather station on the island. The sailors at the weather station were armed in case of a possible Japanese attack in the region. Landing on the island proved challenging. LST-563 grounded on the reef and the salvage ship was brought in to help refloat the ship but it too was grounded. Finally, in January 1945, the and were able to free the Seize and to offload equipment from LST-563 before it was abandoned.
Once the weather station was completed and sailors garrisoned on the island, the U.S. government informed the British, French, and Mexican governments of the station and its purpose. Every day at 9 a.m., the 24 sailors stationed at the Clipperton weather station sent up weather balloons to gather information. Later, Clipperton was considered for an airfield to shift traffic between North America and Australia far from the front lines of the Pacific Theater.
In April 1943, during a meeting between presidents Roosevelt of the U.S. and Avila Camacho of Mexico, the topic of Mexican ownership of Clipperton was raised. The American government seemed interested in Clipperton being handed over to Mexico due to the importance the island might play in both commercial and military air travel, as well as its proximity to the Panama Canal.
Although these talks were informal, the U.S. backed away from any Mexican claim on Clipperton as Mexico had previously accepted the 1931 arbitration decision. The U.S. government also felt it would be easier to obtain a military base on the island from France. However, after the French government was notified about the weather station, relations on this matter deteriorated rapidly, with the French government sending a formal note of protest in defense of French sovereignty. In response, the U.S. extended an offer for the French military to operate the station or to have the Americans agree to leave the weather station under the same framework previously agreed to with other weather stations in France and North Africa. There was additional concern within the newly formed Provisional Government of the French Republic that notification of the installation was made to military and not civilian leadership.
French Foreign Minister Georges Bidault said of the incident: "This is very humiliating to us; we are anxious to cooperate with you, but sometimes you do not make it easy". French Vice Admiral Raymond Fenard requested during a meeting with U.S. Admiral Lyal A. Davidson that civilians be given access to Clipperton and the surrounding waters, but the U.S. Navy denied the request because there was an active military installation on the island. Instead Davidson offered to transport a French officer to the installation and reassured the French government that the United States did not wish to claim sovereignty over the island. During these discussions between the admirals, French diplomats in Mexico attempted to hire the Mexican vessel Pez de Plata out of Acapulco to bring a military attaché to Clipperton under a cover story that they were going on a shark fishing trip. At the request of the Americans, the Mexican government refused to allow the Pez de Plata to leave port. French officials then attempted to leave in another smaller vessel and filed a false destination with the local port authorities but were also stopped by Mexican officials.
During this period, French officials in Mexico leaked information about their concerns, as well as about the arrival of seaplanes at Clipperton, to The New York Times and Newsweek; both stories were refused publishing clearance on national security grounds. In February 1945, the U.S. Navy transported French officer Lieutenant Louis Jampierre on a 4-day trip to Clipperton out of San Diego, where he visited the installation and that afternoon returned to the United States. As the war in the Pacific progressed, concerns about Japanese incursions into the Eastern Pacific were reduced, and in September 1945 the U.S. Navy began its withdrawal from Clipperton. During the evacuation, munitions were destroyed, but significant matériel was left on the island. By 21 October 1945, the last U.S. Navy staff at the weather station left Clipperton.
Post-World War II developments
Since the island was abandoned by American forces at the end of World War II, the island has been visited by sports fishermen, French naval patrols, and Mexican tuna and shark fishermen. There have been infrequent scientific and amateur radio expeditions and, in 1978, Jacques-Yves Cousteau visited with a team of divers and a survivor from the 1917 evacuation to film a television special called Clipperton: The Island that Time Forgot.
The island was visited by ornithologist Ken Stager of the Los Angeles County Museum in 1958. Appalled at the depredations visited by feral pigs upon the island's brown booby and masked booby colonies (reduced to 500 and 150 birds, respectively), Stager procured a shotgun and killed all 58 pigs. By 2003, the booby colonies had grown to 25,000 brown boobies and 112,000 masked boobies, making Clipperton home to the world's second-largest brown booby colony, and its largest masked booby colony. In 1994, Stager's story inspired Bernie Tershy and Don Croll, both professors at the University of California, Santa Cruz Long Marine Lab, to found the non-profit Island Conservation, which works to prevent extinctions through the removal of invasive species from islands.
When the independence of Algeria in 1962 threatened French nuclear testing sites in North Africa, the French Ministry of Defence considered Clipperton as a possible replacement site. This was eventually ruled out due to the island's hostile climate and remote location, but the island was used to house a small scientific mission to collect data on nuclear fallout from other nuclear tests. From 1966 to 1969, the French military sent a series of missions, called "Bougainville", to the island. The Bougainville missions unloaded some 25 tons of equipment, including sanitary facilities, traditional Polynesian dwellings, drinking water treatment tanks, and generators. The missions sought to surveil the island and its surrounding waters, observe weather conditions, and evaluate potential rehabilitation of the World War II era airstrip. By 1978, the structures built during the Bougainville missions had become quite derelict. The French explored reopening the lagoon and developing a harbour for trade and tourism during the 1970s, but this too was abandoned. An automatic weather installation was completed on 7 April 1980, with data collected by the station transmitted via the Argos satellite system to the Lannion Space Meteorology Center in Brittany, France.
In 1981, the Académie des sciences d'outre-mer recommended the island have its own economic infrastructure, with an airstrip and a fishing port in the lagoon. This would mean opening the lagoon to the ocean by creating a passage in the atoll rim. To oversee this, the French government reassigned Clipperton from the High Commissioner for French Polynesia to the direct authority of the French government, classifying the island as an overseas state private property administered by France's Overseas Minister. In 1986, the Company for the Study, Development and Exploitation of Clipperton Island (French acronym, SEDEIC) and French officials began outlining a plan for the development of Clipperton as a fishing port, but due to economic constraints, the distance from markets, and the small size of the atoll, nothing beyond preliminary studies was undertaken and plans for the development were abandoned. In the mid-1980s, the French government began efforts to enlist citizens of French Polynesia to settle on Clipperton; these plans were ultimately abandoned as well.
In November 1994, the French Space Agency requested the help of NASA to track the first stage breakup of the newly designed Ariane 5 rocket. After spending a month on Clipperton setting up and calibrating radar equipment to monitor Ariane flight V88, the mission ended in disappointment when the rocket disintegrated 37 seconds after launch due to a software bug.
Despite Mexico accepting the 1931 arbitration decision that Clipperton was French territory, the right of Mexican fishing vessels to work Clipperton's territorial waters has remained a point of contention. A 2007 treaty, reaffirmed in 2017, grants Mexican access to Clipperton's fisheries so long as authorization is sought from the French government, conservation measures are followed, and catches are reported; however, the lack of regular monitoring of the fisheries by France makes verifying compliance difficult.
Castaways
In May 1893, Charles Jensen and "Brick" Thurman of the Oceanic Phosphate Company were left on the island by the company's ship Compeer with 90 days' worth of supplies in order to prevent other attempts to claim the island and its guano. Before sailing for Clipperton, Jensen wrote a letter to the Secretary of the Coast Seamen's Union, Andrew Furuseth, instructing him that, if the Oceanic Phosphate Company had not sent a vessel to Clipperton within six weeks of the Compeer's return, he should make it known that the two men had been stranded there. The Oceanic Phosphate Company denied it had left the men without adequate supplies and contracted the schooner Viking to retrieve them in late August. The Viking rescued the men, who had used seabirds' eggs to supplement their supplies, and returned them to San Francisco on 31 October.
In May 1897, the British cargo vessel Kinkora wrecked on Clipperton; the crew was able to salvage food and water from the ship, allowing them to survive on the island in relative comfort. During the crew's time on the island, a passing vessel offered to take the men to the mainland for $1,500, which the crew refused. Instead eight of the men loaded up a lifeboat and rowed to Acapulco for help. After the first mate of the Kinkora, Mr. McMarty, arrived in Acapulco, HMS Comus set sail from British Columbia to rescue the sailors.
In 1947, five American fishermen from San Pedro, California, were rescued from Clipperton after surviving on the island for six weeks.
In early 1962, the island provided a home to nine crewmen of the sunken tuna clipper MV Monarch, stranded for 23 days from 6 February to 1 March. They reported that the lagoon water was drinkable, although they preferred to drink water from the coconuts they found. Unable to use any of the dilapidated buildings, they constructed a crude shelter from cement bags and tin salvaged from Quonset huts built by the American military 20 years earlier. Wood from the huts was used for firewood, and fish caught off the fringing reef combined with potatoes and onions they had saved from their sinking vessel augmented the island's meager supply of coconuts. The crewmen reported they tried eating bird's eggs, but found them to be rancid, and they decided after trying to cook a 'little black bird' that it did not have enough meat to make the effort worthwhile. Pigs had been eradicated, but the crewmen reported seeing their skeletons around the atoll. The crewmen were eventually discovered by another fishing boat, and rescued by the U.S. Navy destroyer .
Amateur radio DX-peditions
Clipperton has long been an attractive destination for amateur radio groups due to its remoteness, permit requirements, history, and interesting environment. While some radio operation has been part of other visits to the island, major DX-peditions have included FO0XB (1978), FO0XX (1985), FO0CI (1992), FO0AAA (2000), TX5C (2008), and TX5S (2024).
In March 2014, the Cordell Expedition, organised and led by Robert Schmieder, combined a radio DX-pedition using callsign TX5K with environmental and scientific investigations. The team of 24 radio operators made more than 114,000 contacts, breaking the previous record of 75,000. The activity included extensive operation in the 6-meter band, including Earth–Moon–Earth communication (EME) or 'moonbounce' contacts. A notable accomplishment was the use of DXA, a real-time satellite-based online graphic radio log web page, allowing anyone with a browser to see the radio activity. Scientific work conducted during the expedition included the first collection and identification of foraminifera and extensive aerial imaging of the island using kite-borne cameras. The team included two scientists from the University of Tahiti and a French TV documentary crew from Thalassa.
In April 2015, Alain Duchauchoy, F6BFH, operated from Clipperton using callsign TX5P as part of the Passion 2015 scientific expedition to Clipperton Island. Duchauchoy also researched Mexican use of the island during the early 1900s as part of the expedition.
See also
Uninhabited island
Lists of islands
Notes
References
External links
Isla Clipperton o 'Los náufragos mexicanos − 1914/1917' [Clipperton or 'The Mexican Castaways – 1914/1917']
Photo galleries
The first dive trip to Clipperton Island aboard the Nautilus Explorer – pictures taken during a 2007 visit
Clipperton Island 2008 – Flickr gallery containing 94 large photos from a 2008 visit
3D photos of Clipperton Island 2010 – 3D anaglyphs
Visits and expeditions
2000 DXpedition to Clipperton Island – website of a visit by amateur radio enthusiasts in 2000
Diving trips to Clipperton atoll – from NautilusExplorer.com
States and territories established in 1931
1931 establishments in the French colonial empire
1931 establishments in North America
1931 in Mexico
Islands of Overseas France
Pacific Ocean atolls of France
Uninhabited islands of France
Islands of Central America
Dependent territories in North America
Dependent territories in Oceania
French colonization of the Americas
Former populated places in North America
Former populated places in Oceania
Former disputed islands
Arbitration cases
Territorial disputes of France
Territorial disputes of Mexico
Tropical Eastern Pacific
Uninhabited islands of the Pacific Ocean
Pacific islands claimed under the Guano Islands Act
Coral reefs
Reefs of the Pacific Ocean
Neotropical ecoregions
Ecoregions of Central America
Important Bird Areas of Overseas France
Important Bird Areas of Oceania
Seabird colonies
Island restoration
Victor Emmanuel III | Clipperton Island | [
"Biology"
] | 8,781 | [
"Biogeomorphology",
"Coral reefs"
] |
5,561 | https://en.wikipedia.org/wiki/Computational%20linguistics | Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.
Origins
The field has overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches were able to make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics could be learned using explicit rules as well. After the failure of rule-based approaches, David Hays coined the term in order to distinguish the field from AI and co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into a much wider field of natural language processing.
Annotated corpora
To study the English language in detail, researchers needed annotated text corpora. The Penn Treebank was one of the most widely used: it consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using both part-of-speech tagging and syntactic bracketing.
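The style of annotation the Penn Treebank popularised can be inspected with general-purpose toolkits. The following is a minimal sketch using NLTK, whose distribution bundles a roughly 10% sample of the treebank; the corpus reader calls shown are NLTK's own API, and the example output comment is only indicative.

```python
# Minimal sketch (assumes `pip install nltk`): inspecting Penn Treebank-style
# annotation through NLTK's bundled ~10% sample of the corpus.
import nltk
from nltk.corpus import treebank

nltk.download("treebank", quiet=True)  # fetch the sample corpus once

# Part-of-speech tagging: every token is paired with a tag.
print(treebank.tagged_sents()[0][:5])
# e.g. [('Pierre', 'NNP'), ('Vinken', 'NNP'), (',', ','), ('61', 'CD'), ('years', 'NNS')]

# Syntactic bracketing: the same sentence as a constituency tree.
print(treebank.parsed_sents()[0])
```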
Japanese sentence corpora were analyzed and a pattern of log-normality was found in relation to sentence length.
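Such a distributional claim can be checked on any corpus by fitting a log-normal distribution to observed sentence lengths. The sketch below uses SciPy on invented placeholder lengths rather than the Japanese corpora referred to above.

```python
# Hedged sketch: fit a log-normal distribution to sentence lengths.
# The lengths below are invented placeholders, not real corpus data.
import numpy as np
from scipy import stats

sentence_lengths = np.array([5, 8, 12, 7, 22, 15, 9, 30, 11, 18, 6, 25])

# Fix the location parameter at zero so only shape and scale are estimated.
shape, loc, scale = stats.lognorm.fit(sentence_lengths, floc=0)
print(f"sigma = {shape:.2f}, median length = {scale:.1f} words")

# A Kolmogorov-Smirnov test gives a rough indication of goodness of fit.
ks_stat, p_value = stats.kstest(sentence_lengths, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```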
Modeling language acquisition
During language acquisition, children are largely exposed only to positive evidence: they receive examples of correct forms, but little or no evidence about which forms are incorrect. This was a limitation for the models of the time, since the deep learning models available today did not yet exist in the late 1980s.
It has been shown that languages can be learned from a combination of simple input presented incrementally as the child develops better memory and a longer attention span; this would explain the long period of language acquisition in human infants and children.
Robots have been used to test linguistic theories. Enabled to learn as children might, models were created based on an affordance model in which mappings between actions, perceptions, and effects were created and linked to spoken words. Crucially, these robots were able to acquire functioning word-to-meaning mappings without needing grammatical structure.
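A toy way to illustrate grammar-free word-to-meaning learning of this kind is cross-situational co-occurrence counting, in which a word becomes associated with whatever perceptual feature most reliably accompanies it. The sketch below illustrates only that general idea with invented situations; it is not a reconstruction of the affordance model used in the robot experiments.

```python
# Toy cross-situational learner: map each word to the perceptual feature it
# co-occurs with most often. The situations are invented for illustration;
# this is not the affordance model described in the text.
from collections import defaultdict

situations = [
    ("push ball", {"ACT:push", "OBJ:round", "EFF:rolls"}),
    ("push box",  {"ACT:push", "OBJ:cubic", "EFF:slides"}),
    ("drop ball", {"ACT:drop", "OBJ:round", "EFF:bounces"}),
    ("drop box",  {"ACT:drop", "OBJ:cubic", "EFF:thuds"}),
]

counts = defaultdict(lambda: defaultdict(int))
for utterance, percepts in situations:
    for word in utterance.split():
        for feature in percepts:
            counts[word][feature] += 1

# Each word ends up linked to the feature that best predicts it,
# without any grammatical structure being involved.
for word, feats in counts.items():
    print(word, "->", max(feats, key=feats.get))
```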
Using the Price equation and Pólya urn dynamics, researchers have created a system which not only predicts future linguistic evolution but also gives insight into the evolutionary history of modern-day languages.
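Pólya urn dynamics are straightforward to simulate: each time a variant is drawn it is reinforced, so early random fluctuations can entrench one variant, a rich-get-richer effect. The sketch below is a generic urn simulation with assumed starting counts, not the specific published model.

```python
# Generic Polya urn simulation: drawing a variant reinforces it, producing
# rich-get-richer dynamics. Starting counts and step number are assumptions.
import random

def polya_urn(initial_counts, steps, reinforcement=1, seed=0):
    rng = random.Random(seed)
    counts = dict(initial_counts)
    variants = list(counts)
    for _ in range(steps):
        drawn = rng.choices(variants, weights=[counts[v] for v in variants])[0]
        counts[drawn] += reinforcement
    return counts

final = polya_urn({"variant_a": 1, "variant_b": 1}, steps=1000)
total = sum(final.values())
print({v: round(c / total, 3) for v, c in final.items()})
```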
Chomsky's theories
Chomsky's theories have influenced computational linguistics, particularly in understanding how infants learn complex grammatical structures, such as those described in Chomsky normal form. Attempts have been made to determine how an infant learns a "non-normal grammar" as theorized by Chomsky normal form. Research in this area combines structural approaches with computational models to analyze large linguistic corpora like the Penn Treebank, helping to uncover patterns in language acquisition.
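Chomsky normal form restricts every rule to either two non-terminals or a single terminal, which is what makes chart-based algorithms such as CKY recognition possible. The sketch below runs CKY over a toy CNF grammar; both the grammar and the sentence are invented for illustration and are unrelated to the Penn Treebank data mentioned above.

```python
# CKY recognition over a toy grammar in Chomsky normal form: every rule is
# either A -> B C (two non-terminals) or A -> 'word'. Grammar and sentence
# are invented for illustration.
BINARY = {
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
LEXICAL = {"the": {"Det"}, "child": {"N"}, "dog": {"N"}, "sees": {"V"}}

def cky_recognize(words, start="S"):
    n = len(words)
    # table[i][j] holds the non-terminals that can span words[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] |= LEXICAL.get(w, set())
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b in table[i][k]:
                    for c in table[k][j]:
                        table[i][j] |= BINARY.get((b, c), set())
    return start in table[0][n]

print(cky_recognize("the child sees the dog".split()))  # True
```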
See also
Artificial intelligence in fiction
Collostructional analysis
Computational lexicology
Computational Linguistics (journal)
Computational models of language acquisition
Computational semantics
Computational semiotics
Computer-assisted reviewing
Dialog systems
Glottochronology
Grammar induction
Human speechome project
Internet linguistics
Lexicostatistics
Natural language processing
Natural language user interface
Quantitative linguistics
Semantic relatedness
Semantometrics
Systemic functional linguistics
Translation memory
Universal Networking Language
References
Further reading
Steven Bird, Ewan Klein, and Edward Loper (2009). Natural Language Processing with Python. O'Reilly Media.
Daniel Jurafsky and James H. Martin (2008). Speech and Language Processing, 2nd edition. Pearson Prentice Hall.
Mohamed Zakaria Kurdi (2016). Natural Language Processing and Computational Linguistics: Speech, Morphology, and Syntax, Volume 1. ISTE-Wiley.
Mohamed Zakaria Kurdi (2017). Natural Language Processing and Computational Linguistics: Semantics, Discourse, and Applications, Volume 2. ISTE-Wiley.
External links
Association for Computational Linguistics (ACL)
ACL Anthology of research papers
ACL Wiki for Computational Linguistics
CICLing annual conferences on Computational Linguistics
Computational Linguistics – Applications workshop
Language Technology World
Resources for Text, Speech and Language Processing
The Research Group in Computational Linguistics
Formal sciences
Cognitive science
Computational fields of study | Computational linguistics | [
"Mathematics",
"Technology"
] | 908 | [
"Computational fields of study",
"Mathematical linguistics",
"Applied mathematics",
"Computational linguistics",
"Computing and society",
"Natural language and computing"
] |
5,623 | https://en.wikipedia.org/wiki/Canal | Canals or artificial waterways are waterways or engineered channels built for drainage management (e.g. flood control and irrigation) or for conveying water transport vehicles (e.g. water taxis). They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers.
In most cases, a canal has a series of dams and locks that create reservoirs of low speed current flow. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river and shares part of the latter's discharges and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water levels while staying in its valley.
A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal.
Many canals have been built at elevations above valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals.
The term was once used to describe linear features seen on the surface of Mars, Martian canals, an optical illusion.
Types of artificial waterways
A navigation is a series of channels that run roughly parallel to the valley and stream bed of an unimproved river. A navigation always shares the drainage basin of the river. A vessel uses the calm parts of the river itself as well as improvements, traversing the same changes in height.
A true canal is a channel that cuts across a drainage divide, making a navigable channel connecting two different drainage basins.
Structures used in artificial waterways
Both navigations and canals use engineered structures to improve navigation:
weirs and dams to raise river water levels to usable depths;
looping descents to create a longer and gentler channel around a stretch of rapids or falls;
locks to allow ships and barges to ascend/descend.
Since they cut across drainage divides, canals are more difficult to construct and often need additional improvements, like viaducts and aqueducts to bridge waters over streams and roads, and ways to keep water in the channel.
Types of canals
There are two broad types of canal:
Waterways: canals and navigations used for carrying vessels transporting goods and people. These can be subdivided into two kinds:
Those connecting existing lakes, rivers, other canals or seas and oceans.
Those connected in a city network: such as the Canal Grande and others of Venice; the grachten of Amsterdam or Utrecht, and the waterways of Bangkok.
Aqueducts: water supply canals that are used for the conveyance and delivery of potable water, municipal uses, hydro power canals and agriculture irrigation.
Importance
Historically, canals were of immense importance to the commerce, development, growth and vitality of a civilization. The movement of bulk raw materials such as coal and ores, practically a prerequisite for further urbanization and industrialization, was difficult and only marginally affordable without water transport. Such movement, facilitated by canals, fueled the Industrial Revolution, leading to new research disciplines, new industries and economies of scale, and raising the standard of living for industrialized societies.
The few canals still in operation in the 21st century are a fraction of the number that were once maintained during the earlier part of the Industrial Revolution. Their replacement was gradual, beginning in the United Kingdom in the 1840s, where canal shipping was first augmented by, and later superseded by, railways, which were much faster, less geographically constrained, and generally cheaper to maintain.
By the early 1880s, many canals which had little ability to compete with rail transport were abandoned. In the 20th century, oil was increasingly used as the heating fuel of choice, and the growth of coal shipments began to decrease. After the First World War, technological advances in motor trucks as well as expanding road networks saw increasing amounts of freight being transported by road, and the last small U.S. barge canals saw a steady decline in cargo ton-miles.
The once critical smaller inland waterways conceived and engineered as boat and barge canals have largely been supplanted and filled in, abandoned and left to deteriorate, or kept in service under a park service and staffed by government employees, where dams and locks are maintained for flood control or pleasure boating. Today, most ship canals (intended for larger, oceangoing vessels) primarily serve the bulk cargo and large ship transportation industries.
The longest extant canal, the Grand Canal in northern China, remains in heavy use, especially the portion south of the Yellow River. It stretches for 1,794 kilometres (1,115 miles) from Beijing to Hangzhou.
Construction
Canals are built in one of three ways, or a combination of the three, depending on available water and available path:
Human made streams
A canal can be created where no stream presently exists. Either the body of the canal is dug or the sides of the canal are created by making dykes or levees by piling dirt, stone, concrete or other building materials. The finished shape of the canal as seen in cross section is known as the canal prism. The water for the canal must be provided from an external source, like streams or reservoirs. Where the new waterway must change elevation, engineering works like locks, lifts or elevators are constructed to raise and lower vessels. Examples include canals that connect valleys over a higher body of land, like Canal du Midi, Canal de Briare and the Panama Canal.
A canal can be constructed by dredging a channel in the bottom of an existing lake. When the channel is complete, the lake is drained and the channel becomes a new canal, serving both drainage of the surrounding polder and providing transport there. Examples include the . One can also build two parallel dikes in an existing lake, forming the new canal in between, and then drain the remaining parts of the lake. The eastern and central parts of the North Sea Canal were constructed in this way. In both cases pumping stations are required to keep the land surrounding the canal dry, either pumping water from the canal into surrounding waters, or pumping it from the land into the canal.
Canalization and navigations
A stream can be canalized to make its navigable path more predictable and easier to maneuver. Canalization modifies the stream to carry traffic more safely by controlling the flow of the stream by dredging, damming and modifying its path. This frequently includes the incorporation of locks and spillways that make the river a navigation. Examples include the Lehigh Canal in Northeastern Pennsylvania's Coal Region, Basse Saône, Canal de Mines de Fer de la Moselle, and canal Aisne. Riparian zone restoration may be required.
Lateral canals
When a stream is too difficult to modify with canalization, a second stream can be created next to or at least near the existing stream. This is called a lateral canal, and may meander in a large horseshoe bend or series of curves some distance from the source stream's bed, lengthening the effective route in order to lower the ratio of rise over run (slope or pitch). The existing stream usually acts as the water source and the landscape around its banks provides a path for the new body. Examples include the Chesapeake and Ohio Canal, Canal latéral à la Loire, Garonne Lateral Canal, Welland Canal and Juliana Canal.
Smaller transportation canals can carry barges or narrowboats, while ship canals allow seagoing ships to travel to an inland port (e.g., Manchester Ship Canal), or from one sea or ocean to another (e.g., Caledonian Canal, Panama Canal).
Features
At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling.
Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered connecting either two pieces of canal at a different level or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used.
Prior to the development of the pound lock in 984 AD in China by Chhaio Wei-Yo and later in Europe in the 15th century, either flash locks consisting of a single gate were used or ramps, sometimes equipped with rollers, were used to change the level. Flash locks were only practical where there was plenty of water available.
Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes where a caisson is hauled up a steep railway.
To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee.
Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals.
Some canals attempted to keep changes in level down to a minimum. These canals known as contour canals would take longer, winding routes, along which the land was a uniform altitude. Other, generally later, canals took more direct routes requiring the use of various methods to deal with the change in level.
Canals have various features to tackle the problem of water supply. In cases, like the Suez Canal, the canal is open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal.
Where large amounts of goods are loaded or unloaded such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods.
When a section of the canal needs to be sealed off so it can be drained for maintenance, stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank. On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance or to prevent a major loss of water due to a canal breach.
Canal falls
A canal fall, or canal drop, is a vertical drop in the canal bed. These are built when the natural ground slope is steeper than the desired canal gradient. They are constructed so the falling water's kinetic energy is dissipated in order to prevent it from scouring the bed and sides of the canal.
A canal fall is constructed by cut and fill. It may be combined with a regulator, bridge, or other structure to save costs.
There are various types of canal falls, based on their shape. One type is the ogee fall, where the drop follows an s-shaped curve to create a smooth transition and reduce turbulence. However, this smooth transition does not dissipate the water's kinetic energy, which leads to heavy scouring. As a result, the canal needs to be reinforced with concrete or masonry to protect it from eroding.
Another type of canal fall is the vertical fall, which is "simple and economical". These feature a "cistern", or depressed area just downstream from the fall, to "cushion" the water by providing a deep pool for its kinetic energy to be diffused in. Vertical falls work for drops of up to 1.5 m in height, and for discharge of up to 15 cubic meters per second.
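The sizing limits quoted above translate directly into a simple rule-of-thumb check. The sketch below encodes only those two thresholds (a drop of up to 1.5 m and a discharge of up to 15 cubic metres per second); it is an illustration of the stated criteria, not a design tool.

```python
# Rule-of-thumb check for a vertical canal fall, encoding only the two
# limits quoted in the text; real designs weigh many more factors.
def vertical_fall_suitable(drop_m: float, discharge_m3_per_s: float) -> bool:
    return drop_m <= 1.5 and discharge_m3_per_s <= 15.0

print(vertical_fall_suitable(1.2, 10.0))  # True: within both limits
print(vertical_fall_suitable(2.0, 10.0))  # False: drop exceeds 1.5 m
```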
History
The transport capacity of pack animals and carts is limited. A mule can carry an eighth-ton maximum load over a journey measured in days and weeks, though much more for shorter distances and periods with appropriate rest. Besides, carts need roads. Transport over water is much more efficient and cost-effective for large cargoes.
Ancient canals
The oldest known canals were irrigation canals, built in Mesopotamia, in what is now Iraq. The Indus Valley civilization of ancient India had developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC. This was the first time that such a planned civil project had been undertaken in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan.
In ancient China, large canals for river transport were established as far back as the Spring and Autumn period (8th–5th centuries BC), the longest one of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. It is long and was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal existing since at least 486 BC. Even in its narrowest urban sections it is rarely less than wide.
In the 5th century BC, Achaemenid king Xerxes I of Persia ordered the construction of the Xerxes Canal through the base of Mount Athos peninsula, Chalkidiki, northern Greece. It was constructed as part of his preparations for the Second Persian invasion of Greece, a part of the Greco-Persian Wars. It is one of the few monuments left by the Persian Empire in Europe.
Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC.
There was little experience moving bulk loads by carts, while a pack-horse would [i.e. 'could'] carry only an eighth of a ton. On a soft road a horse might be able to draw 5/8ths of a ton. But if the load were carried by a barge on a waterway, then up to 30 tons could be drawn by the same horse.— technology historian Ronald W. Clark referring to transport realities before the industrial revolution and the Canal age.
Hohokam was a society in the North American Southwest in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major archaeological dig in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals.
The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water.
The Sinhalese constructed the 87 km (54 mi) Yodha Ela in 459 A.D. as part of their extensive irrigation network. Because of its single-banked design, the canal functioned as a kind of moving reservoir, managing the canal's pressure as water flowed in. It was also designed as an elongated reservoir passing through traps, creating 66 mini catchments as it flows from Kala Wewa to Thissa Wawa. The canal was not designed for the quick conveying of water from Kala Wewa to Thissa Wawa but to create a mass of water between the two reservoirs, which would in turn provide for agriculture and for the use of humans and animals.
The canal also achieved a rather low gradient for its time. It is still in use after renovation.
Middle Ages
In the Middle Ages, water transport was several times cheaper and faster than transport overland. Overland transport by animal drawn conveyances was used around settled areas, but unimproved roads required pack animal trains, usually of mules to carry any degree of mass, and while a mule could carry an eighth ton, it also needed teamsters to tend it and one man could only tend perhaps five mules, meaning overland bulk transport was also expensive, as men expect compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests, marshy or muddy quagmires as often as unimproved but dry footing. In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina built at the end of the 8th century under personal supervision of Charlemagne.
In Britain, the Glastonbury Canal is believed to be the first post-Roman canal and was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about . Its initial purpose is believed to be the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century.
More lasting and of more economic impact were canals like the Naviglio Grande built between 1127 and 1257 to connect Milan with the river Ticino. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe.
Later, canals were built in the Netherlands and Flanders to drain the polders and assist transportation of goods and people.
Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single, or flash, locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared, in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which was, it is presumed, introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks.
To break out of the limitations caused by river valleys, the first summit level canals were developed with the Grand Canal of China in 581–617 AD whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398.
Africa
In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad I between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate conquest of the city but his progress was halted when he went to war with the Mossi Kingdoms.
Early modern period
Around 1500–1800 the first summit level canal to use pound locks in Europe was the Briare Canal connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683) connecting the Atlantic to the Mediterranean. This included a staircase of 8 locks at Béziers, a tunnel, and three major aqueducts.
Canal building progressed steadily in Germany in the 17th and 18th centuries with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first early modern period canal built appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566.
The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook between the Boston, Massachusetts neighbourhoods of Dedham and Hyde Park connecting the higher waters of the Charles River and the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills.
In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718.
Industrial Revolution
The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities.
By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be.
The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741.
The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals.
In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal.
The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard.
The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, with it earning what had been spent on its construction within just a few years.
This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal.
The new canal system was both cause and effect of the rapid industrialization of The Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals.
For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods.
In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length.
Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other.
Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four were completed when the War of 1812 broke out; these were the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs about from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft (169 m). The Erie Canal, with its easy connections to most of the U.S. Midwest and New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more, it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. With farmers assured of a market for their products, settlement of the U.S. Midwest was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal-building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution and was where Samuel Slater built his first textile mill.
Power canals
A power canal refers to a canal used for hydraulic power generation, rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be "The Cradle of the American Industrial Revolution," has of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts, Holyoke, Massachusetts, Manchester, New Hampshire, and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company.
19th century
Competition, from railways from the 1830s and roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew from to over 4,000, with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way.
In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, long with 36 locks, opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions serviced by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions because of their reduction in transportation costs, which allowed Americans to ship and buy goods from farther distances much cheaper. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until replaced by a channelized river waterway.
Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie, bypassing Niagara Falls and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce. The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario on Lake Ontario. The Rideau Canal was built as a result of the War of 1812 to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to part of the St. Lawrence River, which was susceptible to blockade by the United States.
In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century.
Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914.
In the 19th century, a number of canals were built in Japan including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries.
A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing.
The second choice for a Central American canal was a Panama Canal. The De Lesseps company, which ran the Suez Canal, first attempted to build a Panama Canal in the 1880s. The difficulty of the terrain and the weather (rain) encountered caused the company to go bankrupt. High worker mortality from disease also discouraged further investment in the project. De Lesseps' abandoned excavating equipment still sits where it was left, isolated decaying machines that are today tourist attractions.
Twenty years later, an expansionist United States, which had just acquired colonies after defeating Spain in the 1898 Spanish–American War, and whose Navy became more important, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already thinking of independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took Panama province away from Colombia, and set up a puppet republic (Panama). Its currency, the Balboa – a name that suggests the country began as a way to get from one hemisphere to the other – was a replica of the US dollar. The US dollar was and remains legal tender (used as currency). A U.S. military zone, the Canal Zone, wide, with U.S. military stationed there (bases, 2 TV stations, channels 8 and 10, PXs, a U.S.-style high school), split Panama in half. The Canal – a major engineering project – was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979. The withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980.
Modern uses
Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals. Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016. The new set of locks allow transit of larger, Post-Panamax and New Panamax ships.
The narrow early industrial canals, however, have ceased to carry significant amounts of trade and many have been abandoned to navigation, but may still be used as a system for transportation of untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal.
A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years.
The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands.
Canals have found another use in the 21st century, as easements for the installation of fibre optic telecommunications network cabling, avoiding having the cables buried in roadways while facilitating access and reducing the hazard of damage from digging equipment.
Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area.
Cities on water
Canals are so deeply identified with Venice that many canal cities have been nicknamed "the Venice of…". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state.
Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the "Venice of the North" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium.
Suzhou was dubbed the "Venice of the East" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou are located along the lower mouth of the Yangtze River and Lake Tai, yet another source of small rivers and creeks, which have been canalized and developed for centuries.
Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan.
Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use.
Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks.
Boats
Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to long and wide and was primarily built for British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama canal where Panamax ships were limited to a length of and a beam of until 26 June 2016 when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to . At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels.
Lists of canals
Africa
Bahr Yussef
El Salam Canal (Egypt)
Ibrahimiya Canal (Egypt)
Mahmoudiyah Canal (Egypt)
Suez Canal (Egypt)
Asia
see List of canals in India
see List of canals in Pakistan
see History of canals in China
King Abdullah Canal (Jordan)
Qanat al-Jaish (Iraq)
Europe
Danube–Black Sea Canal (Romania)
North Crimean Canal (Ukraine)
Canals of France
Canals of Amsterdam
Canals of Germany
Canals of Ireland
Canals of Russia
Canals of the United Kingdom
List of canals in the United Kingdom
Great Bačka Canal (Serbia)
North America
Canals of Canada
Canals of the United States
Panama Canal
Lists of proposed canals
Eurasia Canal
Istanbul Canal
Nicaragua Canal
Salwa Canal
Thai Canal
Sulawesi Canal
Two Seas Canal
Northern river reversal
Balkan Canal or Danube–Morava–Vardar–Aegean Canal
Iranrud
See also
Beaver, a non-human animal also known for canal building
Canal elevator
Calle canal
Canal & River Trust
Canal tunnel
Environment Agency
Horse-drawn boat
Irrigation district
Lists of canals
List of navigation authorities in the United Kingdom
List of waterways
List of waterway societies in the United Kingdom
Mooring
Navigation authority
Proposed canals
Roman canals – (Torksey)
Volumetric flow rate
Water bridge
Waterscape
Water transportation
Waterway restoration
Waterways in the United Kingdom
Weigh lock
References
Notes
Bibliography
External links
British Waterways' leisure website – Britain's official guide to canals, rivers and lakes
Leeds Liverpool Canal Photographic Guide
Information and Boater's Guide to the New York State Canal System
"Canals and Navigable Rivers" by James S. Aber, Emporia State University
National Canal Museum (US)
London Canal Museum (UK)
Canals in Amsterdam
Canal du Midi
Canal des Deux Mers
Canal flow measurement using a sensor.
Coastal construction
Water transport infrastructure
Artificial bodies of water
Infrastructure | Canal | [
"Engineering"
] | 8,476 | [
"Construction",
"Coastal construction",
"Infrastructure"
] |
5,630 | https://en.wikipedia.org/wiki/Copula%20%28linguistics%29 | In linguistics, a copula /ˈkɒpjələ/ (plural: copulas or copulae) is a word or phrase that links the subject of a sentence to a subject complement, such as the word is in the sentence "The sky is blue" or the phrase was not being in the sentence "It was not being cooperative." The word copula derives from the Latin noun for a "link" or "tie" that connects two different things.
A copula is often a verb or a verb-like word, though this is not universally the case. A verb that is a copula is sometimes called a copulative or copular verb. In English primary education grammar courses, a copula is often called a linking verb. In other languages, copulas show more resemblances to pronouns, as in Classical Chinese and Guarani, or may take the form of suffixes attached to a noun, as in Korean, Beja, and Inuit languages.
Most languages have one main copula (in English, the verb "to be"), although some (like Spanish, Portuguese and Thai) have more than one, while others have none. While the term copula is generally used to refer to such principal verbs, it may also be used for a wider group of verbs with similar potential functions (like become, get, feel and seem in English); alternatively, these might be distinguished as "semi-copulas" or "pseudo-copulas".
Grammatical function
The principal use of a copula is to link the subject of a clause to a subject complement. A copular verb is often considered to be part of the predicate, the remainder being called a predicative expression. A simple clause containing a copula is illustrated below:
The book is on the table.
In that sentence, the noun phrase the book is the subject, the verb is serves as the copula, and the prepositional phrase on the table is the predicative expression. In some theories of grammar, the whole expression is on the table may be called a predicate or a verb phrase.
The predicative expression accompanying the copula, also known as the complement of the copula, may take any of several possible forms: it may be a noun or noun phrase, an adjective or adjective phrase, a prepositional phrase (as above), or an adverb or another adverbial phrase expressing time or location. Examples are given below, with the copula in bold and the predicative expression in italics:
The three components (subject, copula and predicative expression) do not necessarily appear in that order: their positioning depends on the rules for word order applicable to the language in question. In English (an SVO language), the ordering given above is the normal one, but certain variation is possible:
In many questions and other clauses with subject–auxiliary inversion, the copula moves in front of the subject: Are you happy?
In inverse copular constructions (see below) the predicative expression precedes the copula, but the subject follows it: In the room were three men.
It is also possible, in certain circumstances, for one (or even two) of the three components to be absent:
In null-subject (pro-drop) languages, the subject may be omitted, as it may from other types of sentence. In Italian, means , literally .
In non-finite clauses in languages like English, the subject is often absent, as in the participial phrase being tired or the infinitive phrase to be tired. The same applies to most imperative sentences like Be good!
For cases in which no copula appears, see below.
Any of the three components may be omitted as a result of various general types of ellipsis. In particular, in English, the predicative expression may be elided in a construction similar to verb phrase ellipsis, as in short sentences like I am; Are they? (where the predicative expression is understood from the previous context).
Inverse copular constructions, in which the positions of the predicative expression and the subject are reversed, are found in various languages. They have been the subject of much theoretical analysis, particularly in regard to the difficulty of maintaining, in the case of such sentences, the usual division into a subject noun phrase and a predicate verb phrase.
Another issue is verb agreement when both subject and predicative expression are noun phrases (and differ in number or person): in English, the copula typically agrees with the syntactical subject even if it is not logically (i.e. semantically) the subject, as in the cause of the riot is (not are) these pictures of the wall. Compare Italian ; notice the use of the plural to agree with plural rather than with singular . In instances where an English syntactical subject comprises a prepositional object that is pluralized, however, the prepositional object agrees with the predicative expression, e.g. "What kind of birds are those?"
The definition and scope of the concept of a copula is not necessarily precise in any language. As noted above, though the concept of the copula in English is most strongly associated with the verb to be, there are many other verbs that can be used in a copular sense as well.
The boy became a man.
The girl grew more excited as the holiday preparations intensified.
The dog felt tired from the activity.
And more tenuously
The milk turned sour.
The food smells good.
You seem upset.
Other functions
A copular verb may also have other uses supplementary to or distinct from its uses as a copula. Some co-occurrences are common.
Auxiliary verb
The English verb to be is also used as an auxiliary verb, especially for expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle):
Other languages' copulas have additional uses as auxiliaries. For example, French can be used to express passive voice similarly to English be; both French and German are used to express the perfect forms of certain verbs:
In the same way, usage of English be in the present perfect, though archaic, is still commonly seen in old texts/translations:
The auxiliary functions of these verbs are derived from their copular function, and can be interpreted as special cases of it (with the verbal forms they precede being considered adjectival).
Another auxiliary usage in English is to denote an obligatory action or expected occurrence: "I am to serve you". "The manager is to resign". This can be put also into past tense: "We were to leave at 9". For forms like "if I was/were to come", see English conditional sentences. (By certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.)
Existential verb
The English to be and its equivalents in certain other languages also have a non-copular use as an existential verb, meaning "to exist". This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence.
Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are and , where and are the equivalents of English "am", normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version (where the verb is used rather than the copula or ).
Another type of existential usage is in clauses of the there is... or there are... type. Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French (which uses parts of the verb , not the copula) or the Swedish (the passive voice of the verb for "to find"). For details, see existential clause.
Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions.
Meanings
Predicates formed using a copula may express identity: that the two noun phrases (subject and complement) have the same referent or express an identical concept:
They may also express membership of a class or a subset relationship:
Similarly they may express some property, relation or position, permanent or temporary:
Essence versus state
Some languages use different copulas, or different syntax, to denote a permanent, essential characteristic of something versus a temporary state. For examples, see the sections on the Romance languages, Slavic languages and Irish.
Forms
In many languages the principal copula is a verb, like English (to) be, German , Mixtec , Touareg emous, etc. It may inflect for grammatical categories like tense, aspect and mood, like other verbs in the language. Being a very commonly used verb, it is likely that the copula has irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details).
Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: in Inuit languages.
In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns.
Zero copula
In some languages, copula omission occurs within a particular grammatical context. For example, speakers of Bengali, Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense: Bengali: , Aami manush, 'I (am a) human'; Russian: , ; Indonesian: ; Turkish: ; Hungarian: ; Arabic: , ; Hebrew: , ; Geʽez: , / / ; Southern Quechua: . The usage is known generically as the zero copula. In other tenses (sometimes in forms other than third person singular), the copula usually reappears.
Some languages drop the copula in poetic or aphoristic contexts. Examples in English include
The more, the merrier.
Out of many, one.
True that.
Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages.
In informal English speech, the copula may also be dropped in ordinary sentences, as in "She a nurse." This is a feature of African-American Vernacular English, but it is also used by a variety of other English speakers. An example is the sentence "I saw twelve men, each a soldier."
Examples in specific languages
In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: , "the house is large", can be written , "large the house (is)."
In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (): ; but: .
In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — , literally , ; , literally , ; , literally , , , literally , .
Alternatively, in many cases, the particle can be used as a copulative (though not all instances of are used as thus, like all other Māori particles, has multiple purposes): ; ; .
However, when expressing identity or class membership, must be used: ; ; .
When expressing identity, can be placed on either object in the clause without changing the meaning ( is the same as ) but not on both ( would be equivalent to saying "it is this, it is my book" in English).
In Hungarian, zero copula is restricted to present tense in third person singular and plural: / — / ; but: , , , . The copula also reappears for stating locations: , and for stating time: . However, the copula may be omitted in colloquial language: .
Hungarian uses copula for expressing location: , but it is omitted in the third person present tense for attribution or identity statements: ; ; (but , , ).
In Turkish, both the third person singular and the third person plural copulas are omittable. and both mean , and and both mean . Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal.
The Turkish first person singular copula suffix is omitted when introducing oneself. is grammatically correct, but (same sentence with the copula) is not for an introduction (but is grammatically correct in other cases).
Further restrictions may apply before omission is permitted. For example, in the Irish language, , the present tense of the copula, may be omitted when the predicate is a noun. , the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., , , ) preceding the noun is omitted as well.
Copula-like words
Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the following sentences (the predicative expression, the complement of the verb, is in italics):
(This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.)
Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion.
These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae.
In particular languages
Indo-European
In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German , Latin , Persian and Russian , even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: (), (), and ().
English
The English copular verb be has eight basic forms (be, am, is, are, being, was, were, been) and five negative forms (ain't (in some dialects), isn't, aren't, wasn't, weren't). No other English verb has more than five forms. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula.
The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under .
A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... The acceptability of this construction is a disputed matter in English prescriptive grammar.
The simple English copula "be" may on occasion be substituted by other verbs with near identical meanings.
Persian
In Persian, the verb to be can take the form of either (cognate to English is) or (cognate to be).
{| border="0" cellspacing="2" cellpadding="1"
|-
|
|
|
|-
|
|
|
|-
|
|
|
|}
Hindustani
In Hindustani (Hindi and Urdu), the copula can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative). Some example sentences using the simple aspect are shown below:
Besides the verb , there are three other verbs which can also be used as the copula: , , and . The following table shows the conjugations of the copula in the five grammatical moods in the simple aspect. The transliteration scheme used is ISO 15919.
Romance
Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be", the main one from the Latin (via Vulgar Latin ; deriving from *es-), often referenced as (another of the Latin verb's principal parts) and a secondary one from (from *sta-), often referenced as . The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. and ). (The English words just used, "essential" and "state", are also cognate with the Latin infinitives and . The word "stay" also comes from Latin , through Middle French , stem of Old French .) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas ( and ), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese.
In some cases, the verb itself changes the meaning of the adjective/sentence. The following examples are from Portuguese:
Slavic
Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case.
As noted above under , Russian and other North Slavic languages generally or often omit the copula in the present tense.
Irish
In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics.
Describing the subject's state or situation typically uses the normal VSO ordering with the verb . The copula is used to state essential characteristics or equivalences.
{| border="0" cellspacing="2" cellpadding="1" valign="top"
| align=left valign=top| || align=right valign=top | || align=left valign=top |
|-
| || || (lit. )
|-
| || || (lit. )
|}
The word is the copula (rhymes with the English word "miss").
The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, is used (for "he" or "it"), as opposed to the normal pronoun ; for a feminine singular noun, is used (for "she" or "it"), as opposed to normal pronoun ; for plural nouns, is used (for "they" or "those"), as opposed to the normal pronoun .
To describe being in a state, condition, place, or act, the verb "to be" is used:
Arabic dialects
North Levantine Arabic
The North Levantine Arabic dialect, spoken in Syria and Lebanon, has a negative copula formed by and a suffixed pronoun.
Bantu languages
Chichewa
In Chichewa, a Bantu language spoken mainly in Malawi, a very similar distinction exists between permanent and temporary states as in Spanish and Portuguese, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is (negative ):
For the 1st and 2nd persons the particle is combined with pronouns, e.g., :
For temporary states and location, the copula is the appropriate form of the defective verb :
For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix:
In the past tenses, is used for both types of copula:
In the future, subjunctive, or conditional tenses, a form of the verb is used as a copula:
Muylaq' Aymaran
Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry.
Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. However, in a verb phrase like "It is old", the noun does not require the copulative verbalizer: .
It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears prior to vowel-suppressing suffixes in the interlinear gloss to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other (e.g. phonotactic) motivations). Consider the verb , which is inflected for the first person simple tense and so, predictably, loses its final root vowel: .
However, prior to the suffixation of the first person simple suffix to the same root nominalized with the agentive nominalizer , the word must be verbalized. The fact that the final vowel of below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: .
It is worthwhile to compare the copulative verbalizer in Muylaq' Aymara with that of La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house", in which the nominal root is verbalized with the copulative verbalizer; note, however, that the correspondence between the copulative verbalizers in these two variants is not always a strict one-to-one relation.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| La Paz Aymara:
|
|-
| Muylaq' Aymara:
|
|}
Georgian
As in English, the verb "to be" () is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots , , , and (past participle) are used in the present tense, future tense, past tense and the perfective tenses respectively. Examples:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
|
|-
|
|
|-
|
|
|-
|
|
|-
|
|
|}
In the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root (which is the expected root for the perfective tense) is followed by the root , which is the root for the present tense. In the pluperfective tense, again, the root is followed by the past tense root . This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| , literally
|-
|
| , literally
|}
Here, is the past participle of in German. In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects.
Haitian Creole
Haitian Creole, a French-based creole language, has three forms of the copula: , , and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration).
Although no textual record exists of Haitian-Creole at its earliest stages of development from French, is derived from French (written ), which is the normal French contraction of (that, written ) and the copula (is, written ) (a form of the verb ).
The derivation of is less obvious; but we can assume that the French source was ("he/it is", written ), which, in rapidly spoken French, is very commonly pronounced as (typically written ).
The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian-Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula.
Which of //Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules:
1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase:
2. Use when the complement is a noun phrase. But, whereas other verbs come after any tense/mood/aspect particles (like to mark negation, or to explicitly mark past tense, or to mark progressive aspect), comes before any such particles:
3. Use where French and English have a dummy "it" subject:
4. Finally, use the other copula form in situations where the sentence's syntax leaves the copula at the end of a phrase:
The above is, however, only a simplified analysis.
Japanese
The Japanese copula (most often translated into English as an inflected form of "to be") is unique among verbs in Japanese. It is highly irregular and behaves in several ways that other verbs do not, such as requiring a separate relativised form in some circumstances and, in others, acting simply as a marker of formality or politeness with no predicative force. In the most basic case, it behaves like a normal verb with irregular forms, which (like most copulas crosslinguistically) takes a non-case-marked complement instead of an object.
As with all verbs in Japanese, it is necessary to mark the speaker's implied social relationship to the addressee by the choice of verb form. The following two sentences differ only in the fact that the first is appropriate only between decently close friends or family, or said by someone of significantly higher social status than the listener, and the second is only appropriate outside of such circumstances.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || ||
|-
|
| || ||
|}
Japanese has two classes of words which correspond to adjectives in English, one of which requires a copula to become a predicate and one of which does not.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|-
|
| ||
|-
| *
| * || colspan=2 | Invalid, as is its own predicate and does not need a copula to make it a predicate
|}
However, the polite copula is used as a means to mark the self-predicating class of adjectives as grammatically formal, and thus the formal equivalent of is . In these situations, the copula is not serving as an actual predication device; it is only a means to supply formality marking.
The non-self-predicating class of adjectives is the one place in modern Japanese where a separate relativiser form appears; these require the form in order to modify nouns.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|-
|
| ||
|-
|
| ||
|-
|
| ||
|-
| *
| * || colspan=2 | Invalid, as this class of adjectives cannot just be placed next to a noun to modify it
|-
| *
| * || colspan=2 | Invalid, as the copula form requires a specially marked form when it heads a relative clause, unlike all other verbs in modern Japanese
|}
Etymologically the copula is a reduced form of , which effectively means 'exists as'; in formal situations or its formal form can appear in place of or , and in certain situations other forms of may be appropriate (such as /). Nonstandard forms such as in Kansai and in much of the rest of western Japan are due to various dialects reducing differently than the Kantō-based standard form did.
The negative form of the copula is generally or its reduced form (or in formal situations, substitute for ). This includes the topic marker , due to negative copula sentences typically implying some kind of contrastive topic-like force on the complement. can occur in relative clauses, where information structure marking might be odd, but is also a general negative copula and would be sensible still in any situation might be used.
Many sentences in Japanese are structurally a headless relative clause nominalised by (or its reduced form ) and then predicated with a copula; the structure is analogous to something like English it's that.... This structure is used to indicate that the statement is intended to answer a question or explain confusion a listener may have had (though the question it answers may not have ever been overtly spoken). This has largely been incorporated into Japanese's sentence-final particle system, and is far more common than the equivalent English structure.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|-
|
| ||
|}
Similarly, has also been recruited into the sentence-final particle system, and is used to mark a sentence that the speaker thinks should have been obvious to the listener, or to indicate that the speaker is surprised to find that the sentence is true. In this role it can co-occur with an actual predicative , but not with the positive ; is omitted in such sentences.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || (differs from "It's not tomorrow" only by intonation; as a sentence-final particle is not a separate phonological unit while as a negative copula it is)
|-
|
| ||
|}
Korean
For sentences with predicate nominatives, the copula () is added to the predicate nominative (with no space in between).
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|}
Some adjectives (usually colour adjectives) are nominalized and used with the copula ().
1. Without the copula ():
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|}
2. With the copula ():
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| ||
|}
Some Korean adjectives are derived using the copula. Separating these articles and nominalizing the former part will often result in a sentence with a related, but different meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable as the listener can decide what the speaker is trying to say using the context.
Chinese
In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., in Chinese, "to be tired" ( ), "to be hungry" ( ), "to be located at" ( ), "to be stupid" ( ) and so forth. A sentence can consist simply of a pronoun and such a verb: for example, (). Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very", "not", "quite", etc.); when not otherwise qualified, they are often preceded by , which in other contexts means "very", but in this use often has no particular meaning.
Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": . This is used frequently; for example, instead of having a verb meaning "to be Chinese", the usual expression is "to be a Chinese person" (; ; ). This is sometimes called an equative verb. Another possibility is for the complement to be just a noun modifier (ending in ), the noun being omitted:
Before the Han dynasty, the character served as a demonstrative pronoun meaning "this" (this usage survives in some idioms and proverbs). Some linguists believe that it developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese one could say, for example, "George W. Bush, this president of the United States", meaning "George W. Bush is the president of the United States"). The character appears to be formed as a compound of characters with the meanings of "early" and "straight."
Another use of in modern Chinese is in combination with the modifier to mean "yes" or to show agreement. For example:
Question: Response: , meaning "Yes", or , meaning "No."
(A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct", ; the corresponding negative answer is .)
Yet another use of is in the shì...(de) construction, which is used to emphasize a particular element of the sentence; see .
In Hokkien acts as the copula, and is the equivalent in Wu Chinese. Cantonese uses () instead of ; similarly, Hakka uses .
Siouan languages
In Siouan languages like Lakota, in principle almost all words—according to their structure—are verbs. So not only (transitive, intransitive and so-called "stative") verbs but even nouns often behave like verbs and do not need to have copulas.
For example, the word refers to a man, and the verb is expressed as . Yet there also is a copula that in most cases is used: .
In order to express the statement , one has to say . But, in order to express that that person is THE doctor (say, that had been phoned to help), one must use another copula :
In order to refer to space (e.g., Robert is in the house), various verbs are used, e.g., (lit., ) for humans, or for inanimate objects of a certain shape. "Robert is in the house" could be translated as , whereas "There's one restaurant next to the gas station" translates as
Constructed languages
The constructed language Lojban has two words that act similar to a copula in natural languages. The clause turns whatever follows it into a predicate that means to be (among) what it follows. For example, means "to be Bob", and means "to be one of the three sisters". Another one is , which is itself a predicate that means all its arguments are the same thing (equal). One word which is often confused for a copula in Lojban, but is not one, is . It merely indicates that the word which follows is the main predicate of the sentence. For example, means "my friend is a musician", but the word does not correspond to English is; instead, the word , which is a predicate, corresponds to the entire phrase "is a musician". The word is used to prevent , which would mean "the friend-of-me type of musician".
See also
Indo-European copula
Nominal sentence
Stative verb
Subject complement
Zero copula
Citations
General references
(See "copular sentences" and "existential sentences and expletive there" in Volume II.)
Moro, A. (1997) The Raising of Predicates. Cambridge University Press, Cambridge, England.
Tüting, A. W. (December 2003). Essay on Lakota syntax.
Further reading
Parts of speech
Verb types | Copula (linguistics) | [
"Technology"
] | 8,335 | [
"Parts of speech",
"Components"
] |
5,638 | https://en.wikipedia.org/wiki/Combustion | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed as smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. The study of combustion is known as combustion science.
Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):
2H2(g) + O2(g) -> 2H2O(g)
Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel: no fuel remains and, ideally, no residual oxidant remains either. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since chemical equilibrium is not necessarily reached and the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the smoke produced is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law.
Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.
Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
Types
Complete and incomplete
Complete
In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant.
Combustion does not necessarily proceed to the maximum degree of oxidation, and the outcome can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. species appear in significant amounts above about , and more is produced at higher temperatures. The amount of is also a function of oxygen excess.
In most industrial applications and in fires, air is the source of oxygen (). In the air, each mole of oxygen is mixed with approximately of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to (mostly , with much smaller amounts of ). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogens remain unreacted. A complete set of equations for the combustion of a hydrocarbon in the air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.
The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.
Incomplete
Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.
For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.
The designs of combustion devices can improve the quality of combustion, such as burners and internal combustion engines. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.
The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.
Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than the formation of carbon dioxide, so complete combustion is greatly preferred, especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.
Problems associated with incomplete combustion
Environmental problems
These oxides combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. Because it makes certain nutrients, such as calcium and phosphorus, less available to plants, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog.
Human health problems
Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide absorbed from the air in the lungs binds with hemoglobin in the red blood cells. This reduces the capacity of the red blood cells to carry oxygen throughout the body.
Smoldering
Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.
Spontaneous
Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition.
For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.
Turbulent
Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.
Micro-gravity
The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others).
Micro-combustion
Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.
Chemical equations
Stoichiometric combustion of a hydrocarbon in oxygen
Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:
\underset{fuel}{C_\mathit{x}H_\mathit{y}} + \underset{oxygen}{(\mathit{x} + \tfrac{\mathit{y}}{4})O2} -> \underset{carbon\ dioxide}{\mathit{x}CO2} + \underset{water}{\tfrac{\mathit{y}}{2}H2O}
For example, the stoichiometric combustion of methane in oxygen is:
\underset{methane}{CH4} + 2O2 -> CO2 + 2H2O
Stoichiometric combustion of a hydrocarbon in air
If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components of air as nitrogen gives a 'nitrogen'-to-oxygen ratio of 3.77, i.e. (100% − 20.95%) / 20.95%, where oxygen makes up 20.95% of air by volume:
where .
For example, the stoichiometric combustion of methane in air is:
The stoichiometric composition of methane in air is 1 / (1 + 2 + 7.54) = 9.49% vol.
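The figures above can be reproduced with a few lines of arithmetic. The following is a minimal Python sketch that computes the stoichiometric oxygen and nitrogen requirements and the fuel's volume fraction for a general hydrocarbon CxHy, using the 3.77 nitrogen-to-oxygen ratio given above; the function name is illustrative.

```python
# Minimal sketch: stoichiometric air requirement for a hydrocarbon CxHy
# burning to CO2 and H2O, treating air as O2 plus 3.77 volumes of "nitrogen".
def stoichiometric_mix(x, y, n2_per_o2=3.77):
    o2 = x + y / 4.0                   # moles of O2 per mole of fuel
    n2 = n2_per_o2 * o2                # inert nitrogen carried with that O2
    fuel_fraction = 1.0 / (1.0 + o2 + n2)
    return o2, n2, fuel_fraction

o2, n2, frac = stoichiometric_mix(1, 4)   # methane, CH4
print(f"O2 needed: {o2:.2f} mol, N2: {n2:.2f} mol, "
      f"fuel fraction: {100 * frac:.2f}% vol")   # ~9.49% vol, as quoted above
```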
The stoichiometric combustion reaction for CHO in air:
The stoichiometric combustion reaction for CHOS:
The stoichiometric combustion reaction for CHONS:
The stoichiometric combustion reaction for CHOF:
Trace combustion products
Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to and, to a much lesser extent, to . forms by disproportionation of , and and form by disproportionation of .
For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% .
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).
Incomplete combustion of a hydrocarbon in oxygen
The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly , , , and . Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:
\underset{fuel}{C_\mathit{x} H_\mathit{y}} + \underset{oxygen}{\mathit{z} O2} -> \underset{carbon \ dioxide}{\mathit{a}CO2} + \underset{carbon\ monoxide}{\mathit{b}CO} + \underset{water}{\mathit{c}H2O} + \underset{hydrogen}{\mathit{d}H2}
When z falls below roughly 50% of the stoichiometric value, can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.
The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane () with four moles of , seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:
Carbon:
Hydrogen:
Oxygen:
These three equations are insufficient in themselves to calculate the combustion gas composition.
However, at the equilibrium position, the water-gas shift reaction gives another equation:
CO + H2O <=> CO2 + H2
For example, at the value of K is 0.728. Solving, the combustion gas consists of 42.4% , 29.0% , 14.7% , and 13.9% . Carbon becomes a stable phase at and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% and and about 0.5% .
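As a concrete illustration of how the three element balances and the water-gas shift equilibrium determine the product mixture, here is a minimal Python sketch of the propane example above. The equilibrium constant K = 0.728 is taken from the text; the assignment of each quoted percentage to a particular species is my own inference, and the exact figures depend on rounding.

```python
# Minimal sketch: equilibrium products of incomplete combustion of
# 1 mol propane (C3H8) with 4 mol O2 (z = 80% of the stoichiometric 5 mol),
# using the element balances plus the water-gas shift equilibrium
#   CO + H2O <=> CO2 + H2,  K = [CO2][H2] / ([CO][H2O])  (K = 0.728 from the text)
import math

K = 0.728
n_C, n_H, n_O = 3, 8, 8      # atoms supplied by C3H8 + 4 O2

# Unknown product moles: a = CO2, b = CO, c = H2O, d = H2
# Balances: a + b = n_C ; 2c + 2d = n_H ; 2a + b + c = n_O
# Eliminating b, c and d for these numbers gives a quadratic in a:
#   (1 - K) a^2 + (8K - 1) a - 15K = 0
A, B, C = 1 - K, 8 * K - 1, -15 * K
a = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)   # physically meaningful root
b = n_C - a
c = n_O - 2 * a - b
d = n_H / 2 - c

total = a + b + c + d        # 7 mol of combustion gas in this example
for name, n in (("CO2", a), ("CO", b), ("H2O", c), ("H2", d)):
    print(f"{name}: {100 * n / total:4.1f}%")   # approx. 29.0, 13.8, 42.4 and 14.7 %
```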
Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.
Liquid fuels
Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.
Gaseous fuels
Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.
Solid fuels
The act of combustion consists of three relatively distinct but overlapping phases:
Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation.
Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours.
Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders.
Combustion management
Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss.
In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually and ) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane () combustion, for example, slightly more than two molecules of oxygen are required.
The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.
Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.
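In practice, one common way to apply such a material balance is to infer the percent excess air from the measured oxygen content of the dry flue gas. The sketch below uses a widely quoted rule-of-thumb approximation (roughly valid when the dry flue-gas volume is close to the supplied air volume); the function name and the sample readings are illustrative, not taken from the text.

```python
def excess_air_percent(o2_dry_pct, o2_in_air_pct=20.95):
    """Approximate percent excess combustion air from a dry flue-gas O2 reading."""
    return 100.0 * o2_dry_pct / (o2_in_air_pct - o2_dry_pct)

for o2 in (2.0, 3.0, 5.0):
    print(f"{o2:.1f}% O2 in dry flue gas -> ~{excess_air_percent(o2):.0f}% excess air")
```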
Reaction mechanism
Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.
Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.
Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.
The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).
Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involve hundreds of chemical species reacting according to thousands of reactions.
The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces widely disparate time scales, which makes the whole dynamical system stiff. As a result, direct numerical simulation of turbulent reactive flows with heavy fuels quickly becomes intractable even for modern supercomputers.
Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:
The Relaxation Redistribution Method (RRM)
The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments
The invariant-constrained equilibrium edge preimage curve method.
A few variational approaches
The Computational Singular perturbation (CSP) method and further developments.
The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach.
The G-Scheme.
The Method of Invariant Grids (MIG).
Kinetic modelling
Kinetic modelling may be used to gain insight into the reaction mechanisms of thermal decomposition during the combustion of different materials, using, for instance, thermogravimetric analysis.
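As an illustration of what such kinetic modelling typically involves, the following sketch integrates a simple single-step, first-order Arrhenius decomposition model under a constant heating rate, the kind of model often fitted to thermogravimetric data. All rate parameters here are invented for illustration rather than taken from any particular material.

```python
# Minimal sketch: single-step first-order decomposition,
#   d(alpha)/dt = k(T) (1 - alpha),  with Arrhenius k(T) = A exp(-Ea / (R T))
# under a constant heating rate beta, integrated with an explicit Euler step.
import math

A_pre = 1.0e10        # pre-exponential factor, 1/s   (illustrative value)
Ea = 150e3            # activation energy, J/mol      (illustrative value)
R = 8.314             # gas constant, J/(mol K)
beta = 10.0 / 60.0    # heating rate: 10 K/min expressed in K/s
T, dt = 300.0, 0.5    # initial temperature (K) and time step (s)
alpha = 0.0           # conversion (fraction decomposed)

while alpha < 0.99 and T < 1500.0:
    k = A_pre * math.exp(-Ea / (R * T))
    alpha += k * (1.0 - alpha) * dt       # explicit Euler step
    T += beta * dt
    if alpha > 0.99:
        print(f"~99% conversion reached near {T:.0f} K")
```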
Temperature
Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).
In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following:
the heating value;
the stoichiometric air to fuel ratio ;
the specific heat capacity of fuel and air;
the air and fuel inlet temperatures.
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.
Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas.
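A very rough estimate of the adiabatic flame temperature follows directly from the first-law statement above if a single constant average specific heat is assumed for the product gases. The sketch below uses assumed round-number properties for methane; real calculations use temperature-dependent heat capacities and account for dissociation, so the result is only indicative.

```python
# Minimal sketch: adiabatic flame temperature from a constant-cp energy balance,
#   T_ad ~ T_in + LHV / ((1 + AFR) * cp_products)
def adiabatic_flame_temp(lhv_j_per_kg, afr_mass, cp_products, t_in_k=298.0):
    return t_in_k + lhv_j_per_kg / ((1.0 + afr_mass) * cp_products)

t_ad = adiabatic_flame_temp(
    lhv_j_per_kg=50.0e6,   # lower heating value of methane, ~50 MJ/kg (assumed)
    afr_mass=17.2,         # stoichiometric air-fuel ratio by mass (assumed)
    cp_products=1300.0,    # average cp of hot flue gas, J/(kg K) (assumed)
)
# ~2400 K with these crude assumptions; detailed calculations for methane in air
# give a lower value (about 2200 K) because cp rises and products dissociate.
print(f"Rough adiabatic flame temperature: {t_ad:.0f} K")
```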
In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.
Instabilities
Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of emissions. The tendency is to run lean, at an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the emissions; however, running the combustion lean makes it very susceptible to combustion instability.
The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability:
G(x) = (1/T) ∫ q'(x, t) p'(x, t) dt, integrated over one period T,
where q' is the heat release rate perturbation and p' is the pressure fluctuation.
When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.
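A minimal sketch of how the Rayleigh Index might be evaluated numerically from sampled heat-release and pressure fluctuation signals is given below; the signals are synthetic and the function is an illustrative assumption, not a standard API.

```python
# Minimal sketch: discrete evaluation of the Rayleigh Index over one cycle,
# assuming q_prime and p_prime are uniformly sampled fluctuation signals.
# The signals below are synthetic and purely illustrative.
import numpy as np

def rayleigh_index(q_prime, p_prime, dt):
    """Approximate the integral of q'(t) * p'(t) over one instability cycle.

    Positive -> heat release in phase with pressure (instability driven);
    negative -> thermoacoustic damping.
    """
    return float(np.sum(q_prime * p_prime) * dt)

# Synthetic one-cycle example: in-phase oscillations give a positive index,
# oscillations 180 degrees out of phase give a negative one.
t = np.linspace(0.0, 1.0, 1000)
p = np.sin(2 * np.pi * t)
print(rayleigh_index(np.sin(2 * np.pi * t), p, t[1] - t[0]))           # > 0
print(rayleigh_index(np.sin(2 * np.pi * t + np.pi), p, t[1] - t[0]))   # < 0
```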
See also
Related concepts
Air–fuel ratio
Autoignition temperature
Chemical looping combustion
Deflagration
Detonation
Dust explosion
Explosion
Fire
Flame
Global warming
Heterogeneous combustion
Markstein number
Phlogiston theory (historical)
Spontaneous combustion
Machines and equipment
Boiler
Bunsen burner
External combustion engine
Furnace
Gas turbine
Internal combustion engine
Rocket engine
Scientific and engineering societies
International Flame Research Foundation
The Combustion Institute
Other
Combustible dust
Biomass burning
List of light sources
Open burning of waste
Stubble burning
References
Further reading
Chemical reactions | Combustion | [
"Chemistry"
] | 5,652 | [
"Combustion",
"nan"
] |
5,642 | https://en.wikipedia.org/wiki/Costume%20jewelry | Costume or fashion jewelry includes a range of decorative items worn for personal adornment that are manufactured as less expensive ornamentation to complement a particular fashionable outfit or garment as opposed to "real" (fine) jewelry, which is more costly and which may be regarded primarily as collectibles, keepsakes, or investments. From the outset, costume jewelry — also known as fashion jewelry — paralleled the styles of its more precious fine counterparts.
Terminology
It is also known as artificial jewellery, imitation jewellery, imitated jewelry, trinkets, fashion jewelry, junk jewelry, fake jewelry, or fallalery.
Etymology
The term costume jewelry dates back to the early 20th century. It reflects the use of the word "costume" to refer to what is now called an "outfit".
Components
Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or Lucite, set in pewter, silver, nickel, or brass. During the Depression years, rhinestones were even down-graded by some manufacturers to meet the cost of production.
During the World War II era, sterling silver was often incorporated into costume jewelry designs primarily because:
The components used for base metal were needed for wartime production (i.e., military applications), and a ban was placed on their use in the private sector.
Base metal had originally been popular because it could approximate platinum's color; sterling silver fulfilled the same function.
This resulted in a number of years during which sterling silver costume jewelry was produced and some can still be found in today's vintage jewelry marketplace.
Modern costume jewelry incorporates a wide range of materials. High-end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold- or silver-plated brass, and sometimes vermeil or sterling silver. Lower-priced jewelry may still use gold plating over pewter, nickel, or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood.
Historical expression
Costume jewelry can be characterized by the period in history in which it was made.
Art Deco period (1920–1930s)
The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. The movement died with the onset of the Great Depression and the outbreak of World War II.
According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were:
Free-flowing curves were replaced with a harshly geometric and symmetrical theme
Long pendants, bangle bracelets, cocktail rings, and elaborate accessory items such as cigarette cases and holders
Retro period (1935 to 1950)
In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The Retro period primarily included American-made jewelry, which had a distinctly American look. With the war in Europe, many European jewelry firms were forced to shut down, and many European designers emigrated to the U.S., where the economy was recovering.
According to Schiffer, some of the characteristics of costume jewelry in the Retro period were:
Glamour, elegance, and sophistication
Flowers, bows, and sunburst designs with a Hollywood flair
Moonstones, horse motifs, military influence, and ballerinas
Bakelite and other plastic jewelry
Art Modern period (1945 to 1960)
In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s.
According to Schiffer, some of the characteristics of costume jewelry in the Art Modern period were:
Bold, lavish jewelry
Large, chunky bracelets, charm bracelets, Jade/opal, citrine and topaz
Poodle pins, Christmas tree pins, and other Christmas jewelry
Rhinestones
With the advent of the Mod period came "Body Jewelry". Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets.
General history
Costume jewelry has been part of the culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry.
But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the Industrial Revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible.
As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working-class woman, could own a small piece of costume jewelry. The average town and countrywoman could acquire and wear a considerable amount of this mass-produced jewelry that was both affordable and stylish.
Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both the high and low priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Sherman, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen.
A significant factor in the popularization of costume jewelry was Hollywood movies. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. If you admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex, you could buy a copy from Joseff of Hollywood, who made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry.
Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Chanel’s designs drew from a variety of historical styles, including Byzantine and Renaissance influences, often featuring crosses and intricate metalwork. Her collaboration with glassmakers, such as the Gripoix family, introduced richly colored glass beads and simulated gemstones, which added depth to her creations without the high cost of traditional precious stones.
Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three-strand faux pearl necklace worn by Barbara Bush to her husband's inaugural ball. Other celebrated names who wore Lane’s creations include Jackie Kennedy, Babe Paley, the Duchess of Windsor, and Nancy Reagan.
Elsa Schiaparelli brought surrealist influences into costume jewelry design, collaborating with artists such as Salvador Dalí.
In many instances, high-end fashion jewelry has achieved a "collectible" status and increased value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for 'signed pieces', that is pieces that have the maker's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Sherman, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality 'unsigned' pieces, especially if they are of an unusual design.
Business and industry
Costume jewelry is considered a discrete category of fashion accessory and displays many characteristics of a self-contained industry. Costume jewelry manufacturers are located throughout the world, with a particular concentration in parts of China and India, where entire citywide and region-wide economies are dominated by the trade of these goods. There has been considerable controversy in the United States and elsewhere about the lack of regulations in the manufacture of such jewelry—these range from human rights issues surrounding the treatment of labor, to the use of manufacturing processes in which small, but potentially harmful, amounts of toxic metals are added during production. In 2010, the Associated Press released the story that toxic levels of the metal cadmium were found in children's jewelry. An Associated Press investigation found some pieces contained more than 80 percent of cadmium. The wider issues surrounding imports, exports, trade laws, and globalization also apply to the costume jewelry trade.
As part of the supply chain, wholesalers in the United States and other nations purchase costume jewelry from manufacturers and typically import or export it to wholesale distributors and suppliers who deal directly with retailers. Wholesale costume jewelry merchants will traditionally seek out new suppliers at trade shows. As the Internet has become increasingly important in global trade, the trade-show model has changed. Retailers can now select from a large number of wholesalers with sites on the World Wide Web. The wholesalers purchase from international suppliers who are also available on the Web from different parts of the world like Chinese, Korean, Indonesian, Thai, and Indian jewelry companies, with their wide range of products in bulk quantities. Some of these sites also market directly to consumers who can purchase costume jewelry at greatly reduced prices. Some of these websites categorize fashion jewelry separately, while others use this term in place of costume jewelry. The trend of jewelry-making at home by hobbyists for personal enjoyment or for sale on sites like Etsy has resulted in the common practice of buying wholesale costume jewelry in bulk and using it for parts.
According to a 2011 report, demand for artificial or imitation jewelry rose by 85% due to the increase in gold prices.
See also
Marcasite jewelry
Gustave Sherman
References
External links
Jewellery components | Costume jewelry | [
"Technology"
] | 1,989 | [
"Jewellery components",
"Components"
] |
5,658 | https://en.wikipedia.org/wiki/Human%20cannibalism | Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practices cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe animals consuming parts of individuals of the same species as food.
Anatomically modern humans, Neanderthals, and Homo antecessor are known to have practised cannibalism to some extent in the Pleistocene. Cannibalism was occasionally practised in Egypt during ancient and Roman times, as well as later during severe famines. The Island Caribs of the Lesser Antilles, whose name is the origin of the word cannibal, acquired a long-standing reputation as eaters of human flesh, reconfirmed when their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture. Depicting indigenous peoples as cannibals was a common fantasy and rationale for European colonialism and 'civilising missions'.
Cannibalism has been well documented in much of the world, including Fiji (once nicknamed the "Cannibal Isles"), the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia and of the Congo Basin. A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. Reaching its height during the 17th century, this practice continued in some cases into the second half of the 19th century.
Cannibalism has occasionally been practised as a last resort by people suffering from famine. Well-known examples include the ill-fated Donner Party (1846–1847), the Holodomor (1932–1933), and the crash of Uruguayan Air Force Flight 571 (1972), after which the survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Albert Fish, Issei Sagawa, Jeffrey Dahmer, and Armin Meiwes. Cannibalism has been both practised and fiercely condemned in several recent wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons.
Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, but such views have been largely rejected as irreconcilable with the actual evidence.
Etymology
The word "cannibal" is derived from Spanish caníbal or caríbal, originally used as a name variant for the Kalinago (Island Caribs), a people from the West Indies said to have eaten human flesh. The older term anthropophagy, meaning "eating humans", is also used for human cannibalism.
Reasons and types
Cannibalism has been practised under a variety of circumstances and for various motives. To adequately express this diversity, Shirley Lindenbaum suggests that "it might be better to talk about 'cannibalisms' in the plural."
Institutionalized, survival, and pathological cannibalism
One major distinction is whether cannibal acts are accepted by the culture in which they occur ("institutionalized cannibalism"), or whether they are merely practised under starvation conditions to ensure one's immediate survival ("survival cannibalism"), or by isolated individuals considered criminal and often pathological by society at large ("cannibalism as psychopathology" or as "aberrant behavior").
Institutionalized cannibalism, sometimes also called "learned cannibalism", is the consumption of human body parts as "an institutionalized practice" generally accepted in the culture where it occurs.
By contrast, survival cannibalism means "the consumption of others under conditions of starvation such as shipwreck, military siege, and famine, in which persons normally averse to the idea are driven [to it] by the will to live". Also known as famine cannibalism, such forms of cannibalism resorted to only in situations of extreme necessity have occurred in many cultures where cannibalism is otherwise clearly rejected. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party.
Such cases often involve only necro-cannibalism (eating the corpse of someone already dead) as opposed to homicidal cannibalism (killing someone for food). In modern English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. This decision outlawed and effectively ended the practice of shipwrecked sailors drawing lots in order to determine who would be killed and eaten to prevent the others from starving, a time-honoured practice formerly known as a "custom of the sea".
In other cases, cannibalism is an expression of a psychopathology or mental disorder, condemned by the society in which it occurs and "considered to be an indicator of [a] severe personality disorder or psychosis". Well-known cases include Albert Fish, Issei Sagawa, and Armin Meiwes. Fantasies of cannibalism, whether acted out or not, are not specifically mentioned in manuals of mental disorders such as the DSM, presumably because at least serious cases (that lead to murder) are very rare.
Exo-, endo-, and autocannibalism
Within institutionalized cannibalism, exocannibalism is often distinguished from endocannibalism. Endocannibalism refers to the consumption of a person from the same community. Often it is a part of a funerary ceremony, similar to burial or cremation in other cultures. The consumption of the recently deceased in such rites can be considered "an act of affection" and a major part of the grieving process. It has also been explained as a way of guiding the souls of the dead into the bodies of living descendants.
In contrast, exocannibalism is the consumption of a person from outside the community. It is frequently "an act of aggression, often in the context of warfare", where the flesh of killed or captured enemies may be eaten to celebrate one's victory over them.
Some scholars explain both types of cannibalism as due to a belief that eating a person's flesh or internal organs will endow the cannibal with some of the positive characteristics of the deceased. However, several authors investigating exocannibalism in New Zealand, New Guinea, and the Congo Basin observe that such beliefs were absent in these regions.
A further type, different from both exo- and endocannibalism, is autocannibalism (also called autophagy or self-cannibalism), "the act of eating parts of oneself". It does not ever seem to have been an institutionalized practice, but occasionally occurs as pathological behaviour, or due to other reasons such as curiosity. Also on record are instances of forced autocannibalism committed as acts of aggression, where individuals are forced to eat parts of their own bodies as a form of torture.
Exocannibalism is thus often associated with the consumption of enemies as an act of aggression, a practice also known as war cannibalism. Endocannibalism is often associated with the consumption of deceased relatives in funerary rites driven by a practice known as funerary or mortuary cannibalism.
Additional motives
Medicinal cannibalism (also called medical cannibalism) means "the ingestion of human tissue ... as a supposed medicine or tonic". In contrast to other forms of cannibalism, which Europeans generally frowned upon, the "medicinal ingestion" of various "human body parts was widely practiced throughout Europe from the sixteenth to the eighteenth centuries", with early records of the practice going back to the first century CE. It was also frequently practised in China.
Sacrificial cannibalism refers the consumption of the flesh of victims of human sacrifice, for example among the Aztecs. Human and animal remains excavated in Knossos, Crete, have been interpreted as evidence of a ritual in which children and sheep were sacrificed and eaten together during the Bronze Age. According to Ancient Roman reports, the Celts in Britain practised sacrificial cannibalism, and archaeological evidence backing these claims has by now been found.
Infanticidal cannibalism or cannibalistic infanticide refers to cases where newborns or infants are killed because they are "considered unwanted or unfit to live" and then "consumed by the mother, father, both parents or close relatives".
Infanticide followed by cannibalism was practised in various regions, but is particularly well documented among Aboriginal Australians. Among animals, such behaviour is called filial cannibalism, and it is common in many species, especially among fish.
Human predation is the hunting of people from unrelated and possibly hostile groups in order to eat them. In parts of the Southern New Guinea lowland rain forests, hunting people "was an opportunistic extension of seasonal foraging or pillaging strategies", with human bodies just as welcome as those of animals as sources of protein, according to the anthropologist Bruce M. Knauft. As populations living near coasts and rivers were usually better nourished and hence often physically larger and stronger than those living inland, they "raided inland 'bush' peoples with impunity and often with little fear of retaliation". Cases of human predation are also on record for the neighbouring Bismarck Archipelago and for Australia. In the Congo Basin, there lived groups such as the Bankutu who hunted humans for food even when game was plentiful.
The term innocent cannibalism has been used for cases of people eating human flesh without knowing what they are eating. It is a subject of myths, such as the myth of Thyestes who unknowingly ate the flesh of his own sons. There are also actual cases on record, for example from the Congo Basin, where cannibalism had been quite widespread and where even in the 1950s travellers were sometimes served a meat dish, learning only afterwards that the meat had been of human origin.
Gastronomic and functionalist explanations
The term gastronomic cannibalism has been suggested for cases where human flesh is eaten to "provide a supplement to the regular diet", thus essentially for its nutritional value, or, in an alternative definition, for cases where it is "eaten without ceremony (other than culinary), in the same manner as the flesh of any other animal". While the term has been criticized as being too vague to clearly identify a specific type of cannibalism, various records indicate that nutritional or culinary concerns could indeed play a role in such acts even outside of periods of starvation. Referring to the Congo Basin, where many of the eaten were butchered slaves rather than enemies killed in war, the anthropologist Emil Torday notes that "the most common [reason for cannibalism] was simply gastronomic: the natives loved 'the flesh that speaks' [as human flesh was commonly called] and paid for it". The historian Key Ray Chong observes that, throughout Chinese history, "learned cannibalism was often practiced ... for culinary appreciation".
In his popular book Guns, Germs and Steel, Jared Diamond suggests that "protein starvation is probably also the ultimate reason why cannibalism was widespread in traditional New Guinea highland societies", and both in New Zealand and Fiji, cannibals explained their acts as due to a lack of animal meat. In Liberia, a former cannibal argued that it would have been wasteful to let the flesh of killed enemies spoil, and eaters of human flesh in New Guinea and the neighbouring Bismarck Archipelago expressed the same sentiment.
In many cases, human flesh was also described as particularly delicious, especially when it came from women, children, or both. Such statements are on record for various regions and peoples, including the Aztecs, today's Liberia and Nigeria, the Fang people in west-central Africa, the Congo Basin, China up to the 14th century, Sumatra, Borneo, Australia, New Guinea, New Zealand, Vanuatu, and Fiji.
Some Europeans and Americans who ate human flesh accidentally, out of curiosity, or to comply with local customs likewise tended to describe it as very good.
There is a debate among anthropologists on how important functionalist reasons are for the understanding of institutionalized cannibalism. Diamond is not alone in suggesting "that the consumption of human flesh was of nutritional benefit for some populations in New Guinea" and the same case has been made for other "tropical peoples ... exploiting a diverse range of animal foods", including human flesh. The materialist anthropologist Marvin Harris argued that a "shortage of animal protein" was also the underlying reason for Aztec cannibalism. The cultural anthropologist Marshall Sahlins, on the other hand, rejected such explanations as overly simplistic, stressing that cannibal customs must be regarded as "complex phenomen[a]" with "myriad attributes" which can only be understood if one considers "symbolism, ritual, and cosmology" in addition to their "practical function".
In pre-modern medicine, an explanation given by the now-discredited theory of humorism for cannibalism was that it was caused by a black acrimonious humor, which, being lodged in the linings of the ventricles of the heart, produced a voracity for human flesh. On the other hand, the French philosopher Michel de Montaigne understood war cannibalism as a way of expressing vengeance and hatred towards one's enemies and celebrating one's victory over them, thus giving an interpretation that is close to modern explanations. He also pointed out that some acts of Europeans in his own time could be considered as equally barbarous, making his essay "Of Cannibals" () a precursor to later ideas of cultural relativism.
Body parts and culinary practices
Nutritional value of the human body
Archaeologist James Cole investigated the nutritional value of the human body and found it to be similar to that of animals of similar size.
He notes that, according to ethnographic and archaeological records, nearly all edible parts of humans were sometimes eaten – not only skeletal muscle tissue ("flesh" or "meat" in a narrow sense), but also "lungs, liver, brain, heart, nervous tissue, bone marrow, genitalia and skin", as well as kidneys. For a typical adult man, the combined nutritional value of all these edible parts is about 126,000 kilocalories (kcal). The nutritional value of women and younger individuals is lower because of their lower body weight – for example, around 86% of a male adult for an adult woman and 30% for a boy aged around 5 or 6.
As the daily energy need of an adult man is about 2,400 kilocalories, a dead male body could thus have fed a group of 25 men for a bit more than two days, provided they ate nothing but the human flesh alone – longer if it was part of a mixed diet. The nutritional value of the human body is thus not insubstantial, though Cole notes that for prehistoric hunters, large megafauna such as mammoths, rhinoceros, and bisons would have been an even better deal as long as they were available and could be caught, because of their much higher body weight.
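As a quick check of the figures quoted above, the following snippet reproduces the "a bit more than two days" estimate; the numbers are those given in the text.

```python
# Arithmetic check of the passage above: how long a ~126,000 kcal adult male
# body could feed a group of 25 men needing ~2,400 kcal each per day,
# if they ate nothing else. Figures are those quoted in the text.
body_kcal = 126_000
daily_need_kcal = 2_400
group_size = 25

days = body_kcal / (group_size * daily_need_kcal)
print(f"{days:.1f} days")   # about 2.1 days, matching "a bit more than two days"
```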
Hearts and livers
Cases of people eating human livers and hearts, especially of enemies, have been reported from across the world. After the Battle of Uhud (625), Hind bint Utba ate (or at least attempted to) the liver of Hamza ibn Abd al-Muttalib, an uncle of Muhammad. At that time, the liver was considered "the seat of life".
French Catholics ate livers and hearts of Huguenots at the St. Bartholomew's Day massacre in 1572, in some cases also offering them for sale.
In China, medical cannibalism was practised over centuries. People voluntarily cut their own body parts, including parts of their livers, and boiled them to cure ailing relatives. Children were sometimes killed because eating their boiled hearts was considered a good way of extending one's life. Emperor Wuzong of Tang supposedly ordered provincial officials to send him "the hearts and livers of fifteen-year-old boys and girls" when he had become seriously ill, hoping in vain this medicine would cure him. Later private individuals sometimes followed his example, paying soldiers who kidnapped preteen children for their kitchen.
When "human flesh and organs were sold openly at the marketplace" during the Taiping Rebellion in 1850–1864, human hearts became a popular dish, according to some who afterwards freely admitted having consumed them.
According to a missionary's report from the brutal suppression of the Dungan Revolt of 1895–1896 in northwestern China, "thousands of men, women and children were ruthlessly massacred by the imperial soldiers" and "many a meal of human hearts and livers was partaken of by soldiers", supposedly out of a belief that this would give them "the courage their enemies had displayed".
In World War II, Japanese soldiers ate the livers of killed Americans in the Chichijima incident.
Many Japanese soldiers who died during the occupation of Jolo Island in the Philippines had their livers eaten by local Moro fighters, according to Japanese soldier Fujioka Akiyoshi.
During the Cultural Revolution (1966–1976), hundreds of incidents of cannibalism occurred, mostly motivated by hatred against supposed "class enemies", but sometimes also by health concerns. In a case recorded by the local authorities, a school teacher in Mengshan County "heard that consuming a 'beauty's heart' could cure disease". He then chose a 13- or 14-year-old student of his and publicly denounced her as a member of the enemy faction, which was enough to get her killed by an angry mob. After the others had left, he "cut open the girl's chest ..., dug out her heart, and took it home to enjoy".
In a further case that took place in Wuxuan County, likewise in the Guangxi region, three brothers were beaten to death as supposed enemies; afterwards their livers were cut out, baked, and consumed "as medicine".
According to the Chinese writer Zheng Yi, who researched these events, "the consumption of human liver was mentioned at least fifty or sixty times" in just a small number of archival documents. He talked with a man who had eaten human liver and told him that "barbecued liver is delicious".
During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, reporter Richard Lloyd Parry met a young cannibal who had just participated in a "human barbecue" and told him without hesitation: "It tastes just like chicken. Especially the liver – just the same as chicken." In 2013, during the Syrian civil war, Syrian rebel Abu Sakkar was filmed eating parts of the lung or liver of a government soldier while declaring that "We will eat your hearts and your livers you soldiers of Bashar the dog".
Breasts, palms, and soles
Various accounts from around the world mention women's breasts as a favourite body part. Also frequently mentioned are the palms of the hands and sometimes the soles of the feet, regardless of the victim's gender.
Jerome, in his treatise Against Jovinianus, claimed that the British Attacotti were cannibals who regarded the buttocks of men and the breasts of women as delicacies.
During the Mongol invasion of Europe in the 13th century and their subsequent rule over China during the Yuan dynasty (1271–1368), some Mongol fighters practised cannibalism and both European and Chinese observers record a preference for women's breasts, which were considered "delicacies" and, if there were many corpses, sometimes the only part of a female body that was eaten (of men, only the thighs were said to be eaten in such circumstances).
After meeting a group of cannibals in West Africa in the 14th century, the Moroccan explorer Ibn Battuta recorded that, according to their preferences, "the tastiest part of women's flesh is the palms and the breast."
Centuries later, the anthropologist wrote that, in southern Nigeria, "the parts in greatest favour are the palms of the hands, the fingers and toes, and, of a woman, the breast."
Regarding the north of the country, his colleague Charles Kingsley Meek added: "Among all the cannibal tribes the palms of the hands and the soles of the feet were considered the tit-bits of the body."
Among the Apambia, a cannibalistic clan of the Azande people in Central Africa, palms and soles were considered the best parts of the human body, while their favourite dish was prepared with "fat from a woman's breast", according to the missionary and ethnographer F. Gero.
Similar preferences are on record throughout Melanesia. According to the anthropologists Bernard Deacon and Camilla Wedgwood, women were "specially fattened for eating" in Vanuatu, "the breasts being the great delicacy". A missionary confirmed that "a body of a female usually formed the principal part of the repast" at feasts for chiefs and warriors.
The ethnologist writes: "Apart from the breasts of women and the genitals of men, palms of hands and soles of feet were the most coveted morsels." He knew a chief on Ambae, one of the islands of Vanuatu, who, "according to fairly reliably sources", dined on a young girl's breasts every few days.
When visiting the Solomon Islands in the 1980s, anthropologist Michael Krieger met a former cannibal who told him that women's breasts had been considered the best part of the human body because they were so fatty, with fat being a rare and sought delicacy.
They were also considered among the best parts in New Guinea and the Bismarck Archipelago.
Modes of preparation
Based on theoretical considerations, the structuralist anthropologist Claude Lévi-Strauss suggested that human flesh was most typically boiled, with roasting also used to prepare the bodies of enemies and other outsiders in exocannibalism, but rarely in funerary endocannibalism (when eating deceased relatives).
But an analysis of 60 sufficiently detailed and credible descriptions of institutionalized cannibalism by anthropologist Paul Shankman failed to confirm this hypothesis. Shankman found that roasting and boiling together accounted for only about half of the cases, with roasting being slightly more common. In contrast to Lévi-Strauss's predictions, boiling was more often used in exocannibalism, while roasting was about equally common for both.
Shankman observed that various other "ways of preparing people" were repeatedly employed as well; in one third of all cases, two or more modes were used together (e.g. some bodies or body parts were boiled or baked, while others were roasted). Human flesh was baked in steam on preheated rocks or in earth ovens (a technique widely used in the Pacific), smoked (which allowed to preserve it for later consumption), or eaten raw. While these modes were used in both exo- and endocannibalism, another method that was only used in the latter and only in the Americas was to burn the bones or bodies of deceased relatives and then to consume the bone ash.
After analysing numerous accounts from China, Key Ray Chong similarly concludes that "a variety of methods for cooking human flesh" were used in this country. Most popular were "broiling, roasting, boiling and steaming", followed by "pickling in salt, wine, sauce and the like". Human flesh was also often "cooked into soup" or stewed in cauldrons. Eating human flesh raw was the "least popular" method, but a few cases are on record too. Chong notes that human flesh was typically cooked in the same way as "ordinary foodstuffs for daily consumption" – no principal distinction from the treatment of animal meat is detectable, and nearly any mode of preparation used for animals could also be used for people.
Whole-body roasting and baking
Though human corpses, like those of animals, were usually cut into pieces for further processing, reports of people being roasted or baked whole are on record throughout the world.
At the archaeological site of Herxheim, Germany, more than a thousand people were killed and eaten about 7000 years ago, and the evidence indicates that many of them were spit-roasted whole over open fires.
During severe famines in China and Egypt during the 12th and early 13th centuries, there was a black-market trade in corpses of little children that were roasted or boiled whole.
In China, human-flesh sellers advertised such corpses as good for being boiled or steamed whole, "including their bones", and praised their particular tenderness.
In Cairo, Egypt, the Arab physician Abd al-Latif al-Baghdadi repeatedly saw "little children, roasted or boiled", offered for sale in baskets on street corners during a heavy famine that started in 1200 CE.
Older children and possibly adults were sometimes prepared in the same way.
Once he saw "a child nearing the age of puberty, who had been found roasted"; two young people confessed to having killed and cooked the child.
Another time, remains were found of a person who had apparently been roasted and served whole, the legs tied like those of "a sheep trussed for cooking".
Only the skeleton was found, still undivided and in the trussed position, but "with all the flesh stripped off for food".
In some cases children were roasted and offered for sale by their own parents; other victims were street children, who had become very numerous and were often kidnapped and cooked by people looking for food or extra income.
The victims were so numerous that sometimes "two or three children, even more, would be found in a single cooking pot."
Al-Latif notes that, while initially people were shocked by such acts, they "eventually ... grew accustomed, and some conceived such a taste for these detestable meats that they made them their ordinary provender ... The horror people had felt at first vanished entirely".
After the end of the Mongol-led Yuan dynasty (1271–1368), a Chinese writer criticized in his recollections of the period that some Mongol soldiers ate human flesh because of its taste rather than (as had also occurred in other times) merely in cases of necessity. He added that they enjoyed torturing their victims (often children or women, whose flesh was preferred over that of men) by roasting them alive, in "large jars whose outside touched the fire [or] on an iron grate".
Other victims were placed "inside a double bag ... which was put into a large pot" and so boiled alive.
While not mentioning live roasting or boiling, European authors also complained about cannibalism and cruelty during the Mongol invasion of Europe, and a drawing in the Chronica Majora (compiled by Matthew Paris) shows Mongol fighters spit-roasting a human victim.
, who accompanied Christopher Columbus during his second voyage, afterwards stated "that he saw there with his own eyes several Indians skewered on spits being roasted over burning coals as a treat for the gluttonous."
Jean de Léry, who lived for several months among the Tupinambá in Brazil, writes that several of his companions reported "that they had seen not only a number of men and women cut in pieces and grilled on the boucans, but also little unweaned children roasted whole" after a successful attack on an enemy village.
According to German ethnologist Leo Frobenius, children captured by Songye slave raiders in the Central African Kasaï region that were too young to be sold with a profit were instead "skewered on long spears like rats and roasted over a quickly kindled large fire" for consumption by the raiders.
In the Solomon Islands in the 1870s, a British captain saw a "dead body, dressed and cooked whole" offered for sale in a canoe. A settler treated the scene as "an every-day occurrence" and told him "that he had seen as many as twenty bodies lying on the beach, dressed and cooked". Decades later, a missionary reported that whole bodies were still offered "up and down the coast in canoes for sale" after battles, since human flesh was eaten "for pleasure".
In Fiji, whole human bodies cooked in earth ovens were served in carefully pre-arranged postures, according to anthropologist Lorimer Fison and several other sources:
Within this archipelago, it was especially the Gau Islanders who "were famous for cooking bodies whole".
In New Caledonia, a missionary named Ta'unga from the Cook Islands repeatedly saw how whole human bodies were cooked in earth ovens: "They tie the hands together and bundle them up together with the intestines. The legs are bent up and bound with hibiscus bark. When it is completed they lay the body out flat on its back in the earth oven, then when it is baked ready they cut it up and eat it." Ta'unga commented: "One curious thing is that when a man is alive he has a human appearance, but after he is baked he looks more like a dog, as the lips are shriveled back and his teeth are bared."
Among the Māori in New Zealand, children captured in war campaigns were sometimes spit-roasted whole (after slitting open their bellies to remove the intestines), as various sources report. Enslaved children, including teenagers, could meet the same fate, and whole babies were sometimes served at the tables of chiefs.
In the Marquesas Islands, captives (preferably women) killed for consumption "were spitted on long poles that entered between their legs and emerged from their mouths" and then roasted whole. Similar customs had a long history: In Nuku Hiva, the largest of these islands, archaeologists found the partially consumed "remains of a young child" that had been roasted whole in an oven during the 14th century or earlier.
Medical aspects
A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite.
In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, because it claimed to have found a data collection bias, which led to an erroneous conclusion. This claimed bias came from incidents of cannibalism used in the analysis not being due to local cultures, but having been carried out by explorers, stranded seafarers or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions.
Myths, legends and folklore
Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology, the witch Baba Yaga of Slavic folklore, and the Yama-uba in Japanese folklore.
A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others.
The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh.
The wechuge is a demonic cannibalistic creature that seeks out human flesh appearing in the mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms.
In literature and popular culture
Cannibalism is depicted in literary and other imaginative works across history. Homer's Odyssey, Beowulf, Shakespeare's Titus Andronicus, Daniel Defoe's Robinson Crusoe, Herman Melville's Moby-Dick, and Gustave Flaubert's Salammbô are prominent examples. It also features in several classic Chinese novels, such as Romance of the Three Kingdoms and Water Margin.
One of the most famous satirical essays in the English language concerns cannibalism. A Modest Proposal for Preventing the Children of Poor People from Being a Burthen to Their Parents or Country, and for Making Them Beneficial to the Publick, commonly referred to as A Modest Proposal, is a Juvenalian satire published by Anglo-Irish writer and clergyman Jonathan Swift in 1729. It suggests that poor people in Ireland could ease their economic troubles by selling their young children as food to the elite, and describes in detail the various advantages this would ostensibly have. Among other satirical works depicting cannibalism are Mark Twain's short story "Cannibalism in the Cars" (1868) and Mo Yan's novel The Republic of Wine (1992).
Cannibalism is also a recurring theme in popular culture, especially within the horror genre, with cannibal films being a notable subgenre. One of the best known fictional serial killers is a cannibal: Hannibal Lecter, created by Thomas Harris. Survival cannibalism is a topic of films such as Society of the Snow (2023) and TV series such as Yellowjackets (2021–). Other works mention cannibalism in post-apocalyptic settings, among them Cormac McCarthy's novel The Road (2006) and its 2009 film adaptation. People who consume human flesh without knowing it are depicted in various films, among them the science fiction classic Soylent Green (1973) and the horror comedy The Rocky Horror Picture Show (1975).
Scepticism
William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various "classic" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his "brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism". Shirley Lindenbaum notes that, while after "Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data", the outcome was an improved and "more nuanced" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: "Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously."
Lindenbaum and others point out that Arens displays a "strong ethnocentrism". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea "that cannibalism is the worst thing of all" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this "a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world."
She observes that, contrary to European values and expectations, "in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side." And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and "neither logic nor historical evidence justifies" this viewpoint, as Christian Siefkes commented.
Some have argued that it is the taboo against cannibalism, rather than its practice, that needs to be explained. Hubert Murray, the Lieutenant-Governor of Papua in the early 20th century, admitted that "I have never been able to give a convincing answer to a native who says to me, 'Why should I not eat human flesh?'" After observing that the Orokaiva people in New Guinea explained their cannibal customs as due to "a simple desire for good food", the Australian anthropologist F. E. Williams commented: "Anthropologically speaking the fact that we ourselves should persist in a superstitious, or at least sentimental, prejudice against human flesh is more puzzling than the fact that the Orokaiva, a born hunter, should see fit to enjoy perfectly good meat when he gets it."
Accusations of cannibalism could be used to characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were "castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered" for consumption), but he nevertheless notes "that these people are more civilized than the other islanders" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and "gross exaggerations", but others (by Chanca, Columbus himself, and other early travellers) show "genuine interest and respect for the natives" and include "numerous cases of sincere praise".
Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them "splendid" and "the finest people" and not rarely, like Chanca, actually considering them as "far in advance of" and "intellectually and morally superior" to the non-cannibals around them. Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as "particularly ferocious and repulsive", noting instead that many cannibals he met were "no more ferocious than" others and "indeed ... very nice people".
Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals became exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media "exaggerated the aborigines' violent nature", in some cases by wrongly accusing them of cannibalism.
This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed and even critics such as Margaret Mutu grant that cannibalism was "definitely" practised and that it was "part of our [Māori] culture."
History
There is archaeological evidence that cannibalism has been practised for at least hundreds of thousands of years by early Homo sapiens and archaic hominins.
Among modern humans, cannibalism has been practised by various groups. An incomplete list of cases where it is documented to have occurred in institutionalized form includes prehistoric and early modern Europe, South America, Mesoamerica, Iroquoian peoples in North America, parts of Western and Central Africa, China and Sumatra, among pre-contact Aboriginal Australians, among Māori in New Zealand, on some other Polynesian islands as well as in New Guinea, the Solomon Islands, and Fiji. Evidence of cannibalism has also been found in ruins associated with the Ancestral Puebloans, at Cowboy Wash in the Southwestern United States.
After World War I, institutionalized cannibalism has become very rare, but cases were still reported during times of famine. Occasional cannibal acts committed by individual criminals also are documented throughout the 20th and 21st centuries.
The Americas
Africa
Europe
Asia
Oceania
See also
Autocannibalism, the practice of eating oneself (also called self-cannibalism)
Cannibal film
Cannibalism in Africa
Cannibalism in Asia
Cannibalism in Europe
Cannibalism in literature
Cannibalism in Oceania
Cannibalism in popular culture
Cannibalism in poultry
Cannibalism in the Americas
Cannibalization (marketing), a business strategy
Child cannibalism for children as victims of cannibalism (in myth and reality)
Custom of the sea, the practice of shipwrecked survivors drawing lots to see who would be killed and eaten so that the others might survive
Endocannibalism, the consumption of persons from the same community, often as a funerary rite
Exocannibalism, the consumption of persons from outside the community, often enemies killed or captured in war
Filial cannibalism, the consumption of one's own offspring
Homo antecessor, an extinct human species providing some of the earliest known evidence for human cannibalism
Human placentophagy, the consumption of the placenta (afterbirth)
Issei Sagawa, a Japanese man who became a minor celebrity after killing and eating another student
List of incidents of cannibalism
Medical cannibalism, the consumption of human body parts to treat or prevent diseases
Placentophagy, the act of mammals eating the placenta of their young after childbirth
Pleistocene human diet, the eating habits of human ancestors in the Pleistocene
Sexual cannibalism, behaviour of (usually female) animals that eat their mates during or after copulation
Transmissible spongiform encephalopathy, an incurable disease that can damage the brain and nervous system of many animals, including humans
Vorarephilia, a sexual fetish and paraphilia where arousal results from the idea of devouring others or being devoured
References
Bibliography
Further reading
Sahlins, Marshall. "Cannibalism: An Exchange." New York Review of Books 26, no. 4 (March 22, 1979).
Schutt, Bill. Cannibalism: A Perfectly Natural History. Chapel Hill: Algonquin Books 2017.
External links
The Straight Dope columns:
Víctor Montoya, Cannibalism (2007, translated by Elizabeth Gamble Miller) – a look at representations of cannibalism in art and myth, and why we tend to be so horrified by it
Rachael Bell, Cannibalism: The Ancient Taboo in Modern Times (2015) – from Crime Library
Alisa G. Woods, Cannibalism and the Resistant Brain (2015) – on how studies of kuru might lead to a better understanding of other diseases
Shirley Lindenbaum, Cannibalism (2021) – article from the Open Encyclopedia of Anthropology
Terry Madenholm, A Brief History of Cannibalism: Not Just a Matter of Taste (2022) – from Haaretz
Human activities | Human cannibalism | [
"Biology"
] | 9,629 | [
"Human activities",
"Behavior",
"Human behavior"
] |
5,659 | https://en.wikipedia.org/wiki/Chemical%20element | A chemical element is a chemical substance whose atoms all have the same number of protons. The number of protons is called the atomic number of that element. For example, oxygen has an atomic number of 8, meaning each oxygen atom has 8 protons in its nucleus. Atoms of the same element can have different numbers of neutrons in their nuclei, known as isotopes of the element. Two or more atoms can combine to form molecules. Some elements are formed from molecules of identical atoms, e.g., atoms of hydrogen (H) form diatomic molecules (H2). Chemical compounds are substances made of atoms of different elements; they can have molecular or non-molecular structure. Mixtures are materials containing different chemical substances; that means (in the case of molecular substances) that they contain different types of molecules. Atoms of one element can be transformed into atoms of a different element in nuclear reactions, which change an atom's atomic number.
Historically, the term "chemical element" meant a substance that cannot be broken down into constituent substances by chemical reactions, and for most practical purposes this definition still has validity. There was some controversy in the 1920s over whether isotopes deserved to be recognized as separate elements if they could be separated by chemical means.
The term "(chemical) element" is used in two different but closely related meanings: it can mean a chemical substance consisting of a single kind of atoms, or it can mean that kind of atoms as a component of various chemical substances. For example, molecules of water (H2O) contain atoms of hydrogen (H) and oxygen (O), so water can be said as a compound consisting of the elements hydrogen (H) and oxygen (O) even though it does not contain the chemical substances (di)hydrogen (H2) and (di)oxygen (O2), as H2O molecules are different from H2 and O2 molecules. For the meaning "chemical substance consisting of a single kind of atoms", the terms "elementary substance" and "simple substance" have been suggested, but they have not gained much acceptance in English chemical literature, whereas in some other languages their equivalent is widely used. For example, the French chemical terminology distinguishes (kind of atoms) and (chemical substance consisting of a single kind of atoms); the Russian chemical terminology distinguishes and .
Almost all baryonic matter in the universe is composed of elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a few elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is mostly a mixture of molecular nitrogen and oxygen, though it does contain compounds including carbon dioxide and water, as well as atomic argon, a noble gas which is chemically inert and therefore does not undergo chemical reactions.
The history of the discovery and use of elements began with early human societies that discovered native minerals like carbon, sulfur, copper and gold (though the modern concept of an element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and similar theories throughout history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about elements not yet discovered, and potential new compounds.
By November 2016, the International Union of Pure and Applied Chemistry (IUPAC) had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radioelements) which decay quickly, nearly all elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study.
Description
The lightest elements are hydrogen and helium, both created by Big Bang nucleosynthesis in the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay.
Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10¹⁹ years, over a billion times longer than the estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any isotope, and is almost always considered on par with the 80 stable elements. The heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.
There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium.
The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: all are radioactive, with short half-lives; if any of these elements were present at the formation of Earth, they are certain to have completely decayed, and if present in novae, are in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, though trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements.
Lists of the elements are available by name, atomic number, density, melting point, boiling point and chemical symbol, as well as ionization energy. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures).
Atomic number
The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element.
The number of protons in the nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties; except for hydrogen (for which the kinetic isotope effect is significant). Thus, all carbon isotopes have nearly identical chemical properties because they all have six electrons, even though they may have 6 to 8 neutrons. That is why atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of an element.
The symbol for atomic number is Z.
Isotopes
Isotopes are atoms of the same element (that is, with the same number of protons in their nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes are known as carbon-12, carbon-13, and carbon-14 (¹²C, ¹³C, and ¹⁴C). Natural carbon is a mixture of ¹²C (about 98.9%), ¹³C (about 1.1%) and about 1 atom per trillion of ¹⁴C.
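A minimal Python sketch (illustrative only, not part of the original article) of the mass-number arithmetic described above: the mass number is simply the proton count plus the neutron count, which is how carbon-12, carbon-13, and carbon-14 get their names.

```python
# Illustrative sketch: mass number = protons + neutrons for the three main carbon isotopes.
protons = 6  # carbon's atomic number
for neutrons in (6, 7, 8):
    mass_number = protons + neutrons
    print(f"carbon-{mass_number}: {protons} protons + {neutrons} neutrons")
```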
Most (54 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable.
All elements have radioactive isotopes (radioisotopes); most of these radioisotopes do not occur naturally. Radioisotopes typically decay into other elements via alpha decay, beta decay, or inverse beta decay; some isotopes of the heaviest elements also undergo spontaneous fission. Isotopes that are not radioactive, are termed "stable" isotopes. All known stable isotopes occur naturally (see primordial nuclide). The many radioisotopes that are not found in nature have been characterized after being artificially produced. Certain elements have no stable isotopes and are composed only of radioisotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic number greater than 82.
Of the 80 elements with at least one stable isotope, 26 have only one stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes for a single element is 10 (for tin, element 50).
Isotopic mass and atomic mass
The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass number, which is written as a superscript on the left hand side of the chemical symbol (e.g., U). The mass number is always an integer and has units of "nucleons". Thus, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons).
Whereas the mass number simply counts the total number of neutrons and protons and is thus an integer, the atomic mass of a particular isotope (or "nuclide") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), or universal atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs in value slightly from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is ¹²C, which has a mass of 12 Da, because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state.
The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than ~1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element.
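A minimal Python sketch of this averaging (the isotopic masses are the chlorine figures quoted above; the abundances of roughly 75.8% and 24.2% are assumed here purely for illustration):

```python
# Illustrative sketch: standard atomic weight as an abundance-weighted average of isotopic masses.
chlorine_isotopes = [
    (34.969, 0.758),  # chlorine-35: isotopic mass in Da, approximate natural abundance
    (36.966, 0.242),  # chlorine-37
]
atomic_weight = sum(mass * abundance for mass, abundance in chlorine_isotopes)
print(round(atomic_weight, 2))  # roughly 35.45, well away from a whole number
```

Because two isotopes are both naturally abundant, the weighted average sits far from an integer, exactly the effect described above.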
Chemically pure and isotopically pure
Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one isotope.
For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However, it is not isotopically pure, since ordinary copper consists of two stable isotopes, 69% ⁶³Cu and 31% ⁶⁵Cu, with different numbers of neutrons. In contrast, pure gold would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, ¹⁹⁷Au.
Allotropes
Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'.
The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically at 298.15K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope, and the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state.
Properties
Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins.
General properties
Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group (the metalloids), which have intermediate properties and often behave as semiconductors.
A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals.
States of matter
Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at standard temperature and pressure (STP). Most elements are solids at STP, while several are gases. Only bromine and mercury are liquid at 0 degrees Celsius (32 degrees Fahrenheit) and 1 atmosphere pressure; caesium and gallium are solid at that temperature, but melt at 28.4°C (83.2°F) and 29.8°C (85.6°F), respectively.
Melting and boiling points
Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations.
Densities
The density at selected standard temperature and pressure (STP) is often used in characterizing the elements. Density is often expressed in grams per cubic centimetre (g/cm³). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.
When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively.
Crystal structures
The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures.
Occurrence and origin on Earth
Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially via human-made nuclear reactions.
Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The longest-lived isotopes of the remaining 11 elements have half lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, five (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining six transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements.
Elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium), each have at least one isotope for which no radioactive decay has been observed. Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes are many orders of magnitude longer than the estimated age of the universe. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can be detected. Three of these elements, bismuth (element 83), thorium (90), and uranium (92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10¹⁹ years, over a billion times longer than the estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any isotope. The last 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.
Periodic table
The properties of the elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The table contains 118 confirmed elements as of 2021.
Although earlier precursors to this presentation exist, its invention is generally credited to Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.
Use of the periodic table is now ubiquitous in chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering.
Nomenclature and symbols
The various chemical elements are formally identified by their unique atomic numbers, their accepted names, and their chemical symbols.
Atomic numbers
The known elements have atomic numbers from 1 to 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", though the atomic masses of the elements (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.
Element names
The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, though at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the element names either for convenience, linguistic niceties, or nationalism. For example, German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen"; English and some other languages use "sodium" for "natrium", and "potassium" for "kalium"; and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen".
For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over British "sulphur". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names.
According to IUPAC, element names are not proper nouns; therefore, the full name of an element is not capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below).
In the second half of the 20th century, physics laboratories became able to produce elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy).
Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950).
Chemical symbols
Specific elements
Before chemistry became a science, alchemists designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules.
The current system of chemical notation was invented by Jöns Jacob Berzelius in 1814. In this system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets.
Since Latin was the common language of science at Berzelius' time, his symbols were abbreviations based on the Latin names of elements (they may be Classical Latin names of elements known since antiquity or Neo-Latin coinages for later elements). The symbols are not followed by a period (full stop) as with abbreviations. In most cases, Latin names of elements as used by Berzelius have the same roots as the modern English name. For example, hydrogen has the symbol "H" from Neo-Latin hydrogenium, which has the same Greek roots as English hydrogen. However, in eleven cases Latin (as used by Berzelius) and English names of elements have different roots. Eight of them are the seven metals of antiquity and a metalloid also known since antiquity: "Fe" (Latin ferrum) for iron, "Hg" (Latin hydrargyrum) for mercury, "Sn" (Latin stannum) for tin, "Au" (Latin aurum) for gold, "Ag" (Latin argentum) for silver, "Pb" (Latin plumbum) for lead, "Cu" (Latin cuprum) for copper, and "Sb" (Latin stibium) for antimony. The three other mismatches between Neo-Latin (as used by Berzelius) and English names are "Na" (Neo-Latin natrium) for sodium, "K" (Neo-Latin kalium) for potassium, and "W" (Neo-Latin wolframium) for tungsten. These mismatches came from different suggestions for naming the elements in the modern era. Initially Berzelius had suggested "So" and "Po" for sodium and potassium, but he changed the symbols to "Na" and "K" later in the same year.
Elements discovered after 1814 were also assigned unique chemical symbols, based on the name of the element. The use of Latin as the universal language of science was fading, but chemical names of newly discovered elements came to be borrowed from language to language with little or no modification. Symbols of elements discovered after 1814 match their names in English, French (ignoring accents), and German (though German often allows alternate spellings with k or z instead of c: e.g., the name of calcium may be spelled Calcium or Kalzium in German, but its symbol is always "Ca"). Other languages sometimes modify element name spellings: Spanish iterbio (ytterbium), Italian afnio (hafnium), Swedish moskovium (moscovium); but those modifications do not affect chemical symbols: Yb, Hf, Mc.
Chemical symbols are understood internationally when element names might require translation. There have been some differences in the past. For example, Germans in the past have used "J" (for the name Jod) for iodine, but now use "I" and Iod.
The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case. Thus, the symbols for californium and einsteinium are Cf and Es.
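As a small illustration of this capitalization rule (a hypothetical validation helper, not an official format definition), the one- or two-letter symbols of the 118 named elements can be checked with a simple regular expression:

```python
import re

# Illustrative sketch: one capital letter, optionally followed by a single lower-case letter.
SYMBOL_PATTERN = re.compile(r"^[A-Z][a-z]?$")

for candidate in ("Cf", "Es", "H", "cf", "CF"):
    print(candidate, bool(SYMBOL_PATTERN.match(candidate)))  # True, True, True, False, False
```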
General chemical symbols
There are also symbols in chemical equations for groups of elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, though it is also the symbol of yttrium. "Z" is also often used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal.
At least two other, two-letter generic chemical symbols are also in informal use, "Ln" for any lanthanide and "An" for any actinide. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and "Rg" now refers to roentgenium.
Isotope symbols
Isotopes of an element are distinguished by mass number (total protons and neutrons), with this number combined with the element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example ¹²C and ²³⁵U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used.
As a special case, the three naturally occurring isotopes of hydrogen are often specified as H for ¹H (protium), D for ²H (deuterium), and T for ³H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number each time. Thus, the formula for heavy water may be written D₂O instead of ²H₂O.
Origin of the elements
Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy).
The 94 naturally occurring elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen.
In the early phases of the Big Bang, nucleosynthesis of hydrogen resulted in the production of hydrogen-1 (protium, ¹H) and helium-4 (⁴He), as well as a smaller amount of deuterium (²H) and tiny amounts (on the order of 10⁻¹⁰) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of ~75% ¹H, 25% ⁴He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means.
On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (¹⁴C) are continually produced in the air by cosmic rays impacting nitrogen atoms, and argon-40 (⁴⁰Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (⁴⁰K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable elements such as radium and radon, which are transiently present in any sample containing these metals. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes.
Besides the 94 naturally occurring elements, several artificial elements have been produced by nuclear physics technology. By 2016, these experiments had produced all elements up to atomic number 118.
Abundance
The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the 12 most common elements in our galaxy (estimated spectroscopically), as measured in parts per million by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance.
The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas. Iron-56 is particularly common, since it is the most stable nuclide that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number.
The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and massive planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the Solar System. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass) more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core.
The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrates' red blood cells.
History
Evolving definitions
The concept of an "element" as an indivisible substance has developed through three major historical phases: Classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions.
Classical definitions
Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science.
The term 'elements' (stoicheia) was first used by Greek philosopher Plato around 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).
Aristotle likewise used the term stoicheia and added a fifth element, aether, which formed the heavens; he also offered his own definition of an element.
Chemical definitions
Robert Boyle
In 1661, in The Sceptical Chymist, Robert Boyle proposed his theory of corpuscularism which favoured the analysis of matter as constituted of irreducible units of matter (atoms); and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. Boyle argued against a pre-determined number of elements—directly against Paracelsus' three principles (sulfur, mercury, and salt), indirectly against the "Aristotelian" elements (earth, water, air, and fire), for Boyle felt that the arguments against the former were at least as valid against the latter.
Then Boyle stated his view in four propositions. In the first and second, he suggests that matter consists of particles, but that these particles may be difficult to separate. Boyle used the concept of "corpuscles"—or "atomes", as he also called them—to explain how a limited number of elements could combine into a vast number of compounds.
Boyle explained that gold reacts with aqua regia, and mercury with nitric acid, sulfuric acid, and sulfur to produce various "compounds", and that they could be recovered from those compounds, just as would be expected of elements. Yet, Boyle did not consider gold, mercury, or lead elements, but rather—together with wine—"perfectly mixt bodies".
Even though Boyle is primarily regarded as the first modern chemist, The Sceptical Chymist still contains old ideas about the elements, alien to a contemporary viewpoint. Sulfur, for example, is not only the familiar yellow non-metal but also an inflammable "spirit".
Isaac Watts
In 1724, in his book Logick, the English minister and logician Isaac Watts enumerated the elements then recognized by chemists. Watts' list of elements included two of Paracelsus' principles (sulfur and salt) and two classical elements (earth and water) as well as "spirit". Watts did, however, note a lack of consensus among chemists.
Antoine Lavoisier, Jöns Jacob Berzelius, and Dmitri Mendeleev
The first modern list of elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained 33 elements, including light and caloric. By 1818, Jöns Jacob Berzelius had determined atomic weights for 45 of the 49 then-accepted elements. Dmitri Mendeleev had 63 elements in his 1869 periodic table.
From Boyle until the early 20th century, an element was defined as a pure substance that cannot be decomposed into any simpler substance and cannot be transformed into other elements by chemical processes. Elements at the time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques.
Atomic definitions
The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for the atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers) and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10⁻¹⁴ seconds it takes the nucleus to form an electronic cloud.
By 1914, eighty-seven elements were known, all naturally occurring (see Discovery of chemical elements). The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D. I. Mendeleev, the first to arrange the elements periodically.
Discovery and recognition of various elements
Ten materials familiar to various prehistoric cultures are now known to be elements: Carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances before 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750.
Most of the remaining naturally occurring elements were identified and characterized by 1900, including:
Such now-familiar industrial materials as aluminium, silicon, nickel, chromium, magnesium, and tungsten
Reactive metals such as lithium, sodium, potassium, and calcium
The halogens fluorine, chlorine, bromine, and iodine
Gases such as hydrogen, oxygen, nitrogen, helium, argon, and neon
Most of the rare-earth elements, including cerium, lanthanum, gadolinium, and neodymium
The more common radioactive elements, including uranium, thorium, and radium
Elements isolated or produced since 1900 include:
The three remaining undiscovered stable elements: hafnium, lutetium, and rhenium
Plutonium, which was first produced synthetically in 1940 by Glenn T. Seaborg, but is now also known from a few long-persisting natural occurrences
The three incidentally occurring natural elements (neptunium, promethium, and technetium), which were all first produced synthetically but later discovered in trace amounts in geological samples
Four scarce decay products of uranium or thorium (astatine, francium, actinium, and protactinium), and
All synthetic transuranic elements, beginning with americium and curium
Recently discovered elements
The first transuranium element (element with an atomic number greater than 92) discovered was neptunium in 1940. Since 1999, the IUPAC/IUPAP Joint Working Party has considered claims for the discovery of new elements. As of January 2016, all 118 elements have been confirmed by IUPAC as being discovered. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the chemical symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element that is believed to have been synthesized to date is element 118, oganesson, on 9 October 2006, by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, scientists at the IUPAC officially recognized the names for the four newest elements, with atomic numbers 113, 115, 117, and 118.
List of the 118 known chemical elements
The following sortable table shows the 118 known elements.
Atomic number, Element, and Symbol all serve independently as unique identifiers.
Element names are those accepted by IUPAC.
Block indicates the periodic table block for each element: red = s-block, yellow = p-block, blue = d-block, green = f-block.
Group and period refer to an element's position in the periodic table. Group numbers here show the currently accepted numbering; for older numberings, see Group (periodic table).
See also
Biological roles of the elements
Chemical database
Discovery of chemical elements
Element collecting
Fictional element
Goldschmidt classification
Island of stability
List of nuclides
List of the elements' densities
Mineral (nutrient)
Periodic systems of small molecules
Prices of chemical elements
Systematic element name
Table of nuclides
Roles of chemical elements
References
Bibliography
Further reading
XML on-line corrected version: created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins
External links
Videos for each element by the University of Nottingham
"Chemical Elements", In Our Time, BBC Radio 4 discussion with Paul Strathern, Mary Archer and John Murrell (25 May 2000)
Chemistry | Chemical element | [
"Physics"
] | 10,205 | [
"Chemical elements",
"Atoms",
"Matter"
] |
5,662 | https://en.wikipedia.org/wiki/Calendar%20year | A calendar year begins on the New Year's Day of the given calendar system and ends on the day before the following New Year's Day, and thus consists of a whole number of days.
The Gregorian calendar year, which is in use as civil calendar in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year but, in order to reconcile the calendar year with the astronomical cycle, it has 366 days in a leap year. With 97 leap years every 400 years, the Gregorian calendar year has an average length of 365.2425 days.
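A minimal Python sketch of that arithmetic (illustrative only), counting leap years over one full 400-year Gregorian cycle with the standard library's calendar module:

```python
import calendar

# Illustrative sketch: the Gregorian rule yields 97 leap years per 400-year cycle.
years = range(2001, 2401)  # any full 400-year span works
leap_years = sum(calendar.isleap(y) for y in years)
average_length = (400 * 365 + leap_years) / 400
print(leap_years, average_length)  # 97 365.2425
```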
Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Lunar Hijri calendar ("Islamic calendar") is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the calendar year in most calendars.
A year can also be measured by starting on any other named day of the calendar, and ending on the day before this named day in the following year. This may be termed a "year's time", but is not a "calendar year".
Quarter year
The calendar year can be divided into four quarters, often abbreviated as Q1, Q2, Q3, and Q4. Since they are three months each, they are also called trimesters. In the Gregorian calendar (a short computational sketch follows the list below):
First quarter, Q1: January 1 – March 31 (90 days or 91 days in leap years)
Second quarter, Q2: April 1 – June 30 (91 days)
Third quarter, Q3: July 1 – September 30 (92 days)
Fourth quarter, Q4: October 1 – December 31 (92 days)
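A minimal Python sketch (illustrative only) that maps a Gregorian date to its quarter and recomputes the day counts listed above:

```python
from datetime import date

def quarter_of(d: date) -> int:
    """Quarter (1-4) containing the given Gregorian date."""
    return (d.month - 1) // 3 + 1

def quarter_length(year: int, quarter: int) -> int:
    """Number of days in the given quarter of the given year."""
    start = date(year, 3 * (quarter - 1) + 1, 1)
    end = date(year + 1, 1, 1) if quarter == 4 else date(year, 3 * quarter + 1, 1)
    return (end - start).days

print(quarter_of(date(2023, 5, 17)))                    # 2
print([quarter_length(2023, q) for q in range(1, 5)])   # [90, 91, 92, 92]
print(quarter_length(2024, 1))                          # 91 in a leap year
```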
In some domains, weeks are preferred over months for scheduling and reporting, so they use quarters of exactly 13 weeks each, often following ISO week date conventions. One in five to six years has a 53rd week which is usually appended to the last quarter. It is then 98 days instead of 91 days long, which complicates comparisons.
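A minimal Python sketch (illustrative only) of how such 53-week years can be identified: December 28 always falls in the last ISO week of its year, so its ISO week number equals the number of weeks in that year.

```python
from datetime import date

def iso_weeks_in_year(year: int) -> int:
    """Number of ISO 8601 weeks (52 or 53) in the given year."""
    return date(year, 12, 28).isocalendar()[1]

print([y for y in range(2020, 2035) if iso_weeks_in_year(y) == 53])  # [2020, 2026, 2032]
```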
In the Chinese calendar, the quarters are traditionally associated with the 4 seasons of the year:
Spring: 1st to 3rd month
Summer: 4th to 6th month
Autumn: 7th to 9th month
Winter: 10th to 12th month
Quadrimester
The calendar year can also be divided into quadrimesters (from French quadrimestre), lasting for four months each. They can also be called the early, middle, or late parts of the year. In the Gregorian calendar:
First quadrimester, early year: January 1 – April 30 (120 days or 121 days in leap years)
Second quadrimester, mid-year: May 1 – August 31 (122 days)
Third quadrimester, late year: September 1 – December 31 (121 days)
Semester
The calendar year can also be divided into semesters, lasting six months each and often being abbreviated as S1 and S2. In the Gregorian calendar:
First semester, S1: January 1 – June 30 (181 days or 182 days in leap years)
Second semester, S2: July 1 – December 31 (184 days)
See also
(historical usage)
Julian year (astronomy) a time interval of exactly 365.25 Earth days
Julian year (calendar) a year in the Julian calendar that is either 365 or 366 days, or 365.25 days on average
Notes
References
Year
Units of time
Types of year | Calendar year | [
"Physics",
"Mathematics"
] | 766 | [
"Calendars",
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
5,664 | https://en.wikipedia.org/wiki/Consciousness | Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations, and debate by philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not. The disparate range of research, notions, and speculations raises a curiosity about whether the right questions are being asked.
Examples of the range of descriptions, definitions or explanations are: ordered distinction between self and environment, simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event, or mental process of the brain.
Etymology
The words "conscious" and "consciousness" in the English language date to the 17th century, and the first recorded use of "conscious" as a simple adjective was applied figuratively to inanimate objects ("the conscious Groves", 1643). It derived from the Latin conscius (con- "together" and scio "to know") which meant "knowing with" or "having joint or common knowledge with another", especially as in sharing a secret. Thomas Hobbes in Leviathan (1651) wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another". There were also many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase has the figurative sense of "knowing that one knows", which is something like the modern English word "conscious", but it was rendered into English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness".
The Latin conscientia, literally 'knowledge-with', first appears in Roman juridical texts by writers such as Cicero. It means a kind of shared knowledge with moral value, specifically what a witness knows of someone else's deeds. Although René Descartes (1596–1650), writing in Latin, is generally taken to be the first philosopher to use conscientia in a way less like the traditional meaning and more like the way modern English speakers would use "conscience", his meaning is nowhere defined. In Search after Truth (Amsterdam, 1701) he wrote the word with a gloss: conscientiâ, vel interno testimonio (translatable as "conscience, or internal testimony"). It might mean the knowledge of the value of one's own thoughts.
The origin of the modern concept of consciousness is often attributed to John Locke who defined the word in his Essay Concerning Human Understanding, published in 1690, as "the perception of what passes in a man's own mind". The essay strongly influenced 18th-century British philosophy, and Locke's definition appeared in Samuel Johnson's celebrated Dictionary (1755).
The French term conscience is defined roughly like English "consciousness" in the 1753 volume of Diderot and d'Alembert's Encyclopédie as "the opinion or internal feeling that we ourselves have from what we do".
Problem of definition
Scholars are divided as to whether Aristotle had a concept of consciousness. He does not use any single word or terminology that is clearly similar to the phenomenon or concept defined by John Locke. Victor Caston contends that Aristotle did have a concept more clearly similar to perception.
Modern dictionary definitions of the word consciousness evolved over several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between inward awareness and perception of the physical world, or the distinction between conscious and unconscious, or the notion of a mental entity or mental activity that is not physical.
The common-usage definitions of consciousness in Webster's Third New International Dictionary (1966) are as follows:
awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
inward awareness of an external object, state, or fact
concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS
The Cambridge English Dictionary defines consciousness as "the state of understanding and realizing something".
The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".
Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The corresponding entry in the Routledge Encyclopedia of Philosophy (1998) reads:
Consciousness
Philosophers have used the term consciousness for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
Traditional metaphors for mind
During the early 19th century, the emerging field of geology inspired a popular metaphor that the mind likewise had hidden layers "which recorded the past of the individual". By 1875, most psychologists believed that "consciousness was but a small part of mental life", and this idea underlies the goal of Freudian therapy, to expose the hidden layers of the mind.
Other metaphors from various sciences inspired other analyses of the mind, for example: Johann Friedrich Herbart described ideas as being attracted and repulsed like magnets; John Stuart Mill developed the idea of "mental chemistry" and "mental compounds", and Edward B. Titchener sought the "structure" of the mind by analyzing its "elements". The abstract idea of states of consciousness mirrored the concept of states of matter.
In 1892, William James noted that the "ambiguous word 'content' has been recently invented instead of 'object'" and that the metaphor of mind as a container seemed to minimize the dualistic problem of how "states of consciousness can know" things, or objects; by 1899 psychologists were busily studying the "contents of conscious experience by introspection and experiment". Another popular metaphor was James's doctrine of the stream of consciousness, with continuity, fringes, and transitions.
James discussed the difficulties of describing and studying psychological phenomena, recognizing that commonly-used terminology was a necessary and acceptable starting point towards more precise, scientifically justified language. Prime examples were phrases like inner experience and personal consciousness.
From introspection to awareness
Prior to the 20th century, philosophers treated the phenomenon of consciousness as the "inner world [of] one's own mind", and introspection was the mind "attending to" itself, an activity seemingly distinct from that of perceiving the 'outer world' and its physical phenomena. In 1892 William James noted the distinction along with doubts about the inward character of the mind.
By the 1960s, for many philosophers and psychologists who talked about consciousness, the word no longer meant the 'inner world' but an indefinite, large category called awareness.
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland emphasized external awareness, and expressed a skeptical attitude more than a definition.
Using 'awareness', however, as a definition or synonym of consciousness is not a simple matter.
Influence on research
Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Max Velmans proposed that the "everyday understanding of consciousness" uncontroversially "refers to experience itself rather than any particular thing that we observe or experience" and he added that consciousness "is [therefore] exemplified by the things that we observe or experience", whether thoughts, feelings, or perceptions. Velmans noted however, as of 2009, that there was a deep level of "confusion and internal division" among experts about the phenomenon of consciousness, because researchers lacked "a sufficiently well-specified use of the term...to agree that they are investigating the same thing". He argued additionally that "pre-existing theoretical commitments" to competing explanations of consciousness might be a source of bias.
Within the "modern consciousness studies" community the technical phrase 'phenomenal consciousness' is a common synonym for all forms of awareness, or simply 'experience', without differentiating between inner and outer, or between higher and lower types. With advances in brain research, "the presence or absence of experienced phenomena" of any kind underlies the work of those neuroscientists who seek "to analyze the precise relation of conscious phenomenology to its associated information processing" in the brain. This neuroscientific goal is to find the "neural correlates of consciousness" (NCC). One criticism of this goal is that it begins with a theoretical commitment to the neurological origin of all "experienced phenomena" whether inner or outer. Also, the fact that the easiest 'content of consciousness' to be so analyzed is "the experienced three-dimensional world (the phenomenal world) beyond the body surface" invites another criticism, that most consciousness research since the 1990s, perhaps because of bias, has focused on processes of external perception.
From a history of psychology perspective, Julian Jaynes rejected popular but "superficial views of consciousness" especially those which equate it with "that vaguest of terms, experience". In 1976 he insisted that if not for introspection, which for decades had been ignored or taken for granted rather than explained, there could be no "conception of what consciousness is" and in 1990, he reaffirmed the traditional idea of the phenomenon called 'consciousness', writing that "its denotative definition is, as it was for René Descartes, John Locke, and David Hume, what is introspectable". Jaynes saw consciousness as an important but small part of human mentality, and he asserted: "there can be no progress in the science of consciousness until ... what is introspectable [is] sharply distinguished" from the processes of cognition such as perception, reactive awareness and attention, and automatic forms of learning, problem-solving, and decision-making.
The cognitive science point of view—with an inter-disciplinary perspective involving fields such as psychology, linguistics and anthropology—requires no agreed definition of "consciousness" but studies the interaction of many processes besides perception. For some researchers, consciousness is linked to some kind of "selfhood", for example to certain pragmatic issues such as the feeling of agency and the effects of regret and action on experience of one's own body or social identity. Similarly Daniel Kahneman, who focused on systematic errors in perception, memory and decision-making, has differentiated between two kinds of mental processes, or cognitive "systems": the "fast" activities that are primary, automatic and "cannot be turned off", and the "slow", deliberate, effortful activities of a secondary system "often associated with the subjective experience of agency, choice, and concentration". Kahneman's two systems have been described as "roughly corresponding to unconscious and conscious processes". The two systems can interact, for example in sharing the control of attention. While System 1 can be impulsive, "System 2 is in charge of self-control", and "When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do".
Some have argued that we should eliminate the concept from our understanding of the mind, a position known as consciousness semanticism.
In medicine, a "level of consciousness" terminology is used to describe a patient's arousal and responsiveness, which can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the level of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree or level of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.
Philosophy of mind
While historically philosophers have defended various views on consciousness, surveys indicate that physicalism is now the dominant position among contemporary philosophers of mind. Overviews of the field often combine historical perspectives (e.g., Descartes, Locke, Kant) with organization around the key issues in contemporary debates; an alternative approach is to focus primarily on current philosophical stances and empirical findings.
Coherence of the concept
Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is mistaken, either because the concept of consciousness is itself incoherent, or because our intuitions about it are illusory. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of entities, or identities, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves into thinking that there is some sort of thing, consciousness, separated from behavioral and linguistic understandings.
Types
Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.
Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.
There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility".
Distinguishing consciousness from its contents
Sam Harris observes: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go.
Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird's name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin."
Mind–body problem
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown.
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as mind–body dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics), and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.
Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.
A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. At the present time, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing. Empirical evidence also weighs against the notion of quantum consciousness: an experiment on wave function collapse led by Catalina Curceanu in 2022 suggests that quantum consciousness, as proposed by Roger Penrose and Stuart Hameroff, is highly implausible.
Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.
Problem of other minds
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. The problem of other minds is a philosophical problem traditionally stated as the following epistemological question: Given that I can only observe the behavior of others, how can I know that others have minds? The problem of other minds is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids.
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.
Qualia
The term "qualia" was introduced in philosophical literature by C. I. Lewis. The word is derived from Latin and means "of what sort". It is basically a quantity or property of something as perceived or experienced by an individual, like the scent of rose, the taste of wine, or the pain of a headache. They are difficult to articulate or describe. The philosopher and scientist Daniel Dennett describes them as "the way things seem to us", while philosopher and cognitive scientist David Chalmers expanded on qualia as the "hard problem of consciousness" in the 1990s. When qualia is experienced, activity is simulated in the brain, and these processes are called neural correlates of consciousness (NCCs). Many scientific studies have been done to attempt to link particular brain regions with emotions or experiences.
Species which experience qualia are said to have sentience, which is central to the animal rights movement, because it includes the ability to experience pain and suffering.
Scientific study
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies.
Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.
Measurement via verbal report
Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness.
For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation).
Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.
Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity, and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies related to the neuroscience of free will have also shown that the influence consciousness has on decision-making is not always straightforward.
Mirror test and contingency awareness
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies, and elephants have all been observed to pass this test, while some other animals, such as pigs, have been shown to use a mirror to find food.
Contingency awareness is another such approach: the conscious understanding of one's own actions and their effects on one's environment. It is recognized as a factor in self-recognition. The brain processes underlying contingency awareness and learning are believed to depend on an intact medial temporal lobe and on age. A study done in 2020 involving transcranial direct current stimulation, magnetic resonance imaging (MRI) and eyeblink classical conditioning supported the idea that the parietal cortex serves as a substrate for contingency awareness and that age-related disruption of this region is sufficient to impair awareness.
Neural correlates
A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia.
In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states.
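The published PCI pipeline involves TMS-evoked potentials, source modeling, statistical thresholding, and Lempel-Ziv compression, which is too involved to reproduce here. The following minimal Python sketch is only meant to illustrate the underlying idea of scoring the algorithmic complexity of a binarized brain response; the phrase-counting parse, the normalization, and the toy data are simplified assumptions, not the validated PCI metric.

```python
# Illustrative sketch only: a toy complexity score in the spirit of PCI.
# All names and data below are hypothetical; this is not the published method.
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Number of phrases in a simple LZ78-style parse of a binary string."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def toy_perturbational_complexity(evoked: np.ndarray, threshold: float) -> float:
    """Binarize a (channels x time) response and return a normalized complexity score."""
    binary = (np.abs(evoked) > threshold).astype(int)
    bits = "".join(map(str, binary.flatten()))
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n  # approaches 1 for maximally random patterns

# Hypothetical wake-like (rich, differentiated) vs. sleep-like (stereotyped) responses
rng = np.random.default_rng(0)
wake_like = rng.normal(size=(60, 300))               # complex, varied activity
sleep_like = np.tile(rng.normal(size=(60, 1)), 300)  # one repeated pattern
print(toy_perturbational_complexity(wake_like, 1.0))   # higher score
print(toy_perturbational_complexity(sleep_like, 1.0))  # lower score
```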
Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologs can be identified? The general conclusion from the study by Butler, et al. is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.
Joaquin Fuster of UCLA has argued for the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as neuro-anatomically necessary for the development of human language capacities and hence for the emergence of higher-order consciousness in humans.
A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness.
Models
A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories.
Global workspace theory
Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache.
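As a rough illustration of the "competition followed by broadcast" idea described above, the minimal Python sketch below lets hypothetical specialist signals compete for the workspace and sends the winner to subscriber modules. It is a toy rendering of the theater metaphor, not Baars's actual model; every module name and salience value is an assumption.

```python
# Toy sketch of the global-workspace metaphor: many specialist processes compete,
# the most salient content wins the "spotlight" and is broadcast to the "audience".
from typing import Callable

class GlobalWorkspace:
    def __init__(self) -> None:
        self.audience: list[Callable[[str], None]] = []  # otherwise autonomous networks

    def subscribe(self, listener: Callable[[str], None]) -> None:
        self.audience.append(listener)

    def broadcast(self, candidates: dict[str, float]) -> str:
        winner = max(candidates, key=candidates.get)  # highest-salience content wins
        for listener in self.audience:
            listener(winner)
        return winner

workspace = GlobalWorkspace()
workspace.subscribe(lambda content: print("memory module received:", content))
workspace.subscribe(lambda content: print("motor module received:", content))

# Hypothetical competing signals from unconscious specialist processes
winner = workspace.broadcast({"faint sound": 0.2, "sudden pain in hand": 0.9})
print("broadcast (conscious) content:", winner)
```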
Integrated information theory
Integrated information theory (IIT), pioneered by neuroscientist Giulio Tononi in 2004, postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Additionally, IIT is one of the few leading theories of consciousness that attempts to create a 1:1 mapping between conscious states and precise, formal mathematical descriptions of those mental states. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated. This also relates to the "hard problem of consciousness" proposed by David Chalmers. The theory remains controversial, with critics questioning its credibility.
Orchestrated objective reduction
Orchestrated objective reduction (Orch-OR), or the quantum theory of mind, was proposed by scientists Roger Penrose and Stuart Hameroff, and states that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules, which form the cytoskeleton around which the brain is built. The duo proposed that these quantum processes accounted for creativity, innovation, and problem-solving abilities. Penrose published his views in the book The Emperor's New Mind. In 2014, the discovery of quantum vibrations inside microtubules gave new life to the argument.
Attention schema theory
In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.
Entropic brain theory
The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested.
Projective consciousness model
In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, leading to the projective consciousness model (PCM), a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap.
Claustrum being the conductor for consciousness
In 2004, molecular biologist Francis Crick (co-discoverer of the double helix) proposed that binding together an individual's experience requires something akin to the conductor of an orchestra. Together with neuroscientist Christof Koch, he proposed that this conductor would have to collate information rapidly from various regions of the brain. The duo reckoned that the claustrum was well suited for the task. However, Crick died while working on the idea.
The proposal is backed by a study done in 2014, where a team at the George Washington University induced unconsciousness in a 54-year-old woman suffering from intractable epilepsy by stimulating her claustrum. The woman underwent depth electrode implantation and electrical stimulation mapping. The electrode between the left claustrum and anterior-dorsal insula was the one which induced unconsciousness. Correlation for interactions affecting medial parietal and posterior frontal channels during stimulation increased significantly as well. Their findings suggested that the left claustrum or anterior insula is an important part of a network that subserves consciousness, and that disruption of consciousness is related to increased EEG signal synchrony within frontal-parietal networks. However, this remains an isolated, hence inconclusive study.
Biological function and evolution
The emergence of consciousness during biological evolution remains a topic of ongoing scientific inquiry. The survival value of consciousness is still a matter of exploration and understanding. While consciousness appears to play a crucial role in human cognition, decision-making, and self-awareness, its adaptive significance across different species remains a subject of debate.
Some people question whether consciousness has any survival value. Some argue that consciousness is a by-product of evolution. Thomas Henry Huxley, for example, defends in an essay titled "On the Hypothesis that Animals are Automata, and its History" an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata?, offering an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.
Opinions are divided on when and how consciousness first arose. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Further exploration of the origins of consciousness, particularly in molluscs, has been done by Peter Godfrey-Smith in his book Metazoa.
Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.), and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of Ezequiel Morsella.
As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.
Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).
Altered states
There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alternations in body image and changes in meaning or significance.
The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.
Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.
A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.
There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.
Medical aspects
The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.
Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.
Assessment
In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.
The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.
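To make the scale's arithmetic concrete, here is a minimal Python sketch (not a clinical tool) that sums the three subscale scores and maps the total onto the coma and full-consciousness bands mentioned above. The subscale ranges (eye 1-4, verbal 1-5, motor 1-6) follow the standard scale; the intermediate "impaired consciousness" label is an assumption added for readability.

```python
# Minimal sketch of Glasgow Coma Scale arithmetic; illustrative only, not a clinical tool.
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> tuple[int, str]:
    # Standard subscale ranges: eye response 1-4, verbal response 1-5, motor response 1-6
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale score out of range")
    total = eye + verbal + motor        # total ranges from 3 to 15
    if total <= 8:
        label = "coma (3-8)"
    elif total == 15:
        label = "full consciousness (15)"
    else:
        label = "impaired consciousness (9-14)"  # hypothetical label for the middle band
    return total, label

# Hypothetical patient: eyes open to speech (3), confused speech (4), obeys commands (6)
print(glasgow_coma_score(3, 4, 6))  # (13, 'impaired consciousness (9-14)')
```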
In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are recorded, and a consciousness score is derived from the complexity of that activity.
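The complexity measure behind such scores is typically compression-based (the 2013 measure, reported as the perturbational complexity index, uses Lempel–Ziv compression of the binarized evoked activity). The following is only a toy sketch of that idea, assuming the recorded response has already been reduced to a binary string; it is not the published algorithm.

import random

def lempel_ziv_complexity(bits: str) -> int:
    # Count distinct phrases in a left-to-right dictionary parse (an LZ78-style
    # complexity count): the more compressible the signal, the fewer phrases.
    seen, current, count = set(), "", 0
    for ch in bits:
        current += ch
        if current not in seen:
            seen.add(current)
            count += 1
            current = ""
    return count + (1 if current else 0)

# A stereotyped (constant) response compresses well; a differentiated one does not.
print(lempel_ziv_complexity("0" * 64))                                         # expected to be low
random.seed(0)
print(lempel_ziv_complexity("".join(random.choice("01") for _ in range(64))))  # expected to be higher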
Disorders
Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in an irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.
Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.
Outside human adults
In children
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection". In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness". Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind", calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts". They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age".
In animals
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled "What Is it Like to Be a Bat?". He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.
On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey:
"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."
"Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."
In artificial intelligence
The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars has argued that, with technological growth, once machines begin to display any substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as can already be observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression. As an agent sees representations of itself recurring in the environment, the compression of this representation can be called consciousness.
In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.
In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute, the existence of consciousness. A positive result proves that a machine is conscious but a negative result proves nothing. For example, absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness.
Stream of consciousness
William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890.
According to James, the "stream of thought" is governed by five characteristics:
Every thought tends to be part of a personal consciousness.
Within each personal consciousness thought is always changing.
Within each personal consciousness thought is sensibly continuous.
It always appears to deal with objects independent of itself.
It is interested in some parts of these objects to the exclusion of others.
A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happens to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body, including the brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of consciousness and its characteristics.
Narrative form
In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers.
Here, for example, is a passage from Joyce's Ulysses about the thoughts of Molly Bloom:
Spiritual approaches
The Upanishads hold the oldest recorded map of consciousness, as explored by sages through meditation.
To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world.
The Canadian psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who have attained "intellectual enlightenment or illumination".
Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.
Other examples include the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff.
See also
Notes
References
Further reading
External links
Cognitive neuroscience
Cognitive psychology
Concepts in epistemology
Concepts in the philosophy of mind
Concepts in the philosophy of science
Emergence
Mental processes
Metaphysical properties
Metaphysics of mind
Neuropsychological assessment
Ontology
Phenomenology
Theory of mind | Consciousness | [
"Biology"
] | 15,076 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
5,667 | https://en.wikipedia.org/wiki/Chlorine | Chlorine is a chemical element; it has symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine.
Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and aqua regia. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός (chlōros, "pale green") because of its colour.
Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which include table salt. It is the second-most abundant halogen (after fluorine) and 20th most abundant element in Earth's crust. These crystal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater.
Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chloralkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and to its use as a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon.
In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria.
History
The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC.
Early discoveries
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi ( 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts", an eleventh- or twelfth century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont.
Isolation
The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl:
4 HCl + MnO2 → MnCl2 + 2 H2O + Cl2
Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green colour, and the smell similar to aqua regia. He called it "dephlogisticated muriatic acid air" since it is a gas (then called "airs") and it came from hydrochloric acid (then known as "muriatic acid"). He failed to establish chlorine as an element.
Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: Sauerstoff or zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum.
In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced.
In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element "chlorine", from the Greek word χλωρος (chlōros, "green-yellow"), in reference to its colour. The name "halogen", meaning "salt producer", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as "solid chlorine" had a structure of chlorine hydrate (Cl2·H2O).
Later uses
Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite ("chlorinated lime"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 Chemische Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888.
Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's "Javel water" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water.
Chlorine gas was first used as a weapon on April 22, 1915, at the Second Battle of Ypres by the German Army. The effect on the Allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed.
Properties
Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s23p5, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.)
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless.
Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable.
Isotopes
Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0 × 10^5 y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine.
The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10^−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl.
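For orientation, the half-lives quoted above translate into remaining fractions through the ordinary exponential-decay law N(t)/N0 = 2^(−t/t½). The short sketch below applies it to the two laboratory radioisotopes mentioned; the elapsed times are arbitrary illustrative choices.

def remaining_fraction(t: float, half_life: float) -> float:
    # Exponential decay: N(t)/N0 = 2 ** (-t / t_half), with t in the same units as the half-life.
    return 2.0 ** (-t / half_life)

print(remaining_fraction(1.0e6, 3.0e5))   # 36Cl after one million years: about 0.10 of the original amount
print(remaining_fraction(120.0, 37.2))    # 38Cl after two hours (120 min): about 0.11 of the original amount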
Chemistry and compounds
Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds.
Given that E°(O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine.
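To make the thermodynamic part of this statement concrete, the two electrode potentials quoted above can be combined into a standard Gibbs energy via ΔG° = −nFE°cell. The sketch below assumes the overall reaction 2 Cl2 + 2 H2O → 4 Cl− + 4 H+ + O2 (four electrons transferred) and is only an order-of-magnitude estimate, not a statement about the actual (kinetically hindered) behaviour.

F = 96485.0   # Faraday constant, C/mol
E_cl = 1.395  # E°(Cl2/Cl−) in volts, as quoted above
E_o2 = 1.229  # E°(O2/H2O) in volts
n = 4         # electrons transferred in 2 Cl2 + 2 H2O -> 4 Cl− + 4 H+ + O2

E_cell = E_cl - E_o2            # +0.166 V: positive, so oxidation of water by chlorine is allowed on paper
dG = -n * F * E_cell / 1000.0   # standard Gibbs energy in kJ per mole of reaction as written
print(f"E_cell = {E_cell:.3f} V, dG ~ {dG:.0f} kJ/mol")   # roughly -64 kJ/mol, yet the reaction is kinetically blocked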
Hydrogen chloride
The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process:
NaCl + H2SO4 → NaHSO4 + HCl
NaCl + NaHSO4 → Na2SO4 + HCl
In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O).
At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen-chlorine bonds are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation.
Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and HCl2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates nucleophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution:
Ph3SnCl + HCl ⟶ Ph2SnCl2 + PhH (solvolysis)
Ph3COH + 3 HCl ⟶ Ph3C+HCl2− + H3O+Cl− (solvolysis)
Me4N+HCl2− + BCl3 ⟶ Me4N+BCl4− + HCl (ligand replacement)
PCl3 + Cl2 + HCl ⟶ PCl4+HCl2− (oxidation)
Other binary chlorides
Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though the nitrogen in NCl3 bears a negative charge, the compound is usually called nitrogen trichloride.
Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows:
EuCl3 + ½ H2 ⟶ EuCl2 + HCl
ReCl5 ⟶ ReCl3 + Cl2
AuCl3 ⟶ AuCl + Cl2
Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). Silver chloride is very insoluble in water and is thus often used as a qualitative test for chlorine.
Polychlorine compounds
Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the Cl2+ cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow Cl3+ cation is more stable and may be produced as follows:
Cl2 + ClF + AsF5 ⟶ [Cl3]+[AsF6]−
This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, Cl3−, has also been characterised; it is analogous to triiodide.
Chlorine fluorides
The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as ClF2−, ClF4−, ClF2+, and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3).
Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water:
H2O + 2 ClF ⟶ 2 HF + Cl2O
Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals such as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engines, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enriching and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2+ and ClF4− ions.
Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows:
2 H2O + ClF5 ⟶ 4 HF + FClO2
The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents.
Chlorine oxides
The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements.
Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas.
Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows:
ClO3− + Cl− + 2 H+ ⟶ ClO2 + ½ Cl2 + H2O
Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules results in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows:
Cl• + O3 ⟶ ClO• + O2
ClO• + O• ⟶ Cl• + O2
Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion.
Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides.
Chlorine oxoacids and oxyanions
Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As can be seen from the redox potentials given in the adjacent table, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions:
Cl2 + H2O ⇌ HOCl + H+ + Cl−,  Kac = 4.2 × 10^−4 mol^2 l^−2
Cl2 + 2 OH− ⇌ OCl− + H2O + Cl−,  Kalk = 7.5 × 10^15 mol^−1 l
The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⇌ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 10^27. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⇌ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 10^20. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases.
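The contrast drawn above between a very favourable equilibrium constant and a very slow reaction is the usual thermodynamics-versus-kinetics distinction: a constant K corresponds to a driving force through ΔG° = −RT ln K. A minimal sketch, using the constants quoted above purely for illustration:

import math

R, T = 8.314, 298.15   # gas constant in J/(mol K) and an assumed temperature of 25 °C

def gibbs_energy_from_K(K: float) -> float:
    # Standard Gibbs energy change in kJ/mol implied by an equilibrium constant
    return -R * T * math.log(K) / 1000.0

print(gibbs_energy_from_K(1e27))   # hypochlorite disproportionation: about -154 kJ/mol, strongly favourable
print(gibbs_energy_from_K(1e20))   # chlorate disproportionation: about -114 kJ/mol, favourable yet kinetically slow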
Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows:
ClO3− + 5 Cl− + 6 H+ ⟶ 3 Cl2 + 3 H2O
Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature, a kinetic effect due to the high activation energies of these reactions. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering away the chloride precipitated and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is sensitive to shock and explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire and even oxidises silver and gold. Although it is a weak ligand, weaker than water, a few compounds involving coordinated ClO4− are known. The table below presents typical oxidation states for chlorine as given in secondary schools or colleges. There are more complex chemical compounds, the structure of which can only be explained using modern quantum chemical methods, for example, cluster technetium chloride [(CH3)4N]3[Tc6Cl14], in which 6 of the 14 chlorine atoms are formally divalent, and oxidation states are fractional. In addition, all the above chemical regularities are valid for "normal" or close to normal conditions, while at ultra-high pressures (for example, in the cores of large planets), chlorine can exhibit an oxidation state of -3, forming a Na3Cl compound with sodium, which does not fit into traditional concepts of chemistry.
Organochlorine compounds
Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group.
Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones, and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out.
Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes.
Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Due to the ready homolytic fission of the C–Cl bond to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out due to the harm they do to the ozone layer.
Occurrence
Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the 20th most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel.
Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation:
2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH
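The co-product ratios follow directly from this balanced equation and the molar masses; the sketch below is a rough mass balance only, with approximate molar masses and no allowance for yield losses.

# Approximate molar masses in g/mol
M_CL2, M_H2, M_NAOH = 70.90, 2.016, 40.00

# 2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH: per mole of Cl2, two moles of NaOH and one mole of H2 are formed.
tonnes_cl2 = 1.0
moles_cl2 = tonnes_cl2 / M_CL2          # tonnes with g/mol give tonne-moles; the ratios are unaffected
tonnes_naoh = moles_cl2 * 2 * M_NAOH    # about 1.13 t of NaOH per tonne of chlorine
tonnes_h2 = moles_cl2 * 1 * M_H2        # about 0.028 t of hydrogen per tonne of chlorine
print(round(tonnes_naoh, 3), round(tonnes_h2, 3))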
Production
Chlorine is primarily produced by the chloralkali process, although non-chloralkali processes exist. Global 2022 production was estimated to be 97 million tonnes. The most visible use of chlorine is in water disinfection. 35–40% of chlorine produced is used to make poly(vinyl chloride) through ethylene dichloride and vinyl chloride. The chlorine produced is available in cylinders in sizes ranging from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes).
Due to the difficulty and hazards in transporting elemental chlorine, production is typically located near where it is consumed. As examples, vinyl chloride producers such as Westlake Chemical and Formosa Plastics have integrated chloralkali assets.
Chloralkali processes
The electrolysis of chloride solutions proceeds according to the following equations:
Cathode: 2 H2O + 2 e− → H2 + 2 OH−
Anode: 2 Cl− → Cl2 + 2 e−
In the conventional case where sodium chloride is electrolyzed, sodium hydroxide and chlorine are coproducts.
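The anode half-reaction above fixes the charge requirement at two electrons per molecule of chlorine, so the theoretical output of a cell follows from Faraday's law, m = M·Q/(zF). The sketch below ignores current efficiency and other losses, and the cell current used is an arbitrary example value.

F = 96485.0      # Faraday constant, C/mol
M_CL2 = 70.90    # molar mass of Cl2, g/mol
z = 2            # electrons per Cl2, from the anode half-reaction above

def theoretical_cl2_kg(current_amperes: float, hours: float) -> float:
    # Faraday's law: mass = M * Q / (z * F), with Q = I * t in coulombs
    charge = current_amperes * hours * 3600.0
    return M_CL2 * charge / (z * F) / 1000.0

print(theoretical_cl2_kg(100_000, 1.0))   # a 100 kA cell run for one hour: at most ~132 kg of Cl2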
Industrially, there are three chloralkali processes:
The Castner–Kellner process that utilizes a mercury electrode
The diaphragm cell process that utilizes an asbestos diaphragm that separates the cathode and anode
The membrane cell process that uses an ion exchange membrane in place of the diaphragm
The Castner–Kellner process was the first method used at the end of the nineteenth century to produce chlorine on an industrial scale. Mercury (which is toxic) was used as an electrode to amalgamate the sodium product, preventing undesirable side reactions.
In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient.
Membrane cell electrolysis employs a permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations.
Because of the lower energy requirements of the membrane process, and because the use of large volumes of mercury is considered undesirable, new chloralkali installations almost exclusively employ the membrane process, and older plants are being converted to it.
Non-chloralkali processes
In the Deacon process, hydrogen chloride, a by-product of the production of organochlorine compounds, is recovered as chlorine. The process relies on oxidation using oxygen:
4 HCl + O2 → 2 Cl2 + 2 H2O
The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts.
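As a quick stoichiometric illustration of the equation above (a sketch using standard molar masses only; it says nothing about single-pass conversion, which is limited by equilibrium at practical temperatures), the Deacon reaction can at most return roughly 0.97 kg of chlorine per kilogram of HCl fed:

```python
M_HCl = 36.46  # g/mol
M_Cl2 = 70.90  # g/mol

def max_cl2_per_kg_hcl():
    """Theoretical yield for 4 HCl + O2 -> 2 Cl2 + 2 H2O (all Cl recovered)."""
    mol_hcl = 1000.0 / M_HCl         # moles of HCl in 1 kg
    mol_cl2 = mol_hcl / 2.0          # 2 HCl consumed per Cl2 formed
    return mol_cl2 * M_Cl2 / 1000.0  # kg of Cl2

print(f"Theoretical maximum: {max_cl2_per_kg_hcl():.3f} kg Cl2 per kg HCl")
# ~0.972 kg, i.e. essentially all of the chlorine mass present in the HCl.
```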
Applications
Sodium chloride is the most common chlorine compound, and is the main source of chlorine for the demand by the chemical industry. About 15,000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methanes and ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, and the chlorides of magnesium, titanium, zirconium, and hafnium, which are the precursors for producing the pure form of those elements.
Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, and 18% in the manufacture of inorganic chlorine compounds. About 15,000 chlorine compounds are used commercially. The remaining 19% of chlorine produced is used for bleaches and disinfection products. The most significant organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2.
Sanitation, disinfection, and antisepsis
Combating putrefaction
In France (as elsewhere), animal intestines were processed to make musical instrument strings, goldbeater's skin and other products. This was done in "gut factories" (boyauderies), and it was an odoriferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions ("Eau de Javel") not only destroyed the smell of putrefaction of animal tissue decomposition, but also actually retarded the decomposition.
Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle.
Disinfection
Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called "contagious infection", presumed to be transmitted by "miasmas"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of "contagious infection". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later.
During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These "putrid miasmas" were thought by many to cause the spread of "contagion" and "infection" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and "putrid matter". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England.
Semmelweis and experiments with antisepsis
Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that "cadaveric particles" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known "Labarraque's solutions" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever ("puerperal fever") in the maternity wards of Vienna General Hospital in Austria in 1847.
Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals).
Public sanitation
The first continuous application of chlorination to U.S. drinking water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated.
Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive "chlorine aroma" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination.
It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on.
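The disinfecting power of chlorinated water depends strongly on pH, because hypochlorous acid and its conjugate base, hypochlorite, interconvert, and HOCl is the more strongly biocidal of the two. The sketch below treats this as a simple monoprotic acid–base equilibrium with pKa ≈ 7.5 (a commonly quoted value near 25 °C, used here as an assumption) and ignores chloramine formation and other side reactions.

```python
PKA_HOCL = 7.5  # assumed pKa of hypochlorous acid near 25 C

def hocl_fraction(ph: float) -> float:
    """Fraction of free chlorine present as HOCl (vs OCl-) at a given pH,
    from the Henderson-Hasselbalch relation for a monoprotic acid."""
    return 1.0 / (1.0 + 10 ** (ph - PKA_HOCL))

for ph in (6.5, 7.0, 7.5, 8.0):
    print(f"pH {ph}: ~{hocl_fraction(ph):.0%} HOCl")
# Lower pH favours HOCl, which is why pool pH is kept moderately low.
```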
Use as a weapon
World War I
Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. It was pioneered by a German scientist later to be a Nobel laureate, Fritz Haber of the Kaiser Wilhelm Institute in Berlin, in collaboration with the German chemical conglomerate IG Farben, which developed methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas.
Middle East
Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population.
On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul.
Another country in the Middle East, Syria, has used chlorine as a chemical weapon delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government used chlorine as a chemical weapon in three separate attacks. Later investigations from the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018.
Biological role
The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt, or sodium chloride. Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities. It is sometimes associated with hypoventilation. It can be associated with chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms. When symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration which results in cerebral edema. Hyperchloremia can affect oxygen transport.
Hazards
Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials.
Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl).
When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. Other materials present in the water may generate disinfection by-products that are associated with negative effects on human health.
In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m3. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes.
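The exposure limits quoted above mix volume-based (ppm) and mass-based (mg/m3) units. For a gas, the two are related through its molar mass and the molar volume of air; the sketch below assumes ideal-gas behaviour at 25 °C and 1 atm (where one mole of gas occupies roughly 24.45 L) and reproduces the approximate equivalence of 1 ppm of chlorine to about 3 mg/m3.

```python
M_CL2 = 70.90      # g/mol
MOLAR_VOL = 24.45  # L/mol, ideal gas at 25 C and 1 atm (assumed conditions)

def ppm_to_mg_m3(ppm: float, molar_mass: float = M_CL2) -> float:
    """Convert a gas concentration in ppm (by volume) to mg/m3."""
    return ppm * molar_mass / MOLAR_VOL

print(f"1 ppm Cl2 ~ {ppm_to_mg_m3(1):.1f} mg/m3")     # ~2.9 mg/m3
print(f"IDLH 10 ppm ~ {ppm_to_mg_m3(10):.0f} mg/m3")  # ~29 mg/m3
```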
In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals.
Chlorine-induced cracking in structural materials
Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s.
Chlorine-iron fire
The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel.
See also
2022 Aqaba toxic gas leak
Chlorine cycle
Chlorine gas poisoning
Industrial gas
Polymer degradation
Reductive dechlorination
Notes
References
Bibliography
External links
Chlorine at The Periodic Table of Videos (University of Nottingham)
Agency for Toxic Substances and Disease Registry: Chlorine
Electrolytic production
Production and liquefaction of chlorine
Chlorine Production Using Mercury, Environmental Considerations and Alternatives
National Pollutant Inventory – Chlorine
National Institute for Occupational Safety and Health – Chlorine Page
Chlorine Institute – Trade association representing the chlorine industry
Chlorine Online – the web portal of Eurochlor – the business association of the European chlor-alkali industry
Chemical elements
Chemical hazards
Diatomic nonmetals
Gases with color
Halogens
Hazardous air pollutants
Industrial gases
Oxidizing agents
Pulmonary agents
Reactive nonmetals
Swimming pool equipment | Chlorine | [
"Physics",
"Chemistry",
"Materials_science"
] | 13,453 | [
"Chemical elements",
"Redox",
"Chemical weapons",
"Diatomic nonmetals",
"Nonmetals",
"Oxidizing agents",
"Pulmonary agents",
"Chemical hazards",
"Industrial gases",
"Reactive nonmetals",
"Chemical process engineering",
"Atoms",
"Matter"
] |
5,668 | https://en.wikipedia.org/wiki/Calcium | Calcium is a chemical element; it has symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilized remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone.
Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries.
Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca2+) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation.
Characteristics
Classification
Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, with electron configuration [Ar]4s2. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon.
Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca2+ cation compared to the hypothetical Ca+ cation.
Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them.
Physical properties
Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium and barium; above , it changes to a body-centered cubic. Its density of 1.526 g/cm3 (at 20 °C) is the lowest in its group.
Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered.
Chemical properties
The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. Bulk calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature.
Besides the simple oxide CaO, calcium peroxide, CaO2, can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution.
Due to the large size of the calcium ion (Ca2+), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed.
Though calcium is in the same group as magnesium and organomagnesium compounds are very widely used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, though they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb2+ (102 pm) and Ca2+ (100 pm).
Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favour stability. For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability.
Isotopes
Natural calcium is a mixture of five stable isotopes (40Ca, 42Ca, 43Ca, 44Ca, and 46Ca) and one isotope with a half-life so long that it is for all practical purposes stable (48Ca, with a half-life of about 4.3 × 10^19 years). Calcium is the first (lightest) element to have six naturally occurring isotopes.
By far the most common isotope of calcium in nature is 40Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial 40K. Adding another alpha particle leads to unstable 44Ti, which decays via two successive electron captures to stable 44Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope.
The other four natural isotopes, 42Ca, 43Ca, 46Ca, and 48Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. 46Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived 45Ca to capture a neutron. 48Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival.
46Ca and 48Ca are the first "classically stable" nuclides with a 6-neutron or 8-neutron excess respectively. Although extremely neutron-rich for such a light element, 48Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to 48Sc is very hindered because of the gross mismatch of nuclear spin: 48Ca has zero nuclear spin, being even–even, while 48Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of 48Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when 48Ca does decay, it does so by double beta decay to 48Ti instead, being the lightest nuclide known to undergo double beta decay.
46Ca can also theoretically undergo double beta decay to 46Ti, but this has never been observed. The most common isotope 40Ca is also doubly magic and could undergo double electron capture to 40Ar, but this has likewise never been observed. Calcium is the only element with two primordial doubly magic isotopes. The experimental lower limits for the half-lives of 40Ca and 46Ca are 5.9 × 10^21 years and 2.8 × 10^15 years respectively.
Apart from the practically stable 48Ca, the longest lived radioisotope of calcium is 41Ca. It decays by electron capture to stable 41K with a half-life of about 10^5 years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of 41K: traces of 41Ca also still exist today, as it is a cosmogenic nuclide, continuously produced through neutron activation of natural 40Ca.
Many other calcium radioisotopes are known, ranging from 35Ca to 60Ca. They are all much shorter-lived than 41Ca, the most stable being 45Ca (half-life 163 days) and 47Ca (half-life 4.54 days). Isotopes lighter than 42Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than 44Ca usually undergo beta minus decay to isotopes of scandium, though near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well.
Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually 44Ca/40Ca) in a sample compared to the same ratio in a standard reference material. 44Ca/40Ca varies by about 1–2‰ among organisms on Earth.
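A common way of writing this convention, shown here as an illustrative definition in per mil notation with the reference material left unspecified (different laboratories use different standards), is:

```latex
\delta^{44/40}\mathrm{Ca} =
\left( \frac{\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{sample}}}
            {\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{standard}}} - 1 \right)
\times 1000
```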
History
Calcium compounds were known for millennia, though their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia.
About the same time, dehydrated gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx "lime".
Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognized by the ancient Romans.
In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths" (i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide)). About these "elements", Lavoisier reasoned:
Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later.
Occurrence and production
At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH, Cl, or F).
The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year.
In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures.
Geochemical cycling
Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, mountain-building exposes calcium-bearing rocks such as basalt and granodiorite to chemical weathering and releases Ca2+ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are termed "dissolved inorganic carbon" (DIC).
The actual reaction is more complicated and involves the bicarbonate ion (HCO3−) that forms when CO2 reacts with water at seawater pH:
Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O
At seawater pH, most of the dissolved CO2 is immediately converted back into HCO3−. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca2+ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate.
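Because the net effect described above is one CO2 molecule buried per Ca2+ ion weathered, a simple molar-mass ratio gives the scale of the sink. The sketch below is only that bookkeeping (standard molar masses, ignoring kinetics and any later re-dissolution of the carbonate):

```python
M_CA = 40.08   # g/mol, calcium
M_CO2 = 44.01  # g/mol, carbon dioxide

def co2_buried_per_tonne_ca(ca_tonnes: float = 1.0) -> float:
    """Tonnes of CO2 ultimately stored as carbonate per tonne of Ca2+ weathered,
    assuming the 1:1 molar bookkeeping described above."""
    return ca_tonnes * M_CO2 / M_CA

print(f"~{co2_buried_per_tonne_ca():.2f} t CO2 per t Ca weathered")  # ~1.10 t
```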
Applications
The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging.
Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys. These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains.
Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, vanadium and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted.
Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral.
In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the 44Ca/40Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis.
A similar system exists in seawater, where 44Ca/40Ca tends to rise when the rate of removal of Ca2+ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater 44Ca/40Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca2+ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle.
Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins.
Calcium is on the World Health Organization's List of Essential Medicines.
Food sources
Foods rich in calcium include dairy products such as milk and yogurt, cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals.
Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs.
Biological and pathological role
Function
Calcium is an essential element needed in large quantities. The Ca2+ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone in the form of hydroxyapatite; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca2+ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton.
Binding
Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third.
Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface.
Solubility
As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM.
Nutrition
Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability which varies by solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys.
Hormonal regulation of bone formation and serum levels
Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and the secretion of parathyroid hormone occurs; it then proceeds to stimulate the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells, with the bone-forming action of parathyroid hormone being antagonized by calcitonin, whose secretion increases with increasing plasma calcium levels.
Abnormal serum levels
Excess intake of calcium may cause hypercalcemia. However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease.
Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue.
Bone disease
As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia.
Safety
Metallic calcium
Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects.
References
Bibliography
Chemical elements
Alkaline earth metals
Dietary minerals
Dietary supplements
Reducing agents
Sodium channel blockers
World Health Organization essential medicines
Chemical elements with face-centered cubic structure | Calcium | [
"Physics",
"Chemistry"
] | 5,677 | [
"Chemical elements",
"Redox",
"Reducing agents",
"Atoms",
"Matter"
] |
5,669 | https://en.wikipedia.org/wiki/Chromium | Chromium is a chemical element; it has symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal.
Chromium is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored.
Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium.
Trivalent chromium (Cr(III)) occurs naturally in many foods and is sold as a dietary supplement, although there is insufficient evidence that dietary chromium provides nutritional benefit to people. In 2014, the European Food Safety Authority concluded that research on dietary chromium did not justify it to be recognized as an essential nutrient.
While chromium metal and Cr(III) ions are considered non-toxic, chromate and its derivatives, often called "hexavalent chromium", is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC).
Physical properties
Atomic
Gaseous chromium has a ground-state electron configuration of [Ar] 3d5 4s1. It is the first element in the periodic table whose configuration violates the Aufbau principle. Exceptions to the principle also occur later in the periodic table for elements such as copper, niobium and molybdenum.
Chromium is the first element in the 3d series where the 3d electrons start to sink into the core; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides.
Bulk
Chromium is the third hardest element after carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium.
Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, having the fourth lowest boiling point out of the Period 4 transition metals alone behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters.
Chromium has a high specular reflection in comparison to other transition metals. In infrared, at 425 μm, chromium has a maximum reflectance of about 72%, reducing to a minimum of 62% at 750 μm before rising again to 90% at 4000 μm. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. The explanation for chromium's generally high reflectance, and especially the 90% figure in the infrared, lies in its magnetic properties. Chromium has unique magnetic properties; it is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, its magnetic ordering becomes paramagnetic. The antiferromagnetic properties, which cause the chromium atoms to temporarily ionize and bond with themselves, are present because the body-centered cubic lattice's magnetic properties are disproportionate to the lattice periodicity. This is due to the magnetic moments at the cube's corners and the unequal, but antiparallel, cube centers. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance.
Passivation
Chromium metal in air is passivated: it forms a thin, protective surface layer of chromium oxide with the corundum structure. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids.
The surface chromia scale is adherent to the metal. In contrast, iron forms a more porous oxide which is weak and flakes easily and exposes fresh metal to the air, causing continued rusting. At room temperature, the chromia scale is a few atomic layers thick, growing in thickness by outward diffusion of metal ions across the scale. Above 950 °C volatile chromium trioxide forms from the chromia scale, limiting the scale thickness and oxidation protection.
Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.
Isotopes
Naturally occurring chromium is composed of four stable isotopes; 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3 × 10^18 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 70Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay.
53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. 53Cr has been posited as a proxy for atmospheric oxygen concentration.
Chemistry and compounds
Chromium is a member of group 6, of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; charges of +1, +4 and +5 for chromium are rare, but do nevertheless occasionally exist.
Common oxidation states
Chromium(0)
Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry.
Chromium(II)
Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution created from dissolving chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond.
Chromium(III)
A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum.
Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water. This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5–.
Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum.
Chromium(VI)
Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist at an equilibrium, determined by pH:
2 [CrO4]2− + 2 H+ ⇌ [Cr2O7]2− + H2O
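To see how strongly this equilibrium responds to pH, the sketch below computes the dichromate fraction for a fixed total chromium concentration, taking an equilibrium constant of about 10^14 for the reaction as written (an assumed, order-of-magnitude value used only for illustration) and solving the resulting quadratic in the chromate concentration.

```python
import math

K_EQ = 1e14  # assumed, order-of-magnitude equilibrium constant for
             # 2 CrO4(2-) + 2 H+ <=> Cr2O7(2-) + H2O (concentration basis)

def dichromate_fraction(ph: float, total_cr: float = 0.1) -> float:
    """Fraction of dissolved chromium present as dichromate at a given pH.

    Mass balance: total_cr = [CrO4] + 2*[Cr2O7], with
    [Cr2O7] = K_EQ * [H+]**2 * [CrO4]**2; solve the quadratic for [CrO4].
    HCrO4- and activity corrections are ignored in this sketch.
    """
    h = 10.0 ** (-ph)
    a = 2.0 * K_EQ * h * h                                        # quadratic coefficient
    cro4 = (-1.0 + math.sqrt(1.0 + 4.0 * a * total_cr)) / (2.0 * a)
    cr2o7 = K_EQ * h * h * cro4 ** 2
    return 2.0 * cr2o7 / total_cr

for ph in (5, 7, 9):
    print(f"pH {ph}: ~{dichromate_fraction(ph):.1%} of Cr as dichromate")
# Acidic solutions are dominated by orange dichromate, basic ones by yellow chromate.
```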
Chromium(VI) oxyhalides are known also and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020.
Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The change in equilibrium is visible by a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible.
Both the chromate and dichromate anions are strong oxidizing reagents at low pH:
[Cr2O7]2− + 14 H3O+ + 6 e− → 2 Cr3+ + 21 H2O (ε0 = 1.33 V)
They are, however, only moderately oxidizing at high pH:
[CrO4]2− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V)
Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct.
Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent.
Other oxidation states
Compounds of chromium(V) are rather rare; the oxidation state +5 is only realized in few compounds but are intermediates in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C.
Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing the Cr(IV) state such as chromium tetra-t-butoxide are also known.
Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described. Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions.
Occurrence
Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore.
About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds.
The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 μg/L of total chromium, of which 30 μg/L is Cr(VI).
History
Early applications
The ancient Chinese are credited with the first ever use of chromium to prevent rusting. Modern archaeologists discovered that bronze-tipped crossbow bolts at the tomb of Qin Shi Huang showed no sign of corrosion after more than 2,000 years, because they had been coated in chromium. Chromium was not used anywhere else until the experiments of French pharmacist and chemist Louis Nicolas Vauquelin (1763–1829) in the late 1790s. In multiple Warring States period tombs, sharp jians and other weapons were also found to be coated with 10 to 15 micrometers of chromium oxide, which left them in pristine condition to this day.
Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later.
In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald.
During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased.
Chromium is also famous for its reflective, metallic luster when polished. It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924.
Production
Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium."
The largest producers of chromium ore in 2013 were South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the remaining roughly 18% of world production.
The two main products of chromium ore refining are ferrochromium and metallic chromium. The smelting processes for these two products differ considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced on a large scale in electric arc furnaces, or in smaller smelters with either aluminium or silicon in an aluminothermic reaction.
For the production of pure chromium, the iron must be separated from the chromium in a two-step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is then converted by sulfuric acid into the dichromate.
4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2
2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O
The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium.
Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO
Cr2O3 + 2 Al → Al2O3 + 2 Cr
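A rough, idealized mass balance over the equations above can be sketched as follows. It assumes pure FeCr2O4 feed and 100% yield at every step, which real plants do not achieve; the atomic masses are standard approximate values.

```python
# Approximate atomic masses, g/mol
M = {"Fe": 55.85, "Cr": 52.00, "O": 16.00, "Al": 26.98}

M_chromite = M["Fe"] + 2 * M["Cr"] + 4 * M["O"]   # FeCr2O4, about 223.9 g/mol

chromite_kg = 1000.0                               # one tonne of pure FeCr2O4 (illustrative input)
mol_chromite = chromite_kg * 1000 / M_chromite     # moles of chromite
mol_cr = 2 * mol_chromite                          # two Cr atoms per formula unit
cr_kg = mol_cr * M["Cr"] / 1000                    # theoretical chromium metal
al_kg = mol_cr * M["Al"] / 1000                    # Cr2O3 + 2 Al -> 2 Cr, i.e. one mol Al per mol Cr

print(f"theoretical Cr: {cr_kg:.0f} kg, Al consumed: {al_kg:.0f} kg per tonne of chromite")
```

Under these idealized assumptions, one tonne of pure chromite would yield roughly 460–470 kg of chromium and consume about 240 kg of aluminium in the final aluminothermic step.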
Applications
The creation of metal alloys accounts for 85% of chromium usage. The remainder is used in the chemical, refractory, and foundry industries.
Metallurgy
The strengthening effect of forming stable metal carbides at grain boundaries and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain 3–5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable metal carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency". The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany.
The high hardness and corrosion resistance of unalloyed chromium make it a reliable metal for surface coating; it is still the most popular metal for sheet coating, owing to its above-average durability compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin and thick. Thin deposition involves a layer of chromium below 1 μm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used.
In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development.
Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers.
The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds.
Pigment
The mineral crocoite (lead chromate, PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other chromium-based pigments include chrome red, a deep red pigment that is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 μm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide.
Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves.
Other uses
Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color.
Chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium, expressed as the oxide CrO3, at between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996.
Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain 4–5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage.
Their high heat resistance and high melting point make chromite and chromium(III) oxide materials for high-temperature refractory applications, such as blast furnaces, cement kilns, molds for the firing of bricks, and foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. This use is declining because of environmental regulations prompted by the possibility of forming chromium(VI).
Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst.
Uses of compounds
Chromium(IV) oxide (CrO2) is a magnetic compound. Its ideal shape anisotropy, which imparts high coercivity and remanent magnetization, made it superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes.
Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge.
Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used instead because of its higher solubility (200 g/L, versus 50 g/L for potassium dichromate). The use of dichromate cleaning solutions has been phased out due to their high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free.
Potassium dichromate is a chemical reagent, used as a titrating agent.
Chromates are added to drilling muds to prevent corrosion of steel under wet conditions.
Chrome alum is chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning.
Biological role
The possible nutritional value of chromium(III) is unproven. Although chromium is regarded as a trace element and dietary mineral, its suspected roles in the action of insulin – a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein – have not been adequately established. The mechanism of its actions in the body is undefined, leaving in doubt whether chromium has a biological role in healthy people.
In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis.
"Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is not accepted as a medical condition, as it has no symptoms and healthy people do not require chromium supplementation. Some studies suggest that the biologically active form of chromium(III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (chromodulin), which might play a role in the insulin signaling pathway.
The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect.
Dietary recommendations
There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, and Japan consider chromium as essential, while the United States and European Food Safety Authority of the European Union do not.
The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intake (AI). From a 2001 assessment, AI of chromium for women ages 14 through 50 is 25 μg/day, and the AI for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AI is 45 μg/day. The AI for men ages 14 through 50 is 35 μg/day, and the AI for men ages 50 and above is 30 μg/day. For children ages 1 through 13, the AI increases with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI).
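For reference, the Adequate Intake figures quoted above can be arranged as a simple lookup. This is only a restatement of the values in the text; the group labels and the function name are chosen for illustration.

```python
# Adequate Intake (AI) values for chromium, micrograms per day (NAM, 2001 assessment)
AI_UG_PER_DAY = {
    ("female", "14-50"):     25,
    ("female", "50+"):       20,
    ("female", "pregnant"):  30,
    ("female", "lactating"): 45,
    ("male",   "14-50"):     35,
    ("male",   "50+"):       30,
}

def adequate_intake(sex, group):
    """Return the AI in ug/day, or None for groups the text gives only as a range (children)."""
    return AI_UG_PER_DAY.get((sex, group))

print(adequate_intake("female", "pregnant"))  # 30
```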
Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set.
The EFSA does not consider chromium to be an essential nutrient.
Labeling
For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of 27 May 2016, the Daily Value was revised to 35 μg to bring it into agreement with the official Recommended Dietary Allowance. A table of the old and new adult daily values in the United States is provided at Reference Daily Intake.
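The %DV arithmetic described above is straightforward; the following sketch applies it to a hypothetical 25 μg serving under both the old and the revised Daily Values (the serving size is an illustrative input, not from the text).

```python
def percent_dv(ug_per_serving, daily_value_ug):
    """Percent of the Daily Value supplied by one serving."""
    return 100.0 * ug_per_serving / daily_value_ug

print(round(percent_dv(25, 120)))  # ~21% under the pre-2016 Daily Value
print(round(percent_dv(25, 35)))   # ~71% under the revised Daily Value
```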
After evaluation of research on the potential nutritional value of chromium, the European Food Safety Authority concluded that there was no evidence of benefit by dietary chromium in healthy people, thereby declining to establish recommendations in Europe for dietary intake of chromium.
Food sources
Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium.
Supplementation
Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven.
Initiation of research on glucose
The notion of chromium as a potential regulator of glucose metabolism began in the 1950s when scientists performed a series of experiments controlling the diet of rats. The experimenters subjected the rats to a chromium deficient diet, and witnessed an inability to respond effectively to increased levels of blood glucose. A chromium-rich Brewer's yeast was provided in the diet, enabling the rats to effectively metabolize glucose, and so giving evidence that chromium may have a role in glucose management.
Approved and disapproved health claims
In 2005, the U.S. Food and Drug Administration approved a qualified health claim for chromium picolinate with a requirement for specific label wording:
"One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain."
In other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. As of March 2024, this ruling on chromium remains in effect.
In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue.
However, in a 2014 reassessment of studies to determine whether a Dietary Reference Intake value could be established for chromium, EFSA stated:
"The Panel concludes that no Average Requirement and no Population Reference Intake for chromium for the performance of physiological functions can be defined." and
"The Panel considered that there is no evidence of beneficial effects associated with chromium intake in healthy subjects. The Panel concludes that the setting of an Adequate Intake for chromium is also not appropriate."
Diabetes
Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in fasting blood glucose and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome.
Body weight
Two systematic reviews looked at chromium supplements as a means of managing body weight in overweight and obese people. One, limited to chromium picolinate, a common supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss uncertain. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim.
Sports
Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat.
Fresh-water fish
Irrigation water standards limit chromium to 0.1 mg/L, but some rivers in Bangladesh contain more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples contained more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification.
Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions.
There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks.
Precautions
Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity ranges between 1900 and 3300 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3.
Chromium(VI) toxicity
The acute oral toxicity for chromium(VI) ranges between 50 and 150 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism, by which also sulfate and phosphate ions enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. Aggressive dialysis can be therapeutic.
The carcinogenicity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism involves highly reactive hydroxyl radicals and other reactive radicals which are byproducts of the reduction of chromium(VI) to chromium(III). The second process involves the direct binding of chromium(V) and chromium(IV) compounds, produced by reduction in the cell, to the DNA. The last mechanism attributes the genotoxicity to the binding to DNA of chromium(III), the end product of the chromium(VI) reduction.
Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers who have been exposed to strong chromate solutions in electroplating, tanning, and chrome-producing plants.
Environmental issues
Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications.
In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of list; 25 cities had levels that exceeded California's proposed limit.
The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium(VI) are greater than those of chromium(III), the oxidation-reduction conversions between the two oxidation states have implications for the movement and bioavailability of chromium in soils, groundwater, and plants.
Notes
References
General bibliography
External links
ATSDR Case Studies in Environmental Medicine: Chromium Toxicity U.S. Department of Health and Human Services
IARC Monograph "Chromium and Chromium compounds"
It's Elemental – The Element Chromium
The Merck Manual – Mineral Deficiency and Toxicity
National Institute for Occupational Safety and Health – Chromium Page
Chromium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Dietary minerals
Native element minerals
Chemical hazards
Chemical elements with body-centered cubic structure | Chromium | [
"Physics",
"Chemistry"
] | 10,519 | [
"Chemical hazards",
"Chemical elements",
"Atoms",
"Matter"
] |
5,672 | https://en.wikipedia.org/wiki/Cadmium | Cadmium is a chemical element; it has symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate.
Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. It was used for a long time in the 1900s as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium's use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. Due to it being a neutron poison, cadmium is also used as a component of control rods in nuclear fission reactors. One of its few new uses is in cadmium telluride solar panels.
Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms.
Characteristics
Physical properties
Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes.
Chemical properties
Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is a dark red which changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd22+ cation, which is similar to the Hg22+ cation in mercury(I) chloride.
Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2
The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined.
Isotopes
Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not measurably done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay) and 116Cd (two-neutrino double beta decay); both half-lives are many orders of magnitude longer than the age of the universe. The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has eight known metastable states (nuclear isomers), with the most stable being 113mCd (t1⁄2 = 14.1 years), 115mCd (t1⁄2 = 44.6 days), and 117mCd (t1⁄2 = 3.36 hours).
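As a worked example of how such half-lives are used (not from the article), the exponential-decay law applied to the quoted 462.6-day half-life of 109Cd; the elapsed time used below is an illustrative input.

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of a radionuclide remaining after time t (same units as half_life)."""
    return math.exp(-math.log(2) * t / half_life)

# After two years, roughly a third of a 109Cd source remains:
print(round(remaining_fraction(2 * 365.25, 462.6), 3))
```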
The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium).
One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons.
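For orientation, the ~0.5 eV cadmium cut-off quoted above can be converted into a neutron speed with E = ½mv²; this is a standard physics calculation, not a statement from the article, and the constants are standard physical values.

```python
import math

EV_TO_J   = 1.602176634e-19    # joules per electronvolt
M_NEUTRON = 1.67492749804e-27  # neutron mass, kg

def neutron_speed(energy_ev):
    """Neutron speed in m/s for a given kinetic energy in eV."""
    return math.sqrt(2 * energy_ev * EV_TO_J / M_NEUTRON)

print(f"{neutron_speed(0.5):.0f} m/s")    # cadmium cut-off, ~9.8 km/s
print(f"{neutron_speed(0.025):.0f} m/s")  # thermal neutrons (~0.025 eV), ~2.2 km/s
```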
Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay.
History
Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. Additionally Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as pigment was recognized in the 1840s, but the lack of cadmium limited this application.
Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains".
In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton.
After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium.
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments.
At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel–cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006.
Occurrence
Cadmium makes up about 0.1 ppm of Earth's crust and is the 65th most abundant element. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I.
Metallic cadmium can be found in the Vilyuy River basin in Siberia.
Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash.
Cadmium in soil can be absorbed by crops such as rice and cocoa. In 2002, the Chinese ministry of agriculture measured that 28% of rice it sampled had excess lead and 10% had excess cadmium above limits defined by law. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022, and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level.
Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil.
Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes.
Production
Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution.
The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan.
Applications
Cadmium is a common component of electric batteries, pigments, coatings, and electroplating.
Batteries
In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. Nickel–cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver–cadmium battery.
Electroplating
Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (190 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition).
Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium.
Nuclear technology
Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium.
Televisions
Some QLED TVs include cadmium in their construction. Some companies are looking to reduce human exposure to the material and the pollution it causes during television production.
Anticancer drugs
Complexes based on cadmium and other heavy metals have potential for the treatment of cancer, but their use is often limited due to toxic side effects.
Compounds
Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums.
Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%.
In PVC, cadmium was used in heat, light, and weathering stabilizers. Cadmium stabilizers have now been completely replaced with barium-zinc, calcium-zinc, and organo-tin stabilizers. Cadmium is used in many kinds of solder and bearing alloys because it has a low coefficient of friction and good fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal.
Semiconductors
Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors.
Laboratory uses
Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory uses requiring laser light at these wavelengths.
Cadmium selenide quantum dots emit bright luminescence under UV excitation (He–Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope.
In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α.
Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry. By employing a self-assembled monolayer one can obtain a cadmium selective electrode with a ppt-level sensitivity.
Biological role
Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant hazardous to living organisms. A cadmium-dependent carbonic anhydrase has been found in some marine diatoms, which live in environments with low zinc concentrations.
Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence.
Cadmium is under research for its potential toxicity to increase the risk of cancer, cardiovascular disease, and osteoporosis.
Environmental impact
The biogeochemistry of cadmium and its release to the environment is under research.
Safety
Individuals and organizations have been reviewing cadmium's bioinorganic aspects for its toxicity. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death.
Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables.
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into an estrogen mimicry that may induce breast cancer is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because the populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors.
Cadmium is one of ten substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law.
The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium in low environmental exposure. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former smoking females.
Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha, and affect signal transduction along the estrogen and MAPK signaling pathways at low doses.
The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut. As much as 50% of the cadmium inhaled in cigarette smoke may be absorbed.
On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than in non-smokers, and concentrations in the kidney are 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking.
In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium; when composted to form organic fertilizers, they yield a product that can often contain high amounts (e.g., over 0.5 mg) of metal toxins per kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bioavailable and toxic only if the soil pH is low (i.e., acidic soils). In the European Union, an analysis of almost 22,000 topsoil samples with the LUCAS survey concluded that 5.5% of samples have concentrations higher than 1 mg kg−1.
Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, although it is not easily reversed.
Regulations
Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation.
The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder. The European Commission has put in place the EU regulation (2019/1009) on fertilizing products (EU, 2019), adopted in June 2019 and fully applicable as of July 2022. It sets a Cd limit value in phosphate fertilizers to 60 mg kg−1 of P2O5.
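Applied to an assumed 70 kg adult (the body weight is an illustrative input, not from the text), the quoted weekly-intake limits work out as follows.

```python
def weekly_limit_ug(body_weight_kg, limit_ug_per_kg):
    """Tolerable weekly cadmium intake in micrograms for a given body weight."""
    return body_weight_kg * limit_ug_per_kg

print(weekly_limit_ug(70, 2.5))  # EFSA tolerable weekly intake: 175 ug per week
print(weekly_limit_ug(70, 7.0))  # JECFA provisional level: 490 ug per week
```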
The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m3.
In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries.
Product recalls
In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores.
In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA.
See also
Red List building materials
Toxic heavy metal
Notes
References
Further reading
Agency for Toxic Substances and Disease Registry (ATSDR) (2012). Toxicological Profile for Cadmium. U.S. Department of Health and Human Services, Public Health Service. https://www.atsdr.cdc.gov/toxprofiles/tp5.pdf
External links
Cadmium at The Periodic Table of Videos (University of Nottingham)
ATSDR Case Studies in Environmental Medicine: Cadmium Toxicity U.S. Department of Health and Human Services
National Institute for Occupational Safety and Health – Cadmium Page
NLM Hazardous Substances Databank – Cadmium, Elemental
Chemical elements
Transition metals
Endocrine disruptors
IARC Group 1 carcinogens
Chemical hazards
Soil contamination
Testicular toxicants
Native element minerals
Chemical elements with hexagonal close-packed structure | Cadmium | [
"Physics",
"Chemistry",
"Environmental_science"
] | 5,428 | [
"Chemical elements",
"Testicular toxicants",
"Endocrine disruptors",
"Environmental chemistry",
"Chemical hazards",
"Soil contamination",
"Atoms",
"Matter"
] |
5,675 | https://en.wikipedia.org/wiki/Curium | Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.
Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.
All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. The most stable isotope, 247Cm, has a half-life of 15.6 million years; the longest-lived curium isotopes predominantly emit alpha particles. Radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.
History
Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron.
Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.
The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron:
^{239}_{94}Pu + ^{4}_{2}He -> ^{242}_{96}Cm + ^{1}_{0}n
Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:
^{242}_{96}Cm -> ^{238}_{94}Pu + ^{4}_{2}He
The half-life of this alpha decay was first measured as 5 months (150 days) and then corrected to 162.8 days.
Another isotope 240Cm was produced in a similar reaction in March 1945:
^{239}_{94}Pu + ^{4}_{2}He -> ^{240}_{96}Cm + 3^{1}_{0}n
The α-decay half-life of 240Cm was determined as 26.8 days and later revised to 30.4 days.
The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked whether any new transuranic element besides plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.
The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin:
As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored.
The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 μg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.
Characteristics
Physical
A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling those of gadolinium. Its melting point of 1344 °C is significantly higher than that of the preceding elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of the two crystalline forms of curium, α-Cm is the more stable at ambient conditions. It has hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressures above 23 GPa at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3̄m and lattice constant a = 493 pm. On further compression to 43 GPa, curium adopts an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.
Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering.
In accordance with the magnetic data, the electrical resistivity of curium increases with temperature – roughly doubling between 4 and 60 K – and then remains nearly constant up to room temperature. Resistivity also increases significantly over time due to self-damage of the crystal lattice by alpha decay, which makes the true resistivity of curium uncertain. Curium's resistivity is similar to that of gadolinium and of the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.
Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from transitions between the first excited state 6D7/2 and the ground state 8S7/2. Analysis of this fluorescence allows monitoring of interactions between Cm(III) ions in organic and inorganic complexes.
Chemical
Curium ions in solution almost always have the +3 oxidation state, the most stable oxidation state for curium. The +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. The chemical behavior of curium differs from that of the actinides thorium and uranium, and is similar to that of americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; the Cm4+ ion is pale yellow. The optical absorption spectrum of the Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm, and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion (CmO2^2+): this was prepared from the beta decay of americium-242 in the americium(V) ion AmO2^+. Failure to obtain Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V).
Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry.
Isotopes
About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.11 years, respectively.
All isotopes ranging from 242Cm to 248Cm, as well as 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.
The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for 245Cm, 155 grams for 243Cm and 1550 grams for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.
Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical masses and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such use because of its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable owing to its long half-life, which is 647 times longer than that of plutonium-239 (used in many existing nuclear weapons).
Occurrence
The longest-lived isotope, 247Cm, has a half-life of 15.6 million years; so any primordial curium, that is, curium present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of 242Cm may occur naturally in uranium minerals due to neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am → 242Cm), though the quantities would be tiny and this has not been confirmed: even with "extremely generous" estimates for neutron absorption possibilities, the quantity of 242Cm present in 1 × 10^8 kg of 18% uranium pitchblende would not even be one atom. Traces of 247Cm are also probably brought to Earth in cosmic rays, but this also has not been confirmed. There is also the possibility of 244Cm being produced as the double beta decay daughter of natural 244Pu.
Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm.
Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed an approximately 4,000-fold higher concentration of curium in sandy soil particles than in the water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.
The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star.
Synthesis
Isotope preparation
Curium is made in small amounts in nuclear reactors; to date only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams of the heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate of US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and then 239Pu.
Further neutron capture followed by β−-decay gives americium (241Am), which further becomes 242Cm:
^{239}_{94}Pu ->[(n,\gamma)] ^{240}_{94}Pu ->[(n,\gamma)] ^{241}_{94}Pu ->[\beta^-] ^{241}_{95}Am ->[(n,\gamma)] ^{242}_{95}Am ->[\beta^-] ^{242}_{96}Cm
For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, which results in a different reaction chain and the formation of 244Cm:
^{239}_{94}Pu ->[4(n,\gamma)] ^{243}_{94}Pu ->[\beta^-] ^{243}_{95}Am ->[(n,\gamma)] ^{244}_{95}Am ->[\beta^-] ^{244}_{96}Cm
Curium-244 alpha-decays to 240Pu, but it also absorbs neutrons, so small amounts of heavier curium isotopes are also formed. Of those, 247Cm and 248Cm are popular in scientific research because of their long half-lives. However, the production rate of 247Cm in thermal-neutron reactors is low, because it is prone to fission induced by thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β−-decays to the berkelium isotope 249Bk.
The above cascade of (n,γ) reactions gives a mixture of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desirable. Curium-248 is favored for research purposes because of its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced in this way per year. The associated reaction produces 248Cm with an isotopic purity of 97%.
Another isotope, 245Cm, can be obtained for research, from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk.
Metal preparation
Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. The bis-triazinyl bipyridine complex has recently been proposed as one such reagent, as it is highly selective for curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.
Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.
Compounds and reactions
Oxides
Curium readily reacts with oxygen, forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:
4CmO2 ->[\Delta T] 2Cm2O3 + O2.
Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:
2CmO2 + H2 -> Cm2O3 + H2O
Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.
Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similar to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.
Halides
The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions to curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4), on the other hand, is only obtained by reacting curium(III) fluoride with molecular fluorine:
2CmF3 + F2 -> 2CmF4
A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).
The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C, for example:
CmCl3 + 3NH4I -> CmI3 + 3NH4Cl
Alternatively, one can heat curium oxide to ~600 °C with the corresponding acid (such as hydrobromic acid for curium bromide). Vapor-phase hydrolysis of curium(III) chloride gives curium oxychloride:
CmCl3 + H2O -> CmOCl + 2HCl
Chalcogenides and pnictides
Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Organocurium compounds and biological aspects
Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet.
The formation of BTP-type complexes (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence.
There are a few reports on biosorption of Cm3+ by bacteria and archaea, and in the laboratory both americium and curium were found to support the growth of methylotrophs.
Applications
Radionuclides
Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy ~6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give off as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm, with a ~30-year half-life and a good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives off significant amounts of harmful gamma and beta radiation from its radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus considerable neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a lead shield roughly 20 times thicker for a 1 kW source than is needed for 238Pu. Therefore, this use of curium is currently considered impractical.
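The heat outputs quoted above follow from the half-lives and alpha decay energies alone. The Python sketch below reproduces the order of magnitude; the decay energies of roughly 6.1 MeV and 5.9 MeV per decay are rounded values assumed for illustration, not precise nuclear data.

import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
YEAR_S = 3.156e7
DAY_S = 86400.0

def specific_power_w_per_g(half_life_s, decay_energy_mev, molar_mass_g):
    """Decay heat per gram: (decay constant) x (atoms per gram) x (energy per decay)."""
    decay_constant = math.log(2) / half_life_s    # decays per second per nucleus
    atoms_per_gram = AVOGADRO / molar_mass_g      # nuclei per gram
    return decay_constant * atoms_per_gram * decay_energy_mev * MEV_TO_J

# Assumed illustrative decay energies (alpha energy plus recoil), in MeV.
print(specific_power_w_per_g(162.8 * DAY_S, 6.1, 242))   # ~120 W/g for 242Cm
print(specific_power_w_per_g(18.11 * YEAR_S, 5.9, 244))  # ~3 W/g for 244Cm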
A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley:
^{242}_{96}Cm + ^{4}_{2}He -> ^{245}_{98}Cf + ^{1}_{0}n
Only about 5,000 atoms of californium were produced in this experiment.
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel.
X-ray spectrometer
The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source.
An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.
Safety
Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer.
Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have half-lives of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, in which curium is first separated and then converted by neutron bombardment in special reactors into short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
References
Bibliography
Holleman, Arnold F. and Wiberg, Nils: Lehrbuch der Anorganischen Chemie, 102nd Edition, de Gruyter, Berlin, 2007.
Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960
External links
Curium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Curium, Radioactive
Chemical elements
Chemical elements with double hexagonal close-packed structure
Actinides
American inventions
Synthetic elements
Marie Curie
Pierre Curie | Curium | [
"Physics",
"Chemistry"
] | 6,766 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
5,676 | https://en.wikipedia.org/wiki/Californium | Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. It was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). It was named after the university and the U.S. state of California.
Two crystalline forms exist at normal pressure: one above and one below . A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This relatively short half-life means the element is not found in significant quantities in the Earth's crust. Californium-252 (252Cf), with a half-life of about 2.645 years, is the most commonly used isotope, and is produced at Oak Ridge National Laboratory (ORNL) in the United States and the Research Institute of Atomic Reactors in Russia.
Californium is one of the few transuranium elements with practical uses. Most of these applications exploit the fact that certain isotopes of californium emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is used as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. It can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue.
Characteristics
Physical properties
Californium is a silvery-white actinide metal with a melting point of and an estimated boiling point of . The pure metal is malleable and is easily cut with a knife. Californium metal starts to vaporize above when exposed to a vacuum. Below californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials.
The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm3 and the β form exists above 600–800 °C with a density of 8.74 g/cm3. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond.
The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is , which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa).
Chemical properties and compounds
Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents.
The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid.
Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate.
Isotopes
Twenty isotopes of californium are known (mass numbers ranging from 237 to 256); the most stable are 251Cf with a half-life of 898 years, 249Cf with a half-life of 351 years, 250Cf at 13.08 years, and 252Cf at 2.645 years. All other isotopes have half-lives shorter than a year, and most of these have half-lives of less than 20 minutes.
249Cf is formed by beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section).
252Cf is a very strong neutron emitter, which makes it extremely radioactive and harmful. 252Cf alpha decays to curium-248 96.9% of the time; the other 3.1% of decays are spontaneous fission. One microgram (μg) of 252Cf emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to curium (atomic number 96).
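The quoted neutron output can be checked from the half-life, the spontaneous fission branching ratio and the average neutron multiplicity. The Python sketch below combines these three numbers; it is an order-of-magnitude estimate under the stated assumptions, not a precise nuclear-data calculation.

import math

AVOGADRO = 6.022e23
YEAR_S = 3.156e7

half_life_s = 2.645 * YEAR_S      # 252Cf half-life
sf_branch = 0.031                 # fraction of decays that are spontaneous fission
neutrons_per_fission = 3.7        # average neutrons per spontaneous fission
mass_g = 1e-6                     # one microgram of 252Cf

atoms = mass_g * AVOGADRO / 252.0
activity_bq = math.log(2) / half_life_s * atoms          # total decays per second
neutron_rate = activity_bq * sf_branch * neutrons_per_fission

print(f"{neutron_rate:.2e} neutrons per second per microgram")  # roughly 2.3e6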
History
Californium was first made at University of California Radiation Laboratory, Berkeley, by physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950.
To produce californium, a microgram-size target of curium-242 (242Cm) was bombarded with 35 MeV alpha particles (4He) in the cyclotron at Berkeley, which produced californium-245 (245Cf) plus one free neutron (n).
^{242}_{96}Cm + ^{4}_{2}He → ^{245}_{98}Cf + ^{1}_{0}n
To identify and separate out the element, ion exchange and adsorption methods were used. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes.
The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California".
Weighable amounts of californium were first produced by the irradiation of plutonium targets at Materials Testing Reactor at National Reactor Testing Station, eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes Cf to Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid.
The High Flux Isotope Reactor (HFIR) at ORNL in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium.
The Atomic Energy Commission sold 252Cf to industrial and academic customers in the early 1970s for $10 per microgram, and 252Cf was shipped to customers each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer-thick films.
Occurrence
Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil; and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles.
Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium-249, -252, -253, and -254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities.
Californium was once believed to be produced in supernovae, as the decay of their light curves appeared to match the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56.
The transuranic elements americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008.
Production
Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (249Bk) with neutrons, forming berkelium-250 (250Bk) via neutron capture (n,γ), which, in turn, quickly beta decays (β−) to californium-250 (250Cf) in the following reaction:
^{249}_{97}Bk (n,γ) ^{250}_{97}Bk → ^{250}_{98}Cf + β−
Bombardment of 250Cf with neutrons produces 251Cf and 252Cf.
Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of 252Cf and microgram amounts of 249Cf. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce mainly californium-252, with lesser amounts of the isotopes 249 to 255.
Microgram quantities of 252Cf are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce 252Cf: Oak Ridge National Laboratory in the U.S., and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of 252Cf per year, respectively.
Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. 252Cf is at the end of a production chain that starts with uranium-238 and includes several isotopes of plutonium, americium, curium, and berkelium, as well as the californium isotopes 249 to 253.
Applications
Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor-based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment for certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969, when the Georgia Institute of Technology received a loan of 119 μg of 252Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries.
Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; in neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. 252Cf is used in neutron moisture gauges to find water and petroleum layers in oil wells, as a portable neutron source for on-the-spot gold and silver prospecting, and to detect groundwater movement. The main uses of 252Cf in 1982 were reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most 252Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from 252Cf were used for wireless data transmission.
251Cf has a very small calculated critical mass, high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element.
In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, from bombarding 249Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of 249Cf deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranic elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei.
Precautions
Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment.
Californium can enter the body by ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs or excreted, mainly in urine. Half of the californium deposited in the skeleton and liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone.
The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer.
Notes
References
Bibliography
External links
Californium at The Periodic Table of Videos (University of Nottingham)
NuclearWeaponArchive.org – Californium
Hazardous Substances Databank – Californium, Radioactive
Chemical elements
Chemical elements with double hexagonal close-packed structure
Actinides
Synthetic elements
Neutron sources
Ferromagnetic materials | Californium | [
"Physics",
"Chemistry"
] | 3,334 | [
"Chemical elements",
"Synthetic materials",
"Ferromagnetic materials",
"Synthetic elements",
"Materials",
"Radioactivity",
"Atoms",
"Matter"
] |
5,693 | https://en.wikipedia.org/wiki/Claude%20Shannon | Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist, cryptographer and inventor known as the "father of information theory" and as the "father of the Information Age". Shannon was the first to describe the Boolean gates (electronic circuits) that are essential to all digital electronic circuits, and was one of the founding fathers of artificial intelligence. Shannon is credited with laying the foundations of the Information Age.
At the University of Michigan, Shannon earned dual degrees, graduating in 1936 with a Bachelor of Science in both electrical engineering and mathematics. As a 21-year-old master's student in electrical engineering at the Massachusetts Institute of Technology (MIT), he wrote a thesis on switching circuit theory, demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship, thereby establishing the theory behind digital computing and digital circuits. The thesis has been claimed to be the most important master's thesis of all time; it has also been called the "birth certificate of the digital revolution" and won the 1939 Alfred Noble Prize. He then earned a PhD in mathematics from MIT in 1940; his doctoral thesis, focused on genetics, derived important results but went unpublished.
Shannon contributed to the field of cryptanalysis for the national defense of the United States during World War II, including fundamental work on codebreaking and secure telecommunications. He wrote a paper that is considered one of the foundational pieces of modern cryptography, and his work has been described as "a turning point" that "marked the closure of classical cryptography and the beginning of modern cryptography." Shannon's work is the foundation of secret-key cryptography, including the work of Horst Feistel, the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), and more. As a result, Shannon has been called the "founding father of modern cryptography".
His "A Mathematical Theory of Communication" paper in 1948 laid the foundations for the field of information theory, with his famous paper referred to as a "blueprint for the digital era" by electrical engineer Robert G. Gallager. It has been called "the Magna Carta of the Information Age" by Scientific American, along with his work being described as being at "the heart of today's digital information technology". The mathematician Solomon W. Golomb remarked on Shannon's influence on the digital age that "It's like saying how much influence the inventor of the alphabet has had on literature." Shannon's theory is widely used and has been fundamental to the success of many scientific endeavors, such as the invention of the compact disc, the development of the Internet, feasibility of mobile phones, the understanding of black holes, and more, and is at the intersection of numerous important fields. Shannon also formally introduced the term "bit".
Shannon made numerous contributions to the field of artificial intelligence, writing papers on programming a computer for chess, which have been immensely influential. His Theseus machine was the first electrical device to learn by trial and error, being one of the first examples of artificial intelligence. He also co-organized and participated in the Dartmouth workshop of 1956, considered the founding event of the field of artificial intelligence.
Roboticist Rodney Brooks declared that Shannon was the 20th century engineer who contributed the most to 21st century technologies, and Golomb described the intellectual achievement of Shannon as "one of the greatest of the twentieth century". His achievements are considered to be on par with those of Albert Einstein, Sir Isaac Newton, and Charles Darwin.
Biography
Childhood
The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1880–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth.
Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes, a radio-controlled model boat and a barbed-wire telegraph system to a friend's house a half-mile away. While growing up, he also worked as a messenger for the Western Union company.
Shannon's childhood hero was Thomas Edison, whom he later learned was a distant cousin. Both Shannon and Edison were descendants of John Ogden (1609–1682), a colonial leader and an ancestor of many distinguished people.
Logic circuits
In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two bachelor's degrees: one in electrical engineering and the other in mathematics.
In 1936, Shannon began his graduate studies in electrical engineering at the Massachusetts Institute of Technology (MIT), where he worked on Vannevar Bush's differential analyzer, an early analog computer composed of electromechanical parts that could solve differential equations. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits, with a paper from this thesis published in 1938. A revolutionary work for switching circuit theory, it diagrammed switching circuits that could implement the essential operators of Boolean algebra. He then proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were used at the time in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits, including a digital 4-bit full adder. His work differed significantly from the work of previous engineers such as Akira Nakashima, who still relied on the existing circuit theory of the time and took a grounded approach. Shannon's ideas were more abstract and relied on mathematics, thereby breaking new ground, and his approach dominates modern-day electrical engineering.
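The kind of construction described in the thesis, arithmetic built from switching elements that realize Boolean operators, can be illustrated with a short sketch. The Python below is a modern, hypothetical rendering of a 4-bit adder built from gates; the function names and wiring are illustrative and are not taken from Shannon's diagrams.

# Boolean gates expressed as switching functions (1 = closed circuit, 0 = open).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder built only from the Boolean gates above."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_4bit(x, y):
    """Chain four full adders to add two 4-bit numbers, least significant bit first."""
    carry, bits = 0, []
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        bits.append(bit)
    return sum(b << i for i, b in enumerate(bits)) + (carry << 4)

assert add_4bit(9, 5) == 14   # 1001 + 0101 = 1110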
Using electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner hailed Shannon's thesis as "possibly the most important, and also the most famous, master's thesis of the century." Herman Goldstine described it as "surely ... one of the most important master's theses ever written ... It helped to change digital circuit design from an art to a science." One of the reviewers of his work commented that "To the best of my knowledge, this is the first application of the methods of symbolic logic to so practical an engineering problem. From the point of view of originality I rate the paper as outstanding." Shannon's master's thesis won the 1939 Alfred Noble Prize.
Shannon received his PhD in mathematics from MIT in 1940. Vannevar Bush had suggested that Shannon should work on his dissertation at the Cold Spring Harbor Laboratory, in order to develop a mathematical formulation for Mendelian genetics. This research resulted in Shannon's PhD thesis, called An Algebra for Theoretical Genetics. However, the thesis went unpublished after Shannon lost interest, but it did contain important results. Notably, he was one of the first to apply an algebraic framework to study theoretical population genetics. In addition, Shannon devised a general expression for the distribution of several linked traits in a population after multiple generations under a random mating system, which was original at the time, with the new theorem unworked out by other population geneticists of the time.
In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and he also had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory.
Wartime research
Shannon had worked at Bell Labs for a few months in the summer of 1937, and returned there to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC).
Shannon is credited with the invention of signal-flow graphs, in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer.
For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy's cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley Park to break the cyphers used by the Kriegsmarine U-boats in the north Atlantic Ocean. He was also interested in the encipherment of speech and to this end spent time at Bell Labs. Shannon and Turing met at teatime in the cafeteria. Turing showed Shannon his 1936 paper that defined what is now known as the "universal Turing machine". This impressed Shannon, as many of its ideas complemented his own.
In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems." In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age.
Shannon's work on cryptography was even more closely related to his later publications on communication theory. At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography", dated September 1945. A declassified version of this paper was published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication. Shannon said that his wartime insights into communication theory and cryptography developed simultaneously, and that "they were so close together you couldn't separate them". In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results … in a forthcoming memorandum on the transmission of information."
While he was at Bell Labs, Shannon proved that the cryptographic one-time pad is unbreakable in his classified research that was later published in 1949. The same article also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret.
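A minimal sketch of the scheme whose perfect secrecy Shannon proved is shown below: the message is combined with a truly random key of the same length, used only once. The Python function names are illustrative; only the XOR construction itself reflects the one-time pad.

import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the message with a random key of the same length (the key must never be reused)."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))          # random key, as long as the plaintext
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message   # decryption is the same XOR operation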
Information theory
In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the message a sender wants to transmit. Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message. In so doing, he essentially invented the field of information theory.
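A small worked example of the entropy measure Shannon introduced is given below: it computes H = -sum p_i * log2(p_i) over the empirical symbol frequencies of a string. The function name and test strings are illustrative choices, assuming the usual base-2 convention so that the result is in bits.

from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the empirical symbol distribution of a message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(entropy_bits_per_symbol("aaaa"))        # 0.0 bits: no uncertainty at all
print(entropy_bits_per_symbol("abcdefgh"))    # 3.0 bits: 8 equally likely symbols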
The book The Mathematical Theory of Communication reprints Shannon's 1948 article and Warren Weaver's popularization of it, which is accessible to the non-specialist. Weaver pointed out that the word "information" in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one's freedom of choice when one selects a message. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise.
Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", showing upper and lower bounds of entropy on the statistics of English – giving a statistical foundation to language analysis. In addition, he proved that treating space as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition.
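The effect of counting the space as a 27th symbol can be reproduced with a small, hedged sketch: because the space is by far the most frequent character in ordinary English, the empirical per-symbol entropy of a letters-plus-space model comes out slightly lower than that of a letters-only model. The sample text below is a hypothetical stand-in for the corpora Shannon used.

```python
from collections import Counter
from math import log2

def unigram_entropy(text: str, alphabet: str) -> float:
    """Per-symbol entropy of the text after restricting it to the given alphabet."""
    symbols = [c for c in text.lower() if c in alphabet]
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50  # hypothetical corpus
letters = "abcdefghijklmnopqrstuvwxyz"

print(unigram_entropy(sample, letters))        # 26-letter model
print(unigram_entropy(sample, letters + " "))  # 27-symbol model, slightly lower here
```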
Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable cyphers must have the same requirements as the one-time pad. He is credited with the introduction of the sampling theorem, which he had derived as early as 1940, and which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later. He further wrote a paper in 1956 regarding coding for a noisy channel, which also became a classic paper in the field of information theory.
Claude Shannon's influence on the field has been immense: for example, in a 1973 collection of the key papers in information theory, he was author or coauthor of 12 of the 49 papers cited, while no one else appeared more than three times. Even beyond his original 1948 paper, he is still regarded as the most important post-1948 contributor to the theory.
In May 1951, Mervin Kelly received a request from the director of the CIA, General Walter Bedell Smith, regarding Shannon and the need for his services, as Shannon was regarded, on "the best authority", as the "most eminently qualified scientist in the particular field concerned". As a result of the request, Shannon became part of the CIA's Special Cryptologic Advisory Group, or SCAG.
Artificial Intelligence
In 1950, Shannon designed and built, with the help of his wife, a learning machine named Theseus. It consisted of a maze on a surface through which a mechanical mouse could move. Below the surface were sensors that followed the path of the mouse through the maze. After much trial and error, the device would learn the shortest path through the maze and direct the mouse along it. The pattern of the maze could be changed at will.
Mazin Gilbert stated that Theseus "inspired the whole field of AI. This random trial and error is the foundation of artificial intelligence."
Shannon wrote multiple influential papers on artificial intelligence, such as his 1950 paper titled "Programming a Computer for Playing Chess", and his 1953 paper titled "Computers and Automata". Alongside John McCarthy, he co-edited a book titled Automata Studies, which was published in 1956. The categories in the articles within the volume were influenced by Shannon's own subject headings in his 1953 paper. Shannon shared McCarthy's goal of creating a science of intelligent machines, but also held a broader view of viable approaches in automata studies, such as neural nets, Turing machines, cybernetic mechanisms, and symbolic processing by computer.
Shannon co-organized and participated in the Dartmouth workshop of 1956, alongside John McCarthy, Marvin Minsky and Nathaniel Rochester; the workshop is considered the founding event of the field of artificial intelligence.
Teaching at MIT
In 1956 Shannon joined the MIT faculty, holding an endowed chair. He worked in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978.
Later life
Shannon developed Alzheimer's disease and spent the last few years of his life in a nursing home; he died in 2001, survived by his wife, a son and daughter, and two granddaughters.
Hobbies and inventions
Outside of Shannon's academic pursuits, he was interested in juggling, unicycling, and chess. He also invented many devices, including a Roman numeral computer called THROBAC, and juggling machines. He built a device that could solve the Rubik's Cube puzzle.
Shannon also invented flame-throwing trumpets, rocket-powered frisbees, and plastic foam shoes for crossing a lake, which to an observer made it appear as if Shannon were walking on water.
Shannon designed the Minivac 601, a digital computer trainer to teach business people about how computers functioned. It was sold by the Scientific Development Corp starting in 1961.
He is also considered the co-inventor of the first wearable computer along with Edward O. Thorp. The device was used to improve the odds when playing roulette.
Personal life
Shannon married Norma Levor, a wealthy, Jewish, left-wing intellectual, in January 1940. The marriage ended in divorce after about a year. Levor later married Ben Barzman.
Shannon met his second wife, Mary Elizabeth Moore (Betty), when she was a numerical analyst at Bell Labs. They were married in 1949. Betty assisted Claude in building some of his most famous inventions. They had three children.
Shannon presented himself as apolitical and an atheist.
Tributes and legacy
There are six statues of Shannon sculpted by Eugene Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs. The statue in Gaylord is located in the Claude Shannon Memorial Park. After the breakup of the Bell System, the part of Bell Labs that remained with AT&T Corporation was named Shannon Labs in his honor.
In June 1954, Shannon was listed as one of the top 20 most important scientists in America by Fortune. In 2013, information theory was listed as one of the top 10 revolutionary scientific theories by Science News.
According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called "information theory") is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him." The cryptocurrency unit shannon (a synonym for gwei) is named after him.
Shannon is credited by many as single-handedly creating information theory and for laying the foundations for the Digital Age.
The artificial intelligence large language model family Claude (language model) was named in Shannon's honor.
A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017. They described Shannon as "the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton". Consultant and writer Tom Rutledge, writing for Boston Review, stated that "Of the computer pioneers who drove the mid-20th-century information technology revolution—an elite men’s club of scholar-engineers who also helped crack Nazi codes and pinpoint missile trajectories—Shannon may have been the most brilliant of them all." Electrical engineer Robert Gallager stated about Shannon that "He had this amazing clarity of vision. Einstein had it, too – this ability to take on a complicated problem and find the right way to look at it, so that things become very simple." In an obituary by Neil Sloane and Robert Calderbank, they stated that "Shannon must rank near the top of the list of major figures of twentieth century science". Due to his work in multiple fields, Shannon is also regarded as a polymath.
Historian James Gleick noted the importance of Shannon, stating that "Einstein looms large, and rightly so. But we’re not living in the relativity age, we’re living in the information age. It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten." Gleick further noted that "he created a whole field from scratch, from the brow of Zeus".
On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday.
The Bit Player, a feature film about Shannon directed by Mark Levinson, premiered at the World Science Festival in 2019. Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020.
The Mathematical Theory of Communication
Weaver's Contribution
Shannon's The Mathematical Theory of Communication begins with an interpretation of his own work by Warren Weaver. Although Shannon's entire work is about communication itself, Warren Weaver communicated his ideas in such a way that those not acclimated to complex theory and mathematics could comprehend the fundamental laws he put forth. The coupling of their unique communicational abilities and ideas generated the Shannon-Weaver model, although the mathematical and theoretical underpinnings emanate entirely from Shannon's work after Weaver's introduction. For the layman, Weaver's introduction better communicates The Mathematical Theory of Communication, but Shannon's subsequent logic, mathematics, and expressive precision were responsible for defining the problem itself.
Other work
Shannon's mouse
"Theseus", created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares. The maze configuration was flexible and it could be modified arbitrarily by rearranging movable partitions. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon's mouse appears to have been the first artificial learning device of its kind.
Shannon's estimate for the complexity of chess
In 1949 Shannon completed a paper (published in March 1950) estimating the game-tree complexity of chess at approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis).
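Shannon's estimate follows from simple arithmetic: roughly 30 legal moves per position gives about 10^3 continuations per pair of moves, and a typical game of about 40 move pairs therefore has on the order of (10^3)^40 = 10^120 possible move sequences. A quick check in Python, using illustrative figures only:

```python
from math import log10

branching_per_ply = 30   # rough number of legal moves in a typical position
move_pairs = 40          # rough length of a typical game in full moves

# One White move and one Black reply give ~30 * 30 continuations, which Shannon
# rounded to 10^3; a full game then has roughly (10^3)^40 = 10^120 move sequences.
game_tree_size = (branching_per_ply ** 2) ** move_pairs
print(f"about 10^{log10(game_tree_size):.0f} possible games")  # about 10^118
```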
Shannon's computer chess program
On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess". The paper was presented at the National Institute for Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 it was published in Philosophical Magazine, and is considered one of the first articles published on the topic of programming a computer for playing chess, and using a computer to solve the game. In 1950, Shannon wrote an article titled "A Chess-Playing Machine", which was published in Scientific American. Both papers have had immense influence and laid the foundations for future chess programs.
His process for having the computer decide on which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative value (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available.
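The scoring scheme lends itself to a compact sketch. The code below is an illustrative reconstruction, not Shannon's program: it scores a position as White's total minus Black's, using the piece values and the ½-point and 0.1-point terms described above, and searches a toy game tree with plain minimax. The Position encoding and feature counts are hypothetical placeholders for a real move generator.

```python
from dataclasses import dataclass, field

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

@dataclass
class Position:
    material: dict      # e.g. {"white": {"P": 8, ...}, "black": {...}}
    doubled: dict       # doubled-pawn counts per side
    backward: dict      # backward-pawn counts per side
    isolated: dict      # isolated-pawn counts per side
    mobility: dict      # number of legal moves per side
    children: list = field(default_factory=list)  # positions reachable in one move

def evaluate(pos: Position) -> float:
    """Shannon-style static score: White's total minus Black's."""
    def side(s: str) -> float:
        material = sum(PIECE_VALUES[p] * n for p, n in pos.material[s].items())
        structure = -0.5 * (pos.doubled[s] + pos.backward[s] + pos.isolated[s])
        return material + structure + 0.1 * pos.mobility[s]
    return side("white") - side("black")

def minimax(pos: Position, depth: int, white_to_move: bool) -> float:
    """White picks the child maximising the score, Black the one minimising it."""
    if depth == 0 or not pos.children:
        return evaluate(pos)
    scores = [minimax(child, depth - 1, not white_to_move) for child in pos.children]
    return max(scores) if white_to_move else min(scores)

# A single hypothetical position scored directly: White is a pawn up with one
# doubled pawn, Black has an isolated pawn, and the positional terms roughly cancel.
leaf = Position(
    material={"white": {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1},
              "black": {"P": 7, "N": 2, "B": 2, "R": 2, "Q": 1}},
    doubled={"white": 1, "black": 0}, backward={"white": 0, "black": 0},
    isolated={"white": 0, "black": 1}, mobility={"white": 30, "black": 28},
)
print(evaluate(leaf))   # 1.2
```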
Shannon's maxim
Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim".
Miscellaneous
Shannon also contributed to combinatorics and detection theory. His 1948 paper introduced many tools used in combinatorics. He did work on detection theory in 1944, with his work being one of the earliest expositions of the “matched filter” principle.
He was known as a successful investor who gave lectures on investing. A report from Barron's on August 11, 1986, detailed the recent performance of 1,026 mutual funds, and Shannon achieved a higher return than 1,025 of them. Comparing Shannon's portfolio from the late 1950s to 1986 with Warren Buffett's from 1965 to 1995, Shannon had a return of about 28%, compared to 27% for Buffett. One such method of Shannon's was labeled Shannon's demon: form a portfolio of equal parts cash and a stock, and rebalance regularly to take advantage of the stock's randomly jittering price movements. Shannon reportedly long thought of publishing about investing, but ultimately did not, despite giving multiple lectures. He was one of the first investors to download stock prices, and a snapshot of his portfolio in 1981 was valued at $582,717.50 (about $1.5 million in 2015 dollars), excluding another one of his stocks.
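The rebalancing idea can be illustrated with a toy simulation using hypothetical numbers, not Shannon's actual portfolio: a stock whose price simply doubles or halves each period has no upward drift in the median, yet a portfolio kept at a 50/50 cash/stock split by rebalancing every period grows, because rebalancing systematically sells after rises and buys after falls.

```python
import random

def shannons_demon(periods: int = 1000, seed: int = 0) -> float:
    """Toy simulation of a 50/50 cash/stock portfolio rebalanced every period.

    The stock doubles or halves with equal probability, so it merely jitters,
    yet the rebalanced portfolio harvests the volatility and grows.
    """
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(periods):
        move = 2.0 if rng.random() < 0.5 else 0.5
        # Half the portfolio rides the stock move, half stays in cash,
        # then the 50/50 split is restored for the next period.
        wealth = 0.5 * wealth + 0.5 * wealth * move
    return wealth

print(shannons_demon())   # grows over time even though the stock has no trend
```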
Commemorations
Shannon centenary
The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop Jerusalem and the IEEE Information Theory Society newsletter.
A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society.
Some of the activities included:
Bell Labs hosted the First Shannon Conference on the Future of the Information Age on April 28–29, 2016, in Murray Hill, New Jersey, to celebrate Claude Shannon and the continued impact of his legacy on society. The event included keynote speeches by global luminaries and visionaries of the information age exploring the impact of information theory on society and our digital future, informal recollections, and leading technical presentations on subsequent related work in other areas such as bioinformatics, economic systems, and social networks. There was also a student competition.
Bell Labs launched a Web exhibit on April 30, 2016, chronicling Shannon's hiring at Bell Labs (under an NDRC contract with the US Government), his subsequent work there from 1942 through 1957, and details of the Mathematics Department. The exhibit also displayed bios of colleagues and managers during his tenure, as well as original versions of some of the technical memoranda which subsequently became well known in published form.
The Republic of Macedonia issued a commemorative stamp. A USPS commemorative stamp is being proposed, with an active petition.
A documentary on Claude Shannon and on the impact of information theory, The Bit Player, was produced by Sergio Verdú and Mark Levinson.
A trans-Atlantic celebration of both George Boole's bicentenary and Claude Shannon's centenary, led by University College Cork and the Massachusetts Institute of Technology. A first event was a workshop in Cork, When Boole Meets Shannon, and the celebration continued with exhibits at the Boston Museum of Science and at the MIT Museum.
Many organizations around the world are holding observance events, including the Boston Museum of Science, the Heinz-Nixdorf Museum, the Institute for Advanced Study, Technische Universität Berlin, University of South Australia (UniSA), Unicamp (Universidade Estadual de Campinas), University of Toronto, Chinese University of Hong Kong, Cairo University, Telecom ParisTech, National Technical University of Athens, Indian Institute of Science, Indian Institute of Technology Bombay, Indian Institute of Technology Kanpur, Nanyang Technological University of Singapore, University of Maryland, University of Illinois at Chicago, École Polytechnique Federale de Lausanne, The Pennsylvania State University (Penn State), University of California Los Angeles, Massachusetts Institute of Technology, Chongqing University of Posts and Telecommunications, and University of Illinois at Urbana-Champaign.
A logo that appears on this page was crowdsourced on Crowdspring.
The Math Encounters presentation of May 4, 2016, at the National Museum of Mathematics in New York, titled Saving Face: Information Tricks for Love and Life, focused on Shannon's work in information theory. A video recording and other material are available.
Awards and honors list
The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1973.
Alfred Noble Prize of the American Society of Civil Engineers, 1939
Stuart Ballantine Medal of the Franklin Institute, 1955
Member of the American Academy of Arts and Sciences, 1957
Harvey Prize, the Technion of Haifa, Israel, 1972
National Medal of Science, 1966, presented by President Lyndon B. Johnson
Kyoto Prize, 1985
Morris Liebmann Memorial Prize of the Institute of Radio Engineers, 1949
United States National Academy of Sciences, 1956
Medal of Honor of the Institute of Electrical and Electronics Engineers, 1966
Golden Plate Award of the American Academy of Achievement, 1967
Royal Netherlands Academy of Arts and Sciences (KNAW), foreign member, 1975
Member of the American Philosophical Society, 1983
Member of the Royal Irish Academy, 1985
Basic Research Award, Eduard Rhein Foundation, Germany, 1991
Marconi Society Lifetime Achievement Award, 2000
Donner Professor of Science, MIT, 1958–1979
Selected works
Claude E. Shannon: A Symbolic Analysis of Relay and Switching Circuits, master's thesis, MIT, 1937.
Claude E. Shannon: "A Mathematical Theory of Communication", Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948 (abstract).
Claude E. Shannon and Warren Weaver: The Mathematical Theory of Communication. The University of Illinois Press, Urbana, Illinois, 1949.
Neil Sloane (ed.): Claude Shannon: Collected Works, IEEE Press, 1993.
See also
Entropy power inequality
Error-correcting codes with feedback
List of pioneers in computer science
Models of communication
n-gram
Noisy channel coding theorem
Nyquist–Shannon sampling theorem
One-time pad
Product cipher
Pulse-code modulation
Rate distortion theory
Sampling
Shannon capacity
Shannon entropy
Shannon index
Shannon multigraph
Shannon security
Shannon switching game
Shannon–Fano coding
Shannon–Hartley law
Shannon–Hartley theorem
Shannon's expansion
Shannon's source coding theorem
Shannon-Weaver model of communication
Whittaker–Shannon interpolation formula
References
Further reading
Eric W. Weisstein: "Shannon, Claude Elwood (1916–2001)", MathWorld biography, from Eric Weisstein's World of Scientific Biography.
Claude E. Shannon: Programming a Computer for Playing Chess, Philosophical Magazine, Ser.7, Vol. 41, No. 314, March 1950. (Available online under External links below)
David Levy: Computer Gamesmanship: Elements of Intelligent Game Design, Simon & Schuster, 1983.
Mindell, David A., "Automation's Finest Hour: Bell Labs and Automatic Control in World War II", IEEE Control Systems, December 1995, pp. 72–80.
Poundstone, William, Fortune's Formula, Hill & Wang, 2005,
Gleick, James, The Information: A History, A Theory, A Flood, Pantheon, 2011,
Jimmy Soni and Rob Goodman, A Mind at Play: How Claude Shannon Invented the Information Age, Simon and Schuster, 2017,
Nahin, Paul J., The Logician and the Engineer: How George Boole and Claude Shannon Create the Information Age, Princeton University Press, 2013,
Everett M. Rogers, "Claude Shannon's Cryptography Research During World War II and the Mathematical Theory of Communication", 1994 Proceedings of IEEE International Carnahan Conference on Security Technology, pp. 1–5, 1994.
External links
Guide to the Claude Elwood Shannon papers at the Library of Congress
Claude Elwood Shannon (1916–2001) at the Notices of the American Mathematical Society
1916 births
2001 deaths
20th-century American engineers
20th-century American essayists
20th-century American male writers
20th-century American mathematicians
20th-century atheists
21st-century atheists
American atheists
American electronics engineers
American geneticists
American information theorists
American male essayists
American male non-fiction writers
American people of German descent
American people of World War II
Burials at Mount Auburn Cemetery
Combinatorial game theorists
Communication theorists
Computer chess people
American control theorists
Deaths from Alzheimer's disease in the United States
Foreign members of the Royal Society
Harvey Prize winners
IEEE Medal of Honor recipients
Information theory
Institute for Advanced Study visiting scholars
Jugglers
Kyoto laureates in Basic Sciences
Massachusetts Institute of Technology alumni
Mathematicians from Michigan
Members of the American Philosophical Society
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the United States National Academy of Sciences
MIT School of Engineering faculty
Modern cryptographers
National Medal of Science laureates
Neurological disease deaths in Massachusetts
People from Petoskey, Michigan
People of the Cold War
20th-century cryptographers
American probability theorists
Members of the Royal Irish Academy
Scientists at Bell Labs
Scientists from Michigan
Unicyclists
University of Michigan alumni
Inventors
Inventors from Michigan | Claude Shannon | [
"Mathematics",
"Technology",
"Engineering"
] | 7,020 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,702 | https://en.wikipedia.org/wiki/Channel%20Tunnel | The Channel Tunnel, sometimes referred to by the portmanteau Chunnel, is an undersea railway tunnel, opened in 1994, that connects Folkestone (Kent, England) with Coquelles (Pas-de-Calais, France) beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland.
At its lowest point, it is below the sea bed and below sea level. At , it has the longest underwater section of any tunnel in the world and is the third-longest railway tunnel in the world. The speed limit for trains through the tunnel is . The tunnel is owned and operated by Getlink, formerly Groupe Eurotunnel.
The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries (equivalent to 21.3 million tonnes of freight), compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover.
Plans to build a cross-Channel tunnel were proposed as early as 1802, but British political and media criticism motivated by fears of compromising national security had disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, "in the hope of forcing the hand of the English Government". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion (equivalent to £ billion in ).
Since its opening, the tunnel has experienced occasional mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures.
History
Earlier proposals
In 1802, Albert Mathieu-Favier, a French mining engineer, proposed a tunnel under the English Channel, with illumination from oil lamps, horse-drawn coaches, and an artificial island positioned mid-Channel for changing horses. His design envisaged a bored two-level tunnel with the top tunnel used for transport and the bottom one for groundwater flows.
In 1839, Aimé Thomé de Gamond, a Frenchman, performed the first geological and hydrographical surveys on the Channel between Calais and Dover. He explored several schemes and, in 1856, presented a proposal to Napoleon III for a mined railway tunnel from Cap Gris-Nez to East Wear Point with a port/airshaft on the Varne sandbank at a cost of 170 million francs, or less than £7 million.
In 1865, a deputation led by George Ward Hunt proposed the idea of a tunnel to the Chancellor of the Exchequer of the day, William Ewart Gladstone.
In 1866, Henry Marc Brunel made a survey of the floor of the Strait of Dover. By his results, he proved that the floor was composed of chalk, like the adjoining cliffs, and thus a tunnel was feasible. For this survey, he invented the gravity corer, which is still used in geology.
Around 1866, William Low and Sir John Hawkshaw promoted tunnel ideas, but apart from preliminary geological studies, none were implemented.
An official Anglo-French protocol was established in 1876 for a cross-Channel railway tunnel.
In 1881, British railway entrepreneur Sir Edward Watkin and Alexandre Lavalley, a French Suez Canal contractor, were in the Anglo-French Submarine Railway Company that conducted exploratory work on both sides of the Channel. From June 1882 to March 1883, the British tunnel boring machine tunnelled, through chalk, a total of , while Lavalley used a similar machine to drill from Sangatte on the French side. However, the cross-Channel tunnel project was abandoned in 1883, despite this success, after fears raised by the British military that an underwater tunnel might be used as an invasion route. Nevertheless, in 1883, this TBM was used to bore a railway ventilation tunnel— in diameter and long—between Birkenhead and Liverpool, England, through sandstone under the Mersey River. These early works were encountered more than a century later during the project TransManche Link (TML).
A 1907 film, Tunnelling the English Channel by pioneer filmmaker Georges Méliès, depicts King Edward VII and President Armand Fallières dreaming of building a tunnel under the English Channel.
In 1919, during the Paris Peace Conference, British prime minister David Lloyd George repeatedly brought up the idea of a Channel tunnel as a way of reassuring France about British willingness to defend against another German attack. The French did not take the idea seriously, and nothing came of the proposal.
In the 1920s, Winston Churchill advocated for the Channel Tunnel, using that exact name in his essay "Should Strategists Veto The Tunnel?" It was published on 27 July 1924 in the Weekly Dispatch, and argued vehemently against the idea that the tunnel could be used by a Continental enemy in an invasion of Britain. Churchill expressed his enthusiasm for the project again in an article for the Daily Mail on 12 February 1936, "Why Not A Channel Tunnel?"
There was another proposal in 1929, but nothing came of this discussion and the idea was abandoned. Proponents estimated the construction cost at US$150 million. The engineers had addressed the concerns of both nations' military leaders by designing two sumps – one near the coast of each country – that could be flooded at will to block the tunnel, but this did not appease the military, or dispel concerns about hordes of tourists who would disrupt English life.
A British film from Gaumont Studios, The Tunnel (also known as TransAtlantic Tunnel), was released in 1935 as a science-fiction project concerning the creation of a transatlantic tunnel. It referred briefly to its protagonist, a Mr. McAllan, as having completed a British Channel tunnel successfully in 1940, five years into the future of the film's release.
Military fears continued during World War II. After the surrender of France, as Britain prepared for an expected German invasion, a Royal Navy officer in the Directorate of Miscellaneous Weapons Development calculated that Hitler could use slave labour to build two Channel tunnels in 18 months. The estimate caused rumours that Germany had already begun digging.
By 1955, defence arguments had become less relevant due to the dominance of air power, and both the British and French governments supported technical and geological surveys. In 1958 the 1881 workings were cleared in preparation for a £100,000 geological survey by the Channel Tunnel Study Group. 30% of the funding came from Channel Tunnel Co Ltd, the largest shareholder of which was the British Transport Commission, as successor to the South Eastern Railway. A detailed geological survey was carried out in 1964 and 1965.
Although the two countries agreed to build a tunnel in 1964, the phase 1 initial studies and signing of a second agreement to cover phase 2 took until 1973. The plan described a government-funded project to create two tunnels to accommodate car shuttle wagons on either side of a service tunnel. Construction started on both sides of the Channel in 1974.
On 20 January 1975, to the dismay of their French partners, the then-governing Labour Party in Britain cancelled the project due to uncertainty about the UK's membership of the European Economic Community, a doubling of cost estimates, and the general economic crisis at the time. By this time the British tunnel boring machine was ready and the Ministry of Transport had performed a experimental drive. (This short tunnel, named Adit A1, was eventually reused as the starting and access point for tunnelling operations from the British side, and remains an access point to the service tunnel.) The cancellation costs were estimated at £17 million. On the French side, a tunnel-boring machine had been installed underground in a stub tunnel. It lay there for 14 years until 1988, when it was sold, dismantled, refurbished and shipped to Turkey, where it was used to drive the Moda tunnel for the Istanbul Sewerage Scheme.
Initiation of project
In 1979, the "Mouse-hole Project" was suggested when the Conservatives came to power in Britain. The concept was a single-track rail tunnel with a service tunnel but without shuttle terminals. The British government took no interest in funding the project, but British Prime Minister Margaret Thatcher did not object to a privately funded project, although she said she assumed it would be for cars rather than trains. In 1981, Thatcher and French president François Mitterrand agreed to establish a working group to evaluate a privately funded project. In June 1982 the Franco-British study group favoured a twin tunnel to accommodate conventional trains and a vehicle shuttle service. In April 1985 promoters were invited to submit scheme proposals. Four submissions were shortlisted:
Channel Tunnel, a rail proposal based on the 1975 scheme presented by Channel Tunnel Group/France–Manche (CTG/F–M).
Eurobridge, a suspension bridge with a series of spans with a roadway in an enclosed tube.
Euroroute, a tunnel between artificial islands approached by bridges.
Channel Expressway, a set of large-diameter road tunnels with mid-Channel ventilation towers.
The cross-Channel ferry industry protested using the name "Flexilink". In 1975 there was no campaign protesting a fixed link, with one of the largest ferry operators (Sealink) being state-owned. Flexilink continued rousing opposition throughout 1986 and 1987. Public opinion strongly favoured a drive-through tunnel, but concerns about ventilation, accident management and driver mesmerisation resulted in the only shortlisted rail submission, CTG/F-M, being awarded the project in January 1986. Reasons given for the selection included that it caused least disruption to shipping in the Channel and least environmental disruption, was the best protected against terrorism, and was the most likely to attract sufficient private finance.
Arrangement
The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement.
The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies and outlined arbitration methods to be used in the event of disputes. It established the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC.
It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind.
Design and construction were done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte were done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff were done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks.
In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded.
Cost
The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now managed by the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity.
Private funding for such a complex infrastructure project was of unprecedented scale. Initial equity of £45 million was raised by CTG/F-M and increased by a £206 million private institutional placement; £770 million was raised in a public share offer that included press and television advertisements; and a syndicated bank loan and letter of credit arranged £5 billion. Privately financed, the total investment costs at 1985 prices were £2.6 billion. At the 1994 completion actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast.
Construction
Working from both the English and French sides of the Channel, eleven tunnel boring machines (TBMs) cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively.
Tunnelling commenced in 1988, and the tunnel began operating in 1994. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring.
Completion
A diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Phillippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC television commentator called Graham Fagg "the first man to cross the Channel by land for 8000 years".) The two tunnelling efforts met each other with an offset of only . A Paddington Bear soft toy was chosen by British tunnellers as the first item to pass through to their French counterparts when the two sides met.
The tunnel was officially opened, one year later than originally planned, by the French president François Mitterrand and Queen Elizabeth II, at a ceremony in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. After the ceremony, President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy.
The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007, the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to , the journey from London to Paris taking 2 hours 15 minutes, to Brussels 1 hour 51 minutes.
In 1994, the American Society of Civil Engineers elected the tunnel as one of the seven modern Wonders of the World. In 1995, the American magazine Popular Mechanics published the results.
Opening dates
The opening was phased by service: the IGC, advised by the Channel Tunnel Safety Authority, gave permission for the various services to begin on several dates over the period 1994–1995, with actual start-up dates following a few days later.
Engineering
Site investigation undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of has variable and difficult geology. The tunnel consists of three bores: two diameter rail tunnels, apart, in length with a diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts.
The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff and French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), and the English side six. The service tunnel uses Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue.
Between the portals at Beussingue and Castle Hill the tunnel is long, with under land on the French side and on the UK side, and under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but with the longest under-sea section. The average depth is below the seabed. On the UK side, of the expected of spoil approximately was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming of land. This land was then made into the Samphire Hoe Country Park. Environmental assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised concerning a high-speed link to London.
Geology
Successful tunnelling required a sound understanding of topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome. It has:
Continuous chalk in the cliffs on either side of the Channel, with no major faulting, as observed by Verstegan in 1605.
Four geological strata, marine sediments laid down 90–100 million years ago; pervious Upper and Middle Chalk above slightly pervious Lower Chalk and finally impermeable Gault Clay. There is a sandy stratum of Glauconitic marl (tortia), between the chalk marl and the gault clay.
A layer of chalk marl (French: craie bleue) in the lower third of the lower chalk appeared to present the best tunnelling medium. The chalk has a clay content of 30–40% providing impermeability to groundwater yet relatively easy excavation with strength allowing minimal support. Ideally, the tunnel would be bored in the bottom of the chalk marl, allowing water inflow from fractures and joints to be minimised, but above the gault clay that would increase stress on the tunnel lining and swell and soften when wet.
On the English side, the stratum dip is less than 5°; on the French side, this increases to 20°. Jointing and faulting are present on both sides. On the English side, only minor faults of displacement less than exist; on the French side, displacements of up to are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remolded clay. The increased dip and faulting restricted the selection of routes on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides.
The Quaternary undersea valley Fosse Dangeard, and Castle Hill landslip at the English portal, caused concerns. Identified by the 1964–1965 geophysical survey, the Fosse Dangeard is an infilled valley system extending below the seabed, south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, and so the tunnel route was made as far north and deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipping blocks of lower chalk, glauconitic marl and gault debris. Thus the area was stabilised by buttressing and inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing was done in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing.
Site investigation
Marine soundings and samplings were made by Thomé de Gamond in 1833–67, establishing the seabed depth at a maximum of and the continuity of geological strata (layers). Surveying continued for many years, with 166 marine and 70 land deep boreholes being drilled and more than 4,000 line kilometres of the marine geophysical survey completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988.
The surveying in 1958–1959 catered for immersed tube and bridge designs, as well as a bored tunnel, and thus a wide area was investigated. At that time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–1965 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour.
Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–1973 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed of diameter tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as the 1975 attempt. In the 1986–1987 survey, previous findings were reinforced, and the characteristics of the gault clay and the tunnelling medium (chalk marl that made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed.
Tunnelling
Tunnelling was a major engineering challenge; the only precedent was the undersea Seikan Tunnel in Japan, which opened in 1988. A serious health and safety risk with building tunnels under water is major water inflow due to the high hydrostatic pressure from the sea above, under weak ground conditions. The tunnel also had the challenge of timescale: being privately funded, an early financial return was paramount.
The objective was to construct two rail tunnels, apart, in length; a service tunnel between the two main ones; pairs of -diameter cross-passages linking the rail tunnels to the service tunnel at spacing; piston relief ducts in diameter connecting the rail tunnels apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay.
Precast segmental linings in the main tunnel boring machine (TBM) drives were used, but two different solutions were used. On the French side, neoprene and grout sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was for speed, so bolting of cast-iron lining segments was only done in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; in the French side, five segments plus a key. On the French side, a diameter deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was below the top of Shakespeare Cliff, the New Austrian Tunnelling method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff—the same place as the marine tunnels—not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition of placing the chalk in an enclosed lagoon, to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary, which used Scottish granite aggregate delivered by ship from the Foster Yeoman coastal super quarry at Glensanda in Loch Linnhe on the west coast of Scotland.
On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes were used. The TBMs were used in the closed mode for the first , but then operated as open, boring through the chalk marl stratum. This minimised the impact to the ground, allowed high water pressures to be withstood and also alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one mainland machine (the short land drives of allowed one TBM to complete the first drive then reverse direction and complete the other), and two service tunnel machines.
On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel. These buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. A gauge railway was used on the English side during construction.
In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine.
After the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 "Virginie") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words "hommage aux bâtisseurs du tunnel", meaning "tribute to the builders of the tunnel".
Tunnel boring machines
The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland.
Railway design
Loading gauge
The loading gauge height is .
Communications
There are three communication systems:
Concession radio – for the tunnel operator's personnel and vehicles within the concession area (terminals, tunnels, coastal shafts)
Track-to-train radio – secure speech and data between trains and the railway control centre
Shuttle internal radio – communication among shuttle crew, and to passengers over car radios
Power supply
Power is delivered to the locomotives via an overhead line at with a normal overhead clearance of . All tunnel services run on electricity, shared equally from English and French sources. There are two substations fed at 400 kV at each terminal, but in an emergency, the tunnel's lighting (about 20,000 light fittings) and the plant can be powered solely from either England or France.
The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use it. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz AC. The railways on "classic" lines in Belgium are also electrified by overhead wires, but at 3,000 V DC.
Signalling
A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines on either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is .
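A heavily simplified sketch of the supervision principle, with made-up thresholds rather than the real TVM430 parameters: the driver is shown a target speed in the cab, and the protection system escalates from a warning to a brake application as the measured speed exceeds that target.

```python
def supervise(displayed_limit_kmh: float, measured_speed_kmh: float,
              margin_kmh: float = 5.0) -> str:
    """Illustrative cab-signalling supervision: warn just above the displayed
    target speed, apply the brakes beyond the tolerance margin.
    (Thresholds are hypothetical, not the actual TVM430 values.)"""
    if measured_speed_kmh <= displayed_limit_kmh:
        return "normal"
    if measured_speed_kmh <= displayed_limit_kmh + margin_kmh:
        return "overspeed warning"
    return "brake intervention"

print(supervise(160, 158))   # normal
print(supervise(160, 163))   # overspeed warning
print(supervise(160, 170))   # brake intervention
```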
Signalling in the tunnel is coordinated from a control centre at the Folkestone terminal. A backup facility at the Calais terminal is staffed at all times and can take over all operations in the event of a breakdown or emergency.
Track system
Conventional ballasted tunnel track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen because it was reliable and also cost-effective. The type of track used is known as Low Vibration Track (LVT), which is held in place by gravity and friction. Reinforced concrete blocks of support the rails every and are held by thick closed-cell polymer foam pads placed at the bottom of rubber boots. The latter separates the blocks' mass movements from the concrete. The track provides extra overhead clearance for larger trains. UIC60 (60 kg/m) rails of 900A grade rest on rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site.
Maintenance activities are less than projected. The rails had initially been ground on a yearly basis or after approximately 100 MGT of traffic. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments, and providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built, at long, high and wide. The English crossover is from Shakespeare Cliff, and the French crossover is from Sangatte.
Ventilation, cooling and drainage
The ventilation system maintains greater air pressure in the service tunnel than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. There is a normal ventilating system and a supplementary system. Twin fans are mounted in vertical shafts where digging for the tunnel began, on both sides of the channel: two in Sangatte, France, and two more at Shakespeare Cliff, UK. The normal ventilating system is connected direct to the service tunnel and provides fresh air through the cross-passages into the running tunnels, where it is dispersed by the piston effect of the train and shuttle movements. Only one fan on each side is ever running, the second being available as a backup. The supplementary ventilating system is a separate emergency system and can be used to control smoke or supply emergency air within the tunnels. On both systems, the fans are normally run on supply mode, pulling in air from the outside, but they can also be used in extraction mode to remove smoke or fumes from the tunnels.
Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on.
During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to . As well as making the trains "unbearably warm" for passengers, this also presented a risk of equipment failure and track distortion. To cool the tunnel to below , engineers installed of diameter cooling pipes carrying of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a hydrochlorofluorocarbon (HCFC) refrigerant gas.
Due to R22's ozone depletion potential and high global warming potential, its use is being phased out in developed countries. Since 1 January 2015, it has been illegal in Europe to use HCFCs to service air-conditioning equipment; broken equipment that used HCFCs must be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2,600 kW to 14,000 kW) chillers were installed—two in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at , and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink.
Rolling stock
Rolling stock used previously
Operators
LeShuttle
Getlink operates the LeShuttle, a vehicle shuttle service, through the tunnel.
Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets.
Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets.
Initially 38 LeShuttle locomotives were commissioned, with one at each end of a shuttle train.
Freight locomotives
Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel.
International passenger
Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (NMBS/SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and has been operating through the Channel Tunnel ever since alongside the current Class 373.
Germany (DB) tried from about 2005 to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately terminated.
In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used.
Between October and November 2023, three more companies expressed interest in potentially running services between London and various European cities:
"Evolyn", a start-up company based in Spain, announced that it intended to run services between London and Paris by 2026. The company stated that orders had been placed for the newly developed "Avelia" high-speed trains built by Alstom for international operations. Alstom, however, noted that no firm order for any rolling stock had been placed, but that there were ongoing discussions with the start-up over potential procurements.
Virgin Group founder Richard Branson had reportedly hired the former managing director of Virgin Trains to initiate infrastructure talks on a potential international service to rival Eurostar, running services between London, Paris, Brussels and Amsterdam.
Dutch start-up "Heuro" announced plans to start running services from Amsterdam to both Paris and London. Heuro is said to have officially applied for timetable slots beginning in December 2027 and is reportedly raising investment funds in Europe and the USA.
Service locomotives
Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031.
Operation
The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994 (M = million).
Usage and services
Transport services offered by the tunnel are as follows:
Eurotunnel Le Shuttle roll-on roll-off shuttle service for road vehicles and their drivers and passengers,
Eurostar passenger trains,
through freight trains.
Both the freight and passenger traffic forecasts made before the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated.
With the European Union's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. There have been a number of operators interested in running trains through the tunnel and along High Speed 1 to London. In June 2013, after several years, Deutsche Bahn obtained a license to operate Frankfurt – London trains, not expected to run before 2016 because of delivery delays of the custom-made trains.
Plans for the service to Frankfurt seem to have been shelved in 2018.
Passenger traffic volumes
Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, decreased to 14.9 million in 2003, and have increased substantially since then.
At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains during the first year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then decreasing to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase.
Freight traffic volumes
Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings increased over the period, indicating the substitutability of the tunnel by sea crossings. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions but Eurotunnel's 1990 and 1994 predictions were overestimates.
For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3 m tonnes. Through freight volumes peaked in 1998 at 3.1 m tonnes. This fell back to 1.21 m tonnes in 2007, increasing slightly to 1.24 m tonnes in 2008. Together with that carried on freight shuttles, freight volumes have grown since opening, with 6.4 m tonnes carried in 1995, 18.4 m tonnes recorded in 2003 and 19.6 m tonnes in 2007. Numbers fell back in the wake of the 2008 fire.
Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to the cessation of UK-French government subsidies of £52 million per annum to cover the tunnel "Minimum User Charge" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November.
Economic performance
Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 their price had risen to £11.00. Delays and cost overruns resulted in the price falling; during demonstration runs in October 1994, it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. There was a financial restructuring of Eurotunnel in mid-1998, reducing debt and financial charges. Despite this, The Economist reported in 1998 that to break even Eurotunnel would have to increase fares, traffic and market share for sustainability. A cost-benefit analysis of the tunnel indicated that there were few effects on the wider economy and few developments associated with the project and that the British economy would have been better off if it had not been constructed.
Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones within which the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes, these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities.
In 1999 Eurostar posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4% from 2012, to £54 million.
Security
There is a need for full passport controls, as the tunnel acts as a border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding by officials of the departing country and by officials of the destination country. These control points are only at the main Eurostar stations: French officials operate at London St Pancras, while British officials operate at Lille-Europe, Brussels-South, Paris-Gare du Nord, Rotterdam CS, and Amsterdam CS. During the winter ski season, they also operate at Gare de Bourg-Saint-Maurice and Moûtiers-Salins-Brides-les-Bains station. Eurostar passengers pass through airport-style security screening. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains.
When Eurostar trains ran south of Paris such as from Marseille, there were no passport and security checks before departure, and those trains had to stop in Lille at least 30 minutes to allow all passengers to be checked. No checks are performed on board. There have been plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason to cancel them was the need for a stop in Lille. Direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and the intergovernmental agreement, a direct service from the two Dutch cities to London started on 30 April 2020.
Terminals
The terminals' sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system to allow travellers to go onto the motorway at the destination country immediately after leaving the shuttle.
To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are of the main-line track, 45 turnouts and eight platforms. At Calais there are of track and 44 turnouts. At the terminals, the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard.
Regional effect
A 1996 report from the European Commission predicted that Kent and Nord-Pas de Calais would have increased traffic volumes due to the general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. Kent's regional development would benefit from the tunnel, but being so close to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be dependent totally on London's expansion. Nord-Pas-de-Calais enjoys a strong internal symbolic effect of the Tunnel which results in significant gains in manufacturing.
The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions. The image of a region being connected to European high-speed transport and active political response is more important for regional economic development. Some small-medium enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand the profile of their business with positive effects, such as The New Inn at Etchinghill which was able to commercially exploit its unique selling point as being 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared to general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be distributed equally throughout the region. The overall environmental effect is almost certainly negative.
Since the opening of the tunnel, small positive effects on the wider economy have been felt, but it is difficult to identify major economic successes attributed directly to the tunnel. The Eurotunnel does operate profitably, offering an alternative transportation mode unaffected by poor weather. High costs of construction did delay profitability, however, and companies involved in the tunnel's construction and operation early in operation relied on government aid to deal with the accumulated debt.
Illegal immigration
Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and by 1999, the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented.
Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England.
Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, removing border checks in normal times between most EU member states; Britain and Ireland form their own separate Common Travel Area immigration zone.
Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train. Trucks are loaded onto freight trains. In a few instances spread across several attempts, migrants stowed away in a liquid chocolate tanker and managed to survive. Although the facilities were fenced, total security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem.
In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (as many as 550 in a December 2001 incident) stormed the fences and attempted to enter en masse.
Other migrants seeking permanent UK settlement use the Eurostar passenger train. They may purport to be visitors (either seeking issue of a required visit visa, or denying and falsifying their true intentions to obtain the maximum six-months-in-a-year at-port stamp); purport to be someone else whose documents they hold; or use forged or counterfeit passports. Such breaches result in refusal of permission to enter the UK, effected by Border Force after such a person's identity is fully established, assuming they persist in their application to enter the UK.
Increased security measures around the tunnel have resulted in much of the migration moving to small boats instead.
Diplomatic efforts
Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against it. As of 2006 the United Kingdom blamed France for allowing Sangatte to open, and France blamed both the UK for its then lax asylum rules/law, and the EU for not having a uniform immigration policy. The problem's cause célèbre nature even resulted in journalists being detained as they followed migrants onto railway property.
In 2002, the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security. The French government built a double fence, at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants.
On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged sections of track by burning car tires, cancelling all trains and creating a backlog of vehicles. Hundreds seeking to reach Britain attempted to stow away inside and underneath transport trucks destined for the UK. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais.
Illegal attempts to cross and deaths
In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day.
On 6 July 2015, a migrant died while attempting to climb onto a freight train while trying to reach Britain from the French side of the Channel. The previous month an Eritrean man was killed under similar circumstances.
During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. The body of a Sudanese migrant was subsequently found inside the tunnel. On 4 August 2015, another Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about through the tunnel.
Mechanical incidents
Fires
There have been three fires in the tunnel, all on the heavy goods vehicle (HGV) shuttles, that were significant enough to close the tunnel, as well as other minor incidents.
On 9 December 1994, during an "invitation only" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal, and was put out about 40 minutes later with no passenger injuries.
On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was hurt seriously. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been due to arson of a heavy goods vehicle. It is estimated that the heart of the fire reached , with the tunnel severely damaged over , with some affected to some extent. Full operation recommenced six months after the fire.
On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire.
On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million.
On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire.
On 17 January 2015, both tunnels were closed after a lorry fire that filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. 38 passengers and four members of Eurotunnel staff were evacuated into the service tunnel and transported to France in special STTS road vehicles. They were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal.
Train failures
On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards.
On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle.
On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park.
The occasion was the first time that a Eurostar train was evacuated inside the tunnel; the failing of four at once was described as "unprecedented". The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, had called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations.
On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had 236 passengers on board and was towed to Ashford; other trains that had not yet reached the tunnel were turned back.
Safety
The Channel Tunnel Safety Authority is responsible for some aspects of safety regulation in the tunnel; it reports to the Intergovernmental Commission (IGC).
The service tunnel is used for access to technical equipment in cross-passages and equipment rooms, to provide fresh-air ventilation and for emergency evacuation. The Service Tunnel Transport System (STTS) allows fast access to all areas of the tunnel. The service vehicles are rubber-tired with a buried wire guidance system.
The 24 STTS vehicles are used mainly for maintenance but also for firefighting and emergencies. "Pods" with different purposes, up to a payload of , are inserted into the side of the vehicles. The vehicles cannot turn around within the tunnel and are driven from either end. The maximum speed is when the steering is locked. A fleet of 15 Light Service Tunnel Vehicles (LADOGS) was introduced to supplement the STTSs. The LADOGS has a short wheelbase with a turning circle, allowing two-point turns within the service tunnel. Steering cannot be locked like the STTS vehicles, and maximum speed is . Pods up to can be loaded onto the rear of the vehicles. Drivers in the tunnel sit on the right, and the vehicles drive on the left. Owing to the risk of French personnel driving on their native right side of the road, sensors in the vehicles alert the driver if the vehicle strays to the right side.
The three tunnels contain of air that needs to be conditioned for comfort and safety. Air is supplied from ventilation buildings at Shakespeare Cliff and Sangatte, with each building capable of providing 100% standby capacity. Supplementary ventilation also exists on either side of the tunnel. In the event of a fire, ventilation is used to keep smoke out of the service tunnel and move smoke in one direction in the main tunnel to give passengers clean air. The tunnel was the first main-line railway tunnel to have special cooling equipment. Heat is generated from traction equipment and drag. The design limit was set at , using a mechanical cooling system with refrigeration plants on both sides that run chilled water circulating in pipes within the tunnel.
Trains travelling at high speed create piston effect pressure changes that can affect passenger comfort, ventilation systems, tunnel doors, fans and the structure of the trains, and which drag on the trains. Piston relief ducts of diameter were chosen to solve the problem, with 4 ducts per kilometre to give close to optimum results. However, this design led to extreme lateral forces on the trains, so a reduction in train speed was required and restrictors were installed in the ducts.
The safety issue of a possible fire on a passenger-vehicle shuttle garnered much attention, with Eurotunnel noting that fire was the risk attracting the most attention in a 1994 safety case for three reasons: the opposition of ferry companies to passengers being allowed to remain with their cars; Home Office statistics indicating that car fires had doubled in ten years; and the long length of the tunnel. Eurotunnel commissioned the UK Fire Research Station—now part of the Building Research Establishment—to give reports of vehicle fires, and liaised with Kent Fire Brigade to gather vehicle fire statistics over one year. Fire tests took place at the French Mines Research Establishment with a mock wagon used to investigate how cars burned. The wagon door systems are designed to withstand fire inside the wagon for 30 minutes, longer than the transit time of 27 minutes. Wagon air conditioning units help to purge dangerous fumes from inside the wagon before travel. Each wagon has a fire detection and extinguishing system, with sensing of ions or ultraviolet radiation, smoke and gases that can trigger halon gas to quench a fire.
Since the HGV wagons are not covered, fire sensors are located on the loading wagon and in the tunnel. A water main in the service tunnel provides water to the main tunnels at intervals. The ventilation system can control smoke movement. Special arrival sidings accept a train that is on fire, as the train is not allowed to stop whilst on fire in the tunnel unless continuing its journey would lead to a worse outcome. Two STTS (Service Tunnel Transportation System) vehicles with firefighting pods are on duty at all times, with a maximum delay of 10 minutes before they reach a burning train.
Eurotunnel has banned a wide range of hazardous goods from travelling in the tunnel.
Unusual traffic
Trains
In 1999, the Kosovo Train for Life passed through the tunnel en route to Pristina, in Kosovo.
Other
In 2009, former F1 racing champion John Surtees drove a Ginetta G50 EV electric sports car prototype from England to France, using the service tunnel, as part of a charity event. He was required to keep to the speed limit. To celebrate the 2014 Tour de France's transfer from its opening stages in Britain to France in July of that year, Chris Froome of Team Sky rode a bicycle through the service tunnel, becoming the first solo rider to do so. The crossing took under an hour, reaching speeds of —faster than most cross-channel ferries.
Mobile network coverage
Since 2012, French operators Bouygues Telecom, Orange and SFR have covered Running Tunnel South, the tunnel bore normally used for travel from France to Britain.
In January 2014, UK operators EE and Vodafone signed ten-year contracts with Eurotunnel for Running Tunnel North. The agreements will enable both operators' subscribers to use 2G and 3G services. Both EE and Vodafone planned to offer LTE services on the route; EE said it expected to cover the route with LTE connectivity by the summer of 2014. EE and Vodafone will offer Channel Tunnel network coverage for travellers from the UK to France. Eurotunnel said it also held talks with Three UK but had yet to reach an agreement with the operator.
In May 2014, Eurotunnel announced that they had installed equipment from Alcatel-Lucent to cover Running Tunnel North and simultaneously to provide mobile service (GSM 900/1800 MHz and UMTS 2100 MHz) by EE, O2 and Vodafone. The service of EE and Vodafone commenced on the same date as the announcement. O2 service was expected to be available soon afterwards.
In November 2014, EE announced that it had switched on LTE in September 2014. O2 turned on 2G, 3G and 4G services in November 2014, whilst Vodafone's 4G was due to go live later.
Other (non-transport) services
The tunnel also houses the 1,000 MW ElecLink interconnector to transfer power between the British and French electricity networks. During the night of 31 August/1 September 2021, the 51 km-long 320 kV DC cable was switched into service for the first time.
See also
British Rail Class 373
France–United Kingdom border
Japan–Korea Undersea Tunnel
List of transport megaprojects
Marmaray Tunnel
Proposed British Isles fixed sea link connections
Samphire Hoe
Strait of Gibraltar crossing
References
Sources
Further reading
Article on a post-WW1 plan for a tunnel that was scrapped by the Great Depression. A total cost figure of US$150 million was given in 1929
Autobiography of Sir John Stokes regarding 1882 deliberations
External links
UK website at eurotunnel.com
French website at eurotunnel.com/fr
Tribute website at chunnel.com
Channel Tunnel on OpenStreetMap wiki
Tunnels completed in 1994
Coastal construction
Eurostar
France–United Kingdom border crossings
Cross-border railway lines in France
Railway tunnels in England
Railway tunnels in France
Rail transport in France
Rail transport in England
Transport in Kent
Transport in Folkestone and Hythe
Undersea tunnels in Europe
International tunnels
International railway lines
Transport in Pas-de-Calais
Standard gauge railways in England
Standard gauge railways in France
Railway lines opened in 1994
Buildings and structures in Pas-de-Calais
1994 establishments in France
1994 establishments in England
25 kV AC railway electrification
English Channel
Railway tunnels | Channel Tunnel | [
"Engineering"
] | 13,403 | [
"Construction",
"Coastal construction"
] |
5,705 | https://en.wikipedia.org/wiki/Continuum%20hypothesis | In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states: there is no set whose cardinality is strictly between that of the integers and that of the real numbers.
Or equivalently: any set of real numbers is either countable or has the same cardinality as the set of all real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: $2^{\aleph_0} = \aleph_1$, or even shorter with beth numbers: $\beth_1 = \aleph_1$.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term the continuum for the real numbers.
History
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated.
Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.
Cardinality of infinite sets
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.
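To make the claim concrete, one explicit injection of the rationals into the natural numbers can be written down; the particular map below is an illustrative choice and not one drawn from this article.

```latex
% One explicit injection of Q into N, relying on unique prime factorisation.
% Write each nonzero rational uniquely as s*(a/b), with s in {+1,-1} and
% a, b coprime positive integers, and send 0 to 1; otherwise set
\[
  f\!\Bigl(s\,\tfrac{a}{b}\Bigr) \;=\; 2^{a}\,3^{b}\,5^{(1-s)/2}.
\]
% Distinct rationals receive distinct prime factorisations, so f is injective.
% Since the integers inject into the rationals as well, the
% Cantor--Schroeder--Bernstein theorem gives |Q| = |Z| = aleph_0.
```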
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers, i.e. $|\mathbb{R}| = 2^{\aleph_0}$, the continuum hypothesis can be restated as follows: there is no set $S$ with $\aleph_0 < |S| < 2^{\aleph_0}$.
Assuming the axiom of choice, there is a unique smallest cardinal number greater than $\aleph_0$, namely $\aleph_1$, and the continuum hypothesis is in turn equivalent to the equality $2^{\aleph_0} = \aleph_1$.
Independence from ZFC
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.
Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if $\kappa$ is a cardinal of uncountable cofinality, then there is a forcing extension in which $2^{\aleph_0} = \kappa$. However, per König's theorem, it is not consistent to assume $2^{\aleph_0}$ is $\aleph_\omega$, or more generally any cardinal with cofinality $\aleph_0$.
The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. The existence of some statements independent of ZFC had, however, already been known more than two decades prior: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, which were published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC, that is also independent of it. The latter independence result indeed holds for many theories.
Arguments for and against the continuum hypothesis
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a Platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively clear" but others have disagreed.
A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the $(*)$-axiom, or "Star axiom". The Star axiom would imply that $2^{\aleph_0}$ is $\aleph_2$, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.
Solomon Feferman argued that CH is not a definite mathematical problem. He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition is mathematically "definite" if the semi-intuitionistic theory can prove . He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".
Generalized continuum hypothesis
The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set $S$ and that of the power set $\mathcal{P}(S)$ of $S$, then it has the same cardinality as either $S$ or $\mathcal{P}(S)$. That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^{\lambda}$. GCH is equivalent to:
$2^{\aleph_{\alpha}} = \aleph_{\alpha+1}$ for every ordinal $\alpha$ (occasionally called Cantor's aleph hypothesis).
The beth numbers provide an alternative notation for this condition: $\aleph_{\alpha} = \beth_{\alpha}$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore.
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than which is smaller than its own Hartogs number—this uses the equality ; for the full proof, see Gillman.
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals $\aleph_{\alpha}$ to fail to satisfy $2^{\aleph_{\alpha}} = \aleph_{\alpha+1}$. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that $2^{\kappa} > \kappa^{+}$ holds for every infinite cardinal $\kappa$. Later Woodin extended this by showing the consistency of $2^{\kappa} = \kappa^{++}$ for every $\kappa$. Carmi Merimovich showed that, for each $n \geq 1$, it is consistent with ZFC that for each infinite cardinal $\kappa$, $2^{\kappa}$ is the $n$th successor of $\kappa$ (assuming the consistency of some large cardinal axioms). On the other hand, László Patai proved that if $\gamma$ is an ordinal and for each infinite cardinal $\kappa$, $2^{\kappa}$ is the $\gamma$th successor of $\kappa$, then $\gamma$ is finite.
For any infinite sets $A$ and $B$, if there is an injection from $A$ to $B$ then there is an injection from subsets of $A$ to subsets of $B$. Thus for any infinite cardinals $A$ and $B$, $A < B \to 2^{A} \leq 2^{B}$. If $A$ and $B$ are finite, the stronger inequality $A < B \to 2^{A} < 2^{B}$ holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
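The first step of that argument, that an injection between sets induces an injection between their power sets, can be spelled out; the following sketch is added for clarity and is not text taken from the article.

```latex
% Given an injection f : A -> B, map each subset of A to its image under f:
\[
  F \colon \mathcal{P}(A) \to \mathcal{P}(B), \qquad
  F(S) \;=\; f[S] \;=\; \{\, f(x) : x \in S \,\}.
\]
% If F(S) = F(T) and x is in S, then f(x) lies in f[T], so f(x) = f(y) for some y in T;
% injectivity of f forces x = y, hence S is contained in T. By symmetry T is contained
% in S, so S = T and F is injective, which yields 2^{|A|} <= 2^{|B|}.
```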
Implications of GCH for cardinal exponentiation
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation in all cases. GCH implies that for ordinals $\alpha$ and $\beta$:
$\aleph_{\alpha}^{\aleph_{\beta}} = \aleph_{\beta+1}$ when $\alpha \leq \beta+1$;
$\aleph_{\alpha}^{\aleph_{\beta}} = \aleph_{\alpha}$ when $\beta+1 < \alpha$ and $\aleph_{\beta} < \mathrm{cf}(\aleph_{\alpha})$, where cf is the cofinality operation; and
$\aleph_{\alpha}^{\aleph_{\beta}} = \aleph_{\alpha+1}$ when $\beta+1 < \alpha$ and $\aleph_{\beta} \geq \mathrm{cf}(\aleph_{\alpha})$.
The first equality (when $\alpha \leq \beta+1$) follows from:
$\aleph_{\alpha}^{\aleph_{\beta}} \leq \aleph_{\beta+1}^{\aleph_{\beta}} \leq (2^{\aleph_{\beta}})^{\aleph_{\beta}} = 2^{\aleph_{\beta}\cdot\aleph_{\beta}} = 2^{\aleph_{\beta}} = \aleph_{\beta+1}$,
while:
$\aleph_{\beta+1} = 2^{\aleph_{\beta}} \leq \aleph_{\alpha}^{\aleph_{\beta}}$.
The third equality (when $\beta+1 < \alpha$ and $\aleph_{\beta} \geq \mathrm{cf}(\aleph_{\alpha})$) follows from:
$\aleph_{\alpha}^{\aleph_{\beta}} \geq \aleph_{\alpha}^{\mathrm{cf}(\aleph_{\alpha})} > \aleph_{\alpha}$
by König's theorem, while:
$\aleph_{\alpha}^{\aleph_{\beta}} \leq (2^{\aleph_{\alpha}})^{\aleph_{\beta}} = 2^{\aleph_{\alpha}\cdot\aleph_{\beta}} = 2^{\aleph_{\alpha}} = \aleph_{\alpha+1}$.
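The second equality is stated above without a derivation; the sketch below gives one standard argument for it under GCH, added here for completeness rather than taken from the article.

```latex
% Case beta+1 < alpha and aleph_beta < cf(aleph_alpha): every function from
% omega_beta into omega_alpha has range of size at most aleph_beta < cf(aleph_alpha),
% hence is bounded below omega_alpha. Therefore
\[
  \aleph_\alpha^{\aleph_\beta}
  \;=\; \Bigl|\;\bigcup_{\gamma < \omega_\alpha} {}^{\omega_\beta}\gamma \;\Bigr|
  \;\le\; \aleph_\alpha \cdot \sup_{\gamma < \omega_\alpha} |\gamma|^{\aleph_\beta}
  \;\le\; \aleph_\alpha \cdot \aleph_\alpha \;=\; \aleph_\alpha,
\]
% since GCH bounds each |gamma|^{aleph_beta} by max(|gamma|, aleph_beta)^+ <= aleph_alpha.
% The reverse inequality aleph_alpha <= aleph_alpha^{aleph_beta} is immediate.
```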
See also
Absolute infinite
Beth number
Cardinality
Ω-logic
Second continuum hypothesis
Wetzel's problem
References
Sources
Further reading
Gödel, K.: What is Cantor's Continuum Problem?, reprinted in Benacerraf and Putnam's collection Philosophy of Mathematics, 2nd ed., Cambridge University Press, 1983. An outline of Gödel's arguments against CH.
Martin, D. (1976). "Hilbert's first problem: the continuum hypothesis," in Mathematical Developments Arising from Hilbert's Problems, Proceedings of Symposia in Pure Mathematics XXVIII, F. Browder, editor. American Mathematical Society, 1976, pp. 81–92.
External links
Forcing (mathematics)
Independence results
Basic concepts in infinite set theory
Hilbert's problems
Infinity
Hypotheses
Cardinal numbers | Continuum hypothesis | [
"Mathematics"
] | 3,138 | [
"Independence results",
"Forcing (mathematics)",
"Cardinal numbers",
"Basic concepts in infinite set theory",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Hilbert's problems",
"Basic concepts in set theory",
"Mathematical problems",
"Numbers"
] |
5,715 | https://en.wikipedia.org/wiki/Cryptanalysis | Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown.
In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.
Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization.
Overview
In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext.
Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication.
The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original message (the "plaintext"), attempting to "break" the encryption to read the ciphertext and learn the secret key so that future messages can be decrypted and read. A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways:
Amount of information available to the attacker
Cryptanalytical attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs's principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes):
Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts.
Known-plaintext: the attacker has a set of ciphertexts to which they know the corresponding plaintext.
Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of their own choosing.
Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions, similarly to the Adaptive chosen ciphertext attack.
Related-key attack: Like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit.
Computational resources required
Attacks can also be characterised by the resources they require. Those resources include:
Time – the number of computation steps (e.g., test encryptions) which must be performed.
Memory – the amount of storage required to perform the attack.
Data – the quantity and type of plaintexts and ciphertexts required for a particular approach.
It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52."
Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."
Partial breaks
The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:
Total break – the attacker deduces the secret key.
Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.
Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known.
Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.
Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation.
Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.
In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.
History
Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis.
Classical ciphers
Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods.
The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.
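As a minimal sketch of the counting step described above, the snippet below tallies letter frequencies and flags the most common ciphertext letter as a first guess for plaintext 'E'; the helper name and the sample ciphertext (a Caesar-shifted drill phrase) are illustrative assumptions, not material from the article.

```python
from collections import Counter
import string

def letter_frequencies(ciphertext: str):
    """Count how often each letter occurs, ignoring case and non-letters."""
    letters = [c for c in ciphertext.upper() if c in string.ascii_uppercase]
    return Counter(letters).most_common()

# Illustrative sample: "defend the east wall of the castle" shifted by 3 (a Caesar cipher,
# which is a monoalphabetic substitution, so frequency analysis applies directly).
sample = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"
freqs = letter_frequencies(sample)
print(freqs[:3])   # [('H', 6), ('W', 4), ('D', 3)]
print(f"Most frequent ciphertext letter {freqs[0][0]!r} is a first guess for plaintext 'E'")
```

On a ciphertext this short the guess is only a heuristic; with longer samples the letter counts approach the language's true statistics and the guess becomes far more reliable.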
Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis.
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes.
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.
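The repetition that the rotor machines tried to eliminate can be seen in a small sketch: the snippet below encrypts with a repeating key and then applies the Kasiski-style observation that repeated ciphertext fragments tend to lie a multiple of the key length apart. The plaintext, the key "LEMON", and the helper names are illustrative assumptions, not drawn from the article.

```python
from math import gcd
from functools import reduce
from itertools import cycle

def vigenere_encrypt(plaintext: str, key: str) -> str:
    """Encrypt with a repeating key: each key letter selects a shifted alphabet."""
    out, key_iter = [], cycle(key.upper())
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(next(key_iter)) - ord("A")
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

def repeat_distances(ciphertext: str, n: int = 3) -> list:
    """Distances between repeated n-grams in the ciphertext."""
    positions, dists = {}, []
    for i in range(len(ciphertext) - n + 1):
        positions.setdefault(ciphertext[i:i + n], []).append(i)
    for occ in positions.values():
        dists.extend(b - a for a, b in zip(occ, occ[1:]))
    return dists

# Toy demonstration: the repeated phrase re-encrypts identically because its two
# occurrences lie a multiple of the key length (5) apart.
ct = vigenere_encrypt("the same phrase appears again the same phrase appears here", "LEMON")
dists = repeat_distances(ct)
print("repeat distances:", sorted(set(dists)))
if dists:
    # In Kasiski examination, common factors of these distances suggest the key length.
    print("gcd of repeat distances:", reduce(gcd, dists))
```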
Ciphers from World War I and World War II
In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in the cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the European war by up to two years to determining its eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence.
Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the Western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.
In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, when efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program.
Indicator
With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.
Poorly designed and implemented indicator systems allowed first the Polish cryptanalysts and then the British cryptanalysts at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.
Depth
Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message.
Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ):
Plaintext ⊕ Key = Ciphertext
Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext:
Ciphertext ⊕ Key = Plaintext
(In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts:
Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2
The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component:
(Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2
The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed:
Plaintext1 ⊕ Ciphertext1 = Key
Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.
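A minimal Python sketch of the relationships above, using an invented key and invented plaintexts purely for illustration:

```python
def xor_bytes(a, b):
    """Combine two equal-length byte strings with XOR (modulo-2 addition)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Invented key and plaintexts of equal length, for illustration only.
key = bytes.fromhex("133742249901550a0f3c6811")
plaintext1 = b"ATTACK DAWN "
plaintext2 = b"HOLD POSITIO"

ciphertext1 = xor_bytes(plaintext1, key)
ciphertext2 = xor_bytes(plaintext2, key)

# Combining two ciphertexts in depth cancels the common key ...
merged = xor_bytes(ciphertext1, ciphertext2)
assert merged == xor_bytes(plaintext1, plaintext2)

# ... so a correct crib for one plaintext reveals the other,
print(xor_bytes(merged, b"ATTACK DAWN "))   # b'HOLD POSITIO'

# and a recovered plaintext combined with its ciphertext reveals the key.
assert xor_bytes(plaintext1, ciphertext1) == key
```

Real depth reading is of course iterative: cribs are tried at many positions, and only the guesses that produce intelligible text in the other message are retained.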
Development of modern cryptography
Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today.
Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn notes:
Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."
However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:
The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998.
FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical.
The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours or minutes, or even in real time, using widely available computing equipment.
Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System.
In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access.
In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated.
Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.
Symmetric ciphers
Boomerang attack
Brute-force attack
Davies' attack
Differential cryptanalysis
Harvest now, decrypt later
Impossible differential cryptanalysis
Improbable differential cryptanalysis
Integral cryptanalysis
Linear cryptanalysis
Meet-in-the-middle attack
Mod-n cryptanalysis
Related-key attack
Sandwich attack
Slide attack
XSL attack
Asymmetric ciphers
Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys; one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.
Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA.
In 1980, one could factor a difficult 50-digit number at an expense of 10¹² elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10¹² operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than that described above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key sizes to keep pace or other methods such as elliptic curve cryptography to be used.
Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key.
Attacking cryptographic hash systems
Birthday attack
Hash function security summary
Rainbow table
Side-channel attacks
Black-bag cryptanalysis
Man-in-the-middle attack
Power analysis
Replay attack
Rubber-hose cryptanalysis
Timing analysis
Quantum computing applications for cryptanalysis
Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's Algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.
By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.
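As a rough, idealized illustration of that quadratic speedup (ignoring all constant factors and error-correction overhead), the arithmetic can be checked directly:

```python
from math import isqrt

classical_trials = 2 ** 128               # brute-force search space for a 128-bit key
quantum_trials = isqrt(classical_trials)  # idealized Grover search: about sqrt(N) evaluations

print(quantum_trials == 2 ** 64)          # True: roughly 64-bit effective strength remains
# Doubling the key length to 256 bits brings the Grover workload back to about 2**128.
```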
See also
Information assurance, a term for information security often used in government
Information security, the overarching goal of most cryptography
Security engineering, the design of applications and protocols
Security vulnerability; vulnerabilities can include cryptographic or other flaws
Historic cryptanalysts
Conel Hugh O'Donel Alexander
Charles Babbage
Fredson Bowers
Lambros D. Callimahos
Joan Clarke
Alastair Denniston
Agnes Meyer Driscoll
Elizebeth Friedman
William F. Friedman
Meredith Gardner
Friedrich Kasiski
Al-Kindi
Dilly Knox
Solomon Kullback
Marian Rejewski
Joseph Rochefort, whose contributions affected the outcome of the Battle of Midway
Frank Rowlett
Abraham Sinkov
Giovanni Soro, the Renaissance's first outstanding cryptanalyst
John Tiltman
Alan Turing
William T. Tutte
John Wallis – 17th-century English mathematician
William Stone Weedon – worked with Fredson Bowers in World War II
Herbert Yardley
References
Citations
Sources
Ibrahim A. Al-Kadi,"The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
Friedrich L. Bauer: "Decrypted Secrets". Springer 2002.
Helen Fouché Gaines, "Cryptanalysis", 1939, Dover.
David Kahn, "The Codebreakers – The Story of Secret Writing", 1967.
Lars R. Knudsen: Contemporary Block Ciphers. Lectures on Data Security 1998: 105–126
Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966.
Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code Breaking,
Friedman, William F., Military Cryptanalysis, Part I,
Friedman, William F., Military Cryptanalysis, Part II,
Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of Aperiodic Substitution Systems,
Friedman, William F., Military Cryptanalysis, Part IV, Transposition and Fractionating Systems,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 1,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 2,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 1,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 2,
Transcript of a lecture given by Prof. Tutte at the University of Waterloo
Further reading
External links
Basic Cryptanalysis (files contain a 5-line header that has to be removed first)
Distributed Computing Projects
List of tools for cryptanalysis on modern cryptography
Simon Singh's crypto corner
The National Museum of Computing
UltraAnvil tool for attacking simple substitution ciphers
How Alan Turing Cracked The Enigma Code Imperial War Museums
Cryptographic attacks
Applied mathematics
Arab inventions | Cryptanalysis | [
"Mathematics",
"Technology"
] | 5,244 | [
"Applied mathematics",
"Cryptographic attacks",
"Computer security exploits"
] |
5,726 | https://en.wikipedia.org/wiki/Crane%20shot | In filmmaking and video production, a crane shot is a shot taken by a camera on a moving crane or jib. Filmmaker D. W. Griffith created the first crane for his 1916 epic film Intolerance, with famed special effects pioneer Eiji Tsuburaya later constructing the first iron camera crane which is still adapted worldwide today. Most cranes accommodate both the camera and an operator, but some can be moved by remote control. Crane shots are often found in what are supposed to be emotional or suspenseful scenes. One example of this technique is the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups.
History
D. W. Griffith designed the first camera crane for his 1916 epic film Intolerance. His crane stood 140 feet tall and rode on six four-wheeled railroad trucks. In 1929, future special effects pioneer Eiji Tsuburaya constructed a smaller replica of Griffith's wooden camera crane without blueprints or manuals. Although his wooden crane collapsed shortly after its completion, Tsuburaya created the first-ever iron shooting crane in October 1934, and an adaptation of this crane is still used worldwide today.
Camera crane types
Camera cranes may be small, medium, or large, depending on the load capacity and length of the loading arm. Historically, the first camera crane provided for lifting the camera together with the operator, and sometimes an assistant. The range of motion of the boom was restricted because of the high load capacity and the need to ensure operator safety. In recent years a camera crane boom tripod with a remote control has become popular. It carries on the boom only a movie or television camera without an operator and allows shooting from difficult positions as a small load capacity makes it possible to achieve a long reach of the crane boom and relative freedom of movement. The operator controls the camera from the ground through a motorized panoramic head, using remote control and video surveillance by watching the image on the monitor. A separate category consists of telescopic camera cranes. These devices allow setting an arbitrary trajectory of the camera, eliminating the characteristic jib crane radial displacement that comes with traditional spanning shots.
Large camera cranes are almost indistinguishable from the usual boom-type cranes, with the exception of special equipment for smoothly moving the boom and controlling noise. Small camera cranes and crane-trucks have a lightweight construction, often without a mechanical drive; they are operated manually and balanced by a load-specific counterweight, which facilitates manipulation. To improve usability and the repeatability of crane movements across takes, the axes of rotation are fitted with graduated scales and pointers. In some cases, the camera crane is mounted on a dolly for even greater camera mobility. Such devices are called crane trolleys. In modern films robotic cranes allow use of multiple actuators for high-accuracy repeated movement of the camera in trick photography. These devices are called tap-robots; some sources use the term motion control.
Manufacturers
The major supplier of cranes in the cinema of the United States throughout the 1940s, 1950s, and 1960s was the Chapman Company (later Chapman-Leonard of North Hollywood), supplanted by dozens of similar manufacturers around the world. The traditional design provided seats for both the director and the camera operator, and sometimes a third seat for the cinematographer as well. Large weights on the back of the crane compensate for the weight of the people riding the crane and must be adjusted carefully to avoid the possibility of accidents. During the 1960s, the tallest crane was the Chapman Titan crane, a massive design over 20 feet high that won an Academy Scientific & Engineering award.
During the last few years, camera cranes have been miniaturized and costs have dropped so dramatically that most aspiring film makers have access to these tools. What was once a "Hollywood" effect is now available for under $400. Manufacturers of camera cranes include ABC-Products, Cambo, Filmotechnic, Polecam, Panther and Matthews Studio Equipment, Sevenoak, and Newton Nordic.
Camera crane technique
Most such cranes were manually operated, requiring an experienced boom operator who knew how to vertically raise, lower, and "crab" the camera alongside actors while the crane platform rolled on separate tracks. The crane operator and camera operator had to precisely coordinate their moves so that focus, pan, and camera position all started and stopped at the same time, requiring great skill and rehearsal. On the back of the crane is a counterweight, which allows the crane to achieve smooth motion with minimal effort.
Notable usage
D. W. Griffith's Intolerance (1916) featured the first ever crane shot for a film.
Atsuo Tomioka's 1935 film The Chorus of a Million featured the first iron camera crane, which was created and employed in the film in 1934 by Eiji Tsuburaya.
Leni Riefenstahl had a cameraman shoot a half-circle pan shot from a crane for the 1935 Nazi propaganda film Triumph of the Will.
A crane shot was used in Orson Welles' 1941 film Citizen Kane. Welles also used a crane camera during the iconic opening of Touch of Evil (1958). The camera perched on a Chapman crane begins on a close-up of a ticking time bomb and ends three-plus minutes later with a blinding explosion.
The Western High Noon (1952) had a famous crane shot. The shot backs up and rises, in order to show Marshal Will Kane totally alone and isolated on the street.
The 1964 film by Mikhail Kalatozov, I Am Cuba contains two of the most astonishing tracking shots ever attempted.
In his film Sympathy for the Devil, Jean-Luc Godard used a crane for almost every shot in the movie, giving each scene a 360-degree tour of the tableau Godard presented to the viewer. In the final scene, he even shows the crane he was able to rent on his limited budget by including it in the scene. This was one of his traits as a filmmaker — showing off his budget — as he did with Brigitte Bardot in Le Mepris (Contempt).
The closing take of Richard Attenborough's film version of Oh! What a Lovely War begins with a single war grave, gradually pulling back to reveal hundreds of identical crosses.
The 1980 comedy-drama film The Stunt Man featured a crane throughout the production of the fictitious film-within-a-film (with the director played by Peter O'Toole).
The television comedy Second City Television (SCTV) uses the concept of the crane shot as comedic material. After using a crane shot in one of the first NBC-produced episodes, the network complained about the exorbitant cost of renting the crane. SCTV writers responded by making the "crane shot" a ubiquitous symbol of production excess while also lampooning network executives who care nothing about artistic vision and everything about the bottom line. At the end of the second season, an inebriated Johnny LaRue (John Candy) is given his very own crane by Santa Claus, implying he would be able to have a crane shot whenever he wanted it.
Director Dario Argento included an extensive scene in Tenebrae where the camera seemingly crawled over the walls and up a house wall, all in one seamless take. Due to its length, the tracking shot ended up being the production's most difficult and complex part to complete.
The 2004 Johnnie To film Breaking News opens with an elaborate seven-minute single-take crane shot.
Director Dennis Dugan frequently uses top-to-bottom crane shots in his comedy films.
A live panoramic interior master shot from a camera crane opens The Late Late Show with James Corden, following the pre-recorded exterior aerial shot.
Jeopardy! uses a crane to pan the camera over the audience.
See also
Technocrane, a telescopic camera crane
U-crane, a gyro-stabilized car-mounted telescopic camera crane
References
Articles containing video clips
Cinematic techniques
Film and video technology
Cranes (machines) | Crane shot | [
"Engineering"
] | 1,649 | [
"Engineering vehicles",
"Cranes (machines)"
] |
5,739 | https://en.wikipedia.org/wiki/Compiler | In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.
There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.
Related software include decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers.
A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.
Compilers are not the only language processor used to transform source programs. An interpreter is computer software that transforms and then executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference of compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one (a compiler or an interpreter).
History
Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of digital modern computing development during World War II. Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. Limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process.
It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include:
Alphabet, any finite set of symbols;
String, a finite sequence of symbols;
Language, any set of strings on an alphabet.
The sentences in a language may be defined by a set of rules called a grammar.
Backus–Naur form (BNF) describes the syntax of "sentences" of a language. It was developed by John Backus and used for the syntax of Algol 60. The ideas derive from the context-free grammar concepts by linguist Noam Chomsky. "BNF and its extensions have become standard tools for describing the syntax of programming notations. In many cases, parts of compilers are generated automatically from a BNF description."
Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers, Plankalkül ("Plan Calculus"). Zuse also envisioned a Planfertigungsgerät ("Plan assembly device") to automatically translate the mathematical formulation of a program into machine-readable punched film stock. While no actual implementation occurred until the 1970s, it presented concepts later seen in APL, designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations.
Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator. His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.
High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications:
FORTRAN (Formula Translation) for engineering and science applications is considered to be one of the first actually implemented high-level languages and first optimizing compiler.
COBOL (Common Business-Oriented Language) evolved from A-0 and FLOW-MATIC to become the dominant high-level language for business applications.
LISP (List Processor) for symbolic computation.
Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.
Some early milestones in the development of compiler technology:
May 1952: Grace Hopper's team at Remington Rand wrote the compiler for the A-0 programming language (and coined the term compiler to describe it), although the A-0 compiler functioned more as a loader or linker than the modern notion of a full compiler.
1952, before September: An Autocode compiler developed by Alick Glennie for the Manchester Mark I computer at the University of Manchester is considered by some to be the first compiled programming language.
1954–1957: A team led by John Backus at IBM developed FORTRAN which is usually considered the first high-level language. In 1957, they completed a FORTRAN compiler that is generally credited as having introduced the first unambiguously complete compiler.
1959: The Conference on Data Systems Language (CODASYL) initiated development of COBOL. The COBOL design drew on A-0 and FLOW-MATIC. By the early 1960s COBOL was compiled on multiple architectures.
1958–1960: Algol 58 was the precursor to ALGOL 60. It introduced code blocks, a key advance in the rise of structured programming. ALGOL 60 was the first language to implement nested function definitions with lexical scope. It included recursion. Its syntax was defined using BNF. ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "... it was not only an improvement on its predecessors but also on nearly all its successors."
1958–1962: John McCarthy at MIT designed LISP. The symbol processing capabilities provided useful features for artificial intelligence research. In 1962, LISP 1.5 release noted some tools: an interpreter written by Stephen Russell and Daniel J. Edwards, a compiler and assembler written by Tim Hart and Mike Levin.
Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C.
BCPL (Basic Combined Programming Language) designed in 1966 by Martin Richards at the University of Cambridge was originally developed as a compiler writing tool. Several compilers have been implemented, Richards' book provides insights to the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of B and C languages.
BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop BLISS-11 compiler one year later in 1970.
Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlroy and Bob Morris from Bell Labs. EPL supported the project until a boot-strapping compiler for the full PL/I could be developed.
Bell Labs left the Multics project in 1969, and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually became spelled Unix.
Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resource to define extensions to B and rewrite the compiler. By 1973 the design of C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of Portable C Compiler (PCC) to support retargeting of C compilers to new machines.
Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go further back but were part of LISP and Simula language science. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew.
In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex.
DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler PQCC design would produce a Production Quality Compiler (PQC) from formal definitions of source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator.
PQCC research into code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada.
The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter NYU/ED supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. Army and Navy worked on the Ada Language System (ALS) project targeted to DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development.
Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U. S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free but there is also commercial support, for example, AdaCore, was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment.
High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. Trends in programming languages and development environments influenced compiler technology. More compilers became included in language distributions (PERL, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of Command Line Interfaces (CLI) where the user could enter commands to be executed by the system. User Shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional implementation of these languages used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently, sophisticated interpreted languages became part of the developer's toolkit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.
"When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security." The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets.
Compiler construction
A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets.
In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once.
A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process.
One-pass vis-à-vis multi-pass compilers
Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. As a result, compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations.
The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal).
In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass.
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once.
Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program.
Three-stage compiler structure
Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end.
The front end scans the input and verifies syntax and semantics according to a specific source language. For statically typed languages it performs type checking by collecting type information. If the input program is syntactically incorrect or has a type error, it generates error and/or warning messages, usually identifying the location in the source code where the problem was detected; in some cases the actual error may be (much) earlier in the program. Aspects of the front end include lexical analysis, syntax analysis, and semantic analysis. The front end transforms the input program into an intermediate representation (IR) for further processing by the middle end. This IR is usually a lower-level representation of the program with respect to the source code.
The middle end performs optimizations on the IR that are independent of the CPU architecture being targeted. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors. Examples of middle end optimizations are removal of useless (dead-code elimination) or unreachable code (reachability analysis), discovery and propagation of constant values (constant propagation), relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context, eventually producing the "optimized" IR that is used by the back end.
The back end takes the optimized IR from the middle end. It may perform more analysis, transformations and optimizations that are specific for the target CPU architecture. The back end generates the target-dependent assembly code, performing register allocation in the process. The back end performs instruction scheduling, which re-orders instructions to keep parallel execution units busy by filling delay slots. Although most optimization problems are NP-hard, heuristic techniques for solving them are well-developed and implemented in production-quality compilers. Typically the output of a back end is machine code specialized for a particular processor and operating system.
This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends.
Front end
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope.
While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare.
The main phases of the front end include the following:
Line reconstruction converts the input character sequence to a canonical form ready for the parser. Languages which strop their keywords or allow arbitrary spaces within identifiers require this phase. The top-down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a Line Reconstruction phase.
Preprocessing supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms.
Lexical analysis (also known as lexing or tokenization) breaks the source code text into a sequence of small pieces called lexical tokens. This phase can be divided into two stages: the scanning, which segments the input text into syntactic units called lexemes and assigns them a category; and the evaluating, which converts lexemes into a processed value. A token is a pair consisting of a token name and an optional token value. Common token categories may include identifiers, keywords, separators, operators, literals and comments, although the set of token categories varies in different programming languages. The lexeme syntax is typically a regular language, so a finite-state automaton constructed from a regular expression can be used to recognize it. The software doing lexical analysis is called a lexical analyzer. This may not be a separate step—it can be combined with the parsing step in scannerless parsing, in which case parsing is done at the character level, not the token level. A minimal tokenizer sketch illustrating this phase is given after this list.
Syntax analysis (also known as parsing) involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler.
Semantic analysis adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation.
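As a concrete illustration of the lexical-analysis phase described above, the following Python sketch uses a regular-expression scanner to turn a tiny arithmetic statement into (token name, token value) pairs. The token categories and the sample statement are invented for illustration and do not correspond to any particular language.

```python
import re

# Hypothetical token categories for a tiny arithmetic language.
TOKEN_SPEC = [
    ("NUMBER",   r"\d+"),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("ASSIGN",   r"="),
    ("OP",       r"[+\-*/]"),
    ("SKIP",     r"\s+"),   # whitespace: recognized, then discarded
    ("MISMATCH", r"."),     # any other character is a lexical error
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source):
    """Lexical analysis: convert a character stream into (name, value) tokens."""
    for match in TOKEN_RE.finditer(source):
        kind, value = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "MISMATCH":
            raise SyntaxError(f"unexpected character {value!r}")
        yield (kind, value)

print(list(tokenize("area = width * height + 2")))
# [('IDENT', 'area'), ('ASSIGN', '='), ('IDENT', 'width'), ('OP', '*'),
#  ('IDENT', 'height'), ('OP', '+'), ('NUMBER', '2')]
```

A parser would then consume this token stream to build the parse tree described in the syntax-analysis phase.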
Middle end
The middle end, also known as optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted.
The main phases of the middle end include the following:
Analysis: This is the gathering of program information from the intermediate representation derived from the input; data-flow analysis is used to build use-define chains, together with dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The control-flow graph of every compiled function and the call graph of the program are usually also built during the analysis phase.
Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead-code elimination, constant propagation, loop transformation and even automatic parallelization.
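As a toy illustration of constant propagation and folding (only a sketch; real middle ends work on much richer intermediate representations such as SSA form), the following Python snippet rewrites a made-up three-address-style IR, replacing computations whose operands are known constants with the computed constants:

```python
def fold_constants(ir):
    """Toy constant propagation/folding over a made-up three-address IR.

    Each instruction is (dest, op, arg1, arg2); operands are ints or variable names.
    """
    known = {}   # variables currently known to hold constant values
    out = []
    for dest, op, a, b in ir:
        a = known.get(a, a)   # propagate constants into operands
        b = known.get(b, b)
        if op == "const":
            known[dest] = a
            out.append((dest, "const", a, None))
        elif isinstance(a, int) and isinstance(b, int):
            value = a + b if op == "add" else a * b   # fold at compile time
            known[dest] = value
            out.append((dest, "const", value, None))
        else:
            out.append((dest, op, a, b))
    return out

# x = 4; y = 2 * x; z = y + input
ir = [("x", "const", 4, None), ("y", "mul", 2, "x"), ("z", "add", "y", "input")]
print(fold_constants(ir))
# [('x', 'const', 4, None), ('y', 'const', 8, None), ('z', 'add', 8, 'input')]
```

A subsequent dead-code-elimination pass could then delete the definitions of x and y, since no remaining instruction uses them.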
Compiler analysis is the prerequisite for any compiler optimization, and they tightly work together. For example, dependence analysis is crucial for loop transformation.
The scope of compiler analysis and optimizations varies greatly; it may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously.
Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes.
Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
Back end
The back end is responsible for the CPU architecture specific optimizations and for code generation.
The main phases of the back end include the following:
Machine dependent optimizations: optimizations that depend on the details of the CPU architecture that the compiler targets. A prominent example is peephole optimization, which rewrites short sequences of assembler instructions into more efficient instructions.
Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory and the selection and scheduling of appropriate machine instructions along with their associated addressing modes (see also Sethi–Ullman algorithm). Debug data may also need to be generated to facilitate debugging.
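A minimal sketch of the instruction-selection side of code generation, lowering a small expression tree to instructions for an invented stack machine (not any real instruction set, and ignoring register allocation and scheduling):

```python
def gen_code(node):
    """Recursively emit instructions for an invented stack machine."""
    if isinstance(node, int):
        return [f"PUSH {node}"]        # integer literal
    if isinstance(node, str):
        return [f"LOAD {node}"]        # variable reference
    op, left, right = node             # interior node: (operator, left, right)
    return gen_code(left) + gen_code(right) + [{"+": "ADD", "*": "MUL"}[op]]

# a = b + c * 2, represented as a nested tuple expression tree
tree = ("+", "b", ("*", "c", 2))
for instruction in gen_code(tree) + ["STORE a"]:
    print(instruction)
# LOAD b, LOAD c, PUSH 2, MUL, ADD, STORE a
```

A real back end would instead select instructions for the target architecture, assign registers (for example using the Sethi–Ullman numbering mentioned above), and schedule the result.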
Compiler correctness
Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.
Compiled vis-à-vis interpreted languages
Higher-level programming languages usually appear with a type of translation in mind: either designed as compiled language or interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters.
Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language).
Furthermore, compilers can contain interpreter functionality for optimization purposes, and interpreters may include ahead-of-time compilation techniques. For example, where an expression can be evaluated during compilation and the result inserted into the output program, this prevents it from having to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further.
Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.
Types
One classification of compilers is by the platform on which their generated code executes. This is known as the target platform.
A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment.
The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers.
The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers.
While a common compiler type outputs machine code, there are many other types:
Source-to-source compilers are a type of compiler that takes a high-level language as its input and outputs a high-level language. For example, an automatic parallelizing compiler will frequently take in a high-level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements). Other terms for a source-to-source compiler are transcompiler or transpiler.
Bytecode compilers compile to the assembly language of a theoretical machine, like some Prolog implementations.
This Prolog machine is also known as the Warren Abstract Machine (or WAM).
Bytecode compilers for Java and Python are also examples of this category.
Just-in-time compilers (JIT compilers) defer compilation until runtime. JIT compilers exist for many modern languages including Python, JavaScript, Smalltalk, Java, Microsoft .NET's Common Intermediate Language (CIL) and others. A JIT compiler generally runs inside an interpreter. When the interpreter detects that a code path is "hot", meaning it is executed frequently, the JIT compiler will be invoked and compile the "hot" code for increased performance. A minimal sketch of this hot-path detection appears after this list.
For some languages, such as Java, applications are first compiled using a bytecode compiler and delivered in a machine-independent intermediate representation. A bytecode interpreter executes the bytecode, but the JIT compiler will translate the bytecode to machine code when increased performance is necessary.
Hardware compilers (also known as synthesis tools) are compilers whose input is a hardware description language and whose output is a description, in the form of a netlist or otherwise, of a hardware configuration.
The output of these compilers targets computer hardware at a very low level, for example a field-programmable gate array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers, because the source code they compile effectively controls the final configuration of the hardware and how it operates. The output of the compilation is only an interconnection of transistors or lookup tables.
An example of a hardware compiler is XST, the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other hardware vendors.
A program that translates from a low-level language to a higher level one is a decompiler.
A program that translates into an object code format that is not supported on the compilation machine is called a cross compiler and is commonly used to prepare code for embedded applications.
A program that rewrites object code back into the same type of object code while applying optimisations and transformations is a binary recompiler.
Assemblers, which translate human readable assembly language to the machine code instructions executed by hardware, are not considered compilers. (The inverse program that translates machine code to assembly language is called a disassembler.)
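The sketch referred to in the just-in-time compiler entry above (not from the article; the toy expression interpreter, the threshold, and the helper names are invented, and a Python lambda stands in for generated machine code):

```python
# A toy "interpreter" walks an expression tree node by node.  Once the same
# expression has been executed often enough it is treated as hot and
# "compiled" -- here, translated once to a Python function -- and reused.
HOT_THRESHOLD = 100

def interpret(node, env):
    """Evaluate ('add', a, b), ('mul', a, b) or a variable name."""
    if isinstance(node, str):
        return env[node]
    op, a, b = node
    x, y = interpret(a, env), interpret(b, env)
    return x + y if op == "add" else x * y

def to_python(node):
    """Translate the tree to Python source, the stand-in for code generation."""
    if isinstance(node, str):
        return node
    op, a, b = node
    return f"({to_python(a)} {'+' if op == 'add' else '*'} {to_python(b)})"

counts, compiled_cache = {}, {}

def run(expr, env):
    key = repr(expr)
    counts[key] = counts.get(key, 0) + 1
    if key in compiled_cache:                          # fast "compiled" path
        return compiled_cache[key](**env)
    if counts[key] >= HOT_THRESHOLD:                   # hot: compile once, reuse
        params = ", ".join(sorted(env))
        compiled_cache[key] = eval(f"lambda {params}: {to_python(expr)}")
        return compiled_cache[key](**env)
    return interpret(expr, env)                        # slow interpreted path

expr = ("add", ("mul", "x", "x"), "y")                 # x * x + y
print(sum(run(expr, {"x": i, "y": 1}) for i in range(1000)))
```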
See also
Abstract interpretation
Bottom-up parsing
Compile and go system
Compile farm
List of compilers
Metacompilation
Program transformation
Notes and references
Further reading
Compiler textbook references – a collection of references to mainstream compiler construction textbooks
External links
Incremental Approach to Compiler Construction – a PDF tutorial
explaining the key conceptual difference between compilers and interpreters
Let's Build a Compiler, by Jack Crenshaw
American inventions
Computer libraries
Programming language implementation
Utility software types | Compiler | [
"Technology"
] | 7,316 | [
"IT infrastructure",
"Computer libraries"
] |
5,759 | https://en.wikipedia.org/wiki/Complex%20analysis | Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions.
The concept can be extended to functions of several complex variables.
History
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
Complex functions
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values $z$ from the domain and their images $f(z)$ in the range may be separated into real and imaginary parts:
$$z = x + iy \quad \text{and} \quad f(z) = f(x + iy) = u(x, y) + i\,v(x, y),$$
where $x, y, u(x, y), v(x, y)$ are all real-valued.
In other words, a complex function $f$ may be decomposed into
$$u : \mathbb{R}^2 \to \mathbb{R} \quad \text{and} \quad v : \mathbb{R}^2 \to \mathbb{R},$$
i.e., into two real-valued functions ($u$, $v$) of two real variables ($x$, $y$).
Similarly, any complex-valued function $f$ on an arbitrary set $X$ (is isomorphic to, and therefore, in that sense, it) can be considered as an ordered pair of two real-valued functions: $(\operatorname{Re} f, \operatorname{Im} f)$ or, alternatively, as a vector-valued function from $X$ into $\mathbb{R}^2$.
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation which allows extending every real analytic function in a unique way to obtain a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
Holomorphic functions
Complex functions that are differentiable at every point of an open subset $\Omega$ of the complex plane are said to be holomorphic on $\Omega$. In the context of complex analysis, the derivative of $f$ at $z_0$ is defined to be
$$f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}.$$
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach $z_0$ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on $\Omega$ can be approximated arbitrarily well by polynomials in some neighborhood of every point in $\Omega$. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see non-analytic smooth function.
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions $\mathbb{C} \to \mathbb{C}$, are holomorphic over the entire complex plane, making them entire functions, while rational functions $p/q$, where $p$ and $q$ are polynomials, are holomorphic on domains that exclude points where $q$ is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, functions such as the complex conjugate $z \mapsto \bar{z}$ and the modulus $z \mapsto |z|$ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If $f : \mathbb{C} \to \mathbb{C}$, defined by $f(z) = f(x + iy) = u(x, y) + i\,v(x, y)$, where $u, v : \mathbb{R}^2 \to \mathbb{R}$, is holomorphic on a region $\Omega$, then $\frac{\partial f}{\partial \bar{z}}(z_0) = 0$ for all $z_0 \in \Omega$.
In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations $u_x = v_y$ and $u_y = -v_x$, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem).
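As a concrete check (a standard textbook example, not spelled out in this article), the function $f(z) = z^2$ has $u(x, y) = x^2 - y^2$ and $v(x, y) = 2xy$, and the equations hold everywhere:
$$u_x = 2x = v_y, \qquad u_y = -2y = -v_x.$$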
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: $\mathbb{C}$, $\mathbb{C} \setminus \{z_0\}$, or $\{z_0\}$ for some $z_0 \in \mathbb{C}$. In other words, if two distinct complex numbers $z$ and $w$ are not in the range of an entire function $f$, then $f$ is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
Conformal map
Major results
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
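For instance (a standard contour-integration example, not taken from this article), the only pole of $1/(1+z^2)$ in the upper half-plane is a simple pole at $z = i$ with residue $1/(2i)$, so closing the contour there gives
$$\int_{-\infty}^{\infty} \frac{dx}{1+x^2} = 2\pi i \cdot \operatorname{Res}_{z=i} \frac{1}{1+z^2} = 2\pi i \cdot \frac{1}{2i} = \pi.$$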
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.
A major application of certain complex spaces is in quantum mechanics as wave functions.
See also
Complex geometry
Hypercomplex analysis
Vector calculus
List of complex analysis topics
Monodromy theorem
Real analysis
Riemann–Roch theorem
Runge's theorem
References
Sources
Ablowitz, M. J. & A. S. Fokas, Complex Variables: Introduction and Applications (Cambridge, 2003).
Ahlfors, L., Complex Analysis (McGraw-Hill, 1953).
Cartan, H., Théorie élémentaire des fonctions analytiques d'une ou plusieurs variables complexes. (Hermann, 1961). English translation, Elementary Theory of Analytic Functions of One or Several Complex Variables. (Addison-Wesley, 1963).
Carathéodory, C., Funktionentheorie. (Birkhäuser, 1950). English translation, Theory of Functions of a Complex Variable (Chelsea, 1954). [2 volumes.]
Carrier, G. F., M. Krook, & C. E. Pearson, Functions of a Complex Variable: Theory and Technique. (McGraw-Hill, 1966).
Conway, J. B., Functions of One Complex Variable. (Springer, 1973).
Fisher, S., Complex Variables. (Wadsworth & Brooks/Cole, 1990).
Forsyth, A., Theory of Functions of a Complex Variable (Cambridge, 1893).
Freitag, E. & R. Busam, Funktionentheorie. (Springer, 1995). English translation, Complex Analysis. (Springer, 2005).
Goursat, E., Cours d'analyse mathématique, tome 2. (Gauthier-Villars, 1905). English translation, A course of mathematical analysis, vol. 2, part 1: Functions of a complex variable. (Ginn, 1916).
Henrici, P., Applied and Computational Complex Analysis (Wiley). [Three volumes: 1974, 1977, 1986.]
Kreyszig, E., Advanced Engineering Mathematics. (Wiley, 1962).
Lavrentyev, M. & B. Shabat, Методы теории функций комплексного переменного. (Methods of the Theory of Functions of a Complex Variable). (1951, in Russian).
Markushevich, A. I., Theory of Functions of a Complex Variable, (Prentice-Hall, 1965). [Three volumes.]
Marsden & Hoffman, Basic Complex Analysis. (Freeman, 1973).
Needham, T., Visual Complex Analysis. (Oxford, 1997). http://usf.usfca.edu/vca/
Remmert, R., Theory of Complex Functions. (Springer, 1990).
Rudin, W., Real and Complex Analysis. (McGraw-Hill, 1966).
Shaw, W. T., Complex Analysis with Mathematica (Cambridge, 2006).
Stein, E. & R. Shakarchi, Complex Analysis. (Princeton, 2003).
Sveshnikov, A. G. & A. N. Tikhonov, Теория функций комплексной переменной. (Nauka, 1967). English translation, The Theory Of Functions Of A Complex Variable (MIR, 1978).
Titchmarsh, E. C., The Theory of Functions. (Oxford, 1932).
Wegert, E., Visual Complex Functions. (Birkhäuser, 2012).
Whittaker, E. T. & G. N. Watson, A Course of Modern Analysis. (Cambridge, 1902). 3rd ed. (1920)
External links
Wolfram Research's MathWorld Complex Analysis Page
Complex numbers | Complex analysis | [
"Mathematics"
] | 2,716 | [
"Complex numbers",
"Mathematical objects",
"Numbers"
] |
5,762 | https://en.wikipedia.org/wiki/Civil%20engineering | Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.
Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to Fortune Global 500 companies.
History
Civil engineering as a discipline
Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental.
One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations.
Civil engineering profession
Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important leading to the development of the wheel and sailing.
Until modern times there was no clear distinction between civil engineering and architecture, and the terms engineer and architect were mainly geographical variations referring to the same occupation, and often used interchangeably. The construction of the pyramids in Egypt (–2500 BC) constitutes some of the first instances of large structure construction in history. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest being more than 3,000 years old), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers, the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti, and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya and the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads.
In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées, was established in France; and more examples followed in other European countries, like Spain. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.
In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:
Civil engineering education
The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.
In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840.
Education
Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology, or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, they move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with industry-accredited qualifications, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest.
Practicing engineers
In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements including work experience and exam requirements before being certified. Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders.
The benefits of certification vary depending upon location. For example, in the United States and Canada, "only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients." This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by.
Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the law of tort of negligence, and in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law.
Sub-disciplines
There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction.
Coastal engineering
Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well.
Construction engineering
Construction engineering involves planning and execution, transportation of materials, and site development based on hydraulic, environmental, structural, and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms, construction engineers often engage in more business-like transactions, such as drafting and reviewing contracts, evaluating logistical operations, and monitoring supply prices.
Earthquake engineering
Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground; foresee the consequences of possible earthquakes; and design, construct and maintain structures that perform well during earthquakes, in compliance with building codes.
Environmental engineering
Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms being used.
Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions.
Forensic engineering
Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a Forensic engineering investigation is to locate cause or causes of failure with a view to improve performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents.
Geotechnical engineering
Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering.
Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, Geological Engineering professionals and soil scientists.
Materials science and engineering
Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and hot mix asphalt concrete, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers.
Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis.
Site development and planning
Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges.
Structural engineering
Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, offshore structures like oil and gas fields in the sea, aerostructures and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be the self-weight of the structures, other dead loads, live loads, moving (wheel) loads, wind load, earthquake load, load from temperature change, etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering.
Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability.
Surveying
Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites is used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerization, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures.
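For example (an illustrative computation, not taken from this article), a slope distance of $s = 100.00$ m measured at a vertical angle of $\alpha = 5^\circ$ above the horizontal reduces to a horizontal distance and a height difference of
$$d = s\cos\alpha \approx 99.62 \text{ m}, \qquad \Delta h = s\sin\alpha \approx 8.72 \text{ m}.$$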
Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction.
Land surveying
In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying. They collect data on important geological features below and on the land.
Construction surveying
Construction surveying is generally performed by specialized technicians. Unlike land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks:
Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible;
"lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings;
Verifying the location of structures during construction;
As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans.
Transportation engineering
Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management.
Municipal or urban engineering
Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimization of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.)
Water resources engineering
Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline, it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. However, the actual design of the facility may be left to other engineers.
Hydraulic engineering concerns the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others.
Civil engineering systems
Civil engineering systems is a discipline that promotes using systems thinking to manage complexity and change in civil engineering within its broader public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the crucial factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning.
See also
Architectural engineering
Civil engineering software
Engineering drawing
Geological Engineering
Geomatics engineering
Glossary of civil engineering
Index of civil engineering articles
List of civil engineers
List of engineering branches
List of Historic Civil Engineering Landmarks
Macro-engineering
Railway engineering
Site survey
Associations
American Society of Civil Engineers
Canadian Society for Civil Engineering
Chartered Institution of Civil Engineering Surveyors
Council for the Regulation of Engineering in Nigeria
Earthquake Engineering Research Institute
Engineers Australia
European Federation of National Engineering Associations
International Federation of Consulting Engineers
Indian Geotechnical Society
Institution of Civil Engineers
Institution of Structural Engineers
Institute of Engineering (Nepal)
International Society of Soil Mechanics and Geotechnical Engineering
Institution of Engineers, Bangladesh
Institution of Engineers (India)
Institution of Engineers of Ireland
Institute of Transportation Engineers
Japan Society of Civil Engineers
Pakistan Engineering Council
Philippine Institute of Civil Engineers
Transportation Research Board
References
Further reading
External links
The Institution of Civil Engineers
Civil Engineering Software Database
The Institution of Civil Engineering Surveyors
Civil engineering classes, from MIT OpenCourseWare
Engineering disciplines
Articles containing video clips | Civil engineering | [
"Engineering"
] | 4,028 | [
"Construction",
"Civil engineering",
"nan"
] |
5,765 | https://en.wikipedia.org/wiki/%C3%87atalh%C3%B6y%C3%BCk | Çatalhöyük (English: Chatalhoyuk; also Çatal Höyük and Çatal Hüyük; from Turkish çatal "fork" + höyük "tumulus") is a tell (a mounded accretion due to long-term human settlement) of a very large Neolithic and Chalcolithic proto-city settlement in southern Anatolia, which existed from approximately 7500 BC to 5600 BC and flourished around 7000 BC. In July 2012, it was inscribed as a UNESCO World Heritage Site.
Çatalhöyük overlooks the Konya Plain, southeast of the present-day city of Konya (ancient Iconium) in Turkey, approximately from the twin-coned volcano of Mount Hasan. The eastern settlement forms a mound that would have risen about above the plain at the time of the latest Neolithic occupation. There is also a smaller settlement mound to the west and a Byzantine settlement a few hundred meters to the east. The prehistoric mound settlements were abandoned before the Bronze Age. A channel of the Çarşamba River once flowed between the two mounds, and the settlement was built on alluvial clay which may have been favorable for early agriculture. Currently, the closest river is the Euphrates.
Archaeology
The site was first excavated by James Mellaart in 1958. He later led a team which further excavated there for four seasons between 1961 and 1965. These excavations revealed this section of Anatolia as a centre of advanced culture in the Neolithic period. Excavation revealed 18 successive layers of buildings signifying various stages of the settlement and eras of history. The bottom layer of buildings can be dated as early as 7100 BC while the top layer of the later West Mound is from 5600 BC.
Mellaart was banned from Turkey for his involvement in the Dorak affair, in which he published drawings of supposedly important Bronze Age artifacts that later went missing. After this scandal, the site lay idle until 1993, when excavations began under the leadership of Ian Hodder, then at the University of Cambridge. The Hodder-led excavations ended in 2018. Hodder, a former student of Mellaart, chose the site as the first "real world" test of his then-controversial theory of post-processual archaeology. The site has always had a strong research emphasis upon engagement with digital methodologies, driven by the project's experimental and reflexive methodological framework. According to Mickel, Hodder's Çatalhöyük Research Project (ÇRP) established itself as a site for progressive methodologies "in terms of adaptable and democratized recording, integration of computerized technologies, sampling strategies, and community involvement."
New excavations are being directed by Ali Umut Türkcan from Anadolu University.
Culture
Çatalhöyük was composed entirely of domestic buildings with no obvious public buildings. While some of the larger rooms have rather ornate murals, the purpose of others remains unclear.
Initial estimates suggested an average population of between 5,000 and 7,000. However, more recent work using revised ideas of the distribution of residential buildings, and employing archaeological and ethnographic data exploring building use, suggests that between 600 and 800 people would have lived at Çatalhöyük East during an average year during the Middle phase (6700–6500 BC).
The sites were set up as large numbers of buildings clustered together. Households looked to their neighbors for help, trade, and possible marriage for their children. The inhabitants lived in mudbrick houses that were crammed together in an aggregate structure. No footpaths or streets were used between the dwellings, which were clustered in a honeycomb-like maze. Most were accessed by holes in the ceiling and doors on the side of the houses, with doors reached by ladders and stairs. The rooftops were effectively streets. The ceiling openings also served as the only source of ventilation, allowing smoke from the houses' open hearths and ovens to escape.
Houses had plaster interiors accessed by squared-off timber ladders or steep stairs. These were usually on the south wall of the room, as were cooking hearths and ovens. The main rooms contained raised platforms that may have been used for a range of domestic activities. Typical houses contained two rooms for everyday activity, such as cooking and crafting. All interior walls and platforms were plastered to a smooth finish. Ancillary rooms were used as storage, and were accessed through low openings from main rooms.
All rooms were kept scrupulously clean. Archaeologists identified very little rubbish in the buildings, finding middens outside the ruins, with sewage and food waste, as well as significant amounts of ash from burning wood, reeds, and animal dung. In good weather, many daily activities may also have taken place on the rooftops, which may have formed a plaza. In later periods, large communal ovens appear to have been built on these rooftops. Over time, houses were renewed by partial demolition and rebuilding on a foundation of rubble, which was how the mound was gradually built up. As many as eighteen levels of settlement have been uncovered.
As a part of ritual life, the people of Çatalhöyük buried their dead within the village. Human remains have been found in pits beneath the floors and especially beneath hearths, the platforms within the main rooms, and beds. Bodies were tightly flexed before burial and were often placed in baskets or wound and wrapped in reed mats. Disarticulated bones in some graves suggest that bodies may have been exposed in the open air for a time before the bones were gathered and buried. In some cases, graves were disturbed, and the individual's head removed from the skeleton. These heads may have been used in rituals, as some were found in other areas of the community. In a woman's grave, spinning whorls were recovered and in a man's grave, stone axes. Some skulls were plastered and painted with ochre to recreate faces, a custom more characteristic of Neolithic sites in Syria and Neolithic Jericho than at sites closer by.
Vivid murals and figurines are found throughout the settlement on interior and exterior walls. Distinctive clay figurines of women, notably the Seated Woman of Çatalhöyük, have been found in the upper levels of the site. Although no identifiable temples have been found, the graves, murals, and figurines suggest that the people of Çatalhöyük had a religion rich in symbols. Rooms with concentrations of these items may have been shrines or public meeting areas. Predominant images include men with erect phalluses, hunting scenes, red images of the now extinct aurochs (wild cattle) and stags, and vultures swooping down on headless figures. Relief figures are carved on walls, such as of lionesses facing one another.
Heads of animals, especially of cattle, were mounted on walls. A painting of the village, with the twin mountain peaks of Hasan Dağ in the background, is frequently cited as the world's oldest map, and the first landscape painting. However, some archaeologists question this interpretation. Stephanie Meece, for example, argues that it is more likely a painting of a leopard skin instead of a volcano, and a decorative geometric design instead of a map.
Religion
A feature of Çatalhöyük are its female figurines. Mellaart, the original excavator, argued that these carefully made figurines, carved and molded from marble, blue and brown limestone, schist, calcite, basalt, alabaster, and clay, represented a female deity. Although a male deity existed as well, "statues of a female deity far outnumber those of the male deity, who moreover, does not appear to be represented at all after Level VI". To date, eighteen levels have been identified. These figurines were found primarily in areas Mellaart believed to be shrines. The stately goddess seated on a throne flanked by two lionesses was found in a grain bin, which Mellaart suggests might have been a means of ensuring the harvest or protecting the food supply.
Whereas Mellaart excavated nearly two hundred buildings in four seasons, the current excavator, Ian Hodder, spent an entire season excavating one building alone. Hodder and his team, in 2004 and 2005, began to believe that the patterns suggested by Mellaart were false. They found one similar figurine, but the vast majority did not imitate the Mother Goddess style that Mellaart suggested. Instead of a Mother Goddess culture, Hodder points out that the site gives little indication of a matriarchy or patriarchy.
In an article in the Turkish Daily News, Hodder is reported as denying that Çatalhöyük was a matriarchal society and quoted as saying "When we look at what they eat and drink and at their social statues, we see that men and women had the same social status. There was a balance of power. Another example is the skulls found. If one's social status was of high importance in Çatalhöyük, the body and head were separated after death. The number of female and male skulls found during the excavations is almost equal." In another article in the Hurriyet Daily News Hodder is reported to say "We have learned that men and women were equally approached".
In a report in September 2009 on the discovery of around 2000 figurines Hodder is quoted as saying:
Professor Lynn Meskell explained that while the original excavations had found only 200 figures, the new excavations had uncovered 2,000 figures, most of which depicted animals, and fewer than 5% of the figurines depicted women.
Estonian folklorist Uku Masing suggested as early as 1976 that Çatalhöyük probably had a hunting and gathering religion and that the Mother Goddess figurine did not represent a female deity. He implied that perhaps a longer period of time was needed to develop symbols for agricultural rites. His theory was developed in the paper "Some remarks on the mythology of the people of Catal Hüyük".
Economy
Çatalhöyük has strong evidence of an egalitarian society, as no houses with distinctive features (belonging to royalty or religious hierarchy for example) have been found so far. The most recent investigations also reveal little social distinction based on gender, with men and women receiving equivalent nutrition and seeming to have equal social status, as typically found in Paleolithic cultures. Children observed domestic areas. They learned how to perform rituals and how to build or repair houses by watching the adults make statues, beads, and other objects.
Çatalhöyük's spatial layout may be due to the close kin relations exhibited amongst the people. It can be seen, in the layout, that the people were "divided into two groups who lived on opposite sides of the town, separated by a gully." Furthermore, because no nearby towns were found from which marriage partners could be drawn, "this spatial separation must have marked two intermarrying kinship groups." This would help explain how a settlement so early on would become so large.
In the upper levels of the site, it becomes apparent that the people of Çatalhöyük were honing skills in agriculture and the domestication of animals. Female figurines have been found within bins used for storage of cereals, such as wheat and barley, and the figurines are presumed to be of a deity protecting the grain. Peas were also grown, and almonds, pistachios, and fruit were harvested from trees in the surrounding hills. Sheep were domesticated and evidence suggests the beginning of cattle domestication as well. However, hunting continued to be a major source of food for the community. Pottery and obsidian tools appear to have been major industries; obsidian tools were probably both used and also traded for items such as Mediterranean sea shells and flint from Syria. Noting the lack of hierarchy and economic inequality, historian and anti-capitalist author Murray Bookchin has argued that Çatalhöyük was an early example of anarcho-communism.
Conversely, a 2014 paper argues that the picture of Çatalhöyük is more complex and that while there seemed to have been an egalitarian distribution of cooking tools and some stone tools, unbroken quern-stones and storage units were more unevenly distributed. Private property existed but shared tools also existed. It was also suggested that Çatalhöyük was becoming less egalitarian, with greater inter-generational wealth transmission.
Museum
In 2023 a new state-of-the-art museum opened on the site, constructed by the Konya municipality. In October 2024 a bookshop and cafe were added to the site. Non-Turkish visitors are charged five euros per person for entry. There are numerous visitor-activated information kiosks, some of which provide information in English as well as Turkish. Full information on all aspects of the various discoveries is available in eight rooms, including an underground reconstruction of a typical dwelling of the people who lived there 90 centuries ago.
See also
Körtiktepe
Göbekli Tepe
Boncuklu Höyük
Cities of the ancient Near East
Cucuteni–Trypillian culture
Kamyana Mohyla
List of largest cities throughout history
List of Stone Age art
Matriarchy
Neolithic Revolution
Old Europe (archaeology)
Sacred bull
Venus figurines
References
Sources
Bailey, Douglass. Prehistoric Figurines: Representation and Corporeality in the Neolithic. New York: Routledge, 2005 (hardcover, ; paperback, ).
Balter, Michael. The Goddess and the Bull: Çatalhöyük: An Archaeological Journey to the Dawn of Civilization. New York: Free Press, 2004 (hardcover, ); Walnut Creek, CA: Left Coast Press, 2006 (paperback, ). A highly condensed version was published in The Smithsonian Magazine, May 2005.
Dural, Sadrettin. "Protecting Catalhoyuk: Memoir of an Archaeological Site Guard." Contributions by Ian Hodder. Translated by Duygu Camurcuoglu Cleere. Walnut Creek, CA: Left Coast Press, 2007. .
Hodder, Ian. "Women and Men at Çatalhöyük," Scientific American Magazine, January 2004 (update V15:1, 2005).
Hodder, I. (2014). "Çatalhöyük excavations: the 2000-2008 seasons.", British Institute at Ankara, Monumenta Archaeologica 29,
Hodder, Ian. Twenty-Five Years of Research at Çatalhöyük, Near Eastern Archaeology; Chicago, vol. 83, iss. 2, pp. 72–29, June 2020
Hodder, Ian. The Leopard's Tale: Revealing the Mysteries of Çatalhöyük. London; New York: Thames & Hudson, 2006 (hardcover, ). (The UK title of this work is Çatalhöyük: The Leopard's Tale.)
Hodder, Ian; Bogaard, Amy; Engel, Claudia; Pearson, Jessica; Wolfhagen, Jesse., "Spatial autocorrelation analysis and the social organisation of crop and herd management at Çatalhöyük", Anatolian Studies, London, vol. 72, pp. 1–15, 2022
Mallett, Marla, "The Goddess from Anatolia: An Updated View of the Catak Huyuk Controversy ," in Oriental Rug Review, Vol. XIII, No. 2 (December 1992/January 1993).
Mellaart, James. Çatal Hüyük: A Neolithic Town in Anatolia. London: Thames & Hudson, 1967; New York: McGraw-Hill Book Company, 1967. Online at archive.org
On the Surface: Çatalhöyük 1993–95, edited by Ian Hodder. Cambridge: McDonald Institute for Archaeological Research and British Institute of Archaeology at Ankara, 1996 ().
Todd, Ian A. Çatal Hüyük in Perspective. Menlo Park, CA: Cummings Pub. Co., 1976 (; ).
External links
What we learned from 25 Years of Research at Catalhoyuk - Ian Hodder - Oriental Institute lecture Dec 4, 2019
Çatalhöyük — Excavations of a Neolithic Anatolian Höyük, Çatalhöyük excavation official website
Çatalhöyük photos
The First Cities: Why Settle Down? The Mystery of Communities, by Michael Balter, Çatalhöyük excavation official biographer
Interview with Ian Hodder March 201 "Ian Hodder: Çatalhöyük, Religion & Templeton's 25%"
Populated places established in the 8th millennium BC
Populated places disestablished in the 7th millennium BC
1958 archaeological discoveries
Archaeological discoveries in Turkey
Archaeological museums in Turkey
Archaeological sites in Central Anatolia
Archaeological sites of prehistoric Anatolia
Buildings and structures in Konya Province
Chalcolithic sites of Asia
Former populated places in Turkey
Megasites
Museums in Konya Province
Neolithic settlements
Neolithic sites of Asia
Tells (archaeology) | Çatalhöyük | [
"Physics",
"Mathematics"
] | 3,458 | [
"Quantity",
"Megasites",
"Physical quantities",
"Size"
] |
5,770 | https://en.wikipedia.org/wiki/List%20of%20country%20calling%20codes | Country calling codes, country dial-in codes, international subscriber dialing (ISD) codes, or most commonly, telephone country codes are telephone number prefixes for reaching telephone subscribers in foreign countries or areas via international telecommunication networks. Country codes are defined by the International Telecommunication Union (ITU) in ITU-T standards E.123 and E.164. The prefixes enable international direct dialing (IDD).
Country codes constitute the international telephone numbering plan. They are used only when dialing a telephone number in a country or world region other than the caller's. Country codes are dialed before the national telephone number, but require at least one additional prefix, the international call prefix which is an exit code from the national numbering plan to the international one. In most countries, this prefix is 00, an ITU recommendation; it is 011 in the countries of the North American Numbering Plan while a minority of countries use other prefixes.
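A rough illustration (not part of the article; the prefix table below is a tiny, invented subset, and real numbering plans involve many more rules): converting a dialled string to international "+" form means stripping the caller's exit prefix and then matching the longest known country code.

```python
# A minimal sketch: normalise a dialled string to "+" form by stripping the
# caller's exit prefix, then find the country by longest-prefix match.
COUNTRY_CODES = {"1": "North American Numbering Plan", "20": "Egypt",
                 "30": "Greece", "31": "Netherlands", "353": "Ireland",
                 "358": "Finland", "7": "Russia/Kazakhstan"}

def to_plus_form(dialled: str, exit_prefix: str = "00") -> str:
    digits = "".join(ch for ch in dialled if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits
    if digits.startswith(exit_prefix):
        return "+" + digits[len(exit_prefix):]
    raise ValueError("not an international number")

def country_of(plus_number: str) -> str:
    digits = plus_number.lstrip("+")
    for length in (3, 2, 1):                     # country codes are 1-3 digits
        if digits[:length] in COUNTRY_CODES:
            return COUNTRY_CODES[digits[:length]]
    raise KeyError("unknown country code")

number = to_plus_form("00 353 1 234 5678")       # dialled from a "00" country
print(number, "->", country_of(number))          # +35312345678 -> Ireland
```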
Overview
The nine world zones are generally organized geographically, with exceptions for political and historical alignment.
Zone 1 uses an integrated numbering plan; four digits (1xxx) determine the area served in Canada, the United States and its territories, and much of the Caribbean.
Zone 2 uses two 2-digit codes (20, 27) and eight sets of 3-digit codes (21x–26x, 28x, 29x), mostly to serve Africa, but also Aruba, Faroe Islands, Greenland and British Indian Ocean Territory.
Zones 3 and 4 use sixteen 2-digit codes (30–34, 36, 39–41, 43–49) and four sets of 3-digit codes (35x, 37x, 38x, 42x) to serve Europe.
Zone 5 uses eight 2-digit codes (51–58) and two sets of 3-digit codes (50x, 59x) to serve South and Central America.
Zone 6 uses seven 2-digit codes (60–66) and three sets of 3-digit codes (67x–69x) to serve Southeast Asia and Oceania.
Zone 7 uses an integrated numbering plan; two digits (7x) determine the area served: Russia or Kazakhstan.
Zone 8 uses four 2-digit codes (81, 82, 84, 86) and four sets of 3-digit codes (80x, 85x, 87x, 88x) to serve East Asia, South Asia and special services. 83x and 89x are unallocated.
Zone 9 uses seven 2-digit codes (90–95, 98) and three sets of 3-digit codes (96x, 97x, 99x) to serve the Middle East, West Asia, Central Asia, parts of South Asia and Eastern Europe.
Ordered by world zone
World zones are organized principally, but only approximately, by geographic location. Exceptions exist for political and historical alignments.
Zone 1: North American Numbering Plan (NANP)
NANP members are assigned three-digit numbering plan area (NPA) codes under the common country prefix 1, shown in the format 1 (NPA).
1 North American Numbering Plan
1 – , including United States territories:
1 (340) –
1 (670) –
1 (671) –
1 (684) –
1 (787, 939) –
1 –
Caribbean nations, Dutch and British Overseas Territories:
1 (242) –
1 (246) –
1 (264) –
1 (268) –
1 (284) –
1 (345) –
1 (441) –
1 (473) –
1 (649) –
1 (658, 876) –
1 (664) –
1 (721) –
1 (758) –
1 (767) –
1 (784) –
1 (809, 829, 849) –
1 (868) –
1 (869) –
Zone 2: Mostly Africa
(but also Aruba, Faroe Islands, Greenland and British Indian Ocean Territory)
20 –
210 – unassigned
211 –
212 – (including Western Sahara)
213 –
214 – unassigned
215 – unassigned
216 –
217 – unassigned
218 –
219 – unassigned
220 –
221 –
222 –
223 –
224 –
225 –
226 –
227 –
228 –
229 –
230 –
231 –
232 –
233 –
234 –
235 –
236 –
237 –
238 –
239 –
240 –
241 –
242 –
243 –
244 –
245 –
246 –
247 –
248 –
249 –
250 –
251 –
252 – (including )
253 –
254 –
255 –
255 (24) – , in place of never-implemented 259
256 –
257 –
258 –
259 – unassigned (was intended for People's Republic of Zanzibar but never implemented – see 255 Tanzania)
260 –
261 –
262 –
262 (269,639) – (formerly at 269 Comoros)
263 –
264 – (formerly 27 (6x) as South West Africa)
265 –
266 –
267 –
268 –
269 – (formerly assigned to Mayotte, now at 262)
27 –
28x – unassigned (reserved for country code expansion)
290 –
290 (8) –
291 –
292 – unassigned
293 – unassigned
294 – unassigned
295 – unassigned (formerly assigned to San Marino, now at 378)
296 – unassigned
297 –
298 –
299 –
Zones 3–4: Europe
Some of the larger countries were assigned two-digit codes to compensate for their usually longer domestic numbers. Smaller countries were assigned three-digit codes, which has also been the general practice since the 1980s.
30 –
31 –
32 –
33 –
34 –
350 –
351 –
351 (291) – (landlines only)
351 (292) – (landlines only, Horta, Azores area)
351 (295) – (landlines only, Angra do Heroísmo area)
351 (296) – (landlines only, Ponta Delgada and São Miguel Island area)
352 –
353 –
354 –
355 –
356 –
357 – (including )
358 –
358 (18) –
359 –
36 – (formerly assigned to Turkey, now at 90)
37 – formerly assigned to East Germany until its reunification with West Germany, now part of 49 Germany
370 –
371 –
372 –
373 –
374 –
375 –
376 – (formerly 33 628)
377 – (formerly 33 93)
378 – (interchangeably with 39 0549; earlier was allocated 295 but never used)
379 – (assigned but uses 39 06698).
38 – formerly assigned to Yugoslavia until its break-up in 1991
380 –
381 –
382 –
383 –
384 – unassigned
385 –
386 –
387 –
388 – unassigned (formerly assigned to the European Telephony Numbering Space)
389 –
39 –
39 (0549) – (interchangeably with 378)
39 (06 698) – (assigned 379 but not in use)
40 –
41 –
41 (91) – Campione d'Italia, an Italian enclave. 91 is the prefix for the Swiss canton Ticino in which the enclave resides. Its phone system is fully integrated into the Swiss system.
42 – formerly assigned to Czechoslovakia, later to its breakup successors (CZ, SK) until 1997
420 –
421 –
422 – unassigned
423 – (formerly at 41 (75))
424 – unassigned
425 – unassigned
426 – unassigned
427 – unassigned
428 – unassigned
429 – unassigned
43 –
44 –
44 (1481) –
44 (1534) –
44 (1624) –
45 –
46 –
47 –
47 (79) –
48 –
49 –
Zone 5: South and Central Americas
500 –
500 –
501 –
502 –
503 –
504 –
505 –
506 –
507 –
508 –
509 –
51 –
52 –
53 –
54 –
55 –
56 –
57 –
58 –
590 – (including Saint Barthélemy, Saint Martin)
591 –
592 –
593 –
594 –
595 –
596 – (formerly assigned to Peru, now 51)
597 –
598 –
599 – Former , now grouped as follows:
599 3 –
599 4 –
599 5 – unassigned (formerly assigned to Sint Maarten, now included in NANP as 1 (721))
599 7 –
599 8 – unassigned (formerly assigned to Aruba, now at 297)
599 9 –
Zone 6: Southeast Asia and Oceania
60 –
61 – (see also 672 below)
61 (8 9162) –
61 (8 9164) –
62 –
63 –
64 –
64 –
65 –
66 –
670 – (formerly 62/39 during the Indonesian occupation; formerly assigned to Northern Mariana Islands, now part of NANP as 1 (670))
671 – unassigned (formerly assigned to Guam, now part of NANP as 1 (671))
672 – Australian External Territories (see also 61 Australia above); formerly assigned to Portuguese Timor (see 670)
672 (1) – Australian Antarctic Territory
672 (3) –
673 –
674 –
675 –
676 –
677 –
678 –
679 –
680 –
681 –
682 –
683 –
684 – unassigned (formerly assigned to American Samoa, now part of NANP as 1 (684))
685 –
686 –
687 –
688 –
689 –
690 –
691 –
692 –
693 – unassigned
694 – unassigned
695 – unassigned
696 – unassigned
697 – unassigned
698 – unassigned
699 – unassigned
Zone 7: Russia and neighboring regions
Formerly assigned to the Soviet Union until its dissolution in 1991.
7 (1–5, 8, 9) –
7 (840, 940) – (formerly 995 (44))
7 (850, 929) – (formerly 995 (34))
7 (6, 7) – (reserved 997 but abandoned in November 2023)
Zone 8: East Asia, South Asia, and special services
800 – Universal International Freephone Service
801 – unassigned
802 – unassigned
803 – unassigned
804 – unassigned
805 – unassigned
806 – unassigned
807 – unassigned
808 – Universal International Shared Cost Numbers
809 – unassigned
81 –
82 –
83x – unassigned (reserved for country code expansion)
84 –
850 –
851 – unassigned
852 –
853 –
854 – unassigned
855 –
856 –
857 – unassigned (formerly assigned to ANAC satellite service)
858 – unassigned (formerly assigned to ANAC satellite service)
859 – unassigned
86 –
870 – Global Mobile Satellite System (Inmarsat)
871 – unassigned (formerly assigned to Inmarsat Atlantic East, discontinued in 2008)
872 – unassigned (formerly assigned to Inmarsat Pacific, discontinued in 2008)
873 – unassigned (formerly assigned to Inmarsat Indian, discontinued in 2008)
874 – unassigned (formerly assigned to Inmarsat Atlantic West, discontinued in 2008)
875 – unassigned (reserved for future maritime mobile service)
876 – unassigned (reserved for future maritime mobile service)
877 – unassigned (reserved for future maritime mobile service)
878 – unassigned (formerly used for Universal Personal Telecommunications Service, discontinued in 2022)
879 – unassigned (reserved for national non-commercial purposes)
880 –
881 – Global Mobile Satellite System
882 – International Networks
883 – International Networks
884 – unassigned
885 – unassigned
886 –
887 – unassigned
888 – unassigned (formerly assigned to OCHA for Telecommunications for Disaster Relief service)
889 – unassigned
89x – unassigned (reserved for country code expansion)
Zone 9: West, Central, and South Asia, and part of Eastern Europe
90 –
90 (392) –
91 –
91 (191) – Jammu
91 (194) – Kashmir
92 –
92 (581) –
92 (582) –
93 –
94 –
95 –
960 –
961 –
962 –
963 –
964 –
965 –
966 –
967 –
968 –
969 – unassigned (formerly assigned to South Yemen until its unification with North Yemen, now part of 967 Yemen)
970 – (interchangeably with 972)
971 –
972 – (also , interchangeably with 970)
973 –
974 –
975 –
976 –
977 –
978 – unassigned (formerly assigned to Dubai, now part of 971 United Arab Emirates)
979 – Universal International Premium Rate Service (UIPRS); (formerly assigned to Abu Dhabi, now part of 971 United Arab Emirates)
98 –
990 – unassigned
991 – unassigned (formerly used for International Telecommunications Public Correspondence Service)
992 –
993 –
994 –
995 –
995 (34) – formerly (now 7 (850, 929))
995 (44) – formerly (now 7 (840, 940))
996 –
997 – (reserved but abandoned in November 2023; uses 7 (6xx, 7xx))
998 –
999 – unassigned (reserved for future global service)
Alphabetical order
Summary
This table lists in its first column the initial digits of the country code shared by each country in each row, which is arranged in columns for the last digit. When three-digit codes share a common leading pair, the shared prefix is marked by an arrow, ( ↙ ) pointing down and left to the three-digit codes. Unassigned codes are denoted by a dash (—). Countries are identified by ISO 3166-1 alpha-2 country codes; codes for non-geographic services are denoted by two asterisks (**).
Locations with no country code
In Antarctica, telecommunication services are provided by the parent country of each base:
Other places with no country codes in use, although a code may be reserved:
See also
List of mobile telephone prefixes by country
National conventions for writing telephone numbers
References
External links
Communication-related lists
International telecommunications
Lists of country codes
Telecommunications lists | List of country calling codes | [
"Mathematics"
] | 3,002 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
5,781 | https://en.wikipedia.org/wiki/Chinese%20numerals | Chinese numerals are words and characters used to denote numbers in written Chinese.
Today, speakers of Chinese languages use three written numeral systems: the system of Arabic numerals used worldwide, and two indigenous systems. The more familiar indigenous system is based on Chinese characters that correspond to numerals in the spoken language. These may be shared with other languages of the Chinese cultural sphere such as Korean, Japanese, and Vietnamese. Most people and institutions in China primarily use the Arabic or mixed Arabic-Chinese systems for convenience, with traditional Chinese numerals used in finance, mainly for writing amounts on cheques, banknotes, some ceremonial occasions, some boxes, and on commercials.
The other indigenous system consists of the Suzhou numerals, or huama, a positional system, the only surviving form of the rod numerals. These were once used by Chinese mathematicians, and later by merchants in Chinese markets, such as those in Hong Kong until the 1990s, but were gradually supplanted by Arabic numerals.
Basic counting in Chinese
The Chinese character numeral system consists of the Chinese characters used by the Chinese written language to write spoken numerals. Similar to spelling-out numbers in English (e.g., "one thousand nine hundred forty-five"), it is not an independent system per se. Since it reflects spoken language, it does not use the positional system as in Arabic numerals, in the same way that spelling out numbers in English does not.
Ordinary numerals
There are characters representing the numbers zero through nine, and other characters representing larger numbers such as tens, hundreds, thousands, ten thousands and hundred millions. There are two sets of characters for Chinese numerals: one for everyday writing, known as (), and one for use in commercial, accounting or financial contexts, known as ( or 'capital numbers'). The latter were developed by Wu Zetian () and were further refined by the Hongwu Emperor (). They arose because the characters used for writing numerals are geometrically simple, so simply using those numerals cannot prevent forgeries in the same way spelling numbers out in English would. A forger could easily change the everyday characters (30) to (5000) just by adding a few strokes. That would not be possible when writing using the financial characters (30) and (5000). They are also referred to as "banker's numerals" or "anti-fraud numerals". For the same reason, rod numerals were never used in commercial records.
Regional usage
Powers of 10
Large numbers
For numbers larger than 10,000, similarly to the long and short scales in the West, there have been four systems in ancient and modern usage. The original one, with unique names for all powers of ten up to the 14th, is ascribed to the Yellow Emperor in the 6th century book by Zhen Luan, . In modern Chinese, only the second system is used, in which the same ancient names are used, but each represents a myriad, times the previous:
In practice, this situation does not lead to ambiguity, with the exception of , which means 10^12 according to the system in common usage throughout the Chinese communities as well as in Japan and Korea, but has also been used for 10^6 in recent years (especially in mainland China for megabyte). To avoid problems arising from the ambiguity, the PRC government never uses this character in official documents, but uses or instead. Partly due to this, combinations of and are often used instead of the larger units of the traditional system as well, for example instead of . The ROC government in Taiwan uses to mean 10^12 in official documents.
Large numbers from Buddhism
Numerals beyond zǎi come from Buddhist texts in Sanskrit, but are mostly found in ancient texts. Some of the following words are still being used today, but may have transferred meanings.
Small numbers
The following are characters used to denote small order of magnitude in Chinese historically. With the introduction of SI units, some of them have been incorporated as SI prefixes, while the rest have fallen into disuse.
Small numbers from Buddhism
SI prefixes
In the People's Republic of China, the early translation for the SI prefixes in 1981 was different from those used today. The larger (, , , , ) and smaller Chinese numerals (, , , , ) were defined as translation for the SI prefixes as mega, giga, tera, peta, exa, micro, nano, pico, femto, atto, resulting in the creation of yet more values for each numeral.
The Republic of China (Taiwan) defined as the translation for mega and as the translation for tera. This translation is widely used in official documents, academic communities, informational industries, etc. However, the civil broadcasting industries sometimes use to represent "megahertz".
Today, the governments of both China and Taiwan use phonetic transliterations for the SI prefixes. However, the governments have each chosen different Chinese characters for certain prefixes. The following table lists the two different standards together with the early translation.
Reading and transcribing numbers
Whole numbers
Multiple-digit numbers are constructed using a multiplicative principle; first the digit itself (from 1 to 9), then the place (such as 10 or 100); then the next digit.
In Mandarin, the multiplier (liǎng) is often used rather than for all numbers 200 and greater with the "2" numeral (although as noted earlier this varies from dialect to dialect and person to person). Use of both or are acceptable for the number 200. When writing in the Cantonese dialect, is used to represent the "2" numeral for all numbers. In the southern Min dialect of Chaozhou (Teochew), (no6) is used to represent the "2" numeral in all numbers from 200 onwards. Thus:
For the numbers 11 through 19, the leading 'one' () is usually omitted. In some dialects, like Shanghainese, when there are only two significant digits in the number, the leading 'one' and the trailing zeroes are omitted. Sometimes, the one before "ten" in the middle of a number, such as 213, is omitted. Thus:
Notes:
Nothing is ever omitted in large and more complicated numbers such as this.
In certain older texts like the Protestant Bible, or in poetic usage, numbers such as 114 may be written as [100] [10] [4] ().
Outside of Taiwan, digits are sometimes grouped by myriads instead of thousands. Hence it is more convenient to think of numbers here as in groups of four, thus 1,234,567,890 is regrouped here as 12,3456,7890. Larger than a myriad, each number is therefore four zeroes longer than the one before it, thus 10000 × = . If one of the numbers is between 10 and 19, the leading 'one' is omitted as per the above point. Hence (numbers in parentheses indicate that the number has been written as one number rather than expanded):
In Taiwan, pure Arabic numerals are officially always and only grouped by thousands. Unofficially, they are often not grouped, particularly for numbers below 100,000. Mixed Arabic-Chinese numerals are often used in order to denote myriads. This is used both officially and unofficially, and come in a variety of styles:
Interior zeroes before the unit position (as in 1002) must be spelt explicitly. The reason for this is that trailing zeroes (as in 1200) are often omitted as shorthand, so ambiguity occurs. One zero is sufficient to resolve the ambiguity. Where the zero is before a digit other than the units digit, the explicit zero is not ambiguous and is therefore optional, but preferred. Thus:
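The construction rules above can be sketched in code. The following C++ fragment is a minimal, illustrative converter for the everyday characters and whole numbers below 10,000 only; the liǎng/èr distinction, the financial characters, and myriad grouping for larger numbers are deliberately left out, and the function name is an assumption rather than an established API:

// chinese_numeral.cpp — simplified illustration of the digit-then-place rules.
#include <iostream>
#include <string>

std::string to_chinese( int n )
{
    const std::string digit[] = { "零", "一", "二", "三", "四",
                                  "五", "六", "七", "八", "九" };
    const std::string place[] = { "", "十", "百", "千" };

    if ( n == 0 )
        return digit[0];

    std::string result;
    bool pending_zero = false;            // collapses runs of interior zeros into one 零
    for ( int p = 3; p >= 0; p-- )
    {
        int divisor = 1;
        for ( int i = 0; i < p; i++ )
            divisor *= 10;
        int d = ( n / divisor ) % 10;

        if ( d == 0 )
        {
            if ( !result.empty() )        // leading zeros are simply skipped;
                pending_zero = true;      // trailing zeros never get written
            continue;
        }
        if ( pending_zero )
        {
            result += digit[0];           // interior zero before the unit, as in 1002
            pending_zero = false;
        }
        // For 11 through 19 the leading "one" before "ten" is customarily omitted.
        if ( !( d == 1 && p == 1 && result.empty() ) )
            result += digit[d];
        result += place[p];
    }
    return result;
}

int main()
{
    std::cout << to_chinese( 15 )   << "\n";   // 十五
    std::cout << to_chinese( 213 )  << "\n";   // 二百一十三
    std::cout << to_chinese( 1002 ) << "\n";   // 一千零二
    std::cout << to_chinese( 1200 ) << "\n";   // 一千二百
    return 0;
}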
Fractional values
To construct a fraction, the denominator is written first, followed by , then the literary possessive particle , and lastly the numerator. This is the opposite of how fractions are read in English, which is numerator first. Each half of the fraction is written the same as a whole number. For example, to express "two thirds", the structure "three parts of-this two" is used. Mixed numbers are written with the whole-number part first, followed by , then the fractional part.
Percentages are constructed similarly, using as the denominator. (The number 100 is typically expressed as , like the English 'one hundred'. However, for percentages, is used on its own.)
Because percentages and other fractions are formulated the same, Chinese are more likely than not to express 10%, 20% etc. as 'parts of 10' (or 1/10, 2/10, etc. i.e. ; , ; , etc.) rather than "parts of 100" (or 10/100, 20/100, etc. i.e. ; , ; , etc.)
In Taiwan, the most common formation of percentages in the spoken language is the number per hundred followed by the word , a contraction of the Japanese ; , itself taken from 'percent'. Thus 25% is ; .
Decimal numbers are constructed by first writing the whole number part, then inserting a point (), and finally the fractional part. The fractional part is expressed using only the numbers for 0 to 9, similarly to English.
functions as a number and therefore requires a measure word. For example: .
Ordinal numbers
Ordinal numbers are formed by adding before the number.
The Heavenly Stems are a traditional Chinese ordinal system.
Negative numbers
Negative numbers are formed by adding before the number.
Usage
Chinese grammar requires the use of classifiers (measure words) when a numeral is used together with a noun to express a quantity. For example, "three people" is expressed as , "three ( particle) person", where / is a classifier. There exist many different classifiers, for use with different sets of nouns, although / is the most common, and may be used informally in place of other classifiers.
Chinese uses cardinal numbers in certain situations in which English would use ordinals. For example, (literally "three story/storey") means "third floor" ("second floor" in British ). Likewise, (literally "twenty-one century") is used for "21st century".
Numbers of years are commonly spoken as a sequence of digits, as in ("two zero zero one") for the year 2001. Names of months and days (in the Western system) are also expressed using numbers: ("one month") for January, etc.; and ("week one") for Monday, etc. There is only one exception: Sunday is , or informally , both literally "week day". When meaning "week", "" and "" are interchangeable. "" or "" means "day of worship". Chinese Catholics call Sunday "" , "Lord's day".
Full dates are usually written in the format 2001年1月20日 for January 20, 2001 (using "year", "month", and "day") – all the numbers are read as cardinals, not ordinals, with no leading zeroes, and the year is read as a sequence of digits. For brevity the , and may be dropped to give a date composed of just numbers. For example, "6-4" in Chinese is "six-four", short for "month six, day four", i.e. June Fourth, a common Chinese shorthand for the 1989 Tiananmen Square protests (because of the violence that occurred on June 4). For another example, 67 in Chinese is "sixty-seven", short for the year nineteen sixty-seven, a common Chinese shorthand for the Hong Kong 1967 leftist riots.
Counting rod and Suzhou numerals
In the same way that Roman numerals were standard in ancient and medieval Europe for mathematics and commerce, the Chinese formerly used the rod numerals, which is a positional system. The Suzhou numerals () system is a variation of the Southern Song rod numerals. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
Hand gestures
There is a common method of using one hand to signify the numbers one to ten. While the five digits on one hand can easily express the numbers one to five, six to ten have special signs that can be used in commerce or day-to-day communication.
Historical use of numerals in China
Most Chinese numerals of later periods were descendants of the Shang dynasty oracle numerals of the 14th century BC. The oracle bone script numerals were found on tortoise shell and animal bones. In early civilizations, the Shang were able to express any numbers, however large, with only nine symbols and a counting board, though it was still not positional.
Some of the bronze script numerals such as 1, 2, 3, 4, 10, 11, 12, and 13 became part of the system of rod numerals.
In this system, horizontal rod numbers are used for the tens, thousands, hundred thousands etc. It is written in Sunzi Suanjing that "one is vertical, ten is horizontal".
The counting rod numerals system has place value and decimal numerals for computation, and was used widely by Chinese merchants, mathematicians and astronomers from the Han dynasty to the 16th century.
In 690 AD, Wu Zetian promulgated Zetian characters, one of which was . The word is now used as a synonym for the number zero.
Alexander Wylie, Christian missionary to China, in 1853 already refuted the notion that "the Chinese numbers were written in words at length", and stated that in ancient China, calculation was carried out by means of counting rods, and "the written character is evidently a rude presentation of these". After being introduced to the rod numerals, he said "Having thus obtained a simple but effective system of figures, we find the Chinese in actual use of a method of notation depending on the theory of local value [i.e. place-value], several centuries before such theory was understood in Europe, and while yet the science of numbers had scarcely dawned among the Arabs."
During the Ming and Qing dynasties (after Arabic numerals were introduced into China), some Chinese mathematicians used Chinese numeral characters as positional system digits. After the Qing period, both the Chinese numeral characters and the Suzhou numerals were replaced by Arabic numerals in mathematical writings.
Cultural influences
Traditional Chinese numeric characters are also used in Japan and Korea and were used in Vietnam before the 20th century. In vertical text (that is, read top to bottom), using characters for numbers is the norm, while in horizontal text, Arabic numerals are most common. Chinese numeric characters are also used in much the same formal or decorative fashion that Roman numerals are in Western cultures. Chinese numerals may appear together with Arabic numbers on the same sign or document.
See also
Numbers in Chinese culture
Celestial stem
Notes
References
Numerals
Numerals
Chinese mathematics | Chinese numerals | [
"Mathematics"
] | 3,135 | [
"Numeral systems",
"Numerals"
] |
5,783 | https://en.wikipedia.org/wiki/Computer%20program | A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.
A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using a compiler written for the language. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within an interpreter written for the language.
If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.
If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.
Example computer program
The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:
10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END
Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.
History
Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.
Analytical Engine
In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine.
The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a store which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the store were transferred to the mill for processing. The engine was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together.
Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.
Universal Turing machine
In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation.
It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete.
ENIAC
The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied , and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
Stored-program computers
Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the constructions of the EDVAC and EDSAC computers in 1949.
The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most premium. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute.
IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile.
Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.
Very Large Scale Integration
A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.
Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips.
Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.
The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.
Sac State 8008
The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex, 3-megabyte, hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.
x86 series
In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are:
Memory instructions to set and access numbers and strings in random-access memory.
Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers.
Floating point ALU instructions to perform the primary arithmetic operations on real numbers.
Call stack instructions to push and pop words needed to allocate memory and interface with functions.
Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data.
Changing programming environment
VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.
Programming paradigms and languages
Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should:
express ideas directly in the code.
express independent ideas independently.
express relationships among ideas directly in the code.
combine ideas freely.
combine ideas only where combinations make sense.
express simple ideas simply.
The programming style of a programming language to provide these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate:
procedural languages, functional languages, and logical languages.
different levels of data abstraction.
different levels of class hierarchy.
different levels of input datatypes, as in container types and generic programming.
Each of these programming styles has contributed to the synthesis of different programming languages.
A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax.
Keywords are reserved words to form declarations and statements.
Symbols are characters to form operations, assignments, control flow, and delimiters.
Identifiers are words created by programmers to form constants, variable names, structure names, and function names.
Syntax Rules are defined in the Backus–Naur form.
Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem. An algorithm is a sequence of simple instructions that solve a problem.
Generations of programming language
The evolution of programming languages began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language.
The first generation of programming language is machine language. Machine language requires the programmer to enter instructions using instruction numbers called machine code. For example, the ADD operation on the PDP-11 has instruction number 24576.
The second generation of programming language is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory.
The basic structure of an assembly language statement is a label, operation, operand, and comment.
Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses.
Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
Operands tell the assembler which data the operation will process.
Comments allow the programmer to articulate a narrative because the instructions alone are vague.
The key characteristic of an assembly language program is it forms a one-to-one mapping to its corresponding machine language target.
The third generation of programming language uses compilers and interpreters to execute computer programs. The distinguishing feature of a third generation language is its independence from particular hardware. Early languages include Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964). In 1973, the C programming language emerged as a high-level language that produced efficient machine language instructions. Whereas third-generation languages historically generated many machine instructions for each statement, C has statements that may generate a single machine instruction. Moreover, an optimizing compiler might overrule the programmer and produce fewer machine instructions than statements. Today, an entire paradigm of languages fill the imperative, third generation spectrum.
The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple statement can generate output records without having to understand how they are retrieved.
Imperative languages
Imperative languages specify a sequential algorithm using declarations, expressions, and statements:
A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer;
An expression yields a value – for example: 2 + 2 yields 4
A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something();
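The same three building blocks can be sketched in C++ (this fragment is illustrative only; do_something() is a placeholder name, not drawn from the examples above):

// imperative_blocks.cpp — declaration, expression, and statements.
#include <iostream>

void do_something()
{
    std::cout << "x is 4\n";
}

int main()
{
    int x;             // declaration: introduces the variable x with datatype int
    x = 2 + 2;         // the expression 2 + 2 yields 4; the statement assigns it to x
    if ( x == 4 )      // a statement using the value of x to alter control flow
        do_something();
    return 0;
}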
Fortran
FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported:
arrays.
subroutines.
"do" loops.
It succeeded because:
programming and debugging costs were below computer running costs.
it was supported by IBM.
applications at the time were scientific.
However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:
records.
pointers to arrays.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.
COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years until 1974. The 1990s version did make consequential changes, like object-oriented programming.
Algol
ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like:
block structure, where variables were local to their block.
arrays with variable bounds.
"for" loops.
functions.
recursion.
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java.
Basic
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.
Basic pioneered the interactive session. It offered operating system commands within its environment:
The 'new' command created an empty slate.
Statements evaluated immediately.
Statements could be programmed by preceding them with line numbers.
The 'list' command displayed the program.
The 'run' command executed the program.
However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.
C
C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like:
inline assembler.
arithmetic on pointers.
pointers to functions.
bit operations.
freely combining complex operators.
C allows the programmer to control which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.
The global and static data region is located just above the program region. (The program region is technically called the text region. It is where machine instructions are stored.)
The global and static data region is technically two regions. One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (BSS) segment, where variables declared without default values are stored.
Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process.
The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code.
On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parenthesis of a function definition. Parameters provide an interface to the function.
Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){static int counter = 0; counter++; return counter;}
The stack region is a contiguous block of memory located near the top memory address. Variables placed in the stack are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction.
Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block.
The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. An out of memory error occurs when the heap pointer and the stack pointer meet.
C provides the malloc() library function to allocate heap memory. Populating the heap with data is an additional copy function. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
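A minimal sketch tying these regions together; the variable names are illustrative assumptions, and the fragment is written as C-style C++ so it matches the other examples in this article:

// memory_regions.cpp — where each kind of variable is stored.
#include <cstdio>
#include <cstdlib>

int global_count = 7;        // initialized data segment of the global and static region
int global_unset;            // uninitialized (BSS) segment; defaults to zero

int increment_counter( void )
{
    static int counter = 0;  // also stored in the global and static region,
    counter++;               // but visible only inside this function
    return counter;
}

int main( void )
{
    int local = 3;           // automatic variable: stored on the stack and
                             // loses its scope when main() returns

    // Heap memory: malloc() returns a pointer to a block allocated at runtime.
    int *heap_value = (int *) malloc( sizeof( int ) );
    *heap_value = global_count + local + increment_counter();

    printf( "%d\n", *heap_value );   // prints 11

    free( heap_value );      // heap blocks must be released explicitly
    return 0;
}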
C++
In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract data types. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.
In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called an object.
Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.
Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s.
C++ (1985) was originally called "C with Classes". It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.
An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:
// grade.h
// -------
// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
// This is the constructor operation.
// ----------------------------------
GRADE ( const char letter );
// This is a class variable.
// -------------------------
char letter;
// This is a member operation.
// ---------------------------
int grade_numeric( const char letter );
// This is a class variable.
// -------------------------
int numeric;
};
#endif
A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement.
A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:
// grade.cpp
// ---------
#include "grade.h"
GRADE::GRADE( const char letter )
{
// Reference the object using the keyword 'this'.
// ----------------------------------------------
this->letter = letter;
// This is Temporal Cohesion
// -------------------------
this->numeric = grade_numeric( letter );
}
int GRADE::grade_numeric( const char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
Here is a C++ header file for the PERSON class in a simple school application:
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
PERSON ( const char *name );
const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:
// person.cpp
// ----------
#include "person.h"
PERSON::PERSON ( const char *name )
{
this->name = name;
}
Here is a C++ header file for the STUDENT class in a simple school application:
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON{
public:
STUDENT ( const char *name );
GRADE *grade;
};
#endif
Here is a C++ source file for the STUDENT class in a simple school application:
// student.cpp
// -----------
#include "student.h"
#include "person.h"
STUDENT::STUDENT ( const char *name ):
// Execute the constructor of the PERSON superclass.
// -------------------------------------------------
PERSON( name )
{
// Nothing else to do.
// -------------------
}
Here is a driver program for demonstration:
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"
int main( void )
{
STUDENT *student = new STUDENT( "The Student" );
student->grade = new GRADE( 'a' );
std::cout
// Notice student inherits PERSON's name
<< student->name
<< ": Numeric grade = "
<< student->grade->numeric
<< "\n";
return 0;
}
Here is a makefile to compile everything:
# makefile
# --------
all: student_dvr
clean:
rm student_dvr *.o
student_dvr: student_dvr.cpp grade.o student.o person.o
c++ student_dvr.cpp grade.o student.o person.o -o student_dvr
grade.o: grade.cpp grade.h
c++ -c grade.cpp
student.o: student.cpp student.h
c++ -c student.cpp
person.o: person.cpp person.h
c++ -c person.cpp
Declarative languages
Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.
The principle behind a functional language is to use lambda calculus as a guide for a well-defined semantics. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function:
times_10(x) = 10 * x
The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:
times_10(2) = 20
A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.
Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what.
A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet:
function max( a, b ){/* code omitted */}
function min( a, b ){/* code omitted */}
function range( a, b, c ) {
return max( a, max( b, c ) ) - min( a, min( b, c ) );
}
The primitives are max() and min(). The driver function is range(). Executing:
put( range( 10, 4, 7) ); will output 6.
Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects have made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages.
Lisp
Lisp (1958) stands for "LISt Processor". It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends nicely for recursive functions. The syntax to build a tree is to enclose the space-separated elements within parenthesis. The following is a list of three elements. The first two elements are themselves lists of two elements:
((A B) (HELLO WORLD) 94)
Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x:
cons(head(x), tail(x))
One drawback of Lisp is when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parenthesis match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process.
Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible.
ML
ML (1973) stands for "Meta Language". ML checks to make sure only data of the same type are compared with one another. For example, the function times_10() can be declared with one input parameter (an integer) and an integer return type.
ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():
times_10 2
It returns "20 : int". (Both the results and the datatype are returned.)
Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used.
Prolog
Prolog (1972) stands for "PROgramming in LOGic". It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite (SLD) clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh.
The building blocks of a Prolog program are facts and rules. Here is a simple example:
cat(tom). % tom is a cat
mouse(jerry). % jerry is a mouse
animal(X) :- cat(X). % each cat is an animal
animal(X) :- mouse(X). % each mouse is an animal
big(X) :- cat(X). % each cat is big
small(X) :- mouse(X). % each mouse is small
eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y) :- big(X), small(Y). % each big animal eats each small animal
After all the facts and rules are entered, then a question can be asked:
Will Tom eat Jerry?
?- eat(tom,jerry).
true
The following example shows how Prolog will convert a letter grade to its numeric value:
numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- X \= 'A', X \= 'B', X \= 'C', X \= 'D', X \= 'F'.
grade('The Student', 'A').
?- grade('The Student', X), numeric_grade(X, Y).
X = 'A',
Y = 4
Here is a comprehensive example:
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
billows_fire(X) :-
is_a_dragon(X).
2) A creature billows fire if one of its parents billows fire:
billows_fire(X) :-
is_a_creature(X),
is_a_parent_of(Y,X),
billows_fire(Y).
3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
is_a_parent_of(X, Y):- is_the_mother_of(X, Y).
is_a_parent_of(X, Y):- is_the_father_of(X, Y).
4) A thing is a creature if the thing is a dragon:
is_a_creature(X) :-
is_a_dragon(X).
5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.
is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).
Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.
Rule (3) shows how functions are represented by using relations. Here, the mother and father relations play the role of functions, since every individual has only one mother and only one father.
Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.
Questions are answered using backward reasoning. Given the question:
?- billows_fire(X).
Prolog generates two answers:
X = norberta
X = puff
Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence.
Object-oriented programming
Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome.
Here is a C programming language header file for the GRADE abstract datatype in a simple school application:
/* grade.h */
/* ------- */
/* Used to allow multiple source files to include */
/* this header file without duplication errors. */
/* ---------------------------------------------- */
#ifndef GRADE_H
#define GRADE_H
typedef struct
{
char letter;
} GRADE;
/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );
int grade_numeric( char letter );
#endif
The grade_new() function performs the same algorithm as the C++ constructor operation.
Here is a C programming language source file for the GRADE abstract datatype in a simple school application:
/* grade.c */
/* ------- */
#include "grade.h"
GRADE *grade_new( char letter )
{
GRADE *grade;
/* Allocate heap memory */
/* -------------------- */
if ( ! ( grade = calloc( 1, sizeof ( GRADE ) ) ) )
{
/* Report the file, function, and line of the failure. */
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
grade->letter = letter;
return grade;
}
int grade_numeric( char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.
Here is a C programming language header file for the PERSON abstract datatype in a simple school application:
/* person.h */
/* -------- */
#ifndef PERSON_H
#define PERSON_H
typedef struct
{
char *name;
} PERSON;
/* Constructor */
/* ----------- */
PERSON *person_new( char *name );
#endif
Here is a C programming language source file for the PERSON abstract datatype in a simple school application:
/* person.c */
/* -------- */
#include "person.h"
PERSON *person_new( char *name )
{
PERSON *person;
if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) )
{
/* Report the file, function, and line of the failure. */
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
person->name = name;
return person;
}
Here is a C programming language header file for the STUDENT abstract datatype in a simple school application:
/* student.h */
/* --------- */
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
typedef struct
{
/* A STUDENT is a subset of PERSON. */
/* -------------------------------- */
PERSON *person;
GRADE *grade;
} STUDENT;
/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );
#endif
Here is a C programming language source file for the STUDENT abstract datatype in a simple school application:
/* student.c */
/* --------- */
#include "student.h"
#include "person.h"
STUDENT *student_new( char *name )
{
STUDENT *student;
if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) )
{
/* Report the file, function, and line of the failure. */
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__func__,
__LINE__ );
exit( 1 );
}
/* Execute the constructor of the PERSON superclass. */
/* ------------------------------------------------- */
student->person = person_new( name );
return student;
}
Here is a driver program for demonstration:
/* student_dvr.c */
/* ------------- */
#include <stdio.h>
#include "student.h"
int main( void )
{
STUDENT *student = student_new( "The Student" );
student->grade = grade_new( 'a' );
printf( "%s: Numeric grade = %d\n",
/* Whereas a subset exists, inheritance does not. */
student->person->name,
/* Functional programming is executing functions just-in-time (JIT) */
grade_numeric( student->grade->letter ) );
return 0;
}
Here is a makefile to compile everything:
# makefile
# --------
all: student_dvr
clean:
rm student_dvr *.o
student_dvr: student_dvr.c grade.o student.o person.o
gcc student_dvr.c grade.o student.o person.o -o student_dvr
grade.o: grade.c grade.h
gcc -c grade.c
student.o: student.c student.h
gcc -c student.c
person.o: person.c person.h
gcc -c person.c
The formal strategy to build object-oriented objects is to:
Identify the objects. Most likely these will be nouns.
Identify each object's attributes. What helps to describe the object?
Identify each object's actions. Most likely these will be verbs.
Identify the relationships from object to object. Most likely these will be verbs.
For example:
A person is a human identified by a name.
A grade is an achievement identified by a letter.
A student is a person who earns a grade.
Syntax and semantics
The syntax of a computer program is a list of production rules which form its grammar. A programming language's grammar correctly places its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a production rule may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different.
The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing:
a sentence is made up of a noun-phrase followed by a verb-phrase;
a noun-phrase is made up of an article followed by an adjective followed by a noun;
a verb-phrase is made up of a verb followed by a noun-phrase;
an article is 'the';
an adjective is 'big' or
an adjective is 'small';
a noun is 'cat' or
a noun is 'mouse';
a verb is 'eats';
The grammatical category names (sentence, noun-phrase, verb-phrase, article, adjective, noun, and verb) are known as non-terminals. The words in 'single quotes' are known as terminals.
From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is:
sentence
noun-phrase verb-phrase
article adjective noun verb-phrase
the adjective noun verb-phrase
the big noun verb-phrase
the big cat verb-phrase
the big cat verb noun-phrase
the big cat eats noun-phrase
the big cat eats article adjective noun
the big cat eats the adjective noun
the big cat eats the small noun
the big cat eats the small mouse
However, another combination of replacements results in a sentence that is syntactically valid but semantically nonsensical:
the small mouse eats the big cat
Therefore, a semantic rule is necessary to correctly describe the meaning of the eat activity.
One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes:
::= which translates to is made up of a[n] when a non-terminal is to its right. It translates to is when a terminal is to its right.
| which translates to or.
< and > which surround non-terminals.
Using BNF, a subset of the English language can have this production rule listing:
<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
<article> ::= the
<adjective> ::= big | small
<noun> ::= cat | mouse
<verb> ::= eats
Using BNF, a signed-integer has the production rule listing:
<signed-integer> ::= <sign><integer>
<sign> ::= + | -
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Notice the recursive production rule:
<integer> ::= <digit> | <digit><integer>
This allows for an infinite number of possibilities. Therefore, a semantic rule is necessary to describe any limitation on the number of digits.
Notice the leading zero possibility in the production rules:
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Therefore, a semantic rule is necessary to describe that leading zeros need to be ignored.
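To show how a BNF listing maps onto code, here is a minimal recursive-descent recognizer for the signed-integer grammar above, sketched in C; the function names mirror the non-terminals, and everything else is an illustrative assumption:
#include <ctype.h>
#include <stdio.h>
static const char *p; /* current position in the input */
/* <digit> ::= 0 | 1 | ... | 9 */
static int digit( void )
{
    if ( isdigit( (unsigned char) *p ) ) { p++; return 1; }
    return 0;
}
/* <integer> ::= <digit> | <digit><integer> */
static int integer( void )
{
    if ( ! digit() ) return 0;
    while ( digit() ) { } /* the right recursion unrolled as a loop */
    return 1;
}
/* <sign> ::= + | - */
static int sign( void )
{
    if ( *p == '+' || *p == '-' ) { p++; return 1; }
    return 0;
}
/* <signed-integer> ::= <sign><integer> */
static int signed_integer( const char *s )
{
    p = s;
    return sign() && integer() && *p == '\0';
}
int main( void )
{
    printf( "%d\n", signed_integer( "-42" ) ); /* prints 1 (accepted) */
    printf( "%d\n", signed_integer( "42" ) );  /* prints 0 (no sign)  */
    return 0;
}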
Formal methods available to describe semantics include denotational semantics and axiomatic semantics.
Software engineering and computer programming
Software engineering is a variety of techniques to produce quality computer programs. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint.
Performance objectives
The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are:
The quality of the output. Is the output useful for decision-making?
The accuracy of the output. Does it reflect the true situation?
The format of the output. Is the output easily understood?
The speed of the output. Time-sensitive information is important when communicating with the customer in real time.
Cost objectives
Achieving performance objectives should be balanced with all of the costs, including:
Development costs.
Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system.
Hardware costs.
Operating costs.
Applying a systems development process mitigates the cost expressed by the axiom: the later in the process an error is detected, the more expensive it is to correct.
Waterfall model
The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other:
The investigation phase is to understand the underlying problem.
The analysis phase is to understand the possible solutions.
The design phase is to plan the best solution.
The implementation phase is to program the best solution.
The maintenance phase lasts throughout the life of the system. Changes to the system after it is deployed may be necessary. Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaption may be necessary to react to a changing environment.
Computer programmer
A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.
Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API).
Program modules
Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic:
The function of a module is what it does.
The context of a module is the set of elements it operates upon.
The logic of a module is how it performs the function.
The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.
The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgment of the relationship between a module's context and the elements it operates upon.
Cohesion
The levels of cohesion from worst to best are:
Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements."
Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ).
Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ...
Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record().
Communicational Cohesion: A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record().
Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level.
Functional Cohesion: a module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts.
Coupling
The levels of coupling from worst to best are:
Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb.
Common Coupling: A module has common coupling if it modifies a global variable.
Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition, a, b ). Instead, control should be on the makeup of the returned object.
Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level.
Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object.
Data flow analysis
Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.
The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.
Functional categories
Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.
Application software
Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.
Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.
The potential advantages of in-house software are features and reports may be developed exactly to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming.
The potential advantages of off-the-shelf software are upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.
Application service provider
One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.
Operating system
An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals.
In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.
The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.
Kernel program
The kernel's main purpose is to manage the limited resources of a computer:
The kernel program should perform process scheduling, which is also known as a context switch. The kernel creates a process control block when a computer program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency.
The kernel program should perform memory management.
When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion.
The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable.
To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file.
The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address.
The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory.
The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.
The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files.
The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system.
The kernel program should provide system level functions for programmers to use.
Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface (a sketch follows this list). The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing.
Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface.
Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.
The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals.
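As a sketch of the file-access item above, the following POSIX-style C program contrasts the relatively simple buffered interface with the lower-level descriptor interface it rests upon; the file name and variable names are invented for the example:
#include <stdio.h>   /* simple interface: fopen(), fgets(), fclose() */
#include <fcntl.h>   /* low-level interface: open() */
#include <unistd.h>  /* low-level interface: read(), close() */
int main( void )
{
    char line[128];
    char block[128];
    /* Simple interface: the library manages buffering and file position. */
    FILE *fp = fopen( "example.txt", "r" );
    if ( fp )
    {
        if ( fgets( line, sizeof line, fp ) )
            printf( "first line: %s", line );
        fclose( fp );
    }
    /* Low-level interface: the program handles a raw file descriptor. */
    int fd = open( "example.txt", O_RDONLY );
    if ( fd != -1 )
    {
        long n = (long) read( fd, block, sizeof block );
        if ( n > 0 )
            printf( "read %ld bytes\n", n );
        close( fd );
    }
    return 0;
}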
Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.
Utility program
A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated.
Utility programs include compression programs so data files are stored on less disk space. Compressed files also save time when they are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.
Microcode program
A microcode program is the bottom-level interpreter that controls the data path of software-driven computers.
(Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.
A logic gate is a tiny circuit, built from one or more transistors, that can return one of two signals: on or off.
Having one transistor forms the NOT gate.
Connecting two transistors in series forms the NAND gate.
Connecting two transistors in parallel forms the NOR gate.
Connecting a NOT gate to a NAND gate forms the AND gate.
Connecting a NOT gate to a NOR gate forms the OR gate.
These five gates form the building blocks of binary algebra—the digital logic functions of the computer.
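As a sketch in C, the constructions just listed can be mirrored with one-bit integer values, treating NOT, NAND, and NOR as the primitives; the function names are invented for the example:
#include <stdio.h>
/* Each signal is 0 (off) or 1 (on). */
static int not_gate ( int a )        { return a ? 0 : 1; }
static int nand_gate( int a, int b ) { return not_gate( a && b ); }
static int nor_gate ( int a, int b ) { return not_gate( a || b ); }
/* AND: a NOT gate connected to a NAND gate. */
static int and_gate( int a, int b ) { return not_gate( nand_gate( a, b ) ); }
/* OR: a NOT gate connected to a NOR gate. */
static int or_gate( int a, int b ) { return not_gate( nor_gate( a, b ) ); }
int main( void )
{
    printf( "%d %d\n", and_gate( 1, 1 ), or_gate( 0, 0 ) ); /* prints 1 0 */
    return 0;
}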
Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store.
These hardware-level instructions move data throughout the data path.
The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module.
The final step is to execute the instruction using the hardware module's set of gates.
Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.
Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.
Notes
References
Computer programming
Software | Computer program | [
"Technology",
"Engineering"
] | 14,264 | [
"Computer programming",
"Computers",
"Software engineering",
"Computer science",
"nan",
"Software"
] |
5,826 | https://en.wikipedia.org/wiki/Complex%20number | In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted , called the imaginary unit and satisfying the equation ; every complex number can be expressed in the form , where and are real numbers. Because no real number satisfies the above equation, was called an imaginary number by René Descartes. For the complex number is called the , and is called the . The set of complex numbers is denoted by either of the symbols or . Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world.
Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation
has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions and .
Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. Because of these properties, , and which form is written depends upon convention and style considerations.
The complex numbers also form a real vector space of dimension two, with as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis.
The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two.
Definition and basic operations
A complex number is an expression of the form , where and are real numbers, and is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, is a complex number.
For a complex number , the real number is called its real part , and the real number (not the complex number ) is its imaginary part. The real part of a complex number is denoted , , or ; the imaginary part is , , or : for example,, .
A complex number can be identified with the ordered pair of real numbers , which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the complex plane or Argand diagram. The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards.
A real number can be regarded as a complex number , whose imaginary part is 0. A purely imaginary number is a complex number , whose real part is zero. It is common to write , , and ; for example, .
The set of all complex numbers is denoted by (blackboard bold) or (upright bold).
In some disciplines such as electromagnetism and electrical engineering, is used instead of , as frequently represents electric current, and complex numbers are written as or .
Addition and subtraction
Two complex numbers and are added by separately adding their real and imaginary parts. That is to say:
Similarly, subtraction can be performed as
The addition can be geometrically visualized as follows: the sum of two complex numbers and , interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices , and the points of the arrows labeled and (provided that they are not on a line). Equivalently, calling these points , , respectively and the fourth point of the parallelogram the triangles and are congruent.
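In symbols, writing the two numbers as a + bi and c + di, the componentwise rules for addition and subtraction are:
(a + bi) + (c + di) = (a + c) + (b + d)i
(a + bi) - (c + di) = (a - c) + (b - d)i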
Multiplication
The product of two complex numbers is computed as follows:
For example,
In particular, this includes as a special case the fundamental formula
This formula distinguishes the complex number i from any real number, since the square of any (negative or positive) real number is always a non-negative real number.
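In symbols, the product rule and its special case read:
(a + bi)(c + di) = (ac - bd) + (ad + bc)i
i^2 = -1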
With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property, the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a field, the same way as the rational or real numbers do.
Complex conjugate, absolute value, argument and division
The complex conjugate of the complex number is defined as
It is also denoted by some authors by . Geometrically, is the "reflection" of about the real axis. Conjugating twice gives the original complex number: A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations of addition, subtraction, multiplication and division.
For any complex number , the product
is a non-negative real number. This allows one to define the absolute value (or modulus or magnitude) of z to be the square root
By Pythagoras' theorem, is the distance from the origin to the point representing the complex number z in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers z such that . If is a real number, then : its absolute value as a complex number and as a real number are equal.
Using the conjugate, the reciprocal of a nonzero complex number can be computed to be
More generally, the division of an arbitrary complex number by a non-zero complex number equals
This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression may be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator.
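In symbols, for z = a + bi and a nonzero w = c + di, the conjugate, absolute value, reciprocal, and quotient are:
\bar{z} = a - bi
|z| = \sqrt{a^2 + b^2} = \sqrt{z \bar{z}}
\frac{1}{w} = \frac{\bar{w}}{|w|^2}
\frac{z}{w} = \frac{z \bar{w}}{|w|^2} = \frac{(ac + bd) + (bc - ad)i}{c^2 + d^2}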
The argument of (sometimes called the "phase" ) is the angle of the radius with the positive real axis, and is written as , expressed in radians in this article. The angle is defined only up to adding integer multiples of , since a rotation by (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval , which is referred to as the principal value.
The argument can be computed from the rectangular form by means of the arctan (inverse tangent) function.
Polar form
For any complex number z, with absolute value and argument , the equation
holds. This identity is referred to as the polar form of z. It is sometimes abbreviated as .
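In symbols, with r = |z| and \varphi = \arg z, the polar form is:
z = r(\cos \varphi + i \sin \varphi)
which is commonly abbreviated as r \operatorname{cis} \varphi.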
In electronics, one represents a phasor with amplitude and phase in angle notation:
If two complex numbers are given in polar form, i.e., and , the product and division can be computed as
(These are a consequence of the trigonometric identities for the sine and cosine function.)
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. The picture at the right illustrates the multiplication of
Because the real and imaginary part of are equal, the argument of that number is 45 degrees, or (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula
holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of :
Powers and roots
The n-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product:
For example, the first few powers of the imaginary unit i are .
The th roots of a complex number are given by
for . (Here is the usual (positive) th root of the positive real number .) Because sine and cosine are periodic, other integer values of do not give other values. For any , there are, in particular, n distinct complex n-th roots. For example, there are 4 fourth roots of 1, namely
In general there is no natural way of distinguishing one particular complex th root of a complex number. (This is in contrast to the roots of a positive real number x, which has a unique positive real n-th root, which is therefore commonly referred to as the n-th root of x.) One refers to this situation by saying that the th root is a -valued function of .
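In symbols, de Moivre's formula and the n-th root formula read:
(\cos \varphi + i \sin \varphi)^n = \cos n\varphi + i \sin n\varphi
\sqrt[n]{z} = \sqrt[n]{r} \left( \cos \frac{\varphi + 2k\pi}{n} + i \sin \frac{\varphi + 2k\pi}{n} \right), \quad k = 0, 1, \dots, n - 1
In particular, the four fourth roots of 1 are 1, i, -1, and -i.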
Fundamental theorem of algebra
The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) , the equation
has at least one complex solution z, provided that at least one of the higher coefficients is nonzero. This property does not hold for the field of rational numbers (the polynomial does not have a rational root, because is not a rational number) nor the real numbers (the polynomial does not have a real root, because the square of is positive for any real number ).
Because of this fact, is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below.
There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.
History
The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called casus irreducibilis ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his Ars Magna, though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless". Cardano did use imaginary numbers, but described using them as "mental torture." This was prior to the use of the graphical complex plane. Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless.
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.
Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term in his calculations, which today would simplify to . Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced it by its positive
The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct. However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cubic roots for nonzero complex numbers. Rafael Bombelli was the first to address explicitly these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic, trying to resolve these issues.
The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature:
A further source of confusion was that the equation seemed to be capriciously inconsistent with the algebraic identity , which is valid for non-negative real numbers and , and which was also used in complex number calculations with one of , positive and the other negative. The incorrect use of this identity in the case when both and are negative, and the related identity , even bedeviled Leonhard Euler. This difficulty eventually led to the convention of using the special symbol in place of to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the following de Moivre's formula:
In 1748, Euler went further and obtained Euler's formula of complex analysis:
by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.
The idea of a complex number as a point in the complex plane (above) was first described by Danish–Norwegian mathematician Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's A Treatise of Algebra.
Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology:
If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness.
In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis.
The English mathematician G.H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way" although mathematicians such as Norwegian Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.
Augustin-Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.
The common terms used in the theory are chiefly due to the founders. Argand called the direction factor, and the modulus; Cauchy (1821) called the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used for , introduced the term complex number for , and called the norm. The expression direction coefficient, often used for , is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.
Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others. Important work (including a systematization) in complex multivariate calculus was started at the beginning of the 20th century. Important results have been achieved by Wilhelm Wirtinger in 1927.
Abstract algebraic aspects
While the above low-level definitions, including addition and multiplication, accurately describe the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately.
Construction as a quotient field
One approach to is via polynomials, i.e., expressions of the form
where the coefficients are real numbers. The set of all such polynomials is denoted by . Since sums and products of polynomials are again polynomials, this set forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial p, one may assign the complex number , i.e., the value obtained by setting . This defines a function
This function is surjective since every complex number can be obtained in such a way: the evaluation of a linear polynomial at is . However, the evaluation of the polynomial at i is 0. This polynomial is irreducible, i.e., it cannot be written as a product of two linear polynomials. Basic facts of abstract algebra then imply that the kernel of the above map is an ideal generated by this polynomial, and that the quotient by this ideal is a field, and that there is an isomorphism
between the quotient ring and . Some authors take this as the definition of .
Accepting that is algebraically closed, because it is an algebraic extension of in this approach, is therefore the algebraic closure of
Matrix representation of complex numbers
Complex numbers can also be represented by matrices that have the form
Here the entries and are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of matrices.
A simple computation shows that the map
is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix.
The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector corresponds to the multiplication of by . In particular, if the determinant is , there is a real number such that the matrix has the form
In this case, the action of the matrix on vectors and the multiplication by the complex number are both the rotation of the angle .
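In symbols, the matrix corresponding to a + bi and, when the determinant is 1, its rotation form are:
\begin{pmatrix} a & -b \\ b & a \end{pmatrix}
\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}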
Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example).
Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.
Convergence
The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, , endowed with the metric
is a complete metric space, which notably includes the triangle inequality
for any two complex numbers and .
Complex exponential
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function , also written , is defined as the infinite series, which can be shown to converge for any z:
For example, is Euler's number . Euler's formula states:
for any real number . This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity
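In symbols, Euler's formula and Euler's identity are:
e^{i\varphi} = \cos \varphi + i \sin \varphi
e^{i\pi} + 1 = 0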
Complex logarithm
For any positive real number t, there is a unique real number x such that . This leads to the definition of the natural logarithm as the inverse
of the exponential function. The situation is different for complex numbers, since
by the functional equation and Euler's identity.
For example, , so both and are possible values for the complex logarithm of .
In general, given any non-zero complex number w, any number z solving the equation
is called a complex logarithm of , denoted . It can be shown that these numbers satisfy
where is the argument defined above, and the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of , log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval . This leads to the complex logarithm being a bijective function taking values in the strip (that is denoted in the above illustration)
If is not a non-positive real number (a positive or a non-real number), the resulting principal value of the complex logarithm is obtained with . It is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number , where the principal value is .
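In symbols, the complex logarithms of a nonzero w are:
\log w = \ln |w| + i(\arg w + 2k\pi), \quad k \in \mathbb{Z}
with the principal value obtained by taking \arg w in the interval (-\pi, \pi].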
Complex exponentiation is defined as
and is multi-valued, except when is an integer. For , for some natural number , this recovers the non-uniqueness of th roots mentioned above. If is real (and an arbitrary complex number), one has a preferred choice of , the real logarithm, which can be used to define a preferred exponential function.
Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
Complex sine and cosine
The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation.
Holomorphic functions
A function → is called holomorphic or complex differentiable at a point if the limit
exists (in which case it is denoted by ). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. Loosely speaking, the freedom of approaching in different directions imposes a much stronger condition than being (real) differentiable. For example, the function
is differentiable as a function , but is not complex differentiable.
A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as
Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions and agree if they agree on an arbitrarily small open subset of . Meromorphic functions, functions that can locally be written as with a holomorphic function , still share some of the features of holomorphic functions. Other functions have essential singularities, such as at .
Applications
Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below.
Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when applying the maximum power transfer theorem.
Geometry
Shapes
Three non-collinear points u, v, w in the plane determine the shape of the triangle (u, v, w). Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as a quotient of differences of the vertices, such as (u − w)/(u − v).
The shape of a triangle remains the same when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape, and describing similarity. Thus each triangle is in a similarity class of triangles with the same shape.
Fractal geometry
The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined by plotting every complex number c for which the sequence obtained by iterating z_{n+1} = z_n² + c, starting from z₀ = 0, does not diverge when iterated infinitely. Similarly, Julia sets follow the same rule, except that c remains constant while the starting value varies.
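A minimal Python sketch of the membership test just described (the iteration cap of 100 and the escape radius 2 follow the usual convention; the sample points are arbitrary):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if the orbit of z -> z*z + c (starting at 0) stays bounded for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is guaranteed to escape to infinity
            return False
    return True

print(in_mandelbrot(-1 + 0j))   # True: c = -1 lies in the Mandelbrot set
print(in_mandelbrot(1 + 0j))    # False: the orbit for c = 1 diverges
```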
Triangles
Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the three sides of the triangle at their midpoints. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as z_A, z_B, and z_C. Write the cubic equation (z − z_A)(z − z_B)(z − z_C) = 0, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
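A short numerical sketch of this recipe using NumPy (the three vertex values are arbitrary illustrative choices):

```python
import numpy as np

# triangle vertices as complex numbers (illustrative values)
z_a, z_b, z_c = 0 + 0j, 4 + 0j, 1 + 3j

cubic = np.poly([z_a, z_b, z_c])   # coefficients of (z - z_a)(z - z_b)(z - z_c)
quadratic = np.polyder(cubic)      # its (quadratic) derivative
foci = np.roots(quadratic)         # Marden's theorem: these are the foci of the Steiner inellipse
print(foci)
```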
Algebraic number theory
As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in ℂ. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to the algebraic closure of ℚ, which also contains all algebraic numbers, ℂ has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to number fields containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem.
Another example is the Gaussian integers; that is, numbers of the form a + bi, where a and b are integers, which can be used to classify sums of squares.
Analytic number theory
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function is related to the distribution of prime numbers.
Improper integrals
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.
Dynamic equations
In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(n) = r^n.
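As a small illustration (the differential equation y'' + 2y' + 5y = 0 is an arbitrary example, not one taken from the text), the complex roots of the characteristic polynomial determine the oscillatory base functions:

```python
import numpy as np

# characteristic polynomial of y'' + 2y' + 5y = 0 is r**2 + 2*r + 5
roots = np.roots([1, 2, 5])
print(roots)   # [-1.+2.j, -1.-2.j]
# the complex-conjugate pair -1 +/- 2i corresponds to the real base functions
# e^{-t} cos(2t) and e^{-t} sin(2t)
```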
Linear algebra
Since ℂ is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues; for example, rotation matrices (for rotations of the plane through angles other than 0° or 180°) leave no direction fixed, and therefore do not have any real eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of eigendecomposition, is a useful tool for computing matrix powers and matrix exponentials.
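A small NumPy check of the rotation example (the angle is an arbitrary illustrative choice): a planar rotation has no real eigenvalue, but its complex eigenvalues are e^{iθ} and e^{−iθ}.

```python
import numpy as np

theta = 0.7   # illustrative rotation angle (radians), not 0 or pi
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

eigenvalues, _ = np.linalg.eig(rotation)
print(eigenvalues)                               # cos(theta) +/- i*sin(theta)
print(np.exp(1j * theta), np.exp(-1j * theta))   # the same values written as e^{+/- i*theta}
```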
Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, Hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices.
In applied mathematics
Control theory
In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.
In the root locus method, it is important whether zeros and poles are in the left or right half planes, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are
in the right half plane, it will be unstable,
all in the left half plane, it will be stable,
on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
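A minimal Python sketch of this pole-location test (the transfer-function denominator is an arbitrary illustration, and the marginal-stability branch is simplified; it ignores repeated poles on the axis):

```python
import numpy as np

# denominator of an LTI transfer function, here s**3 + 2*s**2 + 3*s + 1 (illustrative)
denominator = [1, 2, 3, 1]
poles = np.roots(denominator)

if np.any(poles.real > 0):
    print("a pole lies in the right half plane: unstable")
elif np.all(poles.real < 0):
    print("all poles lie in the left half plane: stable")
else:
    print("poles on the imaginary axis: marginal stability")
```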
Signal analysis
Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value of the corresponding complex quantity is the amplitude and its argument is the phase.
If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form
x(t) = Re{X(t)}
and
X(t) = A e^{iωt},
where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above.
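A short NumPy sketch of how a single complex amplitude carries both pieces of information (the amplitude, phase, and frequency values are arbitrary illustrative choices):

```python
import numpy as np

amplitude, phase, omega = 2.0, np.pi / 3, 5.0   # illustrative values
A = amplitude * np.exp(1j * phase)              # complex amplitude encoding both at once

t = np.linspace(0.0, 1.0, 5)
signal = np.real(A * np.exp(1j * omega * t))    # the physical, real-valued signal
print(np.allclose(signal, amplitude * np.cos(omega * t + phase)))   # True
```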
This use is also extended into digital signal processing and digital image processing, which use digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.
Another example, relevant to the two side bands of amplitude modulation of AM radio, is:
cos((ω + α)t) + cos((ω − α)t) = 2 cos(αt) cos(ωt).
In physics
Electromagnetism and electrical engineering
In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current.
Because the voltage in an AC circuit is oscillating, it can be represented as
V(t) = V₀ e^{iωt} = V₀ (cos ωt + i sin ωt).
To obtain the measurable quantity, the real part is taken:
v(t) = Re V(t) = V₀ cos ωt.
The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t).
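A small illustration of phasor calculus in Python (the component values, frequency, and voltage are arbitrary illustrative choices, not taken from the text): the three impedances combine as ordinary complex numbers, and the resulting current phasor carries both an amplitude and a phase shift relative to the voltage.

```python
import cmath

omega = 2 * cmath.pi * 50      # angular frequency of a 50 Hz supply (illustrative)
R, L, C = 10.0, 0.1, 1e-4      # resistance, inductance, capacitance (illustrative)

Z = R + 1j * omega * L + 1 / (1j * omega * C)   # series RLC impedance
V = 230 + 0j                                    # voltage phasor (illustrative)
I = V / Z                                       # current phasor

print(abs(I))          # current amplitude
print(cmath.phase(I))  # phase of the current relative to the voltage
```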
Fluid dynamics
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.
Quantum mechanics
The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.
Relativity
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
Characterizations, generalizations and related notions
Algebraic characterization
The field ℂ has the following three properties:
First, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one).
Second, its transcendence degree over ℚ, the prime field of ℂ, is the cardinality of the continuum.
Third, it is algebraically closed (see above).
It can be shown that any field having these properties is isomorphic (as a field) to ℂ. For example, the algebraic closure of the field of p-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, ℂ is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that ℂ contains many proper subfields that are isomorphic to ℂ.
Characterization as a topological field
The preceding characterization of ℂ describes only the algebraic aspects of ℂ. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of ℂ as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. ℂ contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
P is closed under addition, multiplication and taking inverses.
If x and y are distinct elements of P, then either x − y or y − x is in P.
If S is any nonempty subset of P, then S + P = x + P for some x in ℂ.
Moreover, ℂ has a nontrivial involutive automorphism x ↦ x* (namely the complex conjugation), such that x·x* is in P for any nonzero x in ℂ.
Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y : p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology, F is isomorphic as a topological field to ℂ.
The only connected locally compact topological fields are ℝ and ℂ. This gives another characterization of ℂ as a topological field, because ℂ can be distinguished from ℝ by the fact that the nonzero complex numbers are connected, while the nonzero real numbers are not.
Other number systems
The process of extending the field ℝ of reals to ℂ is an instance of the Cayley–Dickson construction. Applying this construction iteratively to ℂ then yields the quaternions, the octonions, the sedenions, and the trigintaduonions. This construction turns out to diminish the structural properties of the involved number systems.
Unlike the reals, ℂ is not an ordered field, that is to say, it is not possible to define a relation z₁ < z₂ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so i² = −1 precludes the existence of an ordering on ℂ. Passing from ℂ to the quaternions loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are all normed division algebras over ℝ. By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure.
The Cayley–Dickson construction is closely related to the regular representation of ℂ, thought of as an ℝ-algebra (an ℝ-vector space with a multiplication), with respect to the basis (1, i). This means the following: the ℝ-linear map
z ↦ wz
for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is the one with first row (Re w, −Im w) and second row (Im w, Re w),
that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of ℂ in the 2 × 2 real matrices, it is not the only one. Any matrix J with first row (p, q) and second row (r, −p) such that p² + qr = −1
has the property that its square is the negative of the identity matrix: J² = −I. Then the set of matrices of the form aI + bJ, with a and b real,
is also isomorphic to the field ℂ, and gives an alternative complex structure on ℝ². This is generalized by the notion of a linear complex structure.
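Displayed explicitly, the matrix of multiplication by w = a + bi with respect to the basis (1, i), and the general real matrix J squaring to minus the identity, are

```latex
a + bi \;\longleftrightarrow\; \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
\qquad
J = \begin{pmatrix} p & q \\ r & -p \end{pmatrix}
\quad\text{with}\quad p^2 + qr = -1
\;\;\Longrightarrow\;\; J^2 = -I .
```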
Hypercomplex numbers also generalize ℝ, ℂ, the quaternions, and the octonions. For example, this notion contains the split-complex numbers, which are elements of the ring ℝ[x]/(x² − 1) (as opposed to ℝ[x]/(x² + 1) for complex numbers). In this ring, the equation a² = 1 has four solutions.
The field ℝ is the completion of ℚ, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on ℚ lead to the fields ℚ_p of p-adic numbers (for any prime number p), which are thereby analogous to ℝ. There are no other nontrivial ways of completing ℚ than ℝ and the ℚ_p, by Ostrowski's theorem. The algebraic closures of the ℚ_p still carry a norm, but (unlike ℂ) are not complete with respect to it. The completion of this algebraic closure turns out to be algebraically closed. By analogy, this field is called the field of p-adic complex numbers.
The fields ℝ and ℚ_p and their finite field extensions, including ℂ, are called local fields.
See also
Analytic continuation
Circular motion using complex numbers
Complex-base system
Complex coordinate space
Complex geometry
Geometry of numbers
Dual-complex number
Eisenstein integer
Geometric algebra (which includes the complex plane as the 2-dimensional spinor subspace )
Unit complex number
Notes
References
Historical references
— A gentle introduction to the history of complex numbers and the beginnings of complex analysis.
— An advanced perspective on the historical development of the concept of number.
Composition algebras | Complex number | [
"Mathematics"
] | 7,956 | [
"Mathematical objects",
"Numbers",
"Linear algebra",
"Complex numbers",
"Algebra"
] |
5,828 | https://en.wikipedia.org/wiki/Cryptozoology | Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson.
Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars studying cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism) noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims.
Terminology, history, and approach
As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals (French: Sur la piste des bêtes ignorées) in 1955, a landmark work among cryptozoologists that was followed by numerous other similar works. In addition, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). Heuvelmans himself traced cryptozoology to the work of Anthonie Cornelis Oudemans, who theorized that a large unidentified species of seal was responsible for sea serpent reports.
Cryptozoology is 'the study of hidden animals' (from Ancient Greek: κρυπτός, kryptós "hidden, secret"; Ancient Greek ζῷον, zōion "animal", and λόγος, logos, i.e. "knowledge, study"). The term dates from 1959 or before; Heuvelmans attributes the coinage of the term cryptozoology to Sanderson. Following cryptozoology, the term cryptid was coined in 1983 by cryptozoologist J. E. Wall in the summer issue of the International Society of Cryptozoology newsletter. According to Wall, "[It has been] suggested that new terms be coined to replace sensational and often misleading terms like 'monster'. My suggestion is 'cryptid', meaning a living thing having the quality of being hidden or unknown ... describing those creatures which are (or may be) subjects of cryptozoological investigation."
The Oxford English Dictionary defines the noun cryptid as "an animal whose existence or survival to the present day is disputed or unsubstantiated; any animal of interest to a cryptozoologist". While used by most cryptozoologists, the term cryptid is not used by academic zoologists. In a textbook aimed at undergraduates, academics Caleb W. Lack and Jacques Rousseau note that the subculture's focus on what it deems to be "cryptids" is a pseudoscientific extension of older belief in monsters and other similar entities from the folkloric record, yet with a "new, more scientific-sounding name: cryptids".
While biologists regularly identify new species, cryptozoologists often focus on creatures from the folkloric record. Most famously, these include the Loch Ness Monster, Champ, Bigfoot, the chupacabra, as well as other "imposing beasts that could be labeled as monsters". In their search for these entities, cryptozoologists may employ devices such as motion-sensitive cameras, night-vision equipment, and audio-recording equipment. While there have been attempts to codify cryptozoological approaches, unlike in biology, zoology, botany, and other academic disciplines, "there are no accepted, uniform, or successful methods for pursuing cryptids". Some scholars have identified precursors to modern cryptozoology in certain medieval approaches to the folkloric record, and the psychology behind the cryptozoology approach has been the subject of academic study.
Few cryptozoologists have a formal science education, and fewer still have a science background directly relevant to cryptozoology. Adherents often misrepresent the academic backgrounds of cryptozoologists. According to writer Daniel Loxton and paleontologist Donald Prothero, "[c]ryptozoologists have often promoted 'Professor Roy Mackal, PhD.' as one of their leading figures and one of the few with a legitimate doctorate in biology. What is rarely mentioned, however, is that he had no training that would qualify him to undertake competent research on exotic animals. This raises the specter of 'credential mongering', by which an individual or organization flaunts a person's graduate degree as proof of expertise, even though his or her training is not specifically relevant to the field under consideration." Besides Heuvelmans, Sanderson, and Mackal, other notable cryptozoologists with academic backgrounds include Grover Krantz, Karl Shuker, and Richard Greenwell.
Historically, notable cryptozoologists have often identified instances featuring "irrefutable evidence" (such as Sanderson and Krantz), only for the evidence to be revealed as the product of a hoax. This may occur during a closer examination by experts or upon confession of the hoaxer.
Expeditions
Cryptozoologists have often led unsuccessful expeditions to find evidence of cryptids. Bigfoot researcher René Dahinden led searches into caves to find evidence of sasquatch, as early sasquatch legends claimed they lived in rocky areas. Despite the failure of these searches, he spent years trying to find proof of bigfoot. Lensgrave Adam Christoffer Knuth led an expedition into Lake Tele in the Congo to find the Mokele-mbembe in 2018. While the expedition was a failure, they discovered a new species of green algae.
Young Earth creationism
A subset of cryptozoology promotes the pseudoscience of Young Earth creationism, rejecting conventional science in favor of a literal Biblical interpretation and promoting concepts such as "living dinosaurs". Science writer Sharon A. Hill observes that the Young Earth creationist segment of cryptozoology is "well-funded and able to conduct expeditions with a goal of finding a living dinosaur that they think would invalidate evolution".
Anthropologist Jeb J. Card says that "[c]reationists have embraced cryptozoology and some cryptozoological expeditions are funded by and conducted by creationists hoping to disprove evolution." In a 2013 interview, paleontologist Donald Prothero notes an uptick in creationist cryptozoologists. He observes that "[p]eople who actively search for Loch Ness monsters or Mokele Mbembe do it entirely as creationist ministers. They think that if they found a dinosaur in the Congo it would overturn all of evolution. It wouldn't. It would just be a late-occurring dinosaur, but that's their mistaken notion of evolution."
Citing a 2013 exhibit at the Petersburg, Kentucky-based Creation Museum, which claimed that dragons were once biological creatures who walked the earth alongside humanity and is broadly dedicated to Young Earth creationism, religious studies academic Justin Mullis notes that "[c]ryptozoology has a long and curious history with Young Earth Creationism, with this new exhibit being just one of the most recent examples".
Academic Paul Thomas analyzes the influence and connections between cryptozoology in his 2020 study of the Creation Museum and the creationist theme park Ark Encounter. Thomas comments that, "while the Creation Museum and the Ark Encounter are flirting with pseudoarchaeology, coquettishly whispering pseudoarchaeological rhetoric, they are each fully in bed with cryptozoology" and observes that "[y]oung-earth creationists and cryptozoologists make natural bed fellows. As with pseudoarchaeology, both young-earth creationists and cryptozoologists bristle at the rejection of mainstream secular science and lament a seeming conspiracy to prevent serious consideration of their claims."
Lack of critical media coverage
Media outlets have often uncritically disseminated information from cryptozoologist sources, including newspapers that repeat false claims made by cryptozoologists or television shows that feature cryptozoologists as monster hunters (such as the popular and purportedly nonfiction American television show MonsterQuest, which aired from 2007 to 2010). Media coverage of purported "cryptids" often fails to provide more likely explanations, further propagating claims made by cryptozoologists.
Reception and pseudoscience
There is a broad consensus among academics that cryptozoology is a pseudoscience. The subculture is regularly criticized for reliance on anecdotal information and because in the course of investigating animals that most scientists believe are unlikely to have existed, cryptozoologists do not follow the scientific method. No academic course of study nor university degree program grants the status of cryptozoologist and the subculture is primarily the domain of individuals without training in the natural sciences.
Anthropologist Jeb J. Card summarizes cryptozoology in a survey of pseudoscience and pseudoarchaeology:
Card notes that "cryptozoologists often show their disdain and even hatred for professional scientists, including those who enthusiastically participated in cryptozoology", which he traces back to Heuvelmans's early "rage against critics of cryptozoology". He finds parallels with cryptozoology and other pseudosciences, such as ghost hunting and ufology, and compares the approach of cryptozoologists to colonial big-game hunters, and to aspects of European imperialism. According to Card, "[m]ost cryptids are framed as the subject of indigenous legends typically collected in the heyday of comparative folklore, though such legends may be heavily modified or worse. Cryptozoology's complicated mix of sympathy, interest, and appropriation of indigenous culture (or non-indigenous construction of it) is also found in New Age circles and dubious "Indian burial grounds" and other legends [...] invoked in hauntings such as the "Amityville" hoax [...]".
In a 2011 foreword for The American Biology Teacher, then National Association of Biology Teachers president Dan Ward uses cryptozoology as an example of "technological pseudoscience" that may confuse students about the scientific method. Ward says that "Cryptozoology [...] is not valid science or even science at all. It is monster hunting." Historian of science Brian Regal includes an entry for cryptozoology in his Pseudoscience: A Critical Encyclopedia (2009). Regal says that "as an intellectual endeavor, cryptozoology has been studied as much as cryptozoologists have sought hidden animals".
In a 1992 issue of Folklore, folklorist Véronique Campion-Vincent says:
Campion-Vincent says that "four currents can be distinguished in the study of mysterious animal appearances": "Forteans" ("compiler[s] of anomalies" such as via publications like the Fortean Times), "occultists" (which she describes as related to "Forteans"), "folklorists", and "cryptozoologists". Regarding cryptozoologists, Campion-Vincent says that "this movement seems to deserve the appellation of parascience, like parapsychology: the same corpus is reviewed; many scientists participate, but for those who have an official status of university professor or researcher, the participation is a private hobby".
In her Encyclopedia of American Folklore, academic Linda Watts says that "folklore concerning unreal animals or beings, sometimes called monsters, is a popular field of inquiry" and describes cryptozoology as an example of "American narrative traditions" that "feature many monsters".
In his analysis of cryptozoology, folklorist Peter Dendle says that "cryptozoology devotees consciously position themselves in defiance of mainstream science" and that:
In a paper published in 2013, Dendle refers to cryptozoologists as "contemporary monster hunters" that "keep alive a sense of wonder in a world that has been very thoroughly charted, mapped, and tracked, and that is largely available for close scrutiny on Google Earth and satellite imaging" and that "on the whole the devotion of substantial resources for this pursuit betrays a lack of awareness of the basis for scholarly consensus (largely ignoring, for instance, evidence of evolutionary biology and the fossil record)."
According to historian Mike Dash, few scientists doubt there are thousands of unknown animals, particularly invertebrates, awaiting discovery; however, cryptozoologists are largely uninterested in researching and cataloging newly discovered species of ants or beetles, instead focusing their efforts towards "more elusive" creatures that have often defied decades of work aimed at confirming their existence.
Paleontologist George Gaylord Simpson (1984) lists cryptozoology among examples of human gullibility, along with creationism:
Paleontologist Donald Prothero (2007) cites cryptozoology as an example of pseudoscience and categorizes it, along with Holocaust denial and UFO abductions claims, as aspects of American culture that are "clearly baloney".
In Scientifical Americans: The Culture of Amateur Paranormal Researchers (2017), Hill surveys the field and discusses aspects of the subculture, noting internal attempts at creating more scientific approaches and the involvement of Young Earth creationists and a prevalence of hoaxes. She concludes that many cryptozoologists are "passionate and sincere in their belief that mystery animals exist. As such, they give deference to every report of a sighting, often without critical questioning. As with the ghost seekers, cryptozoologists are convinced that they will be the ones to solve the mystery and make history. With the lure of mystery and money undermining diligent and ethical research, the field of cryptozoology has serious credibility problems."
Organizations
There have been several organizations, of varying types, dedicated or related to cryptozoology. These include:
International Fortean Organization – a network of professional Fortean researchers and writers based in the United States
International Society of Cryptozoology – an American organisation that existed from 1982 to 1998
Kosmopoisk – a Russian organisation whose interests include cryptozoology and Ufology
The Centre for Fortean Zoology – an English organization centered around hunting for unknown animals
Museums and exhibitions
The zoological and cryptozoological collection and archive of Bernard Heuvelmans is held at the Musée Cantonal de Zoologie in Lausanne and consists of around "1,000 books, 25,000 files, 25,000 photographs, correspondence, and artifacts".
In 2006, the Bates College Museum of Art held the "Cryptozoology: Out of Time Place Scale" exhibition, which compared cryptozoological creatures with recently extinct animals like the thylacine and extant taxa like the coelacanth, once thought long extinct (living fossils). The following year, the American Museum of Natural History put on a mixed exhibition of imaginary and extinct animals, including the elephant bird Aepyornis maximus and the great ape Gigantopithecus blacki, under the name "Mythic Creatures: Dragons, Unicorns and Mermaids".
In 2003, cryptozoologist Loren Coleman opened the International Cryptozoology Museum in Portland, Maine. The museum houses more than 3000 cryptozoology related artifacts.
See also
Ethnozoology
Fearsome critters, fabulous beasts that were said to inhabit the timberlands of North America
Folk belief
List of cryptids, a list of cryptids notable within cryptozoology
List of cryptozoologists, a list of notable cryptozoologists
Scientific skepticism
References
Sources
Bartholomew, Robert E. 2012. The Untold Story of Champ: A Social History of America's Loch Ness Monster. State University of New York Press.
Campion-Vincent, Véronique. 1992. "Appearances of Beasts and Mystery-cats in France". Folklore 103.2 (1992): 160–183.
Card, Jeb J. 2016. "Steampunk Inquiry: A Comparative Vivisection of Discovery Pseudoscience" in Card, Jeb J. and Anderson, David S. Lost City, Found Pyramid: Understanding Alternative Archaeologies and Pseudoscientific Practices, pp. 24–25. University of Alabama Press.
Church, Jill M. (2009). Cryptozoology. In H. James Birx. Encyclopedia of Time: Science, Philosophy, Theology & Culture, Volume 1. SAGE Publications. pp. 251–252.
Dash, Mike. 2000. Borderlands: The Ultimate Exploration of the Unknown. Overlook Press.
Dendle, Peter. 2006. "Cryptozoology in the Medieval and Modern Worlds". Folklore, Vol. 117, No. 2 (Aug., 2006), pp. 190–206. Taylor & Francis.
Dendle, Peter. 2013. "Monsters and the Twenty-First Century" in The Ashgate Research Companion to Monsters and the Monstrous. Ashgate Publishing.
Hill, Sharon A. 2017. Scientifical Americans: The Culture of Amateur Paranormal Researchers. McFarland.
Lack, Caleb W. and Jacques Rousseau. 2016. Critical Thinking, Science, and Pseudoscience: Why We Can't Trust Our Brains. Springer.
Lee, Jeffrey A. 2000. The Scientific Endeavor: A Primer on Scientific Principles and Practice. Benjamin Cummings.
Loxton, Daniel and Donald Prothero. 2013. Abominable Science: Origins of the Yeti, Nessie, and other Famous Cryptids. Columbia University Press.
Mullis, Justin. 2019. "Cryptofiction! Science Fiction and the Rise of Cryptozoology" in Caterine, Darryl & John W. Morehead (ed.). 2019. The Paranormal and Popular Culture: A Postmodern Religious Landscape, pp. 240–252. Routledge. .
Mullis, Justin. 2021. "Thomas Jefferson: The First Cryptozoologist?". In Joseph P. Laycock & Natasha L. Mikles (eds). Religion, Culture, and the Monstrous: Of Gods and Monsters, pp. 185–197. Lexington Books.
Regal, Brian. 2009. Pseudoscience: A Critical Encyclopedia. ABC-CLIO.
Paxton, C.G.M. 2011. "Putting the 'ology' into cryptozoology." Biofortean Notes. Vol. 7, pp. 7–20, 310.
Prothero, Donald R. 2007. Evolution: What the Fossils Say and Why It Matters. Columbia University Press.
Radford, Benjamin. 2014. "Bigfoot at 50: Evaluating a Half-Century of Bigfoot Evidence" in Farha, Bryan (ed.). Pseudoscience and Deception: The Smoke and Mirrors of Paranormal Claims. University Press of America.
Regal, Brian. 2011a. "Cryptozoology" in McCormick, Charlie T. and Kim Kennedy (ed.). Folklore: An Encyclopedia of Beliefs, Customs, Tales, Music, and Art, pp. 326–329. 2nd edition. ABC-CLIO. .
Regal, Brian. 2011b. Sasquatch: Crackpots, Eggheads, and Cryptozoology. Springer. .
Roesch, Ben S & John L. Moore. (2002). Cryptozoology. In Michael Shermer (ed.). The Skeptic Encyclopedia of Pseudoscience: Volume One. ABC-CLIO. pp. 71–78.
Shea, Rachel Hartigan. 2013. "The Science Behind Bigfoot and Other Monsters". National Geographic, September 9, 2013. Online.
Shermer, Michael. 2003. "Show Me the Body" in Scientific American, issue 288 (5), p. 27. Online.
Simpson, George Gaylord (1984). "Mammals and Cryptozoology". Proceedings of the American Philosophical Society. Vol. 128, No. 1 (Mar. 30, 1984), pp. 1–19. American Philosophical Society.
Thomas, Paul. 2020. Storytelling the Bible at the Creation Museum, Ark Encounter, and Museum of the Bible. Bloomsbury Publishing.
Uscinski, Joseph. 2020. Conspiracy Theories: A Primer. Rowman & Littlefield Publishers.
Wall, J. E. 1983. The ISC Newsletter, vol. 2, issue 10, p. 10. International Society of Cryptozoology.
Ward, Daniel. 2011. "From the President". The American Biology Teacher, 73.8 (2011): 440–440.
Watts, Linda S. 2007. Encyclopedia of American Folklore. Facts on File.
External links
Forteana
Pseudoscience
Subcultures
Young Earth creationism
Zoology | Cryptozoology | [
"Biology"
] | 4,403 | [
"Zoology"
] |
5,863 | https://en.wikipedia.org/wiki/Copenhagen%20interpretation | The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. While "Copenhagen" refers to the Danish city, the use as an "interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails.
Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors.
Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices. Still, including all the variations, the interpretation remains one of the most commonly taught.
Background
Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom.
The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted. Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities.
Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors. The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality.
Origin and use of the term
The 'Copenhagen' part of the term refers to the city of Copenhagen in Denmark. During the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen. Together they helped originate quantum mechanical theory. At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification." In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930. In the book's preface, Heisenberg wrote:
On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955, while criticizing alternative "interpretations" (e.g., David Bohm's) that had been developed. Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy. Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense". In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded. However, this did not come to pass, and the term entered widespread use. Bohr's ideas in particular are distinct despite the use of his Copenhagen home in the name of the interpretation.
Principles
There is no uniquely definitive statement of the Copenhagen interpretation. The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century. This lack of a single, authoritative source that establishes the Copenhagen interpretation is one difficulty with discussing it; another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation. Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".
Different commentators and researchers have associated various ideas with the term. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced. Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors".
Some basic principles generally accepted as part of the interpretation include the following:
Quantum mechanics is intrinsically indeterministic.
The correspondence principle: in the appropriate limit, quantum theory comes to resemble classical physics and reproduces the classical predictions.
The Born rule: the wave function of a system yields probabilities for the outcomes of measurements upon that system.
Complementarity: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but the consideration of multiple such mutually exclusive experiments is necessary to characterize a system.
Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following:
Quantum physics applies to individual objects. The probabilities computed by the Born rule do not require an ensemble or collection of "identically prepared" systems to understand.
The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.
Per the above point, the device used to observe a system must be described in classical language, while the system under observation is treated in quantum terms. This is a particularly subtle issue for which Bohr and Heisenberg came to differing conclusions. According to Heisenberg, the boundary between classical and quantum can be shifted in either direction at the observer's discretion. That is, the observer has the freedom to move what would become known as the "Heisenberg cut" without changing any physically meaningful predictions. On the other hand, Bohr argued both systems are quantum in principle, and the object-instrument distinction (the "cut") is dictated by the experimental arrangement. For Bohr, the "cut" was not a change in the dynamical laws that govern the systems in question, but a change in the language applied to them.
During an observation, the system must interact with a laboratory device. When that device makes a measurement, the wave function of the system collapses, irreversibly reducing to an eigenstate of the observable that is registered. The result of this process is a tangible record of the event, made by a potentiality becoming an actuality.
Statements about measurements that are not actually made do not have meaning. For example, there is no meaning to the statement that a photon traversed the upper path of a Mach–Zehnder interferometer unless the interferometer were actually built in such a way that the path taken by the photon is detected and registered.
Wave functions are objective, in that they do not depend upon personal opinions of individual physicists or other such arbitrary influences.
There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process, which could take place within the quantum system.
Another issue of importance where Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.
Nature of the wave function
A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the wave function together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such, or anything more than a theoretical concept.
Probabilities via the Born rule
The Born rule is essential to the Copenhagen interpretation. Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point.
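For a single particle moving in one dimension with normalized wave function ψ (notation introduced here for the display), a standard way of writing the rule is

```latex
P\bigl(x \in [a,b]\bigr) \;=\; \int_a^b \lvert \psi(x) \rvert^2 \, dx .
```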
Collapse
The concept of wave function collapse postulates that the wave function of a system can change suddenly and discontinuously upon measurement. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Since Bohr did not view the wavefunction as something physical, he never spoke of "collapse". Nevertheless, many physicists and philosophers associate collapse with the Copenhagen interpretation.
Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus.
Role of the observer
Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". All of the original Copenhagen protagonists considered the process of observation as mechanical and independent of the individuality of the observer. Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus". As Heisenberg wrote,
In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory, but was insufficient to provide a technical explanation for the apparent wave function collapse.
Completion by hidden variables?
In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view, that physics should look for 'really existing objects', making itself an ontic theory.
The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called "hidden variables" to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'. It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory." Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.
Acceptance among physicists
During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured. Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau, Wolfgang Pauli, Rudolf Peierls, Asher Peres, Léon Rosenfeld, and Ray Streater.
Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists. According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997, the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011.
Consequences
The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.
Schrödinger's cat
This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this cannot be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality." How can the cat be both alive and dead?
In Copenhagen-type views, the wave function reflects our knowledge of the system. Here it means that, once the cat is observed, there is a 50% chance it will be found dead, and a 50% chance it will be found alive. (Some versions of the Copenhagen interpretation reject the idea that a wave function can be assigned to a physical system that meets the everyday definition of "cat"; in this view, the correct quantum-mechanical description of the cat-and-particle system must include a superselection rule.)
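In the bra–ket shorthand commonly used for this example (a schematic notation, not a literal wave function of a cat), the pre-measurement state and the resulting probabilities are written as

```latex
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
\qquad
P(\text{alive}) = P(\text{dead}) = \tfrac{1}{2} .
```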
Wigner's friend
"Wigner's friend" is a thought experiment intended to make that of Schrödinger's cat more striking by involving two conscious beings, traditionally known as Wigner and his friend. (In more recent literature, they may also be known as Alice and Bob, per the convention of describing protocols in information theory.) Wigner puts his friend in with the cat. The external observer believes the system is in state . However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state . How can Wigner and his friend see different wave functions?
In a Heisenbergian view, the answer depends on the positioning of Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not to Bohr). If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement. Different Copenhagen-type interpretations take different positions as to whether observers can be placed on the quantum side of the cut.
Double-slit experiment
In the basic version of this experiment, a light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). Such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through.
According to Bohr's complementarity principle, light is neither a wave nor a stream of particles. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time.
The same experiment has been performed for light, electrons, atoms, and molecules. The extremely small de Broglie wavelength of objects with larger mass makes experiments increasingly difficult, but in general quantum mechanics considers all matter as possessing both particle and wave behaviors.
Einstein–Podolsky–Rosen paradox
This thought experiment involves a pair of particles prepared in what later authors would refer to as an entangled state. In a 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "Einstein–Podolsky–Rosen (EPR) criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity". From this, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured.
Bohr's response to the EPR paper was published in the Physical Review later that same year. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."
Criticism
Incompleteness and indeterminism
Einstein was an early and persistent supporter of objective reality. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it." While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a complete theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured. The argument of EPR was not generally persuasive to other physicists.
Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."
Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice." Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world".
The Heisenberg cut
Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and the imprecision of how the boundary between quantum and classical might be defined. This boundary came to be termed the Heisenberg cut (while John Bell derisively called it the "shifty split"). As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions, the deterministic flow according to the Schrödinger equation and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe? And if there is somehow a split, where should it be placed? Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...] quantum mechanics does or does not apply."
The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe. How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages? Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field. Likewise, Asher Peres argued that physicists are, conceptually, outside those degrees of freedom that cosmology studies, and applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details.
Alternatives
A large number of alternative interpretations have appeared, sharing some aspects of the Copenhagen interpretation while providing alternatives to other aspects.
The ensemble interpretation is similar; it offers an interpretation of the wave function, but only for ensembles of similarly prepared systems rather than for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". More recently, interpretations inspired by quantum information theory like QBism and relational quantum mechanics have appeared. Experts on quantum foundational issues continue to favor the Copenhagen interpretation over other alternatives. Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger.
Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many-worlds interpretation results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. The transactional interpretation is also explicitly nonlocal.
Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively. John Archibald Wheeler began his career as an "apostle of Niels Bohr"; he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation. After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s. Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have".
Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. (E. T. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".) This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations.
See also
Bohr–Einstein debates
Einstein's thought experiments
Fifth Solvay Conference
Philosophical interpretation of classical physics
Physical ontology
Popper's experiment
Von Neumann–Wigner interpretation
Notes
References
Further reading
Interpretations of quantum mechanics
Quantum measurement
University of Copenhagen | Copenhagen interpretation | ["Physics"] | 5,739 | ["Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics"] |
5,869 | https://en.wikipedia.org/wiki/Category%20theory | Category theory is a general theory of mathematical structures and their relations. It was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.
Many areas of computer science also rely on category theory, such as functional programming and semantics.
A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. Metaphorically, a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one. Morphism composition has similar properties as function composition (associativity and existence of an identity morphism for each object). Morphisms are often some sort of functions, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.
The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories C and D: it maps objects of C to objects of D and morphisms of C to morphisms of D in such a way that sources are mapped to sources, and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors.
Categories, objects, and morphisms
Categories
A category consists of the following three mathematical entities:
A class ob(C), whose elements are called objects;
A class hom(C), whose elements are called morphisms or maps or arrows. Each morphism f has a source object a and a target object b. The expression f : a → b would be verbally stated as "f is a morphism from a to b". The expression hom(a, b) – alternatively expressed as homC(a, b), mor(a, b), or C(a, b) – denotes the hom-class of all morphisms from a to b.
A binary operation ∘, called composition of morphisms, such that for any three objects a, b, and c, we have ∘ : hom(b, c) × hom(a, b) → hom(a, c). The composition of f : a → b and g : b → c is written as g ∘ f or gf, governed by two axioms:
Associativity: If f : a → b, g : b → c, and h : c → d, then h ∘ (g ∘ f) = (h ∘ g) ∘ f;
Identity: For every object x, there exists a morphism 1x : x → x (also denoted as idx) called the identity morphism for x, such that for every morphism f : a → b, we have 1b ∘ f = f = f ∘ 1a. From the axioms, it can be proved that there is exactly one identity morphism for every object.
Examples
The category Set
As the class of objects ob(C), we choose the class of all sets.
As the class of morphisms hom(C), we choose the class of all functions. Therefore, for two objects A and B, i.e. sets, we have hom(A, B) to be the class of all functions f such that f : A → B.
The composition of morphisms is simply the usual function composition, i.e. for two morphisms f : A → B and g : B → C, we have g ∘ f : A → C, (g ∘ f)(x) = g(f(x)), which is obviously associative. Furthermore, for every object A we have the identity morphism idA to be the identity map idA(x) = x on A.
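As an informal illustration of these axioms, the following is a minimal sketch of a tiny fragment of Set in Python, with explicit composition and identity maps; the helper names compose and identity are ad hoc, and the checks are only pointwise tests on small finite sets rather than proofs.

```python
# Tiny fragment of the category Set: objects are Python sets, morphisms are
# functions between them, composition is ordinary function composition.

def compose(g, f):
    """Composition g o f: apply f first, then g."""
    return lambda x: g(f(x))

def identity(obj):
    """Identity morphism on an object (a set); obj is only a label here."""
    return lambda x: x

A, B, C = {1, 2}, {"a", "b"}, {True, False}
f = lambda n: "a" if n == 1 else "b"   # f : A -> B
g = lambda s: s == "a"                 # g : B -> C
h = lambda b: not b                    # h : C -> C

# Associativity: h o (g o f) = (h o g) o f, checked pointwise on A.
assert all(compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x) for x in A)

# Identity: id_B o f = f = f o id_A, checked pointwise on A.
assert all(compose(identity(B), f)(x) == f(x) == compose(f, identity(A))(x) for x in A)
```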
Morphisms
Relations among morphisms (such as fg = h) are often depicted using commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms.
Morphisms can have any of the following properties. A morphism f : a → b is:
a monomorphism (or monic) if f ∘ g1 = f ∘ g2 implies g1 = g2 for all morphisms g1, g2 : x → a.
an epimorphism (or epic) if g1 ∘ f = g2 ∘ f implies g1 = g2 for all morphisms g1, g2 : b → x.
a bimorphism if f is both epic and monic.
an isomorphism if there exists a morphism g : b → a such that f ∘ g = 1b and g ∘ f = 1a.
an endomorphism if a = b. end(a) denotes the class of endomorphisms of a.
an automorphism if f is both an endomorphism and an isomorphism. aut(a) denotes the class of automorphisms of a.
a retraction if a right inverse of f exists, i.e. if there exists a morphism g : b → a with f ∘ g = 1b.
a section if a left inverse of f exists, i.e. if there exists a morphism g : b → a with g ∘ f = 1a.
Every retraction is an epimorphism, and every section is a monomorphism. Furthermore, the following three statements are equivalent:
f is a monomorphism and a retraction;
f is an epimorphism and a section;
f is an isomorphism.
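In Set, monomorphisms coincide with injective functions and epimorphisms with surjective ones. The sketch below (helper names are ad hoc, morphisms are encoded as dicts on finite sets) brute-force checks the left-cancellation property that defines a monomorphism against one chosen test object X; it is illustrative only, not part of the article.

```python
from itertools import product

def all_functions(X, A):
    """Enumerate every function from finite set X to finite set A as a dict."""
    X, A = list(X), list(A)
    return [dict(zip(X, values)) for values in product(A, repeat=len(X))]

def left_cancellable(f, A, X):
    """True if f o g1 = f o g2 implies g1 = g2 for all g1, g2 : X -> A."""
    gs = all_functions(X, A)
    return all(g1 == g2
               for g1 in gs for g2 in gs
               if all(f[g1[x]] == f[g2[x]] for x in X))

A, B, X = {1, 2}, {"a", "b", "c"}, {0, 1}
injective = {1: "a", 2: "b"}    # an injective morphism A -> B
collapsing = {1: "a", 2: "a"}   # a non-injective morphism A -> B

assert left_cancellable(injective, A, X)        # injective, hence monic in Set
assert not left_cancellable(collapsing, A, X)   # not injective, hence not monic
```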
Functors
Functors are structure-preserving maps between categories. They can be thought of as morphisms in the category of all (small) categories.
A (covariant) functor F from a category C to a category D, written F : C → D, consists of:
for each object x in C, an object F(x) in D; and
for each morphism f : x → y in C, a morphism F(f) : F(x) → F(y) in D,
such that the following two properties hold:
For every object x in C, F(idx) = idF(x);
For all morphisms f : x → y and g : y → z, F(g ∘ f) = F(g) ∘ F(f).
A contravariant functor is like a covariant functor, except that it "turns morphisms around" ("reverses all the arrows"). More specifically, every morphism f : x → y in C must be assigned to a morphism F(f) : F(y) → F(x) in D. In other words, a contravariant functor acts as a covariant functor from the opposite category Cop to D.
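A minimal sketch of the two functor laws, using the familiar "list" construction on Python data, may help make them concrete; fmap is an ad hoc name, and the checks are sample-based rather than proofs.

```python
# The "list" functor: on objects, X goes to lists over X; on morphisms,
# f : X -> Y goes to fmap(f), which applies f to every entry of a list.

def fmap(f):
    return lambda xs: [f(x) for x in xs]

compose = lambda g, f: (lambda x: g(f(x)))
identity = lambda x: x

f = lambda n: n + 1      # f : X -> Y
g = lambda n: 2 * n      # g : Y -> Z
sample = [1, 2, 3]

# Law 1: F(id_X) = id_F(X)
assert fmap(identity)(sample) == sample

# Law 2: F(g o f) = F(g) o F(f)
assert fmap(compose(g, f))(sample) == fmap(g)(fmap(f)(sample))
```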
Natural transformations
A natural transformation is a relation between two functors. Functors often describe "natural constructions" and natural transformations then describe "natural homomorphisms" between two such constructions. Sometimes two quite different constructions yield "the same" result; this is expressed by a natural isomorphism between the two functors.
If F and G are (covariant) functors between the categories C and D, then a natural transformation η from F to G associates to every object X in C a morphism ηX : F(X) → G(X) in D such that for every morphism f : X → Y in C, we have ηY ∘ F(f) = G(f) ∘ ηX; this naturality condition says that the corresponding square of morphisms commutes.
The two functors F and G are called naturally isomorphic if there exists a natural transformation from F to G such that ηX is an isomorphism for every object X in C.
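The naturality condition can likewise be checked on examples. The sketch below takes F to be the list functor and G an "optional value" functor, with the component ηX sending a list to its first element (or None when empty); these encodings are illustrative only and not part of the article.

```python
# eta : F => G where F(X) = lists over X and G(X) = an element of X or None.

F = lambda f: (lambda xs: [f(x) for x in xs])           # list functor on morphisms
G = lambda f: (lambda m: None if m is None else f(m))   # optional functor on morphisms

def eta(xs):
    """Component of eta at any object: 'first element, if any'."""
    return xs[0] if xs else None

f = lambda n: str(n)   # a sample morphism f : X -> Y

# Naturality square: G(f) o eta_X == eta_Y o F(f), checked on sample inputs.
for xs in ([], [1], [1, 2, 3]):
    assert G(f)(eta(xs)) == eta(F(f)(xs))
```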
Other concepts
Universal constructions, limits, and colimits
Using the language of category theory, many areas of mathematical study can be categorized. Standard examples include the category of sets, the category of groups, and the category of topological spaces.
Each category is distinguished by properties that all its objects have in common, such as the empty set or the product of two topologies, yet in the definition of a category, objects are considered atomic, i.e., we do not know whether an object A is a set, a topology, or any other abstract concept. Hence, the challenge is to define special objects without referring to the internal structure of those objects. To define the empty set without referring to elements, or the product topology without referring to open sets, one can characterize these objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Thus, the task is to find universal properties that uniquely determine the objects of interest.
Numerous important constructions can be described in a purely categorical way if the category limit can be developed and dualized to yield the notion of a colimit.
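A concrete instance of a universal construction is the product in Set: the Cartesian product A × B together with its two projections, through which any pair of functions into A and B factors uniquely. The following is a small illustrative sketch with ad hoc helper names, not a definitive construction.

```python
from itertools import product as cartesian

A, B = {1, 2}, {"a", "b"}
AxB = set(cartesian(A, B))          # the product object A x B
proj1 = lambda p: p[0]              # projection A x B -> A
proj2 = lambda p: p[1]              # projection A x B -> B

def pairing(f, g):
    """The mediating morphism <f, g> : X -> A x B induced by f : X -> A, g : X -> B."""
    return lambda x: (f(x), g(x))

X = {10, 20}
f = lambda x: 1 if x == 10 else 2
g = lambda x: "a"

m = pairing(f, g)
# Universal property: proj1 o <f, g> = f, proj2 o <f, g> = g, and m lands in A x B.
assert all(proj1(m(x)) == f(x) and proj2(m(x)) == g(x) and m(x) in AxB for x in X)
```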
Equivalent categories
It is a natural question to ask: under which conditions can two categories be considered essentially the same, in the sense that theorems about one category can readily be transformed into theorems about the other category? The major tool one employs to describe such a situation is called equivalence of categories, which is given by appropriate functors between two categories. Categorical equivalence has found numerous applications in mathematics.
Further concepts and results
The definitions of categories and functors provide only the very basics of categorical algebra; additional important topics are listed below. Although there are strong interrelations between all of these topics, the given order can be considered as a guideline for further reading.
The functor category DC has as objects the functors from C to D and as morphisms the natural transformations of such functors. The Yoneda lemma is one of the most famous basic results of category theory; it describes representable functors in functor categories.
Duality: Every statement, theorem, or definition in category theory has a dual which is essentially obtained by "reversing all the arrows". If one statement is true in a category C then its dual is true in the dual category Cop. This duality, which is transparent at the level of category theory, is often obscured in applications and can lead to surprising relationships.
Adjoint functors: A functor can be left (or right) adjoint to another functor that maps in the opposite direction. Such a pair of adjoint functors typically arises from a construction defined by a universal property; this can be seen as a more abstract and powerful view on universal properties.
Higher-dimensional categories
Many of the above concepts, especially equivalence of categories, adjoint functor pairs, and functor categories, can be situated into the context of higher-dimensional categories. Briefly, if we consider a morphism between two objects as a "process taking us from one object to another", then higher-dimensional categories allow us to profitably generalize this by considering "higher-dimensional processes".
For example, a (strict) 2-category is a category together with "morphisms between morphisms", i.e., processes which allow us to transform one morphism into another. We can then "compose" these "bimorphisms" both horizontally and vertically, and we require a 2-dimensional "exchange law" to hold, relating the two composition laws. In this context, the standard example is Cat, the 2-category of all (small) categories, and in this example, bimorphisms of morphisms are simply natural transformations of morphisms in the usual sense. Another basic example is to consider a 2-category with a single object; these are essentially monoidal categories. Bicategories are a weaker notion of 2-dimensional categories in which the composition of morphisms is not strictly associative, but only associative "up to" an isomorphism.
This process can be extended for all natural numbers n, and these are called n-categories. There is even a notion of ω-category corresponding to the ordinal number ω.
Higher-dimensional categories are part of the broader mathematical field of higher-dimensional algebra, a concept introduced by Ronald Brown. For a conversational introduction to these ideas, see John Baez, 'A Tale of n-categories' (1996).
Historical notes
Whilst specific examples of functors and natural transformations had been given by Samuel Eilenberg and Saunders Mac Lane in a 1942 paper on group theory, these concepts were introduced in a more general sense, together with the additional notion of categories, in a 1945 paper by the same authors (who discussed applications of category theory to the field of algebraic topology). Their work was an important part of the transition from intuitive and geometric homology to homological algebra, Eilenberg and Mac Lane later writing that their goal was to understand natural transformations, which first required the definition of functors, then categories.
Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them.
Category theory was originally introduced for the need of homological algebra, and widely extended for the need of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, and the former applies to any kind of mathematical structure and studies also the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later.
Certain categories called topoi (singular topos) can even serve as an alternative to axiomatic set theory as a foundation of mathematics. A topos can also be considered as a specific type of category with two additional topos axioms. These foundational applications of category theory have been worked out in fair detail as a basis for, and justification of, constructive mathematics. Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology.
Categorical logic is now a well-defined field based on type theory for intuitionistic logics, with applications in functional programming and domain theory, where a cartesian closed category is taken as a non-syntactic description of a lambda calculus. At the very least, category theoretic language clarifies what exactly these related areas have in common (in some abstract sense).
Category theory has been applied in other fields as well, see applied category theory. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories. Another application of category theory, more specifically topos theory, has been made in mathematical music theory, see for example the book The Topos of Music, Geometric Logic of Concepts, Theory, and Performance by Guerino Mazzola.
More recent efforts to introduce undergraduates to categories as a foundation for mathematics include those of William Lawvere and Rosebrugh (2003) and Lawvere and Stephen Schanuel (1997) and Mirroslav Yotov (2012).
See also
Domain theory
Enriched category theory
Glossary of category theory
Group theory
Higher category theory
Higher-dimensional algebra
Important publications in category theory
Lambda calculus
Outline of category theory
Timeline of category theory and related mathematics
Applied category theory
Notes
References
Citations
Sources
Notes for a course offered as part of the MSc. in Mathematical Logic, Manchester University.
Further reading
External links
Theory and Application of Categories, an electronic journal of category theory, full text, free, since 1995.
Cahiers de Topologie et Géométrie Différentielle Catégoriques, an electronic journal of category theory, full text, free, funded in 1957.
nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view.
The n-Category Café, essentially a colloquium on topics in category theory.
Category Theory, a web page of links to lecture notes and freely available books on category theory.
, a formal introduction to category theory.
, with an extensive bibliography.
List of academic conferences on category theory
— An informal introduction to higher order categories.
WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, categories, functors, natural transformations, universal properties.
, a channel about category theory.
Video archive of recorded talks relevant to categories, logic and the foundations of physics.
Interactive Web page which generates examples of categorical constructions in the category of finite sets.
Category Theory for the Sciences, an instruction on category theory as a tool throughout the sciences.
Category Theory for Programmers A book in blog form explaining category theory for computer programmers.
Introduction to category theory.
Higher category theory
Foundations of mathematics | Category theory | ["Mathematics"] | 3,259 | ["Functions and mappings", "Mathematical structures", "Foundations of mathematics", "Mathematical objects", "Higher category theory", "Fields of abstract algebra", "Category theory", "Mathematical relations"] |
5,876 | https://en.wikipedia.org/wiki/Coronary%20artery%20disease | Coronary artery disease (CAD), also called coronary heart disease (CHD), or ischemic heart disease (IHD), is a type of heart disease involving the reduction of blood flow to the cardiac muscle due to a build-up of atheromatous plaque in the arteries of the heart. It is the most common of the cardiovascular diseases. CAD can cause stable angina, unstable angina, myocardial ischemia, and myocardial infarction.
A common symptom is angina, which is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. In stable angina, symptoms occur with exercise or emotional stress, last less than a few minutes, and improve with rest. Shortness of breath may also occur and sometimes no symptoms are present. In many cases, the first sign is a heart attack. Other complications include heart failure or an abnormal heartbeat.
Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, depression, and excessive alcohol consumption. A number of tests may help with diagnosis including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, biomarkers (high-sensitivity cardiac troponins) and coronary angiogram, among others.
Ways to reduce CAD risk include eating a healthy diet, regularly exercising, maintaining a healthy weight, and not smoking. Medications for diabetes, high cholesterol, or high blood pressure are sometimes used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets (including aspirin), beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2015, CAD affected 110 million people and resulted in 8.9 million deaths. It makes up 15.6% of all deaths, making it the most common cause of death globally. The risk of death from CAD for a given age decreased between 1980 and 2010, especially in developed countries. The number of cases of CAD for a given age also decreased between 1990 and 2010. In the United States in 2010, about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45; rates were higher among males than females of a given age.
Signs and symptoms
The most common symptom is chest pain or discomfort that occurs regularly with activity, after eating, or at other predictable times; this phenomenon is termed stable angina and is associated with narrowing of the arteries of the heart. Angina also includes chest tightness, heaviness, pressure, numbness, fullness, or squeezing. Angina that changes in intensity, character or frequency is termed unstable. Unstable angina may precede myocardial infarction. In adults who go to the emergency department with an unclear cause of pain, about 30% have pain due to coronary artery disease. Angina, shortness of breath, sweating, nausea or vomiting, and lightheadedness are signs of a heart attack or myocardial infarction, and immediate emergency medical services are crucial.
With advanced disease, the narrowing of coronary arteries reduces the supply of oxygen-rich blood flowing to the heart, which becomes more pronounced during strenuous activities during which the heart beats faster and has an increased oxygen demand. For some, this causes severe symptoms, while others experience no symptoms at all.
Symptoms in females
Symptoms in females can differ from those in males, and the most common symptom reported by females of all races is shortness of breath. Other symptoms more commonly reported by females than males are extreme fatigue, sleep disturbances, indigestion, and anxiety. However, some females experience irregular heartbeat, dizziness, sweating, and nausea. Burning, pain, or pressure in the chest or upper abdomen that can travel to the arm or jaw can also be experienced in females, but females less commonly report it than males. Generally, females experience symptoms 10 years later than males. Females are less likely to recognize symptoms and seek treatment.
Risk factors
Coronary artery disease is characterized by heart problems that result from atherosclerosis. Atherosclerosis is a type of arteriosclerosis which is the "chronic inflammation of the arteries which causes them to harden and accumulate cholesterol plaques (atheromatous plaques) on the artery walls". CAD has several well-determined risk factors that contribute to atherosclerosis. These risk factors include "smoking, diabetes, high blood pressure (hypertension), abnormal (high) amounts of cholesterol and other fat in the blood (dyslipidemia), type 2 diabetes and being overweight or obese (having excess body fat)", often due to lack of exercise and a poor diet. Other risk factors include depression, family history, psychological stress and excessive alcohol consumption. About half of cases are linked to genetics. Smoking and obesity are associated with about 36% and 20% of cases, respectively. Smoking just one cigarette per day about doubles the risk of CAD. Lack of exercise has been linked to 7–12% of cases. Exposure to the herbicide Agent Orange may increase risk. Rheumatologic diseases such as rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and psoriatic arthritis are independent risk factors as well.
Job stress appears to play a minor role accounting for about 3% of cases. In one study, females who were free of stress from work life saw an increase in the diameter of their blood vessels, leading to decreased progression of atherosclerosis. In contrast, females who had high levels of work-related stress experienced a decrease in the diameter of their blood vessels and significantly increased disease progression. Having a type A behavior pattern, a group of personality characteristics including time urgency, competitiveness, hostility, and impatience, is linked to an increased risk of coronary disease.
Blood fats
The consumption of different types of fats including trans fat (trans unsaturated), and saturated fat, in a diet "influences the level of cholesterol that is present in the bloodstream". Unsaturated fats originate from plant sources (such as oils). There are two types of unsaturated fats, cis and trans isomers. Cis unsaturated fats are bent in molecular structure and trans are linear in structure. Saturated fats originate from animal sources (such as animal fats) and are also molecularly linear in structure. The linear configurations of unsaturated trans and saturated fats allow them to easily accumulate and stack at the arterial walls when consumed in high amounts (and other positive measures towards physical health are not met).
Fats and cholesterol are insoluble in blood and thus are amalgamated with proteins to form lipoproteins for transport. Low density lipoproteins (LDL) transport cholesterol from the liver to the rest of the body and therefore raise blood cholesterol levels. The consumption of "saturated fats increases LDL levels within the body, thus raising blood cholesterol levels".
High density lipoproteins (HDL) are considered 'good' lipoproteins as they search for excess cholesterol in the body and transport it back to the liver for disposal. Trans fats also "increase LDL levels whilst decreasing HDL levels within the body, significantly raising blood cholesterol levels".
High levels of cholesterol in the bloodstream lead to atherosclerosis. With increased levels of LDL in the bloodstream, "LDL particles will form deposits and accumulate within the arterial walls, which will lead to the development of plaques, restricting blood flow". The resultant reduction in the heart's blood supply due to atherosclerosis in coronary arteries "causes shortness of breath, angina pectoris (chest pains that are usually relieved by rest), and potentially fatal heart attacks (myocardial infarctions)".
Genetics
The heritability of coronary artery disease has been estimated between 40% and 60%. Genome-wide association studies have identified over 160 genetic susceptibility loci for coronary artery disease.
Transcriptome
Several RNA transcripts associated with CAD (FoxP1, ICOSLG, IKZF4/Eos, SMYD3, TRIM28, and TCF3/E2A) are likely markers of regulatory T cells (Tregs), consistent with known reductions in Tregs in CAD.
The RNA changes are mostly related to ciliary and endocytic transcripts, which in the circulating immune system would be related to the immune synapse. One of the most differentially expressed genes, fibromodulin (FMOD), which is increased 2.8-fold in CAD, is found mainly in connective tissue and is a modulator of the TGF-beta signaling pathway. However, not all of the RNA changes may be related to the immune synapse. For example, Nebulette, the most down-regulated transcript (2.4-fold), is found in cardiac muscle; it is a 'cytolinker' that connects actin and desmin to facilitate cytoskeletal function and vesicular movement. The endocytic pathway is further modulated by changes in tubulin, a key microtubule protein, and fidgetin, a tubulin-severing enzyme that is a marker for cardiovascular risk identified by genome-wide association studies. Protein recycling would be modulated by changes in the proteasomal regulator SIAH3, and the ubiquitin ligase MARCHF10. On the ciliary aspect of the immune synapse, several of the modulated transcripts are related to ciliary length and function. Stereocilin is a partner to mesothelin, a related super-helical protein, whose transcript is also modulated in CAD. DCDC2, a double-cortin protein, is a modulator of ciliary length. In the signaling pathways of the immune synapse, there were numerous transcripts that related directly to T cell function and the control of differentiation. Butyrophilin is a co-regulator for T cell activation. Fibromodulin is a modulator of the TGF-beta signaling pathway, a primary determinant of Treg differentiation. Further impact on the TGF-beta pathway is reflected in concurrent changes in the BMP receptor 1B RNA (BMPR1B), because the bone morphogenic proteins are members of the TGF-beta superfamily, and likewise impact Treg differentiation. Several of the transcripts (TMEM98, NRCAM, SFRP5, SHISA2) are elements of the Wnt signaling pathway, which is a major determinant of Treg differentiation.
Other
Endometriosis in females under the age of 40.
Depression and hostility appear to be risks.
The number of categories of adverse childhood experiences (psychological, physical, or sexual abuse; violence against mother; or living with household members who used substances, mentally ill, suicidal, or incarcerated) showed a graded correlation with the presence of adult diseases including coronary artery (ischemic heart) disease.
Hemostatic factors: High levels of fibrinogen and coagulation factor VII are associated with an increased risk of CAD.
Low hemoglobin.
In the Asian population, the β-fibrinogen gene G-455A polymorphism was associated with the risk of CAD.
Patient-specific vessel ageing or remodelling determines endothelial cell behaviour and thus disease growth and progression. Such 'hemodynamic markers' are thus patient-specific risk surrogates.
HIV is a known risk factor for developing atherosclerosis and coronary artery disease.
Pathophysiology
Limitation of blood flow to the heart causes ischemia (cell starvation secondary to a lack of oxygen) of the heart's muscle cells. The heart's muscle cells may die from lack of oxygen and this is called a myocardial infarction (commonly referred to as a heart attack). It leads to damage, death, and eventual scarring of the heart muscle without regrowth of heart muscle cells. Chronic high-grade narrowing of the coronary arteries can induce transient ischemia which leads to the induction of a ventricular arrhythmia, which may terminate into a dangerous heart rhythm known as ventricular fibrillation, which often leads to death.
Typically, coronary artery disease occurs when part of the smooth, elastic lining inside a coronary artery (the arteries that supply blood to the heart muscle) develops atherosclerosis. With atherosclerosis, the artery's lining becomes hardened, stiffened, and accumulates deposits of calcium, fatty lipids, and abnormal inflammatory cells – to form a plaque. Calcium phosphate (hydroxyapatite) deposits in the muscular layer of the blood vessels appear to play a significant role in stiffening the arteries and inducing the early phase of coronary arteriosclerosis. This can be seen in a so-called metastatic mechanism of calciphylaxis as it occurs in chronic kidney disease and hemodialysis. Although these people have kidney dysfunction, almost fifty percent of them die due to coronary artery disease. Plaques can be thought of as large "pimples" that protrude into the channel of an artery, causing partial obstruction to blood flow. People with coronary artery disease might have just one or two plaques or might have dozens distributed throughout their coronary arteries. A more severe form is chronic total occlusion (CTO) when a coronary artery is completely obstructed for more than 3 months.
Microvascular angina is a type of angina pectoris in which chest pain and chest discomfort occur without signs of blockages in the larger coronary arteries of their hearts when an angiogram (coronary angiogram) is being performed.
The exact cause of microvascular angina is unknown. Explanations include microvascular dysfunction or epicardial atherosclerosis. For reasons that are not well understood, females are more likely than males to have it; however, hormones and other risk factors unique to females may play a role.
Diagnosis
The diagnosis of CAD depends largely on the nature of the symptoms and imaging. The first investigation when CAD is suspected is an electrocardiogram (ECG/EKG), both for stable angina and acute coronary syndrome. An X-ray of the chest, blood tests and resting echocardiography may be performed.
For stable symptomatic patients, several non-invasive tests can diagnose CAD depending on pre-assessment of the risk profile. Noninvasive imaging options include; Computed tomography angiography (CTA) (anatomical imaging, best test in patients with low-risk profile to "rule out" the disease), positron emission tomography (PET), single-photon emission computed tomography (SPECT)/nuclear stress test/myocardial scintigraphy and stress echocardiography (the three latter can be summarized as functional noninvasive methods and are typically better to "rule in"). Exercise ECG or stress test is inferior to non-invasive imaging methods due to the risk of false negative and false positive test results. The use of non-invasive imaging is not recommended on individuals who are exhibiting no symptoms and are otherwise at low risk for developing coronary disease. Invasive testing with coronary angiography (ICA) can be used when non-invasive testing is inconclusive or show a high event risk.
The diagnosis of microvascular angina (previously known as cardiac syndrome X), the rare form of coronary artery disease that, as mentioned, is more common in females, is a diagnosis of exclusion. Therefore, usually, the same tests are used as in any person suspected of having coronary artery disease:
Intravascular ultrasound
Magnetic resonance imaging (MRI)
Stable angina
Stable angina is the most common manifestation of ischemic heart disease, and is associated with reduced quality of life and increased mortality. It is caused by epicardial coronary stenosis which results in reduced blood flow and oxygen supply to the myocardium.
Stable angina is short-term chest pain during physical exertion caused by an imbalance between myocardial oxygen supply and metabolic oxygen demand. Various forms of cardiac stress tests may be used to induce both symptoms and detect changes by way of electrocardiography (using an ECG), echocardiography (using ultrasound of the heart) or scintigraphy (using uptake of radionuclide by the heart muscle). If part of the heart seems to receive an insufficient blood supply, coronary angiography may be used to identify stenosis of the coronary arteries and suitability for angioplasty or bypass surgery.
In minor to moderate cases, nitroglycerine may be used to alleviate acute symptoms of stable angina or may be used immediately before exertion to prevent the onset of angina. Sublingual nitroglycerine is most commonly used to provide rapid relief for acute angina attacks and as a complement to anti-anginal treatments in patients with refractory and recurrent angina. When nitroglycerine enters the bloodstream, it forms free radical nitric oxide, or NO, which activates guanylate cyclase and in turn stimulates the release of cyclic GMP. This molecular signaling stimulates smooth muscle relaxation, ultimately resulting in vasodilation and consequently improved blood flow to regions of the heart affected by atherosclerotic plaque.
Stable coronary artery disease (SCAD) is also often called stable ischemic heart disease (SIHD). A 2015 monograph explains that "Regardless of the nomenclature, stable angina is the chief manifestation of SIHD or SCAD." There are U.S. and European clinical practice guidelines for SIHD/SCAD. In patients with non-severe asymptomatic aortic valve stenosis and no overt coronary artery disease, increased troponin T (above 14 pg/mL) was found to be associated with an increased 5-year rate of ischemic cardiac events (myocardial infarction, percutaneous coronary intervention, or coronary artery bypass surgery).
Acute coronary syndrome
Diagnosis of acute coronary syndrome generally takes place in the emergency department, where ECGs may be performed sequentially to identify "evolving changes" (indicating ongoing damage to the heart muscle). Diagnosis is clear-cut if ECGs show elevation of the "ST segment", which in the context of severe typical chest pain is strongly indicative of an acute myocardial infarction (MI); this is termed a STEMI (ST-elevation MI) and is treated as an emergency with either urgent coronary angiography and percutaneous coronary intervention (angioplasty with or without stent insertion) or with thrombolysis ("clot buster" medication), whichever is available. In the absence of ST-segment elevation, heart damage is detected by cardiac markers (blood tests that identify heart muscle damage). If there is evidence of damage (infarction), the chest pain is attributed to a "non-ST elevation MI" (NSTEMI). If there is no evidence of damage, the term "unstable angina" is used. This process usually necessitates hospital admission and close observation on a coronary care unit for possible complications (such as cardiac arrhythmias – irregularities in the heart rate). Depending on the risk assessment, stress testing or angiography may be used to identify and treat coronary artery disease in patients who have had an NSTEMI or unstable angina.
Risk assessment
There are various risk assessment systems for determining the risk of coronary artery disease, with various emphasis on the different variables above. A notable example is Framingham Score, used in the Framingham Heart Study. It is mainly based on age, gender, diabetes, total cholesterol, HDL cholesterol, tobacco smoking, and systolic blood pressure. When predicting risk in younger adults (18–39 years old), the Framingham Risk Score remains below 10–12% for all deciles of baseline-predicted risk.
Polygenic score is another way of risk assessment. In one study the relative risk of incident coronary events was 91% higher among participants at high genetic risk than among those at low genetic risk.
Prevention
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Prevention involves adequate physical exercise, decreasing obesity, treating high blood pressure, eating a healthy diet, decreasing cholesterol levels, and stopping smoking. Medications and exercise are roughly equally effective. High levels of physical activity reduce the risk of coronary artery disease by about 25%. Life's Essential 8 are the key measures for improving and maintaining cardiovascular health, as defined by the American Heart Association. AHA added sleep as a factor influencing heart health in 2022.
Most guidelines recommend combining these preventive strategies. A 2015 Cochrane Review found some evidence that counseling and education to bring about behavioral change might help in high-risk groups. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
In diabetes mellitus, there is little evidence that very tight blood sugar control improves cardiac risk although improved sugar control appears to decrease other problems such as kidney failure and blindness.
A 2024 study published in The Lancet Diabetes & Endocrinology found that the oral glucose tolerance test (OGTT) is more effective than hemoglobin A1c (HbA1c) for detecting dysglycemia in patients with coronary artery disease. The study highlighted that 2-hour post-load glucose levels of at least 9 mmol/L were strong predictors of cardiovascular outcomes, while HbA1c levels of at least 5.9% were also significant but not independently associated when combined with OGTT results.
Diet
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. Vegetarians have a lower risk of heart disease, possibly due to their greater consumption of fruits and vegetables. Evidence also suggests that the Mediterranean diet and a high fiber diet lower the risk.
The consumption of trans fat (commonly found in hydrogenated products such as margarine) has been shown to cause a precursor to atherosclerosis and increase the risk of coronary artery disease.
Evidence does not support a beneficial role for omega-3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death). There is tentative evidence that intake of menaquinone (Vitamin K2), but not phylloquinone (Vitamin K1), may reduce the risk of CAD mortality.
Secondary prevention
Secondary prevention is preventing further sequelae of already established disease. Effective lifestyle changes include:
Weight control
Smoking cessation
Avoiding the consumption of trans fats (in partially hydrogenated oils)
Decreasing psychosocial stress
Exercise
Aerobic exercise, like walking, jogging, or swimming, can reduce the risk of mortality from coronary artery disease. Aerobic exercise can help decrease blood pressure and the amount of blood cholesterol (LDL) over time. It also increases HDL cholesterol.
Although exercise is beneficial, it is unclear whether doctors should spend time counseling patients to exercise. The U.S. Preventive Services Task Force found "insufficient evidence" to recommend that doctors counsel patients on exercise but "it did not review the evidence for the effectiveness of physical activity to reduce chronic disease, morbidity, and mortality", only the effectiveness of counseling itself. The American Heart Association, based on a non-systematic review, recommends that doctors counsel patients on exercise.
Psychological symptoms are common in people with CHD, and while many psychological treatments may be offered following cardiac events, there is no evidence that they change mortality, the risk of revascularization procedures, or the rate of non-fatal myocardial infarction.
Antibiotics for secondary prevention of coronary heart disease
Early studies suggested that antibiotics might help patients with coronary disease reduce the risk of heart attacks and strokes. However, a 2021 Cochrane meta-analysis found that antibiotics given for secondary prevention of coronary heart disease are harmful, being associated with increased mortality and occurrence of stroke. The use of antibiotics is therefore not currently supported for the secondary prevention of coronary heart disease.
Neuropsychological assessment
A thorough systematic review found a link between CHD and brain dysfunction in females. Since research indicates that cardiovascular diseases such as CHD can act as a precursor to dementia, including Alzheimer's disease, individuals with CHD should have a neuropsychological assessment.
Treatment
There are a number of treatment options for coronary artery disease:
Lifestyle changes
Medical treatment – commonly prescribed drugs (e.g., cholesterol lowering medications, beta-blockers, nitroglycerin, calcium channel blockers, etc.);
Coronary interventions as angioplasty and coronary stent;
Coronary artery bypass grafting (CABG)
Medications
Statins, which reduce cholesterol, reduce the risk of coronary artery disease
Nitroglycerin
Calcium channel blockers and/or beta-blockers
Antiplatelet drugs such as aspirin
It is recommended that blood pressure typically be reduced to less than 140/90 mmHg. The diastolic blood pressure, however, should not be lower than 60 mmHg. Beta-blockers are recommended as first-line treatment for this use.
Aspirin
In those with no previous history of heart disease, aspirin decreases the risk of a myocardial infarction but does not change the overall risk of death. Aspirin therapy to prevent heart disease is thus recommended only in adults who are at increased risk for cardiovascular events, which may include postmenopausal females, males above 40, and younger people with risk factors for coronary heart disease, including high blood pressure, a family history of heart disease, or diabetes. The benefits outweigh the harms most favorably in people at high risk for a cardiovascular event, where high risk is defined as at least a 3% chance over a five-year period, but others with lower risk may still find the potential benefits worth the associated risks.
Anti-platelet therapy
Clopidogrel plus aspirin (dual anti-platelet therapy) reduces cardiovascular events more than aspirin alone in those with a STEMI. In others at high risk but not having an acute event, the evidence is weak. Specifically, its use does not change the risk of death in this group. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Surgery
Revascularization for acute coronary syndrome has a mortality benefit. Percutaneous revascularization for stable ischaemic heart disease does not appear to have benefits over medical therapy alone. In those with disease in more than one artery, coronary artery bypass grafts appear better than percutaneous coronary interventions. Newer "anaortic" or no-touch off-pump coronary artery revascularization techniques have shown reduced postoperative stroke rates comparable to percutaneous coronary intervention. Hybrid coronary revascularization has also been shown to be a safe and feasible procedure that may offer some advantages over conventional CABG though it is more expensive.
Epidemiology
As of 2010, CAD was the leading cause of death globally resulting in over 7 million deaths. This increased from 5.2 million deaths from CAD worldwide in 1990. It may affect individuals at any age but becomes dramatically more common at progressively older ages, with approximately a tripling with each decade of life. Males are affected more often than females.
The World Health Organization reported that: "The world's biggest killer is ischemic heart disease, responsible for 13% of the world's total deaths. Since 2000, the largest increase in deaths has been for this disease, rising by 2.7 million to 9.1 million deaths in 2021."
It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent despite only accounting for 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.
Coronary artery disease is the leading cause of death for both males and females and accounts for approximately 600,000 deaths in the United States every year. According to present trends in the United States, half of healthy 40-year-old males will develop CAD in the future, and one in three healthy 40-year-old females. It is the most common reason for death of males and females over 20 years of age in the United States.
After analysing data from 2,111,882 patients, a recent meta-analysis found that the incidence of coronary artery disease in breast cancer survivors was 4.29 (95% CI 3.09–5.94) per 1000 person-years.
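Incidence rates of this kind are expressed per unit of follow-up time rather than per patient. The figures in the worked example below are purely illustrative and are not the study's own counts:

```latex
\text{incidence rate} = \frac{\text{new cases}}{\text{total person-years at risk}} \times 1000,
\qquad \text{e.g.}\quad \frac{43}{10\,000\ \text{person-years}} \times 1000 \approx 4.3\ \text{per 1000 person-years}.
```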
Society and culture
Names
Other terms sometimes used for this condition are "hardening of the arteries" and "narrowing of the arteries". In Latin it is known as morbus ischaemicus cordis (MIC).
Support groups
The Infarct Combat Project (ICP) is an international nonprofit organization founded in 1998 which tries to decrease ischemic heart diseases through education and research.
Industry influence on research
In 2016, research into the archives of the Sugar Association, the trade association for the sugar industry in the US, revealed that it had sponsored an influential literature review published in 1965 in the New England Journal of Medicine that downplayed early findings about the role of a diet heavy in sugar in the development of CAD and emphasized the role of fat; that review influenced decades of research funding and guidance on healthy eating.
Research
Research efforts are focused on new angiogenic treatment modalities and various (adult) stem-cell therapies. A region on chromosome 17 was confined to families with multiple cases of myocardial infarction. Other genome-wide studies have identified a firm risk variant on chromosome 9 (9p21.3). However, these and other loci are found in intergenic segments and need further research in understanding how the phenotype is affected.
A more controversial link is that between Chlamydophila pneumoniae infection and atherosclerosis. While this intracellular organism has been demonstrated in atherosclerotic plaques, evidence is inconclusive as to whether it can be considered a causative factor. Treatment with antibiotics in patients with proven atherosclerosis has not demonstrated a decreased risk of heart attacks or other coronary vascular diseases.
Myeloperoxidase has been proposed as a biomarker.
Plant-based nutrition has been suggested as a way to reverse coronary artery disease, but strong evidence is still lacking for claims of potential benefits.
Several immunosuppressive drugs targeting the chronic inflammation in coronary artery disease have been tested.
See also
Mental stress-induced myocardial ischemia
References
External links
Risk Assessment of having a heart attack or dying of coronary artery disease, from the American Heart Association.
Aging-associated diseases
Heart diseases
Ischemic heart diseases
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate | Coronary artery disease | ["Biology"] | 6,636 | ["Senescence", "Aging-associated diseases"] |
5,879 | https://en.wikipedia.org/wiki/Caesium | Caesium (IUPAC spelling; also spelled cesium in American English) is a chemical element; it has symbol Cs and atomic number 55. It is a soft, silvery-golden alkali metal with a melting point of 28.5 °C (83.3 °F), which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. It is pyrophoric and reacts with water even at −116 °C (−177 °F). It is the least electronegative stable element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite. Caesium-137, a fission product, is extracted from waste produced by nuclear reactors. It has the largest atomic radius of all elements whose radii have been measured or calculated, at about 260 picometres.
The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. Caesium is widely used in highly accurate atomic clocks. In 1967, the International System of Units began using a specific hyperfine transition of neutral caesium-133 atoms to define the basic unit of time, the second.
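In other words, one second is the duration of 9,192,631,770 periods of the radiation corresponding to this hyperfine transition; fixing that number, as the SI does, yields:

```latex
\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz}
\qquad\Longrightarrow\qquad
1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}
```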
Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids, but it has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Nonradioactive caesium compounds are only mildly toxic, but the pure metal's tendency to react explosively with water means that caesium is considered a hazardous material, and the radioisotopes present a significant health and environmental hazard.
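With a half-life of about 30 years, the remaining quantity of caesium-137 follows the usual exponential decay law; for example, after 60 years roughly a quarter of the original amount is left:

```latex
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad
N(60\ \text{yr}) = N_0 \left(\tfrac{1}{2}\right)^{60/30} = 0.25\,N_0
```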
Spelling
Caesium is the spelling recommended by the International Union of Pure and Applied Chemistry (IUPAC). The American Chemical Society (ACS) has used the spelling cesium since 1921, following Webster's New International Dictionary. The element was named after the Latin word caesius, meaning "bluish grey". In medieval and early modern writings caesius was spelled with the ligature æ as cæsius; hence, an alternative but now old-fashioned orthography is cæsium. For more on the spelling, see ae/oe vs e.
Characteristics
Physical properties
Of all elements that are solid at room temperature, caesium is the softest: it has a hardness of 0.2 Mohs. It is a very ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When in the presence of mineral oil (where it is best kept during transport), it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of 28.5 °C (83.3 °F), making it one of the few elemental metals that are liquid near room temperature. The others are rubidium (39.3 °C), francium (estimated to melt near room temperature), mercury (−38.8 °C), and gallium (29.8 °C); bromine is also liquid at room temperature (melting at −7.2 °C), but it is a halogen and not a metal. Mercury is the only stable elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, 671 °C (1,240 °F), the lowest of all stable metals other than mercury. Copernicium and flerovium have been predicted to have lower boiling points than mercury and caesium, but they are extremely radioactive and it is not certain if they are metals.
Caesium forms alloys with the other alkali metals, gold, and mercury (amalgams). At temperatures below , it does not alloy with cobalt, iron, molybdenum, nickel, platinum, tantalum, or tungsten. It forms well-defined intermetallic compounds with antimony, gallium, indium, and thorium, which are photosensitive. It mixes with all the other alkali metals (except lithium); the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has the lowest melting point of any known metal alloy, at . A few amalgams have been studied: is black with a purple metallic lustre, while CsHg is golden-coloured, also with a metallic lustre.
The golden colour of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium this frequency is in the ultraviolet, but for caesium it enters the blue–violet end of the spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially while other colours (having lower frequency) are reflected; hence it appears yellowish. Its compounds burn with a blue or violet colour.
Allotropes
Caesium exists in the form of different allotropes, one of them a dimer called dicaesium.
Chemical properties
Caesium metal is highly reactive and pyrophoric. It ignites spontaneously in air, and reacts explosively with water even at low temperatures, more so than the other alkali metals. It reacts with ice at temperatures as low as . Because of this high reactivity, caesium metal is classified as a hazardous material. It is stored and shipped in dry, saturated hydrocarbons such as mineral oil. It can be handled only under inert gas, such as argon. However, a caesium-water explosion is often less powerful than a sodium-water explosion with a similar amount of sodium. This is because caesium explodes instantly upon contact with water, leaving little time for hydrogen to accumulate. Caesium can be stored in vacuum-sealed borosilicate glass ampoules. In quantities of more than about , caesium is shipped in hermetically sealed, stainless steel containers.
The chemistry of caesium is similar to that of other alkali metals, in particular rubidium, the element above caesium in the periodic table. As expected for an alkali metal, the only common oxidation state is +1. It differs from this value in caesides, which contain the Cs− anion and thus have caesium in the −1 oxidation state. Under conditions of extreme pressure (greater than 30 GPa), theoretical studies indicate that the inner 5p electrons could form chemical bonds, where caesium would behave as the seventh 5p element, suggesting that higher caesium fluorides with caesium in oxidation states from +2 to +6 could exist under such conditions. Some slight differences arise from the fact that it has a higher atomic mass and is more electropositive than other (nonradioactive) alkali metals. Caesium is the most electropositive chemical element. The caesium ion is also larger and less "hard" than those of the lighter alkali metals.
Compounds
Most caesium compounds contain the element as the cation , which binds ionically to a wide variety of anions. One noteworthy exception is the caeside anion (), and others are the several suboxides (see section on oxides below). More recently, caesium has been predicted to behave as a p-block element capable of forming higher fluorides with higher oxidation states (i.e., CsFn with n > 1) under high pressure. This prediction has yet to be validated by further experiments.
Salts of Cs+ are usually colourless unless the anion itself is coloured. Many of the simple salts are hygroscopic, but less so than the corresponding salts of lighter alkali metals. The phosphate, acetate, carbonate, halides, oxide, nitrate, and sulfate salts are water-soluble. Its double salts are often less soluble, and the low solubility of caesium aluminium sulfate is exploited in refining Cs from ores. The double salts with antimony (such as ), bismuth, cadmium, copper, iron, and lead are also poorly soluble.
Caesium hydroxide (CsOH) is hygroscopic and strongly basic. It rapidly etches the surface of semiconductors such as silicon. CsOH was previously regarded by chemists as the "strongest base", reflecting the relatively weak attraction between the large Cs+ ion and OH−; it is indeed the strongest Arrhenius base. However, a number of compounds, such as n-butyllithium, sodium amide, sodium hydride, and caesium hydride, are far more basic in terms of the Brønsted–Lowry acid–base theory; these cannot be dissolved in water because they react violently with it, and are instead used only in anhydrous polar aprotic solvents.
A stoichiometric mixture of caesium and gold will react to form yellow caesium auride (Cs+Au−) upon heating. The auride anion here behaves as a pseudohalogen. The compound reacts violently with water, yielding caesium hydroxide, metallic gold, and hydrogen gas; in liquid ammonia it can be reacted with a caesium-specific ion exchange resin to produce tetramethylammonium auride. The analogous platinum compound, red caesium platinide (), contains the platinide ion, which behaves as a pseudochalcogen.
Complexes
Like all metal cations, Cs+ forms complexes with Lewis bases in solution. Because of its large size, Cs+ usually adopts coordination numbers greater than 6, the number typical for the smaller alkali metal cations. This difference is apparent in the 8-coordination of CsCl. This high coordination number and softness (tendency to form covalent bonds) are properties exploited in separating Cs+ from other cations in the remediation of nuclear wastes, where 137Cs+ must be separated from large amounts of nonradioactive K+.
Halides
Caesium fluoride (CsF) is a hygroscopic white solid that is widely used in organofluorine chemistry as a source of fluoride anions. Caesium fluoride has the halite structure, which means that the Cs+ and F− pack in a cubic closest packed array as do Na+ and Cl− in sodium chloride. Notably, caesium and fluorine have the lowest and highest electronegativities, respectively, among all the known elements.
Caesium chloride (CsCl) crystallizes in the simple cubic crystal system. Also called the "caesium chloride structure", this structural motif is composed of a primitive cubic lattice with a two-atom basis, each with an eightfold coordination; the chloride atoms lie upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the centre of the cubes. This structure is shared with CsBr and CsI, and many other compounds that do not contain Cs. In contrast, most other alkali halides have the sodium chloride (NaCl) structure. The CsCl structure is preferred because Cs+ has an ionic radius of 174 pm while that of Cl− is 181 pm.
Oxides
More so than the other alkali metals, caesium forms numerous binary compounds with oxygen. When caesium burns in air, the superoxide is the main product. The "normal" caesium oxide () forms yellow-orange hexagonal crystals, and is the only oxide of the anti- type. It vaporizes at , and decomposes to caesium metal and the peroxide at temperatures above . In addition to the superoxide and the ozonide , several brightly coloured suboxides have also been studied. These include , , , (dark-green), CsO, , as well as . The latter may be heated in a vacuum to generate . Binary compounds with sulfur, selenium, and tellurium also exist.
Isotopes
Caesium has 41 known isotopes, ranging in mass number (i.e. number of nucleons in the nucleus) from 112 to 152. Several of these are synthesized from lighter elements by the slow neutron capture process (S-process) inside old stars and by the R-process in supernova explosions. The only stable caesium isotope is 133Cs, with 78 neutrons. Although it has a large nuclear spin (+), nuclear magnetic resonance studies can use this isotope at a resonating frequency of 11.7 MHz.
The radioactive 135Cs has a very long half-life of about 2.3 million years, the longest of all radioactive isotopes of caesium. 137Cs and 134Cs have half-lives of 30 and two years, respectively. 137Cs decomposes to a short-lived 137mBa by beta decay, and then to nonradioactive barium, while 134Cs transforms into 134Ba directly. The isotopes with mass numbers of 129, 131, 132 and 136, have half-lives between a day and two weeks, while most of the other isotopes have half-lives from a few seconds to fractions of a second. At least 21 metastable nuclear isomers exist. Other than 134mCs (with a half-life of just under 3 hours), all are very unstable and decay with half-lives of a few minutes or less.
The isotope 135Cs is one of the long-lived fission products of uranium produced in nuclear reactors. However, this fission product yield is reduced in most reactors because the predecessor, 135Xe, is a potent neutron poison and frequently transmutes to stable 136Xe before it can decay to 135Cs.
The beta decay from 137Cs to 137mBa results in gamma radiation as the 137mBa relaxes to ground state 137Ba, with the emitted photons having an energy of 0.6617 MeV. 137Cs and 90Sr are the principal medium-lived products of nuclear fission, and the prime sources of radioactivity from spent nuclear fuel after several years of cooling, lasting several hundred years. Those two isotopes are the largest source of residual radioactivity in the area of the Chernobyl disaster. Because of the low capture rate, disposing of 137Cs through neutron capture is not feasible and the only current solution is to allow it to decay over time.
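As a rough illustration of the "allow it to decay over time" point, the remaining fraction of a caesium-137 inventory can be estimated from simple exponential decay. The sketch below assumes a constant half-life of about 30 years, as stated above, and ignores the short-lived 137mBa intermediate; the function name is illustrative rather than from any standard library.

```python
def cs137_fraction_remaining(years: float, half_life_years: float = 30.0) -> float:
    """Fraction of an initial Cs-137 inventory still present after `years`,
    assuming simple exponential decay with a ~30-year half-life."""
    return 0.5 ** (years / half_life_years)

# After ten half-lives (~300 years) less than 0.1% of the activity remains,
# consistent with the "several hundred years" of residual radioactivity noted above.
for t in (30, 100, 300):
    print(t, "years:", round(cs137_fraction_remaining(t), 5))
```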
Almost all caesium produced from nuclear fission comes from the beta decay of originally more neutron-rich fission products, passing through various isotopes of iodine and xenon. Because iodine and xenon are volatile and can diffuse through nuclear fuel or air, radioactive caesium is often created far from the original site of fission. With nuclear weapons testing in the 1950s through the 1980s, 137Cs was released into the atmosphere and returned to the surface of the earth as a component of radioactive fallout. It is a ready marker of the movement of soil and sediment from those times.
Occurrence
Caesium is a relatively rare element, estimated to average 3 parts per million in the Earth's crust. It is the 45th most abundant element and 36th among the metals. Caesium is 30 times less abundant than rubidium, with which it is closely associated, chemically.
Due to its large ionic radius, caesium is one of the "incompatible elements". During magma crystallization, caesium is concentrated in the liquid phase and crystallizes last. Therefore, the largest deposits of caesium are zone pegmatite ore bodies formed by this enrichment process. Because caesium does not substitute for potassium as readily as rubidium does, the alkali evaporite minerals sylvite (KCl) and carnallite () may contain only 0.002% caesium. Consequently, caesium is found in few minerals. Percentage amounts of caesium may be found in beryl () and avogadrite (), up to 15 wt% Cs2O in the closely related mineral pezzottaite (), up to 8.4 wt% Cs2O in the rare mineral londonite (), and less in the more widespread rhodizite. The only economically important ore for caesium is pollucite , which is found in a few places around the world in zoned pegmatites, associated with the more commercially important lithium minerals, lepidolite and petalite. Within the pegmatites, the large grain size and the strong separation of the minerals results in high-grade ore for mining.
The world's most significant and richest known source of caesium is the Tanco Mine at Bernic Lake in Manitoba, Canada, estimated to contain 350,000 metric tons of pollucite ore, representing more than two-thirds of the world's reserve base. Although the stoichiometric content of caesium in pollucite is 42.6%, pure pollucite samples from this deposit contain only about 34% caesium, while the average content is 24 wt%. Commercial pollucite contains more than 19% caesium. The Bikita pegmatite deposit in Zimbabwe is mined for its petalite, but it also contains a significant amount of pollucite. Another notable source of pollucite is in the Karibib Desert, Namibia. At the present rate of world mine production of 5 to 10 metric tons per year, reserves will last for thousands of years.
Production
Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction.
In the acid digestion, the silicate pollucite rock is dissolved with strong acids, such as hydrochloric (HCl), sulfuric (), hydrobromic (HBr), or hydrofluoric (HF) acids. With hydrochloric acid, a mixture of soluble chlorides is produced, and the insoluble chloride double salts of caesium are precipitated as caesium antimony chloride (), caesium iodine chloride (), or caesium hexachlorocerate (). After separation, the pure precipitated double salt is decomposed, and pure CsCl is precipitated by evaporating the water.
The sulfuric acid method yields the insoluble double salt directly as caesium alum (). The aluminium sulfate component is converted to insoluble aluminium oxide by roasting the alum with carbon, and the resulting product is leached with water to yield a solution.
Roasting pollucite with calcium carbonate and calcium chloride yields insoluble calcium silicates and soluble caesium chloride. Leaching with water or dilute ammonia () yields a dilute chloride (CsCl) solution. This solution can be evaporated to produce caesium chloride or transformed into caesium alum or caesium carbonate. Though not commercially feasible, the ore can be directly reduced with potassium, sodium, or calcium in vacuum to produce caesium metal directly.
Most of the mined caesium (as salts) is directly converted into caesium formate (HCOO−Cs+) for applications such as oil drilling. To supply the developing market, Cabot Corporation built a production plant in 1997 at the Tanco mine near Bernic Lake in Manitoba, with a capacity of per year of caesium formate solution. The primary smaller-scale commercial compounds of caesium are caesium chloride and nitrate.
Alternatively, caesium metal may be obtained from the purified compounds derived from the ore. Caesium chloride and the other caesium halides can be reduced at with calcium or barium, and caesium metal distilled from the result. In the same way, the aluminate, carbonate, or hydroxide may be reduced by magnesium.
The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide , which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products.
Cs2Cr2O7 + 2 Zr → 2 Cs + 2 ZrO2 + Cr2O3
The price of 99.8% pure caesium (metal basis) in 2009 was about , but the compounds are significantly cheaper.
History
In 1860, Robert Bunsen and Gustav Kirchhoff discovered caesium in the mineral water from Dürkheim, Germany. Because of the bright blue lines in the emission spectrum, they derived the name from the Latin word caesius, meaning "bluish grey". Caesium was the first element to be discovered with a spectroscope, which had been invented by Bunsen and Kirchhoff only a year previously.
To obtain a pure sample of caesium, of mineral water had to be evaporated to yield of concentrated salt solution. The alkaline earth metals were precipitated either as sulfates or oxalates, leaving the alkali metal in the solution. After conversion to the nitrates and extraction with ethanol, a sodium-free mixture was obtained. From this mixture, the lithium was precipitated by ammonium carbonate. Potassium, rubidium, and caesium form insoluble salts with chloroplatinic acid, but these salts show a slight difference in solubility in hot water, and the less-soluble caesium and rubidium hexachloroplatinate () were obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, caesium and rubidium were separated by the difference in solubility of their carbonates in alcohol. The process yielded of rubidium chloride and of caesium chloride from the initial 44,000 litres of mineral water.
From the caesium chloride, the two scientists estimated the atomic weight of the new element at 123.35 (compared to the currently accepted one of 132.9). They tried to generate elemental caesium by electrolysis of molten caesium chloride, but instead of a metal, they obtained a blue homogeneous substance which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance"; as a result, they assigned it as a subchloride (). In reality, the product was probably a colloidal mixture of the metal and caesium chloride. The electrolysis of the aqueous solution of chloride with a mercury cathode produced a caesium amalgam which readily decomposed under the aqueous conditions. The pure metal was eventually isolated by the Swedish chemist Carl Setterberg while working on his doctorate with Kekulé and Bunsen. In 1882, he produced caesium metal by electrolysing caesium cyanide, avoiding the problems with the chloride.
Historically, the most important use for caesium has been in research and development, primarily in chemical and electrical fields. Very few applications existed for caesium until the 1920s, when it came into use in radio vacuum tubes, where it had two functions; as a getter, it removed excess oxygen after manufacture, and as a coating on the heated cathode, it increased the electrical conductivity. Caesium was not recognized as a high-performance industrial metal until the 1950s. Applications for nonradioactive caesium included photoelectric cells, photomultiplier tubes, optical components of infrared spectrophotometers, catalysts for several organic reactions, crystals for scintillation counters, and in magnetohydrodynamic power generators. Caesium is also used as a source of positive ions in secondary ion mass spectrometry (SIMS).
Since 1967, the International System of Units (SI) has based the primary unit of time, the second, on the properties of caesium. It defines the second as the duration of 9,192,631,770 cycles at the microwave frequency of the spectral line corresponding to the transition between two hyperfine energy levels of the ground state of caesium-133. The 13th General Conference on Weights and Measures of 1967 defined a second as: "the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of caesium-133 atoms in their ground state undisturbed by external fields".
Applications
Petroleum exploration
The largest present-day use of nonradioactive caesium is in caesium formate drilling fluids for the extractive oil industry. Aqueous solutions of caesium formate (HCOO−Cs+)—made by reacting caesium hydroxide with formic acid—were developed in the mid-1990s for use as oil well drilling and completion fluids. The function of a drilling fluid is to lubricate drill bits, to bring rock cuttings to the surface, and to maintain pressure on the formation during drilling of the well. Completion fluids assist the emplacement of control hardware after drilling but prior to production by maintaining the pressure.
The high density of the caesium formate brine (up to 2.3 g/cm3, or 19.2 pounds per gallon), coupled with the relatively benign nature of most caesium compounds, reduces the requirement for toxic high-density suspended solids in the drilling fluid—a significant technological, engineering and environmental advantage. Unlike the components of many other heavy liquids, caesium formate is relatively environment-friendly. Caesium formate brine can be blended with potassium and sodium formates to decrease the density of the fluids to that of water (1.0 g/cm3, or 8.3 pounds per gallon). Furthermore, it is biodegradable and may be recycled, which is important in view of its high cost (about $4,000 per barrel in 2001). Alkali formates are safe to handle and do not damage the producing formation or downhole metals as corrosive alternative, high-density brines (such as zinc bromide solutions) sometimes do; they also require less cleanup and reduce disposal costs.
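The equivalence between the g/cm3 and pounds-per-gallon figures quoted above can be checked with a short unit conversion; this is generic arithmetic, not part of any drilling-fluid specification.

```python
GRAMS_PER_POUND = 453.59237        # exact definition of the pound in grams
CM3_PER_US_GALLON = 3785.411784    # exact definition of the US gallon in cm3

def g_cm3_to_lb_per_gal(density_g_cm3: float) -> float:
    """Convert a density in g/cm3 to pounds per US gallon."""
    return density_g_cm3 * CM3_PER_US_GALLON / GRAMS_PER_POUND

print(round(g_cm3_to_lb_per_gal(2.3), 1))  # 19.2 lb/gal, the maximum brine density quoted above
print(round(g_cm3_to_lb_per_gal(1.0), 1))  # 8.3 lb/gal, roughly the density of water
```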
Atomic clocks
Caesium-based atomic clocks use the electromagnetic transitions in the hyperfine structure of caesium-133 atoms as a reference point. The first accurate caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Caesium clocks have improved over the past half-century and are regarded as "the most accurate realization of a unit that mankind has yet achieved." These clocks measure frequency with an error of 2 to 3 parts in 10^14, which corresponds to an accuracy of 2 nanoseconds per day, or one second in 1.4 million years. The latest versions are more accurate than 1 part in 10^15, about 1 second in 20 million years. The caesium standard is the primary standard for standards-compliant time and frequency measurements. Caesium clocks regulate the timing of cell phone networks and the Internet.
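The accuracy figures above follow from straightforward arithmetic on the stated fractional frequency error. The sketch below uses 2.3 parts in 10^14 as a representative value within the quoted range; it illustrates the calculation and does not describe any particular clock.

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def clock_drift(fractional_error: float) -> tuple[float, float]:
    """Return (nanoseconds of drift per day, years to accumulate one second of error)."""
    ns_per_day = fractional_error * SECONDS_PER_DAY * 1e9
    years_per_second = 1.0 / (fractional_error * SECONDS_PER_YEAR)
    return ns_per_day, years_per_second

ns_day, years = clock_drift(2.3e-14)
print(f"{ns_day:.1f} ns/day, one second in {years / 1e6:.1f} million years")
# -> about 2 ns per day and roughly 1.4 million years, matching the figures above
```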
Definition of the second
The second, symbol s, is the SI unit of time. The BIPM restated its definition at its 26th conference in 2018: "[The second] is defined by taking the fixed numerical value of the caesium frequency, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s−1."
Electric power and electronics
Caesium vapour thermionic generators are low-power devices that convert heat energy to electrical energy. In the two-electrode vacuum tube converter, caesium neutralizes the space charge near the cathode and enhances the current flow.
Caesium is also important for its photoemissive properties, converting light to electron flow. It is used in photoelectric cells because caesium-based cathodes, such as the intermetallic compound , have a low threshold voltage for emission of electrons. The range of photoemissive devices using caesium include optical character recognition devices, photomultiplier tubes, and video camera tubes. Nevertheless, germanium, rubidium, selenium, silicon, tellurium, and several other elements can be substituted for caesium in photosensitive materials.
Caesium iodide (CsI), bromide (CsBr) and fluoride (CsF) crystals are employed for scintillators in scintillation counters widely used in mineral exploration and particle physics research to detect gamma and X-ray radiation. Being a heavy element, caesium provides good stopping power with better detection. Caesium compounds may provide a faster response (CsF) and be less hygroscopic (CsI).
Caesium vapour is used in many common magnetometers.
The element is used as an internal standard in spectrophotometry. Like other alkali metals, caesium has a great affinity for oxygen and is used as a "getter" in vacuum tubes. Other uses of the metal include high-energy lasers, vapour glow lamps, and vapour rectifiers.
Centrifugation fluids
The high density of the caesium ion makes solutions of caesium chloride, caesium sulfate, and caesium trifluoroacetate () useful in molecular biology for density gradient ultracentrifugation. This technology is used primarily in the isolation of viral particles, subcellular organelles and fractions, and nucleic acids from biological samples.
Chemical and medical use
Relatively few chemical applications use caesium. Doping with caesium compounds enhances the effectiveness of several metal-ion catalysts for chemical synthesis, such as acrylic acid, anthraquinone, ethylene oxide, methanol, phthalic anhydride, styrene, methyl methacrylate monomers, and various olefins. It is also used in the catalytic conversion of sulfur dioxide into sulfur trioxide in the production of sulfuric acid.
Caesium fluoride enjoys a niche use in organic chemistry as a base and as an anhydrous source of fluoride ion. Caesium salts sometimes replace potassium or sodium salts in organic synthesis, such as cyclization, esterification, and polymerization. Caesium has also been used in thermoluminescent radiation dosimetry (TLD): When exposed to radiation, it acquires crystal defects that, when heated, revert with emission of light proportionate to the received dose. Thus, measuring the light pulse with a photomultiplier tube can allow the accumulated radiation dose to be quantified.
Nuclear and isotope applications
Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having 137Ba as a stable end product. The high water solubility is a disadvantage which makes it incompatible with large pool irradiators for food and medical supplies. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, levelling, and thickness gauges. It has also been used in well logging devices for measuring the electron density of the rock formations, which is analogous to the bulk density of the formations.
Caesium-137 has been used in hydrologic studies analogous to those with tritium. As a daughter product of fission bomb testing from the 1950s through the mid-1980s, caesium-137 was released into the atmosphere, where it was absorbed readily into solution. Known year-to-year variation within that period allows correlation with soil and sediment layers. Caesium-134, and to a lesser extent caesium-135, have also been used in hydrology to measure the caesium output by the nuclear power industry. While they are less prevalent than either caesium-133 or caesium-137, these bellwether isotopes are produced solely from anthropogenic sources.
Other uses
Caesium and mercury were used as a propellant in early ion engines designed for spacecraft propulsion on very long interplanetary or extraplanetary missions. The fuel was ionized by contact with a charged tungsten electrode. But corrosion by caesium on spacecraft components has pushed development in the direction of inert gas propellants, such as xenon, which are easier to handle in ground-based tests and do less potential damage to the spacecraft. Xenon was used in the experimental spacecraft Deep Space 1 launched in 1998. Nevertheless, field-emission electric propulsion thrusters that accelerate liquid metal ions such as caesium have been built.
Caesium nitrate is used as an oxidizer and pyrotechnic colorant to burn silicon in infrared flares, such as the LUU-19 flare, because it emits much of its light in the near infrared spectrum. Caesium compounds may have been used as fuel additives to reduce the radar signature of exhaust plumes in the Lockheed A-12 CIA reconnaissance aircraft. Caesium and rubidium have been added as a carbonate to glass because they reduce electrical conductivity and improve stability and durability of fibre optics and night vision devices. Caesium fluoride or caesium aluminium fluoride are used in fluxes formulated for brazing aluminium alloys that contain magnesium.
Magnetohydrodynamic (MHD) power-generating systems were researched, but failed to gain widespread acceptance. Caesium metal has also been considered as the working fluid in high-temperature Rankine cycle turboelectric generators.
Caesium salts have been evaluated as antishock reagents following the administration of arsenical drugs. Because of their effect on heart rhythms, however, they are less likely to be used than potassium or rubidium salts. They have also been used to treat epilepsy.
Caesium-133 can be laser cooled and used to probe fundamental and technological problems in quantum physics. It has a particularly convenient Feshbach spectrum to enable studies of ultracold atoms requiring tunable interactions.
Health and safety hazards
Nonradioactive caesium compounds are only mildly toxic, and nonradioactive caesium is not a significant environmental hazard. Because biochemical processes can mistake caesium for potassium and substitute one for the other, excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources.
The median lethal dose (LD50) for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. The principal use of nonradioactive caesium is as caesium formate in petroleum drilling fluids because it is much less toxic than alternatives, though it is more costly.
Caesium is one of the most reactive elements and is highly explosive in the presence of water. The hydrogen gas produced by the reaction is heated by the thermal energy released at the same time, causing ignition and a violent explosion. This can occur with other alkali metals, but caesium is so potent that this explosive reaction can be triggered even by cold water.
It is highly pyrophoric: the autoignition temperature of caesium is , and it ignites explosively in air to form caesium hydroxide and various oxides. Caesium hydroxide is a very strong base, and will rapidly corrode glass.
The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a great concern after the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs".
See also
Acerinox accident, a caesium-137 contamination accident in 1998
Goiânia accident, a major radioactive contamination incident in 1987 involving caesium-137
Kramatorsk radiological accident, a 137Cs lost-source incident between 1980 and 1989
Notes
References
External links
Caesium or Cesium at The Periodic Table of Videos (University of Nottingham)
View the reaction of Caesium (most reactive metal in the periodic table) with Fluorine (most reactive non-metal) courtesy of The Royal Institution.
1860 introductions
Alkali metals
Chemical elements with body-centered cubic structure
Chemical elements
Glycine receptor agonists
Reducing agents
Articles containing video clips
Pyrophoric materials | Caesium | [
"Physics",
"Chemistry",
"Technology"
] | 7,981 | [
"Chemical elements",
"Redox",
"Reducing agents",
"Atoms",
"Matter"
] |
5,881 | https://en.wikipedia.org/wiki/Century | A century is a period of 100 years or 10 decades. Centuries are numbered ordinally in English and many other languages. The word century comes from the Latin centum, meaning one hundred. Century is sometimes abbreviated as c.
A centennial or centenary is a hundredth anniversary, or a celebration of this, typically the remembrance of an event which took place a hundred years earlier.
Start and end of centuries
Although a century can mean any arbitrary period of 100 years, there are two viewpoints on the nature of standard centuries. One is based on strict construction, while the other is based on popular perception.
According to the strict construction, the 1st century AD, which began with AD 1, ended with AD 100, and the 2nd century with AD 200; in this model, the n-th century starts with a year that follows a year with a multiple of 100 (except the first century as it began after the year 1 BC) and ends with the next coming year with a multiple of 100 (100n), i.e. the 20th century comprises the years 1901 to 2000, and the 21st century comprises the years 2001 to 2100 in strict usage.
In common perception and practice, centuries are structured by grouping years based on sharing the 'hundreds' digit(s). In this model, the n-th century starts with the year that ends in "00" and ends with the year ending in "99"; for example, in popular culture, the years 1900 to 1999 constitute the 20th century, and the years 2000 to 2099 constitute the 21st century. (This is similar to the grouping of "0-to-9 decades" which share the 'tens' digit.)
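The two viewpoints assign different centuries only to years ending in "00". A minimal sketch of both conventions for AD years (the function names are illustrative):

```python
def century_strict(year: int) -> int:
    """Strict construction: AD 1-100 is the 1st century, 1901-2000 the 20th."""
    return (year + 99) // 100

def century_popular(year: int) -> int:
    """Popular usage: years grouped by their 'hundreds' digits, 1900-1999 is the 20th."""
    return year // 100 + 1

for y in (1900, 1999, 2000, 2001):
    print(y, century_strict(y), century_popular(y))
# Only 1900 and 2000 differ: strict 19th/20th versus popular 20th/21st.
```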
To facilitate calendrical calculations by computer, the astronomical year numbering and ISO 8601 systems both contain a year zero, with the astronomical year 0 corresponding to the year 1 BC, the astronomical year -1 corresponding to 2 BC, and so on.
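The offset between historical BC years and astronomical year numbering described above can be captured in a one-line helper; a minimal sketch, assuming the ISO 8601 / astronomical convention with a year zero:

```python
def astronomical_year(year: int, bc: bool = False) -> int:
    """AD years map to themselves; n BC maps to 1 - n (1 BC -> 0, 2 BC -> -1)."""
    return 1 - year if bc else year

assert astronomical_year(1, bc=True) == 0     # 1 BC is astronomical year 0
assert astronomical_year(2, bc=True) == -1    # 2 BC is astronomical year -1
assert astronomical_year(2000) == 2000        # AD years are unchanged
```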
Alternative naming systems
Informally, years may be referred to in groups based on the hundreds part of the year. In this system, the years 1900–1999 are referred to as the nineteen hundreds (1900s). Aside from English usage, this system is used in Swedish, Danish, Norwegian, Icelandic, Finnish and Hungarian. The corresponding Swedish, Danish, Norwegian, Finnish and Hungarian terms refer unambiguously to the years 1900–1999. In Swedish, however, a century is in rarer cases referred to by an ordinal form ("the n-th century") rather than by the hundreds form; in this usage the 17th century is referred to in that way rather than as 1600-talet, and then mainly refers to the years 1601–1700 rather than 1600–1699. According to Svenska Akademiens ordbok, such a term may refer to either the years 1501–1600 or 1500–1599.
Similar dating units in other calendar systems
While the century has been commonly used in the West, other cultures and calendars have utilized differently sized groups of years in a similar manner. The Hindu calendar, in particular, summarizes its years into groups of 60, while the Aztec calendar considers groups of 52.
See also
Age of Discovery
Ancient history
Before Christ and Anno Domini
Common Era
Decade
List of decades, centuries, and millennia
Lustrum
Middle Ages
Millennium
Modern era
Saeculum
Year
Notes
References
Bibliography
The Battle of the Centuries, Ruth Freitag, U.S. Government Printing Office. Available from the Superintendent of Documents, P.O. Box 371954, Pittsburgh, PA 15250- 7954. Cite stock no. 030-001-00153-9. Retrieved 3 March 2019.
100 (number)
Units of time | Century | [
"Physics",
"Mathematics"
] | 758 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
5,898 | https://en.wikipedia.org/wiki/Carabiner | A carabiner or karabiner (), often shortened to biner or to crab, colloquially known as a (climbing) clip, is a specialized type of shackle, a metal loop with a spring-loaded gate used to quickly and reversibly connect components, most notably in safety-critical systems. The word comes from the German , short for , meaning "carbine hook," as the device was used by carabiniers to attach their carbines to their belts.
Use
Carabiners are widely used in rope-intensive activities such as climbing, fall arrest systems, arboriculture, caving, sailing, hot-air ballooning, rope rescue, construction, industrial rope work, window cleaning, whitewater rescue, and acrobatics. They are predominantly made from both steel and aluminium. Those used in sports tend to be of a lighter weight than those used in commercial applications and rope rescue.
Often referred to as carabiner-style or as mini-carabiners, carabiner keyrings and other light-use clips of similar style and design have also become popular. Most are stamped with a "not for climbing" or similar warning due to a common lack of load-testing and safety standards in manufacturing.
While any metal link with a spring-loaded gate is technically a carabiner, the strict usage among the climbing community specifically refers only to devices manufactured and tested for load-bearing in safety-critical systems like rock and mountain climbing, typically rated to 20 kN or more.
Carabiners on hot-air balloons are used to connect the envelope to the basket and are rated at 2.5, 3, or 4 tonnes.
Load-bearing screw-gate carabiners are used to connect the diver's umbilical to the surface supplied diver's harness. They are usually rated for a safe working load of 5 kN or more (equivalent to a weight in excess of approximately 500 kg).
Types
Shape
Carabiners come in four characteristic shapes:
Oval: Symmetric. Most basic and utilitarian. Smooth regular curves are gentle on equipment and allow easy repositioning of loads. Their greatest disadvantage is that a load is shared equally on both the strong solid spine and the weaker gated axis. Often preferred type for racking biners due to their symmetric shape.
D: Asymmetric shape transfers the majority of the load on to the spine, the carabiner's strongest axis.
Offset-D: Variant of a D with a greater asymmetry, allowing for a wider gate opening.
Pear/HMS: Wider and rounder shape at the top than offset-D's, and typically larger. Used for belaying with a munter hitch, and with some types of belay device. The largest HMS carabiners can also be used for rappelling with a munter hitch (the size is needed to accommodate the hitch with two strands of rope). These are usually the heaviest carabiners.
Locking mechanisms
Carabiners fall into three broad locking categories: non-locking, manual locking, and auto locking.
Non-locking
Non-locking carabiners (or snap-links) have a sprung swinging gate that accepts a rope, webbing sling, or other hardware. Rock climbers frequently connect two non-locking carabiners with a short length of webbing to create a quickdraw (an extender).
Two gate types are common:
Solid gate: The more traditional carabiner design, incorporating a solid metal gate with separate pin and spring mechanisms. Most modern carabiners feature a 'key-lock' nose shape and gate opening, which is less prone to snagging than the traditional notch-and-pin design. Most locking carabiners are based on the solid gate design.
Wire gate: A single piece of bent spring-steel wire forms the gate. Wire gate carabiners are significantly lighter than solid gates, with roughly the same strength. Wire gates are less prone to icing up than solid gates, an advantage in Alpine mountaineering and ice climbing. The reduced gate mass makes their wire bales less prone to "gate flutter", a dangerous condition in which the gate opens momentarily due to momentum when the carabiner suddenly impacts rock or another hard surface during a fall; an open gate both lowers the breaking strength of the carabiner and can allow the rope to escape. Simple wiregate designs feature a notch that can snag objects (similar to original solid gate designs), but newer designs use a shroud or guide wires around the "hooked" part of the carabiner nose to prevent snagging.
Both solid and wire gate carabiners can be either "straight gate" or "bent gate". Bent-gate carabiners are easier to clip a rope into using only one hand, and so are often used for the rope-end carabiner of quickdraws and alpine draws used for lead climbing.
Locking
Locking carabiners have the same general shape as non-locking carabiners, but have an additional mechanism securing the gate to prevent unintentional opening during use. These mechanisms may be either threaded sleeves ("screw-lock"), spring-loaded sleeves ("twist-lock"), magnetic levers ("Magnetron"), other spring loaded unlocking levers or opposing double spring loaded gates ("twin-gate").
Manual
Screw-lock (or screw gate): Have a threaded sleeve over the gate which must be engaged and disengaged manually. They have fewer moving parts than spring-loaded mechanisms, are less prone to malfunctioning due to contamination or component fatigue, and are easier to employ one-handed. They, however, require more total effort and are more time-consuming than pull-lock, twist-lock or lever-lock.
Auto-locking
Twist-lock, push-lock, twist-and-push-lock: Have a security sleeve over the gate which must be manually rotated and/or pulled to disengage, but which springs automatically to locked position upon release. They offer the advantage of re-engaging without additional user input, but being spring-loaded are prone to both spring fatigue and their more complex mechanisms becoming balky from dirt, ice, or other contamination. They are also difficult to open one-handed or with gloves on, and they sometimes jam: a sleeve tightened under load can be very hard to undo once the load is removed.
Multiple-levers: Having at least two spring loaded levers that are each operated with one hand.
Magnetic: Have two small levers with embedded magnets on either side of the locking gate which must be pushed towards each other or pinched simultaneously to unlock. Upon release the levers pull shut and into the locked position against a small steel insert in the carabiner nose. With the gate open the magnets in the two levers repel each other so they do not lock or stick together, which might prevent the gate from closing properly. Advantages are very easy one-handed operation, re-engaging without additional user input and few mechanical parts that can fail.
Double-Gate: Have two opposed overlapping gates at the opening which prevent a rope or anchor from inadvertently passing through the gate in either direction. Gates may only be opened by pushing outwards from in between towards either direction. The carabiner can therefore be opened by splitting the gates with a fingertip, allowing easy one hand operation. The likelihood of a rope under tension to split the gates is therefore practically none. The lack of a rotating lock prevents a rolling knot, such as the Munter hitch, from unlocking the gate and passing through, giving a measure of inherent safety in use and reducing mechanical complexity.
Certification
Europe
Recreation: Carabiners sold for use in climbing in Europe must conform to standard EN 12275:1998 "Mountaineering equipment – Connectors – Safety requirements and test methods", which governs testing protocols, rated strengths, and markings. A breaking strength of at least 20 kN (20,000 newtons, equivalent to the weight of approximately 2,040 kilograms, significantly more than the weight of a small car) with the gate closed and 7 kN with the gate open is the standard for most climbing applications, although requirements vary depending on the activity. Carabiners are marked on the side with single letters showing their intended area of use, for example, K (via ferrata), B (base), and H (for belaying with an Italian or Munter hitch).
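The parenthetical conversion above (20 kN to roughly 2,040 kg) is a static equivalence under standard gravity; real climbing falls generate dynamic loads, so this is only a way of visualising the rating. A minimal sketch of the arithmetic:

```python
STANDARD_GRAVITY = 9.80665  # m/s^2

def kn_to_equivalent_kg(kilonewtons: float) -> float:
    """Mass whose weight under standard gravity equals the given force rating."""
    return kilonewtons * 1000 / STANDARD_GRAVITY

print(round(kn_to_equivalent_kg(20)))  # ~2039 kg, the gate-closed rating discussed above
print(round(kn_to_equivalent_kg(7)))   # ~714 kg, the gate-open rating
```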
Industry: Carabiners used for access in commercial and industrial environments within Europe must comply with EN 362:2004 "Personal protective equipment against falls from a height. Connectors." The minimum gate closed breaking strength of a carabiner conforming with EN 362:2004 is nominally the same as that of EN 12275:1998 at around 20 kN. Carabiners complying with both EN 12275:1998 and EN 362:2004 are available.
United States
Climbing and mountaineering: Minimum breaking strength (MBS) requirements and calculations for climbing and mountaineering carabiners in the USA are set out in ASTM Standard F1774. This standard calls for a MBS of 20 kN on the long axis, and 7 kN on the short axis (cross load).
Rescue: Carabiners used for rescue are addressed in ASTM F1956. This document addresses two classifications of carabiners, light use and heavy-duty. Light use carabiners are the most widely used, and are commonly found in applications including technical rope rescue, mountain rescue, cave rescue, cliff rescue, military, SWAT, and even by some non-NFPA fire departments. ASTM requirements for light use carabiners are 27 kN MBS on the long axis, 7 kN on the short axis. Requirements for the lesser-used heavy duty rescue carabiners are 40 kN MBS long axis, 10.68 kN short axis.
Fire rescue: Minimum breaking strength requirements and calculations for rescue carabiners used by NFPA compliant agencies are set out in National Fire Protection Association standard 1983-2012 edition Fire Service Life Safety Rope and Equipment. The standard defines two classes of rescue carabiners. Technical use rescue carabiners are required to have minimum breaking strengths of 27 kN gate closed, 7 kN gate open and 7 kN minor axis. General use rescue carabiners are required to have minimum breaking strengths of 40 kN gate closed, 11 kN gate open and 11 kN minor axis. Testing procedures for rescue carabiners are set out in ASTM International standard F 1956 Standard Specification of Rescue Carabiners.
Fall protection: Carabiners used for fall protection in US industry are classified as "connectors" and are required to meet Occupational Safety and Health Administration standard 1910.66 App C Personal Fall Arrest System which specifies "drop forged, pressed or formed steel, or made of equivalent materials" and a minimum breaking strength of .
American National Standards Institute/American Society of Safety Engineers standard ANSI Z359.1-2007 Safety Requirement for Personal Fall Arrest Systems, Subsystems and Components, section 3.2.1.4 (for snap hooks and carabiners) is a voluntary consensus standard. This standard requires that all connectors/ carabiners support a minimum breaking strength (MBS) of and feature an auto-locking gate mechanism which supports a minimum breaking strength (MBS) of .
History
The first known hooks that had a sprung, hinged gate where the spring kept it closed (characteristics expected of a carabiner) were depicted by Nuremberg patrician Martin Löffelholz von Kolberg in about 1505 in the Codex Löffelholz, in the Holy Roman Empire. These then became the clip used to hold a cavalry carbine or arquebus, with the earliest known mention of them being in 1616 by Johann Jacob von Wallhausen, in the Holy Roman Empire. They were widely used in many European countries during the 17th century, and typically had a belt attachment and swivel joint, much like a modern luggage strap or handbag strap. The load bearing latch was added in the 1790s, for the British cavalry design. They were used for many other purposes during the 19th century, such as for luggage straps, mining and connecting ropes. Some common designs first appeared during that time, including S-carabiners. Oval links, which had also appeared in 1485, also reappeared as carabiners. Screw gates and internal springs were developed. Prussian fire brigades began to use carabiners for connecting themselves to ladders in 1847, and this became the modern gourd-shaped design by 1868. German and Austrian mountaineers started using them during the late 19th century, with a mention of their use from 1879, and their continued use for climbing by climbers in Saxon Switzerland. The majority used gourd shaped carabiners which were created for mining or other utility purposes.
The common myth suggesting that mountaineering carabiners were invented or made by German climber Otto "Rambo" Herzog in the 1910s has no basis in fact. He used them for some challenging climbs and some new techniques at a time when such "artificial aids" were still controversial in mountain climbing, but he did not invent them or develop any designs, and he was born long after other climbers were already using carabiners.
During the 1920s many designs were used by mountain climbers, such as gourd-shaped, oval or elliptical, mostly sold for general hardware. By the early 1930s, carabiners were being sold for climbing, oval designs being the most popular. During this decade, hardened steel carabiners appeared and the first aluminium carabiner prototypes were made by Pierre Allain, although they were never sold. These were the first carabiners designed specifically for climbing and the first offset D-shaped carabiners. Aluminium carabiners were first sold to the military in 1941, which were the first commercial carabiners designed specifically for climbing. Slightly offset D-shaped carabiners were sold in the late 1940s, which became the standard offset D-shape (which is now the most common) in the 1950s.
Chouinard Equipment introduced the 22 kN aluminium carabiner in 1968, though this strength had already been far surpassed by steel carabiners. Wiregate carabiners were first patented in 1969, and were sold for maritime use. They were first sold for climbing in 1996. The popular keylock, which avoids snagging, was developed around 1984–1987.
See also
Maillon
Lobster clasp
Rock-climbing equipment
Glossary of climbing terms
References
Climbing equipment
Caving equipment
German inventions
Mountaineering equipment
Fasteners | Carabiner | [
"Engineering"
] | 3,000 | [
"Construction",
"Fasteners"
] |