id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
994097 | https://en.wikipedia.org/wiki/Auditory%20cortex | Auditory cortex | The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching. It is located bilaterally, roughly at the upper sides of the temporal lobes – in humans, curving down and onto the medial surface, on the superior temporal plane, within the lateral sulcus and comprising parts of the transverse temporal gyri, and the superior temporal gyrus, including the planum polare and planum temporale (roughly Brodmann areas 41 and 42, and partially 22).
The auditory cortex takes part in the spectrotemporal analysis of the inputs passed on from the ear, where "spectrotemporal" means involving both time and frequency. The cortex then filters and passes on the information to the dual stream of speech processing. The auditory cortex's function may help explain why particular brain damage leads to particular outcomes. For example, unilateral destruction, in a region of the auditory pathway above the cochlear nucleus, results in slight hearing loss, whereas bilateral destruction results in cortical deafness.
Structure
The auditory cortex was previously subdivided into primary (A1) and secondary (A2) projection areas and further association areas. The modern divisions of the auditory cortex are the core (which includes primary auditory cortex, A1), the belt (secondary auditory cortex, A2), and the parabelt (tertiary auditory cortex, A3). The belt is the area immediately surrounding the core; the parabelt is adjacent to the lateral side of the belt.
Besides receiving input from the ears via lower parts of the auditory system, it also transmits signals back to these areas and is interconnected with other parts of the cerebral cortex. Within the core (A1), its structure preserves tonotopy, the orderly representation of frequency, due to its ability to map low to high frequencies corresponding to the apex and base, respectively, of the cochlea.
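To give a rough numerical sense of the low-to-high frequency gradient that this tonotopic map preserves, the following Python sketch uses the Greenwood function, a commonly quoted approximation of the human cochlear frequency-position map; it is purely illustrative and the parameter values are standard textbook figures rather than values taken from this article.

```python
# Illustrative sketch of cochlear tonotopy using the Greenwood function.
# Parameter values (A, a, k) are those commonly quoted for the human cochlea
# and are given here for illustration only.
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at relative cochlear position x,
    where x = 0 is the apex (low frequencies) and x = 1 is the base (high)."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:4.2f} -> ~{greenwood_frequency(x):8.0f} Hz")
# Runs from the apex (~20 Hz) up to the base (~20 kHz), mirroring the
# low-to-high frequency gradient represented in A1's tonotopic map.
```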
Data about the auditory cortex has been obtained through studies in rodents, cats, macaques, and other animals. In humans, the structure and function of the auditory cortex has been studied using functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and electrocorticography.
Development
Like many areas in the neocortex, the functional properties of the adult primary auditory cortex (A1) are highly dependent on the sounds encountered early in life. This has been best studied using animal models, especially cats and rats. In the rat, exposure to a single frequency during postnatal day (P) 11 to 13 can cause a 2-fold expansion in the representation of that frequency in A1. Importantly, the change is persistent, in that it lasts throughout the animal's life, and specific, in that the same exposure outside of that period causes no lasting change in the tonotopy of A1. Sexual dimorphism in the human auditory cortex can be seen in the planum temporale, which encompasses Wernicke's region: the planum temporale in males has been observed to have a larger volume on average, consistent with previous studies on interactions between sex hormones and asymmetrical brain development.
Function
As with other primary sensory cortical areas, auditory sensations reach perception only if received and processed by a cortical area. Evidence for this comes from lesion studies in human patients who have sustained damage to cortical areas through tumors or strokes, or from animal experiments in which cortical areas were deactivated by surgical lesions or other methods. Damage to the auditory cortex in humans leads to a loss of any awareness of sound, but an ability to react reflexively to sounds remains as there is a great deal of subcortical processing in the auditory brainstem and midbrain.
Neurons in the auditory cortex are organized according to the frequency of sound to which they respond best. Neurons at one end of the auditory cortex respond best to low frequencies; neurons at the other respond best to high frequencies. There are multiple auditory areas (much like the multiple areas in the visual cortex), which can be distinguished anatomically and on the basis that they contain a complete "frequency map." This frequency map (known as a tonotopic map) likely reflects the fact that the cochlea is arranged according to sound frequency. The auditory cortex is involved in tasks such as identifying and segregating "auditory objects" and identifying the location of a sound in space. For example, it has been shown that A1 encodes complex and abstract aspects of auditory stimuli without encoding their "raw" aspects like frequency content, presence of a distinct sound or its echoes.
Human brain scans have indicated that a peripheral part of this brain region is active when trying to identify musical pitch. Individual cells consistently get excited by sounds at a specific frequency, or at multiples of that frequency.
The auditory cortex plays an important yet ambiguous role in hearing. When the auditory information passes into the cortex, the specifics of what exactly takes place are unclear. There is a large degree of individual variation in the auditory cortex, as noted by English biologist James Beament, who wrote, "The cortex is so complex that the most we may ever hope for is to understand it in principle, since the evidence we already have suggests that no two cortices work in precisely the same way."
In the hearing process, multiple sounds are transduced simultaneously. The role of the auditory system is to decide which components belong together as a single sound. Many have surmised that this linkage is based on the location of sounds. However, there are numerous distortions of sound when it is reflected off different media, which makes this thinking unlikely. Instead, the auditory cortex forms groupings based on fundamentals; in music, for example, these would include harmony, timing, and pitch.
The primary auditory cortex lies in the superior temporal gyrus of the temporal lobe and extends into the lateral sulcus and the transverse temporal gyri (also called Heschl's gyri). Final sound processing is then performed by the parietal and frontal lobes of the human cerebral cortex. Animal studies indicate that auditory fields of the cerebral cortex receive ascending input from the auditory thalamus and that they are interconnected on the same and on the opposite cerebral hemispheres.
The auditory cortex is composed of fields that differ from each other in both structure and function. The number of fields varies in different species, from as few as 2 in rodents to as many as 15 in the rhesus monkey. The number, location, and organization of fields in the human auditory cortex are not known at this time. What is known about the human auditory cortex comes from a base of knowledge gained from studies in mammals, including primates, used to interpret electrophysiological tests and functional imaging studies of the brain in humans.
When each instrument of a symphony orchestra or jazz band plays the same note, the quality of each sound is different, but the musician perceives each note as having the same pitch. The neurons of the auditory cortex of the brain are able to respond to pitch. Studies in the marmoset monkey have shown that pitch-selective neurons are located in a cortical region near the anterolateral border of the primary auditory cortex. This location of a pitch-selective area has also been identified in recent functional imaging studies in humans.
The primary auditory cortex is subject to modulation by numerous neurotransmitters, including norepinephrine, which has been shown to decrease cellular excitability in all layers of the temporal cortex. Activation of alpha-1 adrenergic receptors by norepinephrine decreases glutamatergic excitatory postsynaptic potentials at AMPA receptors.
Relationship to the auditory system
The auditory cortex is the most highly organized processing unit of sound in the brain. This cortex area is the neural crux of hearing, and—in humans—language and music. The auditory cortex is divided into three separate parts: the primary, secondary, and tertiary auditory cortex. These structures are formed concentrically around one another, with the primary cortex in the middle and the tertiary cortex on the outside.
The primary auditory cortex is tonotopically organized, which means that neighboring cells in the cortex respond to neighboring frequencies. Tonotopic mapping is preserved throughout most of the audition circuit. The primary auditory cortex receives direct input from the medial geniculate nucleus of the thalamus and thus is thought to identify the fundamental elements of music, such as pitch and loudness.
An evoked response study of congenitally deaf kittens used local field potentials to measure cortical plasticity in the auditory cortex. These kittens were stimulated and measured against a control (an unstimulated congenitally deaf cat (CDC)) and against normal hearing cats. The field potentials measured for the artificially stimulated CDCs were eventually much stronger than those of a normal hearing cat. This finding accords with a study by Eckart Altenmüller, in which it was observed that students who received musical instruction had greater cortical activation than those who did not.
The auditory cortex has distinct responses to sounds in the gamma band. When subjects are exposed to three or four cycles of a 40 hertz click, an abnormal spike appears in the EEG data, which is not present for other stimuli. The spike in neuronal activity correlating to this frequency is not constrained by the tonotopic organization of the auditory cortex. It has been theorized that gamma frequencies are resonant frequencies of certain areas of the brain and appear to affect the visual cortex as well. Gamma band activation (25 to 100 Hz) has been shown to be present during the perception of sensory events and the process of recognition. In a 2000 study by Kneif and colleagues, subjects were presented with eight musical notes from well-known tunes, such as Yankee Doodle and Frère Jacques. Randomly, the sixth and seventh notes were omitted, and an electroencephalogram as well as a magnetoencephalogram were used to measure the neural results. Specifically, the presence of gamma waves, induced by the auditory task at hand, was measured from the temples of the subjects. The omitted stimulus response (OSR) was located in a slightly different position: 7 mm more anterior, 13 mm more medial and 13 mm more superior with respect to the complete sets. The OSR recordings were also characteristically lower in gamma waves as compared to the complete musical set. The evoked responses during the sixth and seventh omitted notes are assumed to be imagined, and were characteristically different, especially in the right hemisphere. The right auditory cortex has long been shown to be more sensitive to tonality (high spectral resolution), while the left auditory cortex has been shown to be more sensitive to minute sequential differences (rapid temporal changes) in sound, such as in speech.
Tonality is represented in more places than just the auditory cortex; one other specific area is the rostromedial prefrontal cortex (RMPFC). A study explored the areas of the brain which were active during tonality processing, using fMRI. The results of this experiment showed preferential blood-oxygen-level-dependent activation of specific voxels in RMPFC for specific tonal arrangements. Though these collections of voxels do not represent the same tonal arrangements between subjects or within subjects over multiple trials, it is interesting and informative that RMPFC, an area not usually associated with audition, seems to code for immediate tonal arrangements in this respect. RMPFC is a subsection of the medial prefrontal cortex, which projects to many diverse areas including the amygdala, and is thought to aid in the inhibition of negative emotion.
Another study has suggested that people who experience 'chills' while listening to music have a higher volume of fibres connecting their auditory cortex to areas associated with emotional processing.
In a study involving dichotic listening to speech, in which one message is presented to the right ear and another to the left, it was found that the participants chose letters with stops (e.g. 'p', 't', 'k', 'b') far more often when presented to the right ear than the left. However, when presented with phonemic sounds of longer duration, such as vowels, the participants did not favor any particular ear. Due to the contralateral nature of the auditory system, the right ear is connected to Wernicke's area, located within the posterior section of the superior temporal gyrus in the left cerebral hemisphere.
Sounds entering the auditory cortex are treated differently depending on whether or not they register as speech. When people listen to speech, according to the strong and weak speech mode hypotheses, they, respectively, engage perceptual mechanisms unique to speech or engage their knowledge of language as a whole.
| Biology and health sciences | Sensory nervous system | Biology |
994489 | https://en.wikipedia.org/wiki/Pomfret | Pomfret | Pomfrets are scombriform fish belonging to the family Bramidae. The family currently includes 20 species across seven genera. Several species are important food sources for humans, especially Brama brama in South Asia. An earlier form of the pomfret's name probably ultimately comes from Portuguese pampo, referring to various fish such as the blue butterfish (Stromateus fiatola). The fish meat is white in color.
Distribution
They are found globally in the Atlantic, Indian, and Pacific Oceans, as well as numerous seas including the Norwegian, Mediterranean, and Sea of Japan. Nearly all species can be found in the high seas. However, fish in the genera Pterycombus and Pteraclis tend to be found off continental shelves. Further, fishes in the genus Eumegistus are hypothesized to be largely benthic and found to occupy deep water shelves.
Some species of pomfrets are also known as monchong, specifically in Hawaiian cuisine.
Genera
The following genera are placed within the family Bramidae:
Brama Bloch & Schneider, 1801
Eumegistus Jordan & Jordan, 1922
Pteraclis Gronow, 1772
Pterycombus Fries, 1837
Taractes Lowe, 1843
Taractichthys Mead & Maul, 1958
Xenobrama Yatsu & Nakamura, 1989
The following fossil genera are also known:
?†Bramoides Casier, 1966
?†Goniocranion Casier, 1966 (possibly a lampriform)
†Paucaichthys Baciu & Bannikov, 2003
The fossil genus Digoria was also previously placed with the Bramidae, but is now known to be a beardfish.
| Biology and health sciences | Acanthomorpha | Animals |
995939 | https://en.wikipedia.org/wiki/PortMiami | PortMiami | The Port of Miami, styled as PortMiami and formally known as the Dante B. Fascell Port of Miami, is a major seaport located in Biscayne Bay at the mouth of the Miami River in Miami, Florida. It is the largest passenger port in the world and one of the largest cargo ports in the United States.
The port occupies what were historically three separate islands (Dodge, Lummus and Sam's islands) that have since been combined into one. It is connected to Downtown Miami by Port Boulevard—a causeway over the Intracoastal Waterway—and to the neighboring Watson Island via the PortMiami Tunnel. It is named in honor of 19-term Florida Congressman Dante Fascell.
As of 2023, PortMiami accounts for approximately 334,500 jobs and has an annual economic revenue of $43 billion to the state of Florida.
History
In the early 1900s, Government Cut was dredged along with a new channel to what is now known as Bicentennial Park in downtown Miami. This new access to the mainland created the Main Channel, which greatly improved shipping access to the new port. The original dredging spoils were deposited on the south side of the new Main Channel, inadvertently creating new islands that later became Dodge, Lummus and Sam's islands, along with several other smaller islands.
PortMiami's improved shipping access and growth of the South Florida community led to an expansion of the port. On April 5, 1960, Resolution No. 4830, "Joint Resolution Providing for Construction of Modern Seaport Facilities at Dodge Island Site" was approved by the Dade County Board of Commissioners. On April 6, 1960, the City of Miami approved City Resolution No. 31837 to construct the new port. The new port on Dodge Island required expansion of the island by joining it together with the surrounding islands. After the seawalls, administrative buildings, and a vehicle and railroad bridge were completed, Port of Miami operations were moved to the new Dodge Island port. Additional fill material enlarged the connected Lummus and Sam's islands as well as the North, South and NOAO slips, creating a completely artificial island for PortMiami.
The port is officially named after Florida House of Representatives member Dante Fascell, who served for four decades from 1955 to 1993, and died in 1998.
In 1993, the first dredge of PortMiami occurred, deepening it to . In 2006, a $40 million project to expand the South Harbor finished. In 2011, a project to reconnect PortMiami to the mainland via railroad began. In 2013, a dredging project began to deepen the harbors around PortMiami from . In April 2019, the Miami-Dade Tourism and Ports Committee approved a deal for Royal Caribbean Cruises to build a new office and parking garage on Dodge Island.
Today
Cruise ship operations
PortMiami is the busiest cruise/passenger port in the world. It accommodates major cruise lines such as Carnival, Royal Caribbean, Norwegian, and MSC, among others, and also serves as the homeport of the largest cruise ship in the world by gross tonnage, Icon of the Seas. Over 7.2 million cruise passengers pass through the port each year (FY2023/2024).
As of July 2024, there are nine operating passenger terminal facilities at PortMiami: A, B, C, D, E, F, G, J, and V. Of the nine, there are three facilities that are purpose-built for a specific company, while other companies share the other terminals. Other company-specific facilities are in their planning or construction stages.
Current passenger terminals
Terminals and projects
On June 28, 2016, Royal Caribbean Group announced plans for a new facility that would redevelop "Terminal A" at PortMiami. It would be Royal Caribbean's homeport and be fully capable of serving larger Oasis-class ships. The terminal, dubbed the "Crown of Miami," was completed in November 2018.
On March 7, 2018, Norwegian Cruise Line Holdings announced plans for a new facility that would redevelop "Terminal B" at PortMiami. It would be fully capable of serving Norwegian's largest ships, the Breakaway Plus-class ships. Norwegian originally intended to open the terminal, dubbed the "Pearl of Miami," by fall 2019, but budgeting issues and the COVID-19 pandemic postponed its opening date until August 2021, when Terminal B officially serviced its first cruise ship.
On November 28, 2018, Virgin Voyages announced plans to build a new terminal located on the northwest side of PortMiami. On September 19, 2019, Virgin Voyages finalized the $150 million contract with Miami-Dade County to begin redeveloping the area currently occupied by "Terminal H", which would be renamed "Terminal V" upon completion. This facility effectively replaced "Terminal H". Prior to August 2019, "Terminal H" was primarily occupied by FRS Caribbean, which operated a ferry service between Miami and Bimini in the Bahamas. The new terminal is designed to be the homeport for Virgin Voyages' first two vessels, the Scarlet Lady and the Valiant Lady. "Terminal V" was completed in February 2022.
On September 19, 2019, Carnival Cruise Line announced that it had received approval from Miami-Dade County for an expansion of its company's facilities at PortMiami by renovating and expanding "Terminal F", making it the company's third passenger facility at the port and the company's largest terminal in North America, at . The terminal was completed on February 14, 2023 to coincide with the debut of Carnival's second vessel, Carnival Celebration, which is currently homeported in Miami. The terminal will be operated by Carnival under a 20-year lease.
In July 2018, MSC Cruises announced plans to build "Terminal AA/AAA" for its upcoming cruise ships, a forthcoming class of cruise ship with an approximate gross tonnage of 215,800 tons. On September 19, 2019, MSC and Miami-Dade County finalized the contract to construct the new facility. The new $300 million building will span and include two berths capable of operating simultaneously, separately named as "AA" and "AAA," and be operated by MSC under a 62-year lease. In September 2018, it was announced that Disney Cruise Line had entered into an agreement with Miami-Dade County to plan for a brand-new terminal, "Terminal K", on the south side of PortMiami and east of Terminal J. The inauguration of the terminal was expected to coincide with Disney's expansion into Miami with two vessels homeported at the port in the mid-2020s. The construction of the terminal would have been dependent on improvements made to the port's infrastructure that could have enabled Disney's vessels to operate on the south side of the port. Dates for groundbreaking and completion were not announced at the time of announcement. However, in July 2020, in light of the COVID-19 pandemic and its economic repercussions, PortMiami issued a new proposal that accommodates MSC's difficulties in receiving financing for the project by amending the ground lease, while also granting the port additional time to prepare the site for the project prior to turning over the premises to MSC. Additionally, in an effort to reduce costs for its expansion projects, the port issued an accompanying resolution requiring the new MSC complex to share facilities with Disney Cruise Line, and will require Miami-Dade County to establish a new berthing rights agreement with Disney Cruise Line based on the proposal. Construction of "Terminal AA/AAA" began in March 2022, and is expected to be completed in 2024. In the future, a third berth will be added to allow up to three cruise ships to operate simultaneously.
Royal Caribbean Group also announced plans to redevelop "Terminal G" at PortMiami. A larger terminal would be constructed, and would be able to accommodate larger Oasis-class and Icon-class ships. The new terminal is expected to be completed in winter of 2027.
Container ship operations
As the "Cargo Gateway of the Americas," the port primarily handles containerized cargo with small amounts of breakbulk, vehicles and industrial equipment. It is the largest container port in the state of Florida and ninth in the United States.
Over 9.6 million tons of cargo and over (FY 2018/2019) of intermodal container traffic move through the seaport per year. The economic impact from cargo operations at PortMiami to Florida amounts to $35 billion.
As of 2021, nearly 1,000 cargo ships docked at the port. In terms of TEU, China is PortMiami's largest trade partner, while Honduras is ranked first in terms of trade value. Computers represent the port's most valuable export, while insulated wire and cable are considered the most valuable import.
Design and infrastructure
The port currently operates eight passenger terminals, six gantry crane wharves, seven Ro-Ro (roll-on/roll-off) docks, four refrigerated container yards, break bulk cargo warehouses and nine gantry container handling cranes. In addition, the port's tenants operate the cruise and cargo terminals, including their own cargo handling and support equipment.
To retain its competitive rank as a world-class port, in 1997 the port undertook a redevelopment program of over $250 million to accommodate the changing demands of cruise vessel operators, passengers, shippers and carriers. To further improve accessibility, the PortMiami Tunnel was begun in 2010 and completed in 2014, providing direct vehicle access from the port to the interstate highway system via State Road 836, thereby bypassing congestion in downtown Miami.
As part of the massive PortMiami redevelopment program, new ultramodern cruise terminals, roadways and parking garages have been constructed. Additionally, a new gantry crane dock and container storage yards have been built, and the electrification of the gantry crane docks, including the conversion of several cranes, has been completed. The port also acquired two state-of-the-art super post-Panamax gantry cranes, which are among the largest in the world and are able to load and unload mega container ships up to 22 containers (each 8 feet wide), or nearly 200 feet, across. This, along with the planned Deep Dredge Project, would make it possible for PortMiami to accommodate even the largest container ships in the world, the Maersk Triple E class. A new and restructured roadway system with new lighting, landscaping and signage greets visitors to the 'Cruise Capital of the World and Cargo Gateway of the Americas'. The roadways will change again with the completion of the PortMiami Tunnel. To enhance cargo port accessibility, newly constructed security gates opened at the end of 2006 to increase the processing rate for container trucks and help eliminate the daily traffic backups.
Tunnel and Deep Dredge
Four major projects directly and indirectly related to PortMiami are expected to increase both the capacity and efficiency of the port: the expansion of the Panama Canal, the PortMiami Deep Dredge Project, the PortMiami Tunnel, and the restoration and upgrade of the bridge and rail line connecting PortMiami to the mainland.
On May 24, 2010, construction began on the Miami Port Tunnel, a $1 billion project providing a much-needed direct connection from the port to I-395. Prior to the tunnel's completion, the only way to enter and exit the port was via surface streets through downtown Miami. Construction on the tunnel finished in 2014.
Another major development for PortMiami was the PortMiami Deep Dredge project, intended to enable super post-Panamax megaships to call at the port following the expansion of the Panama Canal (completed in 2016). The ports of Norfolk, New York and Baltimore have also deepened their harbors to the required 50 feet. It is estimated that this project could double Miami's cargo business over the following 10 years and create over 30,000 permanent jobs for Miami, which at the time had a very high unemployment rate.
There were plans to build a soccer-specific stadium at PortMiami. The plans were proposed by a group, led by David Beckham, seeking to bring a Major League Soccer team to Miami. The group stated that it would fund such a stadium privately, but there was opposition on multiple grounds, including the added traffic to downtown Miami and the impact on wildlife. The proposed stadium has since been relocated to a different site.
Railroad access
In 2011, PortMiami was awarded a federal grant, as part of the Transportation Investment Generating Economic Recovery (TIGER) program, to restore a connection between the Florida East Coast Railway's yard in Hialeah and PortMiami, directly connecting the port to rail networks across the United States, as well as re-establishing the port's on-dock rail capability (loading and unloading directly between ships and trains). The railroad bridge connecting the port to the mainland was damaged by Hurricane Wilma in 2005, at which time service was suspended. The project was scheduled to be finished in time for the completion of the other projects in 2014. The rail project is related to another scheme to increase PortMiami's capacity; an inland intermodal center, known as Flagler Logistics Hub, to be built near the airport on 300 acres of land in Hialeah.
There was some opposition to the railroad line being returned to service, with claims that it would be as much of a problem to downtown traffic as container trucks, and that the noise would be a disturbance to nearby residents. However, trains are occasional and will be reserved for specialty freight, such as oversized loads and hazardous materials, which will be banned from the tunnel. As well, trains will be able to travel at up to on the newly renovated line, in contrast to the old limit of , and so will be able to cross Biscayne Boulevard in 90 seconds. The current plan is for the line to be strictly for intermodal services, with the project including a rail yard and station at the port. However, a passenger station may be added in the future.
The cost of restoring the rail link between the port and the Hialeah Railyard was estimated at $46.9 million, $28 million of which was applied for through a federal grant in 2010. Later that year, a grant of $22 million was awarded for this project, as well as to build an on-site intermodal rail yard at the port. During the 2000s, the intermodal share of Florida East Coast Railway's business increased from around 60% to around 80%. However, this was partially due to a decrease in other freight traffic caused by the 2008 recession, which reduced the number of trains, many carrying rock aggregate used in construction, from about 20 to 14 per day.
There was a plan to start a passenger service connecting Jacksonville to Miami using the FECR mainline, with stops at popular tourist attractions. The State of Florida had provided $116 million of the $268 million needed to fund that project. The remaining funding for the passenger line is expected to come from a federal grant, and the remaining funding to fix the local freight line from the Port to Hialeah is supposed to come from the Florida East Coast Railroad (FEC) at $10.9 million, the Florida Department of Transportation (FDOT) at $10.9 million, with the PortMiami itself providing $4.8 million. (The passenger service never began; however, the plan was effectively replaced by Brightline.) In April 2011, Atlas Railroad Construction was chosen to rebuild the line, which was to be completed by 2012 and was estimated to remove 5% of the road traffic from the port. On July 15, 2011, a ground-breaking ceremony marking the beginning of the rail link project, which is expected to create over 800 jobs and generate $33.38 million in wages, was performed by US Senator Bill Nelson, Secretary of Transportation Ray LaHood, Miami-Dade Mayor Carlos Giménez, and Miami city mayor Tomás Regalado. The project has been named the PortMiami Intermodal and Rail Reconnection Project.
| Technology | Specific piers and ports | null |
996278 | https://en.wikipedia.org/wiki/Molecular%20geometry | Molecular geometry | Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of the molecule, i.e. they can be understood as approximately local and hence transferable properties.
Determination
The molecular geometry can be determined by various spectroscopic and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecular geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and the concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds.
Influence of thermal excitation
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule.
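For reference, the vibrational ground state referred to here is the n = 0 state of the quantum harmonic oscillator; for a mode of effective mass m and angular frequency ω its wavefunction is the Gaussian

$$\psi_0(x) \;=\; \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\!\left(-\frac{m\omega x^{2}}{2\hbar}\right),$$

with zero-point energy E0 = ħω/2, which is why the atoms retain some motion even at the absolute zero of temperature.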
To get a feeling for the probability that a vibration of the molecule may be thermally excited, we inspect the Boltzmann factor β = exp(−ΔE / kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are:
β = 0.089 for ΔE = 500 cm−1
β = 0.008 for ΔE = 1000 cm−1
β = 0.0007 for ΔE = 1500 cm−1.
(The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; 1 cm−1 corresponds to about 1.24×10−4 eV, or 1.99×10−23 J). When an excitation energy is 500 cm−1, then about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
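The quoted values of β are easy to verify numerically. The short Python sketch below is purely illustrative; it assumes T = 298 K and uses the defined SI values of h, c and k to convert wavenumbers to energies.

```python
import math

# Physical constants (SI), exact defined values
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light in cm/s, so wavenumbers can stay in cm^-1
k = 1.380649e-23     # Boltzmann constant, J/K

def boltzmann_factor(wavenumber_cm: float, temperature_k: float = 298.0) -> float:
    """Return beta = exp(-dE / kT) for a vibrational excitation energy
    given as a wavenumber in cm^-1."""
    delta_e = h * c * wavenumber_cm           # convert cm^-1 to joules
    return math.exp(-delta_e / (k * temperature_k))

for wn in (500.0, 1000.0, 1500.0):
    print(f"dE = {wn:6.0f} cm^-1  ->  beta = {boltzmann_factor(wn):.4f}")
# Expected output, matching the values quoted above:
# dE =    500 cm^-1  ->  beta = 0.0894
# dE =   1000 cm^-1  ->  beta = 0.0080
# dE =   1500 cm^-1  ->  beta = 0.0007
```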
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Bonding
Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
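As an illustration of the torsional-angle definition above (not part of the original text), the following Python sketch computes the signed dihedral angle from four atomic positions; the coordinates in the example are made up purely for demonstration.

```python
import numpy as np

def dihedral(p1, p2, p3, p4) -> float:
    """Torsion (dihedral) angle in degrees for four atoms bonded 1-2-3-4:
    the angle between the plane through atoms 1,2,3 and the plane through 2,3,4."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)                  # normal to the plane of atoms 1-2-3
    n2 = np.cross(b2, b3)                  # normal to the plane of atoms 2-3-4
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    x, y = np.dot(n1, n2), np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))    # signed angle in (-180, 180]

# Example: an idealized anti (trans) arrangement gives ~180 degrees.
print(round(dihedral([0, 1, 0], [0, 0, 0], [1, 0, 0], [1, -1, 0]), 1))  # 180.0
```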
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the following determinant. This constraint removes one degree of freedom from the choices of (originally) six free bond angles, leaving only five independent choices of bond angles. (Note that the angles θ11, θ22, θ33, and θ44 are always zero and that this relationship can be modified for a different number of peripheral atoms by expanding/contracting the square matrix.)
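The determinant in question can be written as the Gram determinant of the four unit bond vectors, on the assumption that θij denotes the angle between the bonds from the central atom to peripheral atoms i and j; it vanishes because any four vectors in three-dimensional space are linearly dependent:

$$0 \;=\; \begin{vmatrix}
\cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} & \cos\theta_{14} \\
\cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} & \cos\theta_{24} \\
\cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} & \cos\theta_{34} \\
\cos\theta_{41} & \cos\theta_{42} & \cos\theta_{43} & \cos\theta_{44}
\end{vmatrix}$$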
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry.
Isomers
Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties:
A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure).
Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order.
Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol.
Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions.
Protein folding concerns the complex geometries and different isomers that proteins can take.
Types of molecular structure
A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include:
Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape.
Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride.
Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs.
Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with the VSEPR (valence-shell electron pair repulsion theory), the bond angles between the electron bonds are arccos(−1/3) ≈ 109.47° (a short derivation is given after this list). For example, methane (CH4) is a tetrahedral molecule.
Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule.
Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair – bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value. For example, ammonia (NH3).
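The tetrahedral value quoted above can be derived directly: if the four bonds of a regular tetrahedron are represented by unit vectors along (1,1,1), (1,−1,−1), (−1,1,−1) and (−1,−1,1) (a standard choice, used here purely for illustration), then for any two of them

$$\cos\theta \;=\; \frac{(1,1,1)\cdot(1,-1,-1)}{\sqrt{3}\,\sqrt{3}} \;=\; \frac{1-1-1}{3} \;=\; -\frac{1}{3},
\qquad \theta = \arccos\!\left(-\tfrac{1}{3}\right) \approx 109.47^{\circ}.$$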
VSEPR table
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them.
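A condensed summary of the standard VSEPR assignments for the most common cases is given below; the example molecules and measured angles are standard textbook values supplied for illustration, and the actual-angle column is filled only where it differs appreciably from the ideal value.

| Bonding pairs | Lone pairs | Shape | Ideal bond angle | Example | Actual angle |
|---|---|---|---|---|---|
| 2 | 0 | linear | 180° | CO2 | |
| 3 | 0 | trigonal planar | 120° | BF3 | |
| 2 | 1 | bent | 120° | SO2 | about 119° |
| 4 | 0 | tetrahedral | 109.5° | CH4 | |
| 3 | 1 | trigonal pyramidal | 109.5° | NH3 | about 107° |
| 2 | 2 | bent | 109.5° | H2O | about 104.5° |
| 5 | 0 | trigonal bipyramidal | 90°, 120° | PCl5 | |
| 6 | 0 | octahedral | 90° | SF6 | |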
In art
Molecule art is a relatively obscure form of abstract art that takes molecular geometry, most often a skeletal formation, as its subject.
3D representations
Line or stick – atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex.
Electron density plot – shows the electron density determined either crystallographically or using quantum mechanics rather than distinct atoms or bonds.
Ball and stick – atomic nuclei are represented by spheres (balls) and the bonds as sticks.
Spacefilling models or CPK models (also an atomic coloring scheme in representations) – the molecule is represented by overlapping spheres representing the atoms.
Cartoon – a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe).
| Physical sciences | Bond structure | Chemistry |
996322 | https://en.wikipedia.org/wiki/Lavender%20%28color%29 | Lavender (color) | Lavender is a light shade of purple or violet. It applies particularly to the color of the flower of the same name. The web color called lavender matches the color of the palest part of the flower; the more saturated color shown as floral lavender more closely matches the average color of the lavender flower and is the tone historically and traditionally considered lavender by most people, as opposed to the paler web color used by website designers. The color lavender might be described as a medium purple, a pale bluish purple, or a light pinkish-purple. The term lavender may be used in general to apply to a wide range of pale, light, or grayish-purples, but only on the blue side; lilac is pale purple on the pink side. In paints, the color lavender is made by mixing purple and white paint.
Historical development of the concept of the color
The first recorded use of the word lavender as a color term in English was in 1705.
Originally, the name lavender only applied to flowers. By 1930, the book A Dictionary of Color identified three major shades of lavender—[floral] lavender, lavender gray, and lavender blue, and in addition a fourth shade of lavender called old lavender (a darker lavender gray) (all four of these shades of lavender are shown below). By 1955, the publication of the ISCC-NBS Dictionary of Color Names (a color dictionary used by stamp collectors to identify the colors of stamps), now on the Internet, listed dozens of different shades of lavender. Today, although the color floral lavender (the color of the flower of the lavender plant) remains the standard for lavender, just as there are many shades of pink (light red, light rose, and light magenta colors), there are many shades of lavender (some light magenta, some light purple, [mostly] light violet [as well as some grayish-violet], and some light indigo colors).
Variations
Lavender blush
Displayed at right is the web color lavender blush. It is a pale pinkish tone of lavender.
Lavender mist (web color lavender)
The color designated as the web color lavender is a very pale tint of lavender that in other (artistic) contexts may be described as lavender mist.
Languid lavender
Displayed at right is the color languid lavender. The source of this color is the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers.
Lavender gray
The historical name for this color is lavender gray. It is listed in A Dictionary of Color as one of the three major variations of lavender in 1930 along with lavender blue (shown below) and [floral] lavender (also shown below). (This book also designates a fourth shade of lavender, called old lavender, also shown below). This color is similar to Prismacolor colored pencil PC 1026, Greyed Lavender.
Soap
The color soap is displayed at right. Soap is a color formulated by Crayola in 1994 as one of the colors in its Magic Scent specialty box of colors.
This color is a representation of soap scented with lavender, one of the most popular scents for soap.
Pale lavender
At right is displayed the pale tint of lavender shown as lavender in sample 209 in the ISCC-NBS Dictionary of Color Names.
Lavender blue
Lavender blue was listed in A Dictionary of Color as one of the three major variations of lavender in 1930 along with lavender gray (shown above) and [floral] lavender (shown below). It is identified as being the same color as periwinkle. The first use of the term lavender blue as a color term was in 1926.
Light lavender (wisteria)
The color wisteria is displayed at right. Wisteria, a light medium violet color is equivalent to light lavender.
The Prismacolor colored pencil PC 956, which used to be called light violet and is now called lilac (the actual color of the colored pencil is equivalent to wisteria rather than lilac) is this color.
Wisteria in this exact shade is one of the Crayola crayon colors on the list of Crayola crayon colors. It was formulated as a Crayola color in 1993. The first recorded use of wisteria as a color name in English was in 1892.
Pink lavender
The color pink lavender is displayed at right.
The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #14-3207 TPX—Pink Lavender.
Lavender pink
After the introduction of the Munsell color system, in which purple, described as equivalent to red-violet, is one of the five psychological primary colors along with red, yellow, green, and blue, some people began to think of lavender as being a somewhat more pinkish color.
This color can be described as lavender pink or pale pinkish-purple when purple is defined as equivalent to red-violet as artists do.
This tone of lavender, displayed at right, is the color designated as lavender (color #74) in the list of Crayola crayon colors. This version of "lavender" is a lot pinker than the other shades of lavender shown here.
Medium lavender magenta (web color plum)
At right is displayed the color medium lavender magenta which is equivalent to the web color version of plum (pale plum).
This color may be regarded both as a tone of lavender since it is a light color between rose and blue and as a light medium tone of magenta because its red and blue values are equal (the color signature of a tone of magenta for computer display).
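The "equal red and blue values" signature mentioned above is easy to check. The small Python sketch below is purely illustrative; it uses the standard CSS hex definition of the web color plum, #DDA0DD.

```python
# Web color "plum" (#DDA0DD): on a display, a tone of magenta has equal red and blue.
def parse_hex(color: str) -> tuple[int, int, int]:
    """Split a '#RRGGBB' string into integer red, green, blue components."""
    color = color.lstrip("#")
    return int(color[0:2], 16), int(color[2:4], 16), int(color[4:6], 16)

r, g, b = parse_hex("#DDA0DD")        # pale plum / medium lavender magenta
print(r, g, b)                        # 221 160 221
print("magenta-toned:", r == b)       # True: red and blue components are equal
```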
Heliotrope
The color heliotrope is shown at right. Another name for this color is psychedelic lavender because this color was a popular color often used in the hippie psychedelic poster art of the late 1960s for the Fillmore Auditorium and the Avalon Ballroom in San Francisco. These posters were sold in the head shops of the Haight-Ashbury neighborhood and were drawn and produced by such artists as Wes Wilson, Stanley Mouse, Rick Griffin, and Victor Moscoso.
Lavender (floral)
At right is displayed the color Lavender (floral). This color matches the color shown as "lavender" (viewed under a full-spectrum fluorescent lamp) in the 1930 book A Dictionary of Color (reference below), the world standard for color names before the introduction of computers. This color may also be called floral lavender. It is a medium violet.
This tone of lavender would be the approximate color resulting from a mix of 50% violet paint and 50% white paint.
This tone of lavender may be regarded as actual lavender and the other tones displayed in this article can be regarded as all variations on this shade.
This lavender also closely matches the color given as lavender in a basic purple .
Amethyst
The color amethyst is a moderate, transparent violet. Its name is derived from the stone amethyst, a form of quartz. Amethyst is the birthstone for those born in February.
The first recorded use of amethyst as a color name in English was in 1572.
Though the color of natural amethyst varies from purple to yellow, the amethyst color referred to here is the moderate purple color most commonly associated with amethyst stones. There is disagreement as to the cause of the purple color of the amethyst stone. Some believe that the color is due to the presence of manganese, while others have suggested that the amethyst color could be from ferric thiocyanate or sulfur found in amethyst stones.
Deep lavender (web color medium purple)
Displayed at right is the web color medium purple which is equivalent to deep medium violet or deep lavender.
Lavender purple (purple mountain majesty)
Displayed at right is the color purple mountain majesty, a Crayola color since 1993. This color may be regarded as a medium lavender gray.
This color was the color called lavender in Crayola crayons before 1958, when Crayola began using the name lavender for the color shown above as lavender pink. See the website "Lost Crayola Crayon Colors". Because of that, another name for this color is lavender purple.
This color is a representation of the way mountains look when they are far away.
English lavender
Displayed at right is the color English lavender.
English lavender is a medium light tone of grayish pinkish lavender.
The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #17-3617 TPX—English Lavender.
Twilight lavender
The color twilight lavender is displayed at right. Twilight lavender is a color formulated by Crayola in 1990 as one of the colors in its Silver Swirls specialty box of metallic colors.
Although this is supposed to be a metallic color, there is no mechanism for displaying metallic colors on a computer.
Old lavender
The dark lavender gray color displayed at right is called old lavender. It is a dark grayish-violet.
The first recorded use of old lavender as a color name in English was in the year 1924.
In culture
Fashion
In the formal rules of mourning dress in Georgian, Victorian, and Edwardian Britain, lavender was one of the few colors generally considered acceptable for women's clothing during the period of half-mourning, during which the bereaved was considered to still be in mourning and required to dress in sober, muted tones, but was not expected to wear the all-black garments of full mourning.
Lavender roses are symbolic of "love at first sight".
In 1995 actress Uma Thurman wore a notable lavender colored dress by Prada to the Academy Award ceremonies, where she was nominated for Best Supporting Actress.
LGBTQ
Lavender is a color that can represent the LGBTQ community.
Just as mauve symbolized homosexuality in the 1890s, the tone of lavender described above as [true] lavender or floral lavender became the symbol of homosexuality in the 1950s and 1960s. The gay anthem Das lila Lied was released in 1920 with the line "we only love lavender night...". In 1923, Harold Hersey wrote a tongue-in-cheek poem called "The Lavender Cowboy" about an unmanly cowboy with "only two hairs on his chest". Seán O'Casey wrote in 1928, "I am very sorry... that I have hurt the refined sentimentalities of C. W. Allen by neglecting to use the lavender... language of the 18th and 19th centuries." Cole Porter's 1929 song "I'm a Gigolo" went: "I'm a famous gigolo, And of lavender, my nature's got just a dash in it." A 1935 dictionary of slang reported that "streak of lavender" meant an effeminate man or a sissy, a term used in 1926 by Carl Sandburg to describe young Abraham Lincoln. In the 1960s, homophiles were sometimes referred to as the lavender boys (a term still used by some people, both gay and non-gay, to refer to gays). A lavender convention is a convention of homosexuals. A heterosexual who has some homosexual tendencies is described as someone with a dash of lavender. In the 1970s, pink became more often associated with homosexuality because of the use of the pink triangle as a symbol of gay liberation. However, gays of the baby boom generation still think of lavender as the gayest color. According to some sources, the reason lavender symbolizes homosexuality is that it is the color obtained when pink (the color symbolizing girls) is mixed with baby blue (the color symbolizing boys).
Lavender roses are sometimes given by LGBT people to each other on Valentine's Day or may be given to those entering into a same-sex marriage.
A lavender marriage is a marriage between a man and a woman in which one, or both, parties are, or are assumed to be, homosexual. Usually, but not always, both parties are assumed to be complicit in a public deception to hide their homosexuality (although in some lavender marriages, the marriage partners may be bisexual).
In the bandana code of the gay leather subculture, wearing a lavender bandana symbolizes that the wearer has a fetish for dressing in drag.
Lavender is the name of an LGBT magazine in Minnesota.
The Lavender Dragon Society was a club for gay Asian Americans in the San Francisco Bay Area in the late 1990s and early 2000s.
The Lavender Families Resource Network, formerly known as the Mothers' National Defense Fund, was a Seattle-based organization started in 1974 with the aim to provide lesbian and bisexual women with resources and information relating to child-rearing, custody and adoption issues, and donor insemination.
Lavender Graduation is an event that occurs annually at many colleges and universities where LGBT students get their own graduation ceremony to celebrate their identities and achievements.
The study of language used specifically by LGBT speakers is known as lavender linguistics.
The LGBTQIA caucus of the Green Party of the United States is the Lavender Greens.
The lavender scare refers to the fear and persecution of homosexuals in the 1950s in the United States, which paralleled the anti-communist campaign known as McCarthyism.
Lavender is the name of a 2019 American short LGBT romantic drama film directed by Matthew Puccini.
The original gay liberation movement, which began in 1969, is sometimes referred to as the lavender revolution.
During the 1960s feminism movement, lesbians were rejected by the then head of the National Organization for Women, Betty Friedan, calling them the "Lavender Menace". These lesbian radical feminists adopted the name for their own informal group, consisting of many members of the Gay Liberation Front.
The Lavender Panthers was a gay rights activist group in San Francisco during the early 1970s, led by the Reverend Ray Broshears.
Lavender Country was an American country music band whose 1973 self-titled album was made up entirely of gay-themed songs.
Music
"Lavender Blue", originally an English folk song dating to the 17th century. This song became very popular during the 1950s rock and roll era, when it was sung by Solomon Burke. A hit version of the song, sung by Burl Ives, was featured in the Walt Disney movie So Dear to My Heartand the 2015 film Cinderella.
"Lavender" is a song by Marillion on the 1985 album Misplaced Childhood. It quotes the folk song "Lavender Blue" in its lyrics.
"Lavender" is a song by October Noir on the 2019 album Thirteen.
"Lavender Haze" is a song by Taylor Swift on the 2022 album Midnights.
Religion
In Theosophy and the Ascended Master Teachings (a group of religions based on Theosophy), since the color violet represents the deity St. Germain, and lavender is simply a light tone of violet, lavender as well as violet objects may be placed on the altar to St. Germain.
The Christian holiday of Easter is represented by lavender and yellow because the crocus flower, which is lavender and yellow, blooms in Europe in the spring.
Film, television and video games
Ladies in Lavender is a 2004 film by Charles Dance.
Lavender Town is a purple-tinted town in the Kanto region in the Pokémon video games. It is well known for being a graveyard for Pokémon, and the many ghosts that haunt it.
Numismatics
The Reserve Bank of India (RBI) issued a lavender colored banknote of ₹100 denomination under Mahatma Gandhi New Series. The bank note measures 142 mm × 66 mm.
| Physical sciences | Colors | Physics |
2185979 | https://en.wikipedia.org/wiki/Loellingite | Loellingite | Loellingite, also spelled löllingite, is an iron arsenide mineral with formula FeAs2. It is often found associated with arsenopyrite (FeAsS) from which it is hard to distinguish. Cobalt, nickel and sulfur substitute in the structure. The orthorhombic lollingite group includes the nickel iron arsenide rammelsbergite and the cobalt iron arsenide safflorite. Leucopyrite is an old synonym for loellingite.
It forms opaque silvery white orthorhombic prismatic crystals often exhibiting crystal twinning. It also occurs in anhedral masses and tarnishes on exposure to air. It has a Mohs hardness of 5.5 to 6 and a quite high specific gravity of 7.1 to 7.5. It becomes magnetic after heating.
Loellingite was first described in 1845 at the Lölling district in Carinthia, Austria, for which it was named.
It occurs in mesothermal ore deposits associated with skutterudite, native bismuth, nickeline, nickel-skutterudite, siderite and calcite. It has also been reported from pegmatites.
| Physical sciences | Minerals | Earth science |
2187085 | https://en.wikipedia.org/wiki/Nymphaea%20lotus | Nymphaea lotus | Nymphaea lotus, the white Egyptian lotus, tiger lotus, white lotus, or Egyptian water-lily, is a flowering plant of the family Nymphaeaceae.
Distribution
It grows in various parts of East Africa and Southeast Asia. Nymphaea lotus var. thermalis was believed to be a Tertiary relict variety endemic to the thermal waters of Europe, for example, the Peţa River in Romania. DNA analysis has concluded that Nymphaea lotus var. thermalis lacks distinctiveness from Nymphaea lotus and therefore cannot be classified as a relic population.
Cultivation
It was introduced into Western cultivation in 1802 by Loddiges Nursery. Eduard Ortgies crossed Nymphaea lotus (N. dentata) with Nymphaea pubescens (N. rubra) to produce the first Nymphaea hybrid, illustrated in Flore des serres 8 t. 775, 776 under the name Nymphaea ortgiesiano-rubra. It is a popular ornamental aquatic plant in Venezuela.
Description
This species of water lily has lily pads that float on the water and blossoms that rise above the water.
It is a perennial, growing to 45 cm in height. The flower is white, sometimes tinged with pink.
Ecology
It is found in ponds and prefers clear, warm, still, and slightly acidic waters. It can be found in association with other aquatic plant species, such as Utricularia stellaris.
Nymphaea lotus has the exceptional ability to persist through a dry season by means of its rhizomes. It can reduce evaporation by up to 18 percent on most days during the summer.
Uses
As an aquarium plant
Nymphaea lotus is often used as a freshwater aquarium plant. In ornamental garden pools and in greenhouse culture, it is grown for its flowers, which do not normally appear under aquarium conditions. Aquarists prefer to trim the floating lily pads and just maintain the underwater foliage. Strong light is required for a deep reddish color in the "red" forms.
The tiger-like variegations appear under intense illumination.
As a symbol
In ancient times, the Egyptian lotus was worshipped, especially in Egypt. It was considered a symbol of creation there. In Ancient Greece, it was a symbol of innocence and modesty.
The Egyptian lotus is the national flower of Egypt.
Claire Waight Keller included the flower to represent Malawi in Meghan Markle's wedding veil, which included the distinctive flora of each Commonwealth country.
As food
In some parts of Africa, the rhizomes and tubers are eaten for the starch they contain either boiled, roasted, or ground into flour after drying. The young fruits are sometimes consumed as a salad. The seeds are turned into a meal.
The tubers or seeds are used as a famine food in India.
The white lotus in Ancient Egypt
The ancient Egyptians cultivated the white lotus in ponds and marshes.
This flower often appears in ancient Egyptian decorations. The Egyptians believed that the lotus flower gave them strength and power; remains of the flower have been found in the burial tomb of Ramesses II. Egyptian tomb paintings from around 1500 BC provide some of the earliest physical evidence of ornamental horticulture and landscape design; they depict lotus ponds surrounded by symmetrical rows of acacias and palms. In Egyptian mythology, Horus was occasionally shown in art as a naked boy with a finger in his mouth sitting on a lotus with his mother. The lotus was one of the two earliest Egyptian capital motifs, the topmost members of a column. At that time, the motifs of importance were those based on the lotus and papyrus plants respectively, and these, with the palm tree capital, were the chief types employed by the Egyptians until, under the Ptolemies in the 3rd to 1st centuries BC, various other river plants were also employed and the conventional lotus capital went through various modifications. Women often wore amulets during childbirth that depicted Heqet as a frog, sitting in a lotus.
The number 1,000 in ancient Egyptian numerals is represented by the symbol of the white lotus. The related hieroglyph is Gardiner sign M12.
The ancient Egyptians also extracted perfume from this flower. They also used the white lotus in funerary garlands, temple offerings and female adornment.
The white lotus is a candidate for the plant eaten by the Lotophagi of Homer's Odyssey.
Health effects
Though the plant contains a quinolizidine alkaloid, nupharin, and related chemicals, variously described in different sources as poisonous, intoxicating or without effect, it seems to have been consumed since antiquity. The effects of the alkaloids could be those of a psychedelic aphrodisiac, though such effects are more typically associated with Nymphaea caerulea, the blue Egyptian water lily.
Chemistry
The chloroform, ethyl acetate and n-butanol extracts of the leaf show the presence of phenolic compounds (flavonoids, coumarins and tannins), sterols and alkaloids.
Other compounds include myricitrin, myricetin-3-(6′′-p-coumaroyl)glucoside, myricetin-3′-O-(6′′-p-coumaroyl)glucoside, two epimeric macrocyclic derivatives (nympholide A and B), myricetin-3-O-rhamnoside and penta-O-galloyl-β-D-glucose.
| Biology and health sciences | Nymphaeales | Plants |
2187296 | https://en.wikipedia.org/wiki/Disaster%20response | Disaster response | Disaster response refers to the actions taken directly before, during, or immediately after a disaster. The objective is to save lives, ensure health and safety, and meet the subsistence needs of the people affected. It includes warning and evacuation, search and rescue, providing immediate assistance, assessing damage, continuing assistance, and the immediate restoration or construction of infrastructure. An example of this would be building provisional storm drains or diversion dams. Emergency response aims to provide immediate help to keep people alive, improve their health and support their morale. It can involve specific but limited aid, such as helping refugees with transport, temporary shelter, and food, or it can involve establishing semi-permanent settlements in camps and other locations. It may also involve initial repairs to damaged infrastructure.
The response phase focuses on keeping people safe, preventing the next disasters and meeting people's basic needs until more permanent and sustainable solutions are available. The governments where the disaster has happened have the main responsibility for addressing these needs. Humanitarian organisations are often present in this phase of the disaster management cycle. This is particularly so in countries where the government does not have the resources for a full response.
Definition
Disaster response refers to the actions taken directly before, during or in the immediate aftermath of a disaster. The objective is to save lives, ensure health and safety and to meet the subsistence needs of the people affected.
The Business Dictionary provides a more comprehensive definition of "disaster response": the aggregate of decisions and measures taken to (1) contain or mitigate the effects of a disastrous event to prevent any further loss of life and/or property, (2) restore order in its immediate aftermath, and (3) re-establish normality through reconstruction and rehabilitation shortly thereafter. The first and immediate response is called emergency response.
The Johns Hopkins and the International Federation of Red Cross and Red Crescent Societies (IFRC) state: "The word disaster implies a sudden overwhelming and unforeseen event. At the household level, a disaster could result in a major illness, death, a substantial economic or social misfortune. At the community level, it could be a flood, a fire, a collapse of buildings in an earthquake, the destruction of livelihoods, an epidemic or displacement through conflict. When occurring at district or provincial level, a large number of people can be affected."
The level of disaster response depends on a number of factors and on situational awareness. Studies undertaken by Son, Aziz and Peña-Mora (2007) show that "initial work demand gradually spreads and increases based on a wide range of variables including scale of disaster, vulnerability of affected area which in turn is affected by population density, site-specific conditions (e.g. exposure to hazardous conditions) and effects of cascading disasters resulting from inter-dependence between elements of critical infrastructure".
In the British Government's Emergency Response and Recovery guidance, disaster response refers to decisions and actions taken in accordance with the strategic, tactical and operational objectives defined by emergency responders. At a high level these will be to protect life, contain and mitigate the impacts of the emergency and create the conditions for a return to normality. Response encompasses the decisions and actions taken to deal with the immediate effects of an emergency. In many scenarios it is likely to be relatively short and to last for a matter of hours or days; rapid implementation of arrangements for collaboration, co-ordination and communication is, therefore, vital. Response encompasses the effort to deal not only with the direct effects of the emergency itself (e.g. fighting fires, rescuing individuals) but also the indirect effects (e.g. disruption, media interest).
Common objectives for responders are:
saving and protecting human life;
relieving suffering;
containing the emergency – limiting its escalation or spread and mitigating its impacts;
providing the public and businesses with warnings, advice and information;
protecting the health and safety of responding personnel;
safeguarding the environment;
as far as reasonably practicable, protecting property;
maintaining or restoring critical activities;
maintaining normal services at an appropriate level;
promoting and facilitating self-help in affected communities;
facilitating investigations and inquiries (e.g. by preserving the scene and effective records management);
facilitating the recovery of the community (including the humanitarian assistance, economic, infrastructure and environmental impacts);
evaluating the response and recovery effort; and
identifying and taking action to implement lessons identified.
Disaster response planning
The United States National Fire Protection Association (NFPA) 1600 Standard (NFPA, 2010) specifies elements of an emergency response as: defined responsibilities; specific actions to be taken (which must include protective actions for life safety); and communication directives. Within the standard, the NFPA recognizes that disasters and day-to-day emergencies are characteristically different. Nevertheless, the prescribed response elements are the same.
In support of the NFPA standard, Statoil's (2013) practical application of emergency response spans three distinct "lines" that incorporate NFPA's elements. Line 1 is responsible for the operational management of an incident; line 2, typically housed off-site, is responsible for tactical guidance and additional resource management. Finally, in the case of major incidents, line 3 provides strategic guidance, group resource management, and government and media relations.
While it is impossible to plan for every disaster, crisis or emergency, the Statoil investigation into the terrorist attacks on In Amenas places emphasis on the importance of having a disaster response framework in place. The report concludes that such a framework may be utilized in an array of disaster situations, such as that at In Amenas.
Disaster risk reduction (DRR) is action taken to "[reduce] existing disaster risk and [manage] residual risk." DRR plans aim to decrease the amount of disaster response necessary by planning ahead and making communities resilient to any potential hazardous events that might occur. A number of international frameworks such as the Sendai Framework for Disaster Risk Reduction have been enacted to increase the implementation of global mitigation plans in the event of disasters.
Organizations
United Nations
The United Nations Office for the Coordination of Humanitarian Affairs (OCHA); is responsible for bringing together humanitarian actors to ensure a coherent response to emergencies that require an international response. OCHA plays a key role in operational coordination in crisis situations. This includes assessing situations and needs; agreeing common priorities; developing common strategies to address issues such as negotiating access, mobilizing funding and other resources; clarifying consistent public messaging; and monitoring progress.
United Kingdom
The organisation in the United Kingdom for the provision of communications disaster response is RAYNET. The UK organisation for the provision of disaster response by off-road vehicles is 4x4 Response.
European Union
In addition to providing funding to humanitarian aid, the European Commission's Directorate-General for European Civil Protection and Humanitarian Aid Operations (DG-ECHO) is in charge of the EU Civil Protection Mechanism to coordinate the response to disasters in Europe and beyond and contributes to at least 75% of the transport and/or operational costs of deployments. Established in 2001, the Mechanism fosters cooperation among national civil protection authorities across Europe. Currently 34 countries are members of the Mechanism; all 27 EU Member States in addition to Iceland, Norway, Serbia, North Macedonia, Montenegro, Turkey and Bosnia and Herzegovina. The Mechanism was set up to enable coordinated assistance from the participating states to victims of natural and man-made disasters in Europe and elsewhere.
Canada
In Canada, GlobalMedic was established in 1998 as a non-sectarian humanitarian-aid NGO to provide disaster relief services to large scale catastrophes around the world. Time magazine recognized the work of GlobalMedic in its 2010 Time 100 issue. It has a roster of over 1,000 volunteers from across Canada that includes professional rescuers, police officers, firefighters and paramedics who donate their time to respond to international disasters. Their personnel are divided into Rapid Response Teams (RRTs) that operate rescue units, Water Purification Units (WPUs) designed to provide safe drinking water; and Emergency Medical Units (EMUs) that use inflatable field hospitals to provide emergency medical treatment. Since 2004, GlobalMedic teams have deployed to over 60 humanitarian disasters around the world.
India
In India, the National Disaster Management Authority is responsible for planning for mitigating effects of natural disasters and anticipating and avoiding man-made disasters. It also coordinates the capacity-building and response of government agencies in the time of crises and emergencies. The National Disaster Response Force is an inter-government disaster response agency that specializes in search, rescue and rehabilitation.
United States of America
In the US, the Federal Emergency Management Agency coordinates federal operational and logistical disaster response capability needed to save and sustain lives, minimize suffering, and protect property in a timely and effective manner in communities that become overwhelmed by disasters. The Centers for Disease Control and Prevention offer information for specific types of emergencies, such as disease outbreaks, natural disasters and severe weather, as well as chemical and radiation accidents. Also, the Emergency Preparedness and Response Program of the National Institute for Occupational Safety and Health develops resources to address responder safety and health during responder and recovery operations.
Among volunteers, the American Red Cross was chartered by Congress in 1900 to lead and coordinate non-profit efforts. It is supported by disaster relief organizations from many religious denominations and community service agencies. Licensed amateur radio operators support most volunteer organizations, and are often affiliated with the American Radio Relay League (ARRL).
Disaster response organizations
In addition to the response by the government, a great deal of assistance in the wake of any disaster comes from charities, disaster response and non-governmental organizations. The biggest international umbrella organizations are the Inter-Agency Standing Committee and the International Council for Voluntary Agencies.
The Humanitarian OSM Team works to update and provide maps of areas struck by disaster.
Disaster response technologies
Ad hoc infrastructure
A range of infrastructure can be restored quickly, on an ad hoc basis, after a disaster using appropriate technologies.
Communications
The Government Emergency Telecommunications Service supports federal, state, local and tribal government personnel, industry and non-governmental organizations during a crisis or emergency by providing emergency access and priority handling for local and long-distance calls over the public switched telephone network. There is a Nationwide Wireless Priority Service that allows a user to wait for cellular bandwidth to open.
Wireless mesh networks can be deployed rapidly to enable Internet connectivity, substitute for failed mobile phone networks, and provide emergency and post-disaster communication, including for disaster response coordination and emergency calls. Mesh networks such as B.A.T.M.A.N. are often developed and deployed as open source by volunteer communities with limited resources.
Electricity
Emergency power systems – such as mobile microgeneration units, mobile charging- and power supply-stations or specially designed or extended smart grids – could support important electrical systems on loss of normal power supply or restore power supply for small regions whose connections to the main power grid were cut off.
Transportation
The transportation infrastructure may have become impassable due to a disaster, complicating logistics, evacuation and disaster response.
Technologies may allow for quick ad hoc sufficient restoration of the transportation network or substitutions of parts of it. Such include the rapid construction of stable bridges based on mobile lightweight and/or locally sourced materials or components, which militaries have been involved in.
Waste management
Disaster waste is often managed in an ad hoc manner. The waste generated by a disaster can overwhelm existing solid waste management facilities and affect other response activities. Depending on the type of disaster, its scope and the duration of recovery, conventional waste may need to be managed in similar ways, and both may depend on restoration of the transportation network.
Emergency accommodation
Emergency accommodation is sometimes considered to be an element of infrastructure. Temporary accommodation for people and animals after disasters is an issue. Sometimes existing private accommodation infrastructure and logistics are repurposed for the disaster response.
Water supply
Water supply, drainage and sewerage infrastructure, and the functioning of wastewater treatment plants may be disrupted by disasters.
Vaccination infrastructure
Long-term disaster response, as well as medical infrastructure local to disaster regions with increased health risk, may include vaccination infrastructure.
Response coordination websites
Volunteers, as well as others involved in a disaster response such as locals and civil organizations like the Technische Hilfswerk, can coordinate and be coordinated with the help of websites and similar ICTs, for example to prevent traffic jams, "disaster tourists" and other obstructions of the transportation network, to allocate different forms of help to locations in need, to report missing persons, and to increase efficiency. Such websites for specific affected regions were set up after the 2021 European floods.
Emergency response systems
A Smart Emergency Response System (SERS) prototype was built in the SmartAmerica Challenge 2013–2014, a United States government initiative. SERS was created by a team of nine organizations led by MathWorks. The project was featured at the White House in June 2014 and described by Todd Park (U.S. Chief Technology Officer) as an exemplary achievement.
The SmartAmerica initiative challenges the participants to build cyber-physical systems as a glimpse of the future to save lives, create jobs, foster businesses, and improve the economy. SERS primarily saves lives. The system provides survivors and emergency personnel with information to locate and assist each other during a disaster. SERS lets users submit help requests to a MATLAB-based mission center connecting first responders, apps, search-and-rescue dogs, a 6-foot-tall humanoid, robots, drones, and autonomous aircraft and ground vehicles. The command and control center optimizes the available resources to serve every incoming request and generates an action plan for the mission. The Wi-Fi network is created on the fly by drones equipped with antennas. In addition, the autonomous rotorcraft, planes, and ground vehicles are simulated with Simulink and visualized in a 3D environment (Google Earth) so that operations can be observed at scale.
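The matching step described above, pairing incoming help requests with whatever rescue assets remain free, can be illustrated with a toy sketch. The Python snippet below is not the SERS or MATLAB implementation; the request names, responder names and coordinates are invented for illustration, and a real mission planner would use a far richer optimization than a greedy nearest-asset rule.

    import math

    def assign_responders(requests, responders):
        # Greedy sketch: send the nearest still-free responder to each request.
        # requests and responders map a name to an (x, y) position in kilometres.
        free = dict(responders)
        plan = {}
        for request, location in requests.items():
            if not free:
                plan[request] = None  # no assets left; the request must wait
                continue
            nearest = min(free, key=lambda name: math.dist(location, free[name]))
            plan[request] = nearest
            del free[nearest]
        return plan

    plan = assign_responders(
        {"trapped-survivor": (2.0, 3.0), "medical-aid": (8.0, 1.0)},  # hypothetical requests
        {"drone-1": (0.0, 0.0), "ground-robot": (9.0, 2.0)},          # hypothetical assets
    )
    print(plan)  # {'trapped-survivor': 'drone-1', 'medical-aid': 'ground-robot'}

The greedy rule is only a stand-in for the resource optimization the paragraph describes, but it shows the shape of the request-to-asset assignment problem.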
The International Charter Space and Major Disasters provides for the charitable retasking of satellite assets, providing coverage from 15 space agencies, etc. which is wide albeit contingent. It focuses on the beginning of the disaster cycle, when timely data is of the essence.
Digital technologies are increasingly being used in humanitarian action; they have been shown to improve the health and recovery of populations affected by both natural and man-made disasters. They are used in humanitarian response to facilitate and coordinate aid in various stages, including preparedness, response, and recovery from emergencies. One example is mobile health (mHealth), defined as the use of communication devices such as mobile phones to deliver health services and information. Millions of people use mobile phones for daily communication and data transfer, and 64% of them live in developing countries. Among the most important characteristics of disasters are the harm caused to infrastructure, accessibility issues, and a greatly increased need for medical and emergency services. In such situations, the use of mobile phones for mHealth can be vital, especially when other communication infrastructure is disrupted. Under these conditions, the abundance of mobile technology in developing countries provides an opportunity to harness it for helping victims and vulnerable people.
Mobile health information technology platforms, in the acute phase of disaster response, create a common operational framework that improves the response by standardizing data acquisition, organizing information storage, and facilitating communication among medical staff. One of the challenges in disaster response is the need for pertinent, effective and continuous analysis of the situation and information in order to evaluate needs and resources. mHealth has been shown to support effective disaster preparedness through real-time collection of medical data, as well as by helping to identify needs and create needs assessments during disasters. Using mobile technology in health has set the stage for the dynamic organization of medical resources and the promotion of patient care through quick triage, patient tracking, and documentation storage and maintenance.
Managing an effective and influential response requires cooperation, which is also facilitated through mHealth. A retrospective study demonstrated that applying mHealth can lead to a decrease of up to 15% in unnecessary hospital transfers during disasters. In addition, mHealth systems provide field hospital administrators with real-time census information essential for planning, resource allocation, inter-facility patient transfers, and inter-agency collaboration. They can also improve post-operative care and patient handoffs between volunteer providers. Data entry with mobile devices is now widely used to facilitate the registration of displaced individuals, to conduct surveys, to identify those in need of assistance, and to capture data on issues such as food security, vaccination rates, and mortality.
Above all, mHealth can harness the power of information to improve patient outcomes. Efforts led by the Harvard Humanitarian Initiative and the Operational Medicine Institute during the Haiti earthquake resulted in a web-based mHealth system with a patient log of 617 unique entries used by on-the-ground medical providers and field hospital administrators. This helped facilitate provider triage, improve provider handoffs, and track vulnerable populations such as unaccompanied minors, pregnant women, and patients with traumatic orthopedic injuries or specified infectious diseases. Also during the Haiti earthquake, the International Red Crescent sent more than 45 million SMS messages to Voilà mobile phone users; 95% of recipients reported that they had gained useful information, and 90% of those reported that the messages helped their preparedness.
Problematic individual and collective responses
Previous experiences with false alarms cause some people to ignore legitimate danger signals, such as a fire alarm.
Amanda Ripley points out that (contrary to many portrayals in movies) among the general public in fires and large-scale disasters, there is a remarkable lack of panic and sometimes dangerous denial of, lack of reaction to, or rationalization of warning signs that should be obvious. She says that this is often attributed to local or national character, but appears to be universal, and is typically followed by consultations with nearby people when the signals finally get enough attention. Disaster survivors advocate training everyone to recognize warning signs and practice responding.
A study published in 2020 showed that social networks can function poorly as pathways for inconvenient truths that people would rather ignore and that the interplay between communication and action may depend on the structure of social networks. It also showed that communication networks suppress necessary "evacuations" in test-scenarios because of spontaneous and diffuse emergence of false reassurance when compared to groups of isolated individuals and that larger networks with a smaller proportion of informed subjects suffered more damage due to human-caused misinformation. Following disaster, collective processing of emotions leads to greater resilience and community engagement.
Impacts of disasters
On men and women
In the immediate aftermath of a disaster, an affected population has a number of needs. In disaster relief responses, many actors tend to focus on addressing the most immediate needs first. For example, the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) emphasizes a core set of shared priority needs.
These priorities are more than just basic needs, as they represent needs shared between men and women. Consequently, addressing them first helps disaster relief responses reach as many people as possible. While meeting these gender-inclusive needs is critical, men and women also have different needs which must be addressed; in particular, biological differences between men and women create different needs. For instance, the needs of women in a post-disaster context can include access to menstrual products, access to a secure toilet (since using a non-secure toilet can leave women more vulnerable to rape or sexual assault) and critical pre- or post-natal services, to name a few. These are also immediate needs that must be addressed in post-disaster relief responses. Beyond these immediate needs, women can face long-term income disparities as a result of disasters.
On women
Women's income is disproportionately impacted by disasters. A study undertaken by Le Masson et al. in 2016, found that following Hurricane Katrina in 2005, "the ratio of women's to men's earnings in New Orleans declined from 81.6% prior to the disaster to 61.8% in 2006". Underlying this disproportionate impact are gendered vulnerabilities. One notable gendered vulnerability is the double burden. The double burden is the combination of paid and unpaid work. One of the key forms of unpaid labor is care labor. Care labor (also referred to as social reproduction) encompasses, "tasks of providing for dependants, for children, the sick, the elderly and all the rest of us". This double burden exacerbates the unequal impact that disasters have on women. Lafrenière, Sweetman and Thylin emphasize that "women operate as unpaid carers keeping societies and economies functioning ... Poverty and crisis make this unpaid work even more critical for survival. This makes it imperative for humanitarian responders to understand the scope and extent of this unpaid care work and to work with women carers". Another critical underlying gendered vulnerability is unequal access to economic resources. Globally, "women have less access to livelihoods assets (such as financial accounts) and opportunities than men". In times of disaster, the lack of access to sufficient financial resources can "force [women] to turn to risky behaviour such as prostitution or transactional sex as a means of survival. Crises also tend to increase the burdens of care and household responsibilities for women, making their ability to economically support themselves and their dependents more difficult".
| Physical sciences | Earth science basics: General | Earth science |
2188079 | https://en.wikipedia.org/wiki/Anguinae | Anguinae | Anguinae is a subfamily of legless lizards in the family Anguidae, commonly called glass lizards, glass snakes or slow worms. The first two names come from the fact their tails easily break or snap off. Members of Anguinae are native to North America, Europe, Asia, and North Africa.
Evolution
They first appeared in Europe during the early Eocene, approximately 48.6 million years ago, originating from North American ancestors that crossed over from Greenland via the Thule Land Bridge. They spread toward Asia sometime after the drying of the Turgai Strait at the beginning of the Oligocene, and then crossed the Bering Land Bridge back into North America during the Miocene.
Description
Vestigial hindlegs are present in Hyalosaurus and Pseudopus, but are entirely absent in the other genera. Members of the group largely feed on insects and other invertebrates. The largest living species is the Sheltopusik (Pseudopus apodus).
Taxonomy
The subfamily contains the following genera:
Dopasia (7 species), native to eastern Asia
Hyalosaurus (1 species), native to North Africa
Ophisaurus (6 species), native to eastern North America
Pseudopus (1 extant species, the Sheltopusik), native to Europe and Asia
Anguis - slowworms (5 species), native to Europe and Western Asia
Relationships within the subfamily follow Lavin & Girman (2019).
| Biology and health sciences | Lizards and other Squamata | Animals |
2188689 | https://en.wikipedia.org/wiki/Weather%20front | Weather front | A weather front is a boundary separating air masses for which several characteristics differ, such as air density, wind, temperature, and humidity. Disturbed and unstable weather due to these differences often arises along the boundary. For instance, cold fronts can bring bands of thunderstorms and cumulonimbus precipitation or be preceded by squall lines, while warm fronts are usually preceded by stratiform precipitation and fog. In summer, subtler humidity gradients known as dry lines can trigger severe weather. Some fronts produce no precipitation and little cloudiness, although there is invariably a wind shift.
Cold fronts generally move from west to east, whereas warm fronts move poleward, although any direction is possible. Occluded fronts are a hybrid merge of the two, and stationary fronts are stalled in their motion. Cold fronts and cold occlusions move faster than warm fronts and warm occlusions because the dense air behind them can lift as well as push the warmer air. Mountains and bodies of water can affect the movement and properties of fronts, other than atmospheric conditions. When the density contrast has diminished between the air masses, for instance after flowing out over a uniformly warm ocean, the front can degenerate into a mere line which separates regions of differing wind velocity known as a shear line. This is most common over the open ocean.
Bergeron classification of air masses
The Bergeron classification is the most widely accepted form of air mass classification. Air mass classifications are indicated by three letters; a short decoding sketch follows the list below. Fronts separate air masses of different types or origins, and are located along troughs of lower pressure.
The first letter describes its moisture properties, with
c used for continental air masses (dry) and
m used for maritime air masses (moist).
The second letter describes the thermal characteristic of its source region:
T for Tropical,
P for Polar,
A for Arctic or Antarctic,
M for Monsoon,
E for Equatorial, and
S for Superior air (dry air formed by significant subsidence in the atmosphere).
The third letter designates the stability of the atmosphere; it is labeled:
k if the air mass is colder than the ground below it.
w if the air mass is warmer than the ground below it.
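Because the Bergeron code is simply three independent letter positions, decoding one is a table lookup. The sketch below is illustrative only; the letter meanings follow the list above, and the example code "mTk" (a moist tropical air mass colder than the surface it moves over) is an assumed input.

    MOISTURE = {"c": "continental (dry)", "m": "maritime (moist)"}
    SOURCE = {"T": "tropical", "P": "polar", "A": "arctic or antarctic",
              "M": "monsoon", "E": "equatorial", "S": "superior (dry air aloft)"}
    STABILITY = {"k": "colder than the ground below it (tends to be unstable)",
                 "w": "warmer than the ground below it (tends to be stable)"}

    def decode_bergeron(code):
        # Expand a two- or three-letter Bergeron air-mass code into plain language.
        parts = [MOISTURE.get(code[0], "unknown moisture"),
                 SOURCE.get(code[1], "unknown source region")]
        if len(code) > 2:
            parts.append(STABILITY.get(code[2], "unknown stability"))
        return ", ".join(parts)

    print(decode_bergeron("mTk"))
    # maritime (moist), tropical, colder than the ground below it (tends to be unstable)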
Surface weather analysis
A surface weather analysis is a special type of weather map which provides a top view of weather elements over a geographical area at a specified time based on information from ground-based weather stations. Weather maps are created by detecting, plotting and tracing the values of relevant quantities such as sea-level pressure, temperature, and cloud cover onto a geographical map to help find synoptic scale features such as weather fronts. Surface weather analyses have special symbols which show frontal systems, cloud cover, precipitation, or other important information. For example, an H may represent a high pressure area, implying fair or clear weather. An L on the other hand may represent low pressure, which frequently accompanies precipitation and storms. Low pressure also creates surface winds deriving from high pressure zones and vice versa. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the weather map. In addition, areas of precipitation help determine the frontal type and location.
Types
There are two different meanings used within meteorology to describe weather around a frontal zone. The term "anafront" describes boundaries which show instability, meaning air rises rapidly along and over the boundary to cause significant weather changes and heavy precipitation. A "katafront" is weaker, bringing smaller changes in temperature and moisture, as well as limited rainfall.
Cold front
A cold front is located along and on the bounds of the warm side of a tightly packed temperature gradient. On surface analysis charts, this temperature gradient is visible in isotherms and can sometimes also be identified using isobars since cold fronts often align with a surface trough. On weather maps, the surface position of the cold front is marked by a blue line with triangles pointing in the direction where cold air travels and it is placed at the leading edge of the cooler air mass. Cold fronts often bring rain, and sometimes heavy thunderstorms as well. Cold fronts can produce sharper and more intense changes in weather and move at a rate that is up to twice as fast as warm fronts, since cold air is more dense than warm air, lifting as well as pushing the warm air preceding the boundary. The lifting motion often creates a narrow line of showers and thunderstorms if enough humidity is present as the lifted moist warm air condenses. The concept of colder, dense air "wedging" under the less dense warmer air is too simplistic, as the upward motion is really part of a maintenance process for geostrophic balance on the rotating Earth in response to frontogenesis.
Warm front
Warm fronts are at the leading edge of a homogeneous advancing warm air mass, which is located on the equatorward edge of the gradient in isotherms, and lie within broader troughs of low pressure than cold fronts. A warm front moves more slowly than the cold front which usually follows because cold air is denser and harder to lift from the Earth's surface.
This also forces temperature differences across warm fronts to be broader in scale. Clouds appearing ahead of the warm front are mostly stratiform, and rainfall more gradually increases as the front approaches. Fog can also occur preceding a warm frontal passage. Clearing and warming is usually rapid after frontal passage. If the warm air mass is unstable, thunderstorms may be embedded among the stratiform clouds ahead of the front, and after frontal passage thundershowers may still continue. On weather maps, the surface location of a warm front is marked with a red line of semicircles pointing in the direction the air mass is travelling.
Occluded front
An occluded front is formed when a cold front overtakes a warm front, and usually forms around mature low-pressure areas, including cyclones. The cold and warm fronts curve naturally poleward into the point of occlusion, which is also known as the triple point. It lies within a sharp trough, but the air mass behind the boundary can be either warm or cold. In a cold occlusion, the air mass overtaking the warm front is cooler than the cold air mass receding from the warm front and plows under both air masses. In a warm occlusion, the cold air mass overtaking the warm front is warmer than the cold air mass receding from the warm front and rides over the colder air while lifting the warm air.
A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is also associated with a drying of the air mass. Within the occlusion of the front, a circulation of air brings warm air upward and sends drafts of cold air downward, or vice versa depending on the type of occlusion the front is experiencing. Precipitations and clouds are associated with the trowal, the projection on the Earth's surface of the tongue of warm air aloft formed during the occlusion process of the depression or storm.
Occluded fronts are indicated on a weather map by a purple line with alternating half-circles and triangles pointing in direction of travel. The trowal is indicated by a series of blue and red junction lines.
Warm sector
The warm sector is a near-surface air mass in between the warm front and the cold front, usually found on the equatorward side of an extratropical cyclone. With its warm and humid characteristics, this air is susceptive to convective instability and can sustain thunderstorms, especially if lifted by the advancing cold front.
Stationary front
A stationary front is a non-moving (or stalled) boundary between two air masses, neither of which is strong enough to replace the other. Stationary fronts tend to remain essentially in the same area for extended periods of time, especially when the winds on either side blow parallel to the boundary; they usually move in waves rather than persistently. There is normally a broad temperature gradient behind the boundary, with more widely spaced isotherm packing.
A wide variety of weather can be found along a stationary front, but usually clouds and prolonged precipitation are found there. Stationary fronts either dissipate after several days or devolve into shear lines, but they can transform into a cold or warm front if the conditions aloft change. Stationary fronts are marked on weather maps with alternating red half-circles and blue spikes pointing opposite to each other, indicating no significant movement.
When a stationary front becomes smaller in scale and its temperature contrast dissipates, degenerating into a narrow zone where wind direction changes significantly over a relatively short distance, it becomes known as a shear line. A shear line is depicted as a line of red dots and dashes. Stationary fronts may bring light snow or rain for a long period of time.
Dry line
A similar phenomenon to a weather front is the dry line, which is the boundary between air masses with significant moisture differences instead of temperature. When the westerlies increase on the north side of surface highs, areas of lowered pressure will form downwind of north–south oriented mountain chains, leading to the formation of a lee trough. Near the surface during daylight hours, warm moist air is denser than dry air of greater temperature, and thus the warm moist air wedges under the drier air like a cold front. At higher altitudes, the warm moist air is less dense than the cooler dry air and the boundary slope reverses. In the vicinity of the reversal aloft, severe weather is possible, especially when an occlusion or triple point is formed with a cold front. A weaker form of the dry line seen more commonly is the lee trough, which displays weaker differences in moisture. When moisture pools along the boundary during the warm season, it can be the focus of diurnal thunderstorms.
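The density comparison in the paragraph above can be checked with the virtual temperature, Tv = T(1 + 0.61 r), where r is the water-vapour mixing ratio: moisture lowers density slightly, but the much hotter dry air west of the dry line is still less dense near the surface. The pressure, temperatures and mixing ratios below are assumed values chosen only to illustrate the comparison, not figures from the article.

    R_DRY = 287.05  # J/(kg*K), specific gas constant for dry air

    def air_density(pressure_pa, temp_c, mixing_ratio):
        # Density from the ideal gas law using the virtual temperature
        # Tv = T * (1 + 0.61 * r), with r in kg of vapour per kg of dry air.
        t_virtual = (temp_c + 273.15) * (1.0 + 0.61 * mixing_ratio)
        return pressure_pa / (R_DRY * t_virtual)

    p_surface = 90_000.0  # Pa, roughly high-plains surface pressure (assumed)
    moist_side = air_density(p_surface, 30.0, 0.015)  # warm, moist air east of the dry line
    dry_side = air_density(p_surface, 38.0, 0.002)    # hotter, drier air west of it
    print(f"moist side {moist_side:.3f} kg/m^3, dry side {dry_side:.3f} kg/m^3")
    # The moist side comes out denser, so near the surface it wedges under the dry air.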
The dry line may occur anywhere on earth in regions intermediate between desert areas and warm seas. The southern plains west of the Mississippi River in the United States are a particularly favored location. The dry line normally moves eastward during the day and westward at night. A dry line is depicted on National Weather Service (NWS) surface analyses as an orange line with scallops facing into the moist sector. Dry lines are one of the few surface fronts where the pips indicated do not necessarily reflect the direction of motion.
Squall line
Organized areas of thunderstorm activity not only reinforce pre-existing frontal zones, but can outrun actively existing cold fronts in a pattern where the upper level jet splits apart into two streams, with the resultant Mesoscale Convective System (MCS) forming at the point of the upper level split in the wind pattern running southeast into the warm sector parallel to low-level thickness lines. When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise. Even weaker and less organized areas of thunderstorms lead to locally cooler air and higher pressures, and outflow boundaries exist ahead of this type of activity, which can act as foci for additional thunderstorm activity later in the day.
These features are often depicted in the warm season across the United States on surface analyses and lie within surface troughs. If outflow boundaries or squall lines form over arid regions, a haboob may result. Squall lines are depicted on NWS surface analyses as an alternating pattern of two red dots and a dash labelled SQLN or squall line, while outflow boundaries are depicted as troughs with a label of outflow boundary.
Precipitation produced
Fronts are the principal cause of significant weather. Convective precipitation (showers, thundershowers, heavy rain and related unstable weather) is caused by air being lifted and condensing into clouds by the movement of the cold front or cold occlusion under a mass of warmer, moist air. If the temperature differences of the two air masses involved are large and the turbulence is extreme because of wind shear and the presence of a strong jet stream, "roll clouds" and tornadoes may occur.
In the warm season, lee troughs, breezes, outflow boundaries and occlusions can lead to convection if enough moisture is available. Orographic precipitation is precipitation created through the lifting action of air due to air masses moving over terrain such as mountains and hills, which is most common behind cold fronts that move into mountainous areas. It may sometimes occur in advance of warm fronts moving northward to the east of mountainous terrain. However, precipitation along warm fronts is relatively steady, as in light rain or drizzle. Fog, sometimes extensive and dense, often occurs in pre-warm-frontal areas. Nevertheless, not all fronts produce precipitation or even clouds, because moisture must be present in the air mass being lifted.
Movement
Fronts are generally guided by winds aloft, but do not move as quickly. Cold fronts and occluded fronts in the Northern Hemisphere usually travel from the northwest to southeast, while warm fronts move more poleward with time. In the Northern Hemisphere a warm front moves from southwest to northeast. In the Southern Hemisphere, the reverse is true; a cold or occluded front usually moves from southwest to northeast, and a warm front moves from northwest to southeast. Movement is largely caused by the pressure gradient force (horizontal differences in atmospheric pressure) and the Coriolis effect, which is caused by Earth's spinning about its axis. Frontal zones can be slowed by geographic features like mountains and large bodies of warm water.
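The balance mentioned here, between the pressure gradient force and the Coriolis effect, has a standard first approximation in the geostrophic wind, u_g = -(1/(ρ f)) ∂p/∂y with f = 2Ω sin(latitude). The snippet below is a minimal sketch with an assumed pressure gradient (4 hPa over 300 km) and an assumed air density, not values taken from the article.

    import math

    OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

    def geostrophic_wind(dp_dy, latitude_deg, density=1.2):
        # u_g = -(1/(rho * f)) * dp/dy, where f is the Coriolis parameter.
        f = 2.0 * OMEGA * math.sin(math.radians(latitude_deg))
        return -dp_dy / (density * f)

    # Assumed example: pressure falling by 400 Pa over 300 km toward the pole at 45 degrees north.
    u = geostrophic_wind(-400.0 / 300_000.0, 45.0)
    print(f"{u:.1f} m/s westerly")  # roughly 11 m/s, a typical mid-latitude steering flow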
| Physical sciences | Meteorology: General | null |
9372603 | https://en.wikipedia.org/wiki/Olericulture | Olericulture | Olericulture is the science of vegetable growing, dealing with the culture of non-woody (herbaceous) plants for food.
Olericulture is the production of plants for use of the edible parts. Vegetable crops can be classified into nine major categories:
Potherbs and greens – spinach and collards
Salad crops – lettuce, celery
Cole crops – cabbage and cauliflower
Root crops (tubers) – potatoes, beets, carrots, radishes
Bulb crops – onions, leeks
Legumes – beans, peas
Cucurbits – melons, squash, cucumber
Solanaceous crops – tomatoes, peppers, potatoes
Sweet corn
Olericulture deals with the production, storage, processing and marketing of vegetables. It encompasses crop establishment, including cultivar selection, seedbed preparation and establishment of vegetable crops by seed and transplants.
It also includes maintenance and care of vegetable crops as well commercial and non-traditional vegetable crop production including organic gardening and organic farming; sustainable agriculture and horticulture; hydroponics; and biotechnology.
| Technology | Agriculture_2 | null |
11848762 | https://en.wikipedia.org/wiki/Root%20canal%20treatment | Root canal treatment | Root canal treatment (also known as endodontic therapy, endodontic treatment, or root canal therapy) is a treatment sequence for the infected pulp of a tooth that is intended to result in the elimination of infection and the protection of the decontaminated tooth from future microbial invasion. It is generally done when the cavity is too big for a normal filling. Root canals, and their associated pulp chamber, are the physical hollows within a tooth that are naturally inhabited by nerve tissue, blood vessels and other cellular entities.
Endodontic therapy involves the removal of these structures, disinfection and the subsequent shaping, cleaning, and decontamination of the hollows with small files and irrigating solutions, and the obturation (filling) of the decontaminated canals. Filling of the cleaned and decontaminated canals is done with an inert filling such as gutta-percha and typically a zinc oxide eugenol-based cement. Epoxy resin is employed to bind gutta-percha in some root canal procedures. Another option is to use an antiseptic filling material containing paraformaldehyde like N2. Endodontics includes both primary and secondary endodontic treatments as well as periradicular surgery which is generally used for teeth that still have potential for salvage.
Treatment procedure
The procedure is often complicated and may involve multiple visits over a period of weeks.
Diagnostic and preparation
Before endodontic therapy is carried out, a correct diagnosis of the dental pulp and the surrounding periapical tissues is required. This allows the endodontist to choose the most appropriate treatment option, allowing preservation and longevity of the tooth and surrounding tissues. Treatment options for an irreversibly inflamed pulp (irreversible pulpitis) include either extraction of the tooth or removal of the pulp. Partial pulp amputation (pulpotomy) is the treatment of choice to preserve the pulp in teeth with open apical foramen.
Removing the infected/inflamed pulpal tissue enables the endodontist to preserve the longevity and function of the tooth. The treatment option chosen involves taking into account the expected prognosis of the tooth, as well as the patient's wishes. A full history is required, along with a clinical examination (both inside and outside the mouth), and the use of diagnostic tests.
There are several tests that can aid in the diagnosis of the dental pulp and the surrounding tissues:
Palpation (this is where the tip of the root is felt from the overlying tissues to see if there is any swelling or tenderness present)
Mobility (this is assessing if there is more than normal movement of the tooth in the socket)
Percussion (TTP, tender to percussion; the tooth is tapped to see if there is any tenderness)
Transillumination (shining a light through the tooth to see if there are any noticeable fractures)
Tooth Slooth (this is where the patient is asked to bite down upon a plastic instrument; useful if the patient complains of pain on biting as this can be used to localise the tooth)
Radiographs
Dental pulp tests
If a tooth is considered so threatened (because of decay, cracking, etc.) that future infection is considered very likely or inevitable, a pulpectomy (removal of the pulp tissue) is advisable to prevent such infection. Usually, some inflammation and/or infection is already present within and/or below the tooth. To cure the infection and save the tooth, the dentist drills into the pulp chamber and removes the infected pulp. To eliminate bacteria from the pulp chamber and root canals, the use of efficient antiseptics and disinfectants is necessary. The soft tissues are either drilled out of the root canal(s) with engine driven rotary files, or with long needle-shaped hand instruments known as hand files (H files and K files).
Opening in the crown
The endodontist makes an opening through the enamel and dentin tissues of the tooth, usually using a dental drill fitted with a dental burr.
Isolating the tooth
The use of a rubber dam for tooth isolation is mandatory in endodontic treatment for several reasons:
It provides an aseptic operating field, isolating the tooth from oral and salivary contamination. Root canal contamination with saliva introduces new microorganisms to the root canal which compromise the prognosis.
It facilitates the use of the strong medicaments necessary to clean the root canal system.
It protects the patient from the inhalation or ingestion of endodontic instruments.
Removal of pulp tissue
Procedures for shaping
There have been a number of progressive iterations to the mechanical preparation of the root canal for endodontic therapy. The first, referred to as the standardized technique, was developed by Ingle in 1961, and had disadvantages such as the potential for loss of working length and inadvertent ledging, zipping or perforation. Subsequent refinements have been numerous, and are usually described as techniques. These include the step-back, circumferential filing, incremental, anticurvature filing, step-down, double flare, crown-down-pressureless, balanced force, canal master, apical box, progressive enlargement, modified double flare, passive stepback, alternated rotary motions, and apical patency techniques.
The step back technique, also known as telescopic or serial root canal preparation, is divided into two phases: in the first, the working length is established and the apical part of the canal is delicately shaped until a size 25 K-file reaches the working length; in the second, the remaining canal is prepared with manual or rotary instrumentation. This procedure, however, has some disadvantages, such as the potential for inadvertent apical transportation. Incorrect instrumentation length can occur, which can be addressed by the modified step back; obstructing debris can be dealt with by the passive step back technique. The crown down is a procedure in which the dentist prepares the canal beginning from the coronal part, after exploring the patency of the whole canal with the master apical file.
There is a hybrid procedure combining step back and crown down: after the canal's patency check, the coronal third is prepared with hand files or Gates Glidden drills, then the working length is determined, and finally the apical portion is shaped using step back techniques. The double flare is a procedure introduced by Fava in which the canal is explored using a small file; the canal is prepared in a crown down manner using K-files, followed by a "step back" preparation in 1 mm increments with increasing file sizes. With early coronal enlargement, also described as the "three times technique", the apical canals are prepared after a working length assessment using an apex locator and then progressively enlarged with Gates Glidden drills (coronal and middle thirds only). For the eponymous third time the dentist "arrives at the apex" and, if necessary, prepares the foramen with a size 25 K-file; the last phase is divided into two refining passes, the first with a 1-mm staggered instrument and the second with 0.5-mm staggering. From the early nineties, engine-driven instruments were gradually introduced, including the ProFile system, the Greater Taper files, the ProTaper files, and other systems such as Light Speed, Quantec, K-3 rotary, Real World Endo, and the Hero 642.
All of these procedures involve frequent irrigation and recapitulation with the master apical file, a small file that reaches the apical foramen. High frequency ultrasound based techniques have also been described. These can be useful in particular for cases with complex anatomy, or for retained foreign body retrieval from a failed prior endodontic procedure.
Operative techniques for instruments
There are two slightly different anti-curvature techniques. In the balanced forces technique, the dentist inserts a file into the canal and rotates it clockwise a quarter of a turn, engaging dentin, then rotates it counter-clockwise half to three-quarters of a revolution while applying pressure in an apical direction, shearing off the dentin engaged previously. From the balanced forces technique stem two others: the reverse balanced force (in which GT instruments are rotated first anti-clockwise and then clockwise) and the gentler "feed and pull", in which the instrument is rotated only a quarter of a revolution and moved coronally after engagement, but not drawn out.
Use of anesthetics
Since 2000, lidocaine has been the most commonly used local anesthetic for root canal therapy.
Irrigation
The root canal is flushed with an irrigant. Some common ones are listed below:
Sodium hypochlorite (NaClO) in concentrations ranging between 0.5% and 5.25%
6% sodium hypochlorite with surface modifiers for better flow into nooks and crannies
2% chlorhexidine gluconate
0.2% chlorhexidine gluconate plus 0.2% cetrimonium chloride
17% ethylenediaminetetraacetic acid (EDTA)
Framycetin sulfate
Mixture of citric acid, doxycycline, and polysorbate 80 (detergent) (MTAD)
Saline
Near anhydrous ethanol
The primary aim of chemical irrigation is to kill microbes and dissolve pulpal tissue. Certain irrigants, such as sodium hypochlorite and chlorhexidine, have proved to be effective antimicrobials in vitro and are widely used during root canal therapy worldwide. According to a systematic review, however, there is a lack of good quality evidence to support the use of one irrigant over another in terms of both short and long term prognosis of therapy.
Root canal irrigation systems are divided into two categories: manual agitation techniques and machine-assisted agitation techniques. Manual irrigation includes positive-pressure irrigation, which is commonly performed with a syringe and a side vented needle. Machine-assisted irrigation techniques include sonics and ultrasonics, as well as newer systems which deliver apical negative-pressure irrigation.
Filling the root canal
The standard filling material is gutta-percha, a natural polymer prepared from latex from the percha tree (Palaquium gutta). The standard endodontic technique involves inserting a gutta-percha cone (a "point") into the cleaned-out root canal along with a sealing cement. Another technique uses melted or heat-softened gutta-percha which is then injected or pressed into the root canal passage(s). However, since gutta-percha shrinks as it cools, thermal techniques can be unreliable and sometimes a combination of techniques is used. Gutta-percha is radiopaque, allowing verification afterwards that the root canal passages have been completely filled and are without voids.
Pain control can be difficult to achieve at times because of anesthetic inactivation by the acidity of the abscess around the tooth apex. Sometimes the abscess can be drained, antibiotics prescribed, and the procedure reattempted when inflammation has been mitigated. The tooth can also be unroofed to allow drainage and help relieve pressure.
A root treated tooth may be eased from the occlusion as a measure to prevent tooth fracture prior to the cementation of a crown or similar restoration. Sometimes the dentist performs preliminary treatment of the tooth by removing all of the infected pulp of the tooth and applying a dressing and temporary filling to the tooth. This is called a pulpectomy. The dentist may also remove just the coronal portion of the dental pulp, which contains 90% of the nerve tissue, and leave intact the pulp in the canals. This procedure, called a "pulpotomy", tends to essentially eliminate all the pain. A pulpotomy may be a relatively definitive treatment for infected primary teeth. The pulpectomy and pulpotomy procedures aim to eliminate pain until the follow-up visit for finishing the root canal procedure. Further occurrences of pain could indicate the presence of continuing infection or retention of vital nerve tissue.
Some dentists may decide to temporarily fill the canal with calcium hydroxide paste in order to thoroughly sterilize the site. This strong base is left in place for a week or more to disinfect and reduce inflammation in surrounding tissue, requiring the patient to return for a second or third visit to complete the procedure. There appears to be no benefit from this multi-visit option, however, and single-visit procedures actually show better (though not statistically significant) patient outcomes than multi-visit ones.
Temporary filling
Temporary filling materials are used to create a hermetic coronal seal that prevents coronal microleakage (i.e. contamination of the root canal by bacteria). Maintaining this seal over the entire period between filling the root canal and restoring the tooth crown is essential, as it increases the probability that the endodontic treatment will succeed. However, the seals created by these temporary materials typically remain hermetic for less than 30 days on average (mainly because of the bacteria that saliva contains). Some temporary filling materials may remain hermetic for 40–70 days, but the estimated standard deviations of these longer average durations are large, and the figures were computed from dye-based tests, which are less reliable than saliva-based tests.
Final restoration
Molars and premolars that have had root canal therapy should be protected with a crown that covers the cusps of the tooth, because the access made into the root canal system removes a significant amount of tooth structure. Molars and premolars are the primary teeth used in chewing and will almost certainly fracture in the future without cuspal coverage. Anterior teeth typically do not require full coverage restorations after a root canal procedure, unless there is extensive tooth loss from decay or the restoration is needed for esthetics or unusual occlusion. Placement of a crown or cusp-protecting cast gold covering is also recommended because these have the best ability to seal the treated tooth. There is insufficient evidence to assess the effects of crowns compared to conventional fillings for the restoration of root-filled teeth, so the decision on restoration should rely on the clinical experience of the practitioner and the preference of the patient. If the tooth is not perfectly sealed, the canal may leak, causing eventual failure. A tooth with a root canal treatment can still decay, and without proper home care and an adequate fluoride source the tooth structure can become severely decayed, often without the patient's knowledge, since the nerve has been removed and the tooth no longer has pain perception. Non-restorable carious destruction is therefore the main reason for extraction of teeth after root canal therapy, accounting for up to two-thirds of these extractions. It is thus very important to have regular X-rays taken of the root-treated tooth to ensure it is not developing problems the patient would not be aware of.
Endodontic retreatment
Endodontic treatment may fail for many reasons; one common reason is inadequate chemomechanical debridement of the root canal. This may be due to poor endodontic access, missed anatomy or inadequate shaping of the canal, particularly in the apical third of the root canal, as well as the difficulty of reaching accessory canals, minute canals that extend from the pulp to the periodontium in random directions. They are mostly found in the apical third of the root.
Exposure of the obturation material to the oral environment may mean the gutta-percha is contaminated with oral bacteria. If complex and expensive restorative dentistry is contemplated then ideally the contaminated gutta percha would be replaced in a retreatment procedure to minimise the risk of failure.
The type of bacteria found within a failed canal may differ from that found in a normally infected tooth. Enterococcus faecalis and/or other facultative enteric bacteria or Pseudomonas sp. are found in this situation.
Endodontic retreatment is technically demanding; it can be a time-consuming procedure, as meticulous care is required by the dentist. Retreatment cases are typically referred to a specialist endodontist. Use of an operating microscope or other magnification may improve outcomes.
Currently, there is no strong evidence favoring surgical or non-surgical retreatment of periapical lesions. However, studies have reported that patients experience more pain and swelling after surgical retreatment compared to non-surgical retreatment. When comparing surgical techniques, the use of ultrasonic devices may improve healing after retreatment. The application of nanomotor implants has been proposed to achieve thorough disinfection of the dentine. There is no evidence that the use of antibiotics after endodontic retreatment prevents post-operative infection.
Instruments and equipment used
Since 2000, there have been great innovations in the art and science of root canal therapy. Dentists now must be educated on the current concepts in order to optimally perform a root canal procedure. Root canal therapy has become more automated and can be performed faster, thanks in part to machine-driven rotary technology and more advanced root canal filling methods. Many root canal procedures are done in one dental visit, which may last around 1–2 hours. Newer technologies are available (e.g. cone-beam CT scanning) that allow more efficient, scientific measurements to be taken of the dimensions of the root canal; however, the use of CT scanning in endodontics has to be justified. Many dentists use dental loupes to perform root canal therapy, and the consensus is that procedures performed using loupes or other forms of magnification (e.g. a surgical microscope) are more likely to succeed than those performed without them. Although general dentists are becoming versed in these advanced technologies, they are still more likely to be used by root canal specialists (known as endodontists).
Laser root canal procedures are a controversial innovation. Lasers may be fast but have not been shown to thoroughly disinfect the whole tooth, and may cause damage to the tooth.
Postoperative pain
Several randomized clinical trials concluded that the use of rotary instruments is associated with a lower incidence of pain following the endodontic procedure when compared to the use of manual hand instruments. Corticosteroid intra-oral injections were found to alleviate pain in the first 24 hours in patients with symptomatic irreversible pulp inflammation.
Complications
Instrument fractures
Instruments may separate (break) during root canal treatment, meaning a portion of the metal file used during the procedure remains inside the tooth. The file segment may be left behind if an acceptable level of cleaning and shaping has already been completed and attempting to remove the segment would risk damage to the tooth. While potentially disconcerting to the patient, having metal inside of a tooth is relatively common, such as with metal posts, amalgam fillings, gold crowns, and porcelain fused to metal crowns. The occurrence of file separation depends on the narrowness, curvature, length, calcification and number of roots on the tooth being treated. Complications resulting from incompletely cleaned canals, due to blockage from the separated file, can be addressed with surgical root canal treatment. The risk of endodontic files fracturing can be minimised by:
Ensuring the access cavity allows straight-line introduction of files into canals
Creating a glide path before use of larger taper NiTi files
Using rotary instruments at the manufacturer's recommended speed and torque setting
Adopting a single-use file policy to prevent overuse of files
Inspecting the file thoroughly every time before inserting it inside the canal
Using ample amounts of irrigation solutions
Avoiding the use of rotary files in severely curved or dilacerated canals
Sodium hypochlorite accident
A sodium hypochlorite incident results in an immediate reaction of severe pain, followed by edema, haematoma and ecchymosis, as a consequence of the solution escaping the confines of the tooth and entering the periapical space. This may be caused iatrogenically by binding or excessive pressure on the irrigant syringe or it may occur if the tooth has an unusually large apical foramen. It is usually self-resolving and may take two to five weeks to fully resolve.
Tooth discoloration
Tooth discoloration is common following root canal treatment; however, the exact causes for this are not completely understood. Failure to completely clean out the necrotic soft tissue of the pulp system may cause staining, and certain root canal materials (e.g. gutta percha and root canal sealer cements) can also cause staining. Another possible factor is that the lack of pulp pressure in dentinal tubules once the pulp is removed leads to incorporation of dietary stains in dentin.
Poor-quality root filling
Another common complication of root canal therapy occurs when the entire length of the root canal is not completely cleaned out and filled (obturated) with root canal filling material (usually gutta percha). Conversely, the root canal filling material may be extruded beyond the apex, leading to other complications. The X-ray in the right margin shows two adjacent teeth that had received inadequate root canal therapy. The root canal filling material (3, 4, and 10) does not extend to the end of the tooth roots (5, 6 and 11). The dark circles at the bottom of the tooth roots (7 and 8) indicate infection in the surrounding bone. Recommended treatment is either to redo the root canal therapy or to extract the tooth and place a dental implant. Poor quality filling material or sealant may also cause root canal treatment to fail.
Outcome and prognosis
Root-canal-treated teeth may fail to heal—for example, if the dentist does not find, clean and fill all of the root canals within a tooth. On a maxillary molar, there is more than a 50% chance that the tooth has four canals instead of just three, but the fourth canal, often called a "mesio-buccal 2", tends to be very difficult to see and often requires special instruments and magnification in order to see it (most commonly found in first maxillary molars; studies have shown an average of 76% up to 96% of such teeth with the presence of an MB2 canal). This infected canal may cause a continued infection or "flare-up" of the tooth. Any tooth may have more canals than expected, and these canals may be missed when the root canal procedure is performed. Sometimes canals may be unusually shaped, making them impossible to clean and fill completely; some infected material may remain in the canal. Sometimes the canal filling does not fully extend to the apex of the tooth, or it does not fill the canal as densely as it should. Sometimes a tooth root may be perforated while the root canal is being treated, making it difficult to fill the tooth. The perforation may be filled with a root repair material, such as one derived from natural cement called mineral trioxide aggregate (MTA). A specialist can often re-treat failing root canals, and these teeth will then heal, often years after the initial root canal procedure.
The survival or functionality of the endodontically treated tooth is often the most important endodontic treatment outcome, rather than apical healing alone. One concern has been that commonly used sanitising substances incompletely sanitise the root-canal space. A properly restored tooth following root canal therapy yields long-term success rates near 97%. In a large-scale study of over 1.6 million patients who had root canal therapy, 97% had retained their teeth 8 years after the procedure, with most untoward events, such as re-treatment, apical surgery or extraction, occurring during the first 3 years after the initial endodontic treatment. Endodontically treated teeth are extracted mainly because of non-restorable carious destruction, sometimes because of improperly fitting crown margins around the tooth, which allow the ingress of bacteria, and to a lesser extent for endodontic-related reasons such as endodontic failure, vertical root fracture, or perforation (procedural error).
Systemic issues
An infected tooth may endanger other parts of the body. People with special vulnerabilities, such as a recent prosthetic joint replacement, an unrepaired congenital heart defect, or immunocompromisation, may need to take antibiotics to protect from infection spreading during dental procedures. The American Dental Association (ADA) asserts that any risks can be adequately controlled. A properly performed root canal treatment effectively removes the infected part of the pulp from the tooth.
In the early 1900s, several researchers theorized that bacteria from teeth which had necrotic pulps or which had received endodontic treatment could cause chronic or local infection in areas distant from the tooth through the transfer of bacteria through the bloodstream. This was called the "focal infection theory", and it led some dentists to advocate dental extraction. This theory was discredited in the 1930s.
Bacteremia (bacteria in the bloodstream) can be caused by many everyday activities, e.g. brushing teeth, but may also occur after any dental procedure which involves bleeding. It is particularly likely after dental extractions due to the movement of the tooth and force needed to dislodge it, but endodontically treated teeth alone do not cause bacteremia or systemic disease.
Alternatives
The alternatives to root canal therapy include no treatment or tooth extraction. Following tooth extraction, options for prosthetic replacement may include dental implants, a fixed partial denture (commonly referred to as a 'bridge'), or a removable denture. There are risks to forgoing treatment, including pain, infection and the possibility of worsening dental infection such that the tooth will become irreparable (root canal treatment will not be successful, often due to excessive loss of tooth structure). If extensive loss of tooth structure occurs, extraction may be the only option.
Implant therapy versus endodontic therapy
Research comparing endodontic therapy with implant therapy is considerable, both as an initial treatment and as retreatment for failed initial endodontic approaches. Endodontic therapy avoids disruption of the periodontal fibers, which contribute proprioception for occlusal feedback, a reflex important in preventing patients from chewing improperly and damaging the temporomandibular joint. In a comparison of initial nonsurgical endodontic treatment and single-tooth implants, both were found to have similar success rates. While the procedures are similar in terms of pain and discomfort, a notable difference is that patients who have implants have reported "the worst pain of their life" during the extraction, with the implantation itself being relatively painless. The worst pain of endodontic therapy was reported with the initial anesthetic injection. Some patients receiving implants also describe a dull nagging pain after the procedure, while those with endodontic therapy describe "sensation" or "sensitivity" in the area. Other studies have found that endodontic therapy patients report the maximum pain the day following treatment, while extraction and implantation patients reported maximum pain at the end of the week after the operation.
Implants also take longer, with typically a 3- to 6-month gap between tooth implantation and receiving the crown, depending on the severity of infection. With regard to gender, women tend to report higher psychological disability after endodontic therapy, and a higher rate of physical disability after tooth implantation, while men do not show a statistically significant difference in response. Mastication is significantly stronger in endodontically treated teeth as compared to implants. Initial success rates after single tooth implants and endodontic microsurgery are similar for the first 2 to 4 years following surgery, though after this the success rate of endodontic microsurgery decreases compared to implantation.
Comparisons between the two treatments have historically been limited, to an extent, by differing criteria for success that reflect the inherent differences between the procedures: success of endodontic therapy is defined as the absence of periapical lucency on radiographs, that is, the absence of a visible radiolucent cavity at the root of the tooth on imaging. Implant success, on the other hand, is defined by osseointegration, or fusion of the implant to the adjacent maxilla or mandible. Endodontically treated teeth require significantly less follow-up treatment after final restoration, while implants need more appointments to complete treatment and more maintenance.
| Biology and health sciences | Dental treatments | Health |
11857262 | https://en.wikipedia.org/wiki/Herpes%20gladiatorum | Herpes gladiatorum | Herpes gladiatorum is one of the most infectious of herpes-caused diseases, and is transmissible by skin-to-skin contact. The disease was first described in the 1960s in the New England Journal of Medicine. It is caused by contagious infection with human herpes simplex virus type 1 (HSV-1), which more commonly causes oral herpes (cold sores). Another strain, HSV-2 usually causes genital herpes, although the strains are very similar and either can cause herpes in any location.
While the disease is commonly passed through normal human contact, it is strongly associated with contact sports—outbreaks in sporting clubs being relatively common.
Other names for the disease are herpes rugbiorum or "scrumpox" (after rugby football), "wrestler's herpes" or "mat pox" (after wrestling). In one of the largest outbreaks ever among high-school wrestlers, at a four-week intensive training camp, HSV was identified in 60 of 175 wrestlers. Lesions were on the head in 73 percent of the wrestlers, the extremities in 42 percent, and the trunk in 28 percent. Physical symptoms sometimes recur in the skin. Previous adolescent HSV-1 seroconversion would preclude most herpes gladiatorum, but because stress and trauma are recognized triggers, such a person could still reactivate the virus and infect others.
Signs and symptoms
Herpes gladiatorum is characterized by a rash with clusters of sometimes painful fluid-filled blisters, often on the neck, chest, face, stomach, and legs. The infection is often accompanied by lymphadenopathy (enlargement of the lymph nodes), fever, sore throat, and headache. Often, the accompanying symptoms are much more of an inconvenience than the actual skin blisters and rash.
Each blister contains infectious virus particles (virions). Close contact, particularly abrasive contact as found in contact sports, causes the infected blisters to burst and pass the infection along. Autoinoculation (self-infection) can occur through self-contact, leading to infection at multiple sites on the body.
Herpes gladiatorum symptoms may last up to a few weeks, and if they occur during the first outbreak, they can be more pronounced. In recurrences of the ailment, symptoms are milder, even if lesions still tend to occur. With recurrent infections, scabs may form at 3 days, yet the lesions are still considered infectious until 6.4 days after starting oral antiviral medication. Healing takes place without leaving scars. It is possible for the condition to evolve asymptomatically, with sores never being present.
Causes
Herpes gladiatorum is a skin infection primarily caused by the herpes simplex virus. The virus infects the cells in the epidermal layer of the skin. The initial viral replication occurs at the entry site in the skin or mucous membrane.
The infections caused by an HSV type 1 virus may be primary or recurrent. Studies show that even though most individuals who are exposed to the virus become infected, only 10% of them will also develop sores. These sores appear within two to twenty days after exposure and usually do not last longer than ten days. Primary infections usually heal completely without leaving scars, but the virus that caused the infection remains in the body in a latent state. This is why most people experience recurrences even after the condition has been treated. The virus moves to the nerve cells, from where it can reactivate.
Once the condition has recurred, it is normally a mild infection. The infection may be triggered by several external factors such as sun exposure or trauma.
Infection with either type of the HSV viruses occurs in the following way: First, the virus comes in contact with damaged skin, and then it goes to the nuclei of the cells and reproduces or replicates. The blisters and ulcers formed on the skin are a result of the destruction of infected cells. In its latent form, the virus does not reproduce or replicate until recurrence is triggered by different factors.
Pathophysiology
Herpes gladiatorum is transmitted by direct contact with skin lesions caused by a herpes simplex virus. This is the main reason why the condition is often found in wrestlers. It is believed that the virus may be transmitted through infected wrestlers' mats, but this is still subject of research since the virus cannot live long enough outside the body in order to be able to cause an infection. Direct contact with an infected person or infected secretions is undoubtedly the main way in which this virus may be transmitted.
It is also believed that wearing abrasive clothing may increase the chances of becoming infected with this type of virus. Shirts made of polyester and cotton may cause friction that leads to small breaks in the skin, which makes it easier to contract the infection. Studies in which athletes wore 100% cotton shirts showed a decrease in the number of herpes gladiatorum cases.
The spread is facilitated when a sore is present, but it can happen in its absence as well. Patients may know that the virus is present on the skin when they experience so-called "prodromal symptoms": itching or tingling on the skin, right before the blisters or lesions appear. The virus may spread from the time the first symptoms appear until the lesions are completely healed.
The incubation period is between 3 and 14 days, meaning that a person will experience symptoms within 14 days of contracting the infection. This type of virus may be transmitted even if the symptoms are not yet present. Some individuals have very mild symptoms that may not be recognized as herpes symptoms. Asymptomatic transmission occurs when the infection is spread between outbreaks.
Similar infections
Herpes gladiatorum is caused only by the herpes simplex virus. Shingles (herpes zoster), which also manifests as a skin rash with blisters, is caused by a different virus, the varicella zoster virus. Other agents may cause skin infections; for example, ringworm is primarily due to the fungal dermatophyte T. tonsurans. Impetigo, cellulitis, folliculitis and carbuncles are usually due to Staphylococcus aureus or beta-hemolytic streptococcus bacteria. These less common forms can be potentially more serious. Antiviral treatments will have no effect in non-viral cases; bacterial infections must be treated with antibiotics and fungal infections with antifungal medication.
Prevention
Key measures to prevent outbreaks of the disease are maintaining hygiene standards and using screening to exclude persons with suspicious infections from engaging in contact sports. A skin check performed before practice or competition takes place can identify individuals who should be evaluated, and if necessary treated, by a healthcare professional. In certain situations, such as participation in wrestling camps, participants may be placed on valacyclovir 1 g daily for the duration of the camp. A 10-year study has shown an 89.5% reduction in outbreaks and probable prevention of contracting the virus. The medication must be started 5 days before participation to ensure that adequate drug concentrations are reached.
Treatment
Herpes outbreaks should be treated with antiviral medications like Acyclovir, Valacyclovir, or Famciclovir, each of which is available in tablet form.
Oral antiviral medication is often used as a prophylactic to suppress or prevent outbreaks from occurring. The recommended dosage for suppression therapy for recurrent outbreaks is 1,000 mg of valacyclovir once a day or 400 mg Acyclovir taken twice a day. In addition to preventing outbreaks, these medications greatly reduce the chance of infecting someone while the patient is not having an outbreak.
Often, people have regular outbreaks of anywhere from 1 to 10 times per year, but stress (because the virus lies next to the nerve cells), or a weakened immune system due to a temporary or permanent illness can also spark outbreaks. Some people become infected but fail to ever have a single outbreak, although they remain carriers of the virus and can pass the disease on to an uninfected person through asymptomatic shedding (when the virus is active on the skin but rashes or blisters do not appear).
The use of antiviral medications has been shown to be effective in preventing acquisition of the herpes virus. Specific usage of these agents focuses on wrestling camps, where intense contact between individuals occurs on a daily basis over several weeks. They have also been used for large outbreaks during seasonal competition, but further research is needed to verify efficacy.
| Biology and health sciences | Viral diseases | Health |
3022746 | https://en.wikipedia.org/wiki/Macrauchenia | Macrauchenia | Macrauchenia ("long llama", based on the now-invalid llama genus, Auchenia, from Greek "big neck") is an extinct genus of large ungulate native to South America from the Pliocene or Middle Pleistocene to the end of the Late Pleistocene. It is a member of the extinct order Litopterna, a group of South American native ungulates distinct from the two orders which contain all living ungulates which had been present in South America since the early Cenozoic, over 60 million years ago, prior to the arrival of living ungulates in South America around 2.5 million years ago as part of the Great American Interchange. The bodyform of Macrauchenia has been described as similar to a camel, being one of the largest-known litopterns, with an estimated body mass of around 1 tonne. The genus gives its name to its family, Macraucheniidae, which like Macrauchenia typically had long necks and three-toed feet, as well as a retracted nasal region, which in Macrauchenia manifests as the nasal opening being on the top of the skull between the eye sockets. This has historically been argued to correspond to the presence of a tapir-like proboscis, though recent authors suggest a moose-like prehensile lip or a saiga antelope-like nose to filter dust are more likely.
Only one species is generally considered valid, M. patachonica, which was described by Richard Owen based on remains discovered by Charles Darwin during the voyage of the Beagle. M. patachonica is primarily known from localities in the Pampas, but is known from remains found across the Southern Cone extending as far south as southernmost Patagonia, and as far north as Southern Peru. Another genus of macraucheniid Xenorhinotherium was present in northeast Brazil and Venezuela during the Late Pleistocene.
Macrauchenia is thought to have been a mixed feeder that both consumed woody vegetation and grass that lived in herds and probably engaged in seasonal migrations. Macrauchenia is suggested to have been a swift runner that was capable of moving at considerable speed.
Macrauchenia became extinct as part of the end-Pleistocene extinction event around 12,000 years ago, along with the vast majority of other large mammals native to the Americas. This followed the arrival of humans to the Americas, and possible evidence of human interactions with Macrauchenia has been found at a number of sites with some authors suggesting human hunting may have played a role in its extinction.
Taxonomy
Macrauchenia fossils were first collected on 9 February 1834 at Port St Julian in southern Patagonia in what is now Argentina by Charles Darwin, when HMS Beagle was surveying the port (the Argentine Confederation claimed the region but did not effectively control it at the time). As a non-expert he tentatively identified the leg bones and fragments of spine he found as "some large animal, I fancy a Mastodon". In 1837, soon after the Beagle's return, the anatomist Richard Owen identified the bones, including vertebrae from the back and neck, as belonging to a gigantic creature resembling a llama or camel, which Owen named Macrauchenia patachonica. In naming it, Owen noted the original Greek terms (μακρός makrós, large or long) and (αὐχήν auchēn, neck), as used by Illiger as the basis of Auchenia as a generic name for the llama, Vicugna and so on.
Macrauchenia patachonica is currently considered to be the only valid species of Macrauchenia. Macrauchenia boliviensis from the probably early Miocene aged Kollukollu Formation of Bolivia described by Thomas Henry Huxley in 1860 is now considered to be an indeterminate member of Macraucheniidae. The species Macrauchenia ensenadensis described by Florentino Ameghino in 1888 from the Early Pleistocene has been transferred to the closely related genus Macraucheniopsis.
Evolution
Macrauchenia is part of the extinct ungulate order Litopterna, which is grouped with several other orders as part of the South American native ungulates (SANUs), which formed a conspicuous element of South America's Cenozoic mammal fauna beginning during the Paleocene, over 60 million years ago. Litopterns generally have body forms similar to those of living ungulates.
The relationships of litopterns (as well as other SANUs) to living mammals was historically uncertain. Sequences of mitochondrial DNA extracted from remains of M. patachonica found in a cave in southern Chile published in 2017 indicates that the closest living relatives of Macrauchenia (and by inference, Litopterna) are members of the extant ungulate order Perissodactyla (which includes the equids, rhinoceroses, and tapirs), with litopterns estimated to have genetically diverged from perissodactyls around 66 million years ago. Analysis of collagen sequences obtained from Macrauchenia and the contemporaneous large rhinoceros-like South American ungulate Toxodon, which belongs to another SANU order, Notoungulata, in 2015 reached a similar conclusion and suggests that litopterns are more closely related to notoungulates than to perissodactyls.
The earliest known fossils of litopterns are from the early Paleocene, around 62.5 million years ago. The family to which Macrauchenia belongs, Macraucheniidae, first appeared during the Late Eocene or Oligocene, around 39-30 million years ago, depending on what species are included. Members of the family are typically characterised by having three-toed feet and long necks. The family reached its apex of diversity in the Late Miocene, around 10-6 million years ago, before declining to low diversity during the Pliocene and Pleistocene, as part of a broader decline of SANU diversity during this period. The cause of this diversity decline is uncertain, though it has been suggested to be due to climatic changes, as well as possibly competition/predation from immigrants from North America, who arrived following the formation of the Isthmus of Panama during the Pliocene as part of an event called the Great American Interchange. The earliest fossils attributed to Macrauchenia date to the late Pliocene, though remains of Macrauchenia patachonica are primarily known from the Late Pleistocene.
Cladogram of Macraucheniidae after Lobo, Gelfo & Azevedo (2024):
Description
Macrauchenia had a bodyform superficially like a camel, with a long neck composed of camel- or giraffe-like elongated cervical vertebrae. In most of the cervical vertebrae, the canal for the artery passes through the neural arch. Macrauchenia was one of the largest macraucheniids and South American native ungulates, with an estimated body mass of around 1 tonne, considerably larger than earlier macraucheniids, which generally only weighed around .
Skull
The skull of Macrauchenia is relatively elongate and has an eye socket (orbit) entirely enclosed by bone, which is situated behind the teeth. Like other macraucheniids, there are a total of 44 teeth in the upper and lower jaws (the primitive number in placental mammals). The teeth form a continuous row on both jaws without any diastema (gaps) and they are all brachydont (low crowned). The most unusual feature of Macrauchenia's skull is its retracted nasal region, shared with other derived macraucheniines, which have the opening on the top of the skull roof between the eye sockets. Behind the nasal opening there is a substantially depressed region with numerous pits and ridges, which served as attachments for the nasal muscles. While historically this unusual nasal structure was taken as evidence for a tapir-like proboscis/trunk, recent authors have expressed doubts about this, alternatively suggesting that it may have instead formed a moose-like prehensile lip, or a saiga antelope-like nasal structure which served to filter dust (which was likely prevalent in the environment where Macrauchenia lived), perhaps combining the function of a dust-filtering organ and a prehensile lip. Behind the nasal opening, the top of the skull shows the development of extensive sinuses.
Limbs
The humerus bone is very short and robust. The radius and ulna in the forelimbs and the tibia and fibula in the hindlimbs are fused to each other, with the combined radius-ulna bone being broad in front view, and the fibula is much more slender than the tibia. The femur has a well developed third trochanter, and is long relative to the length of the tibia. The forefeet and hindfeet each had three functional digits. The development of a suprapatellar fossa (an indentation) on the knee joint has led to suggestions that this functioned analogously to the stay apparatus found in living horses, allowing the knees to be passively locked while standing.
Distribution
Fossils of Macrauchenia are known from across the Southern Cone, ranging from northern Chile, southern Peru, southern Bolivia, and the Pampas in Uruguay, northern Argentina and southern Brazil, southwards to extreme southernmost part of Patagonia in southern Chile and Argentina. Macrauchenia is thought to have primarily inhabited arid, open environments with only scattered woody vegetation. A closely related genus, Xenorhinotherium, inhabited more tropical environments in eastern Brazil and Venezuela.
Paleobiology
Analysis of dental calculus extracted from the teeth of an individual of Macrauchenia suggests that it was a mixed feeder (engaging in both browsing and grazing), with this individual having a diet predominantly consisting of C3 grasses. Dental microwear analysis of another individual also supports grazing being an important part of the diet for Macrauchenia. Like living perissodactyls, litopterns including Macrauchenia were probably hindgut fermenters. A 2022 study suggested that based on the anatomy of the cervical vertebrae Macrauchenia likely held its neck in an erect posture when at rest and browsing, similar to that of a llama, though the neck was highly flexible and able to adopt many postures including being lowered to the ground for feeding, as well as being able to flex side to side. Its elongated neck likely allowed it to efficiently browse vegetation without wasting energy. It has been speculated that Macrauchenia may have sometimes reared up onto its hind legs like a gerenuk when feeding.
Macrauchenia is thought to have probably lived in herds, as evidenced by the finding of at least 3 individuals preserved together at the Kamac Mayu site in Chile, with herding individuals probably moving in coordination. Macrauchenia is suggested to have concentrated on foraging in small areas before swiftly moving on to other feeding areas. It has been suggested that Macrauchenia engaged in long distance seasonal migrations in search of food. Like living animals of similar size, it has been suggested that Macrauchenia probably only gave birth to a single offspring at a time.
A 2020 study suggested that Macrauchenia was a capable and fast runner with fossilised footprints suggesting that its feet were held in a digitigrade stance, with the neck probably being held horizontally when running. The running style of Macrauchenia has been suggested to be similar to that of a saiga antelope or spotted hyenas, with a lack of flexing in the spine. A 2005 study suggested that Macrauchenia may have been adapted to swerving as a strategy of avoiding predators, based on the strength of the limb bones. The morphology of its hindlimbs suggests that they were adapted to rapidly accelerating, which may have been useful for both efficient locomotion and escaping predators. Isotopic analysis suggests that Macrauchenia was regularly consumed as prey by the large sabertooth cat Smilodon populator.
Relationship with humans and extinction
Macrauchenia became extinct as part of the end-Pleistocene extinction event at the end of the Late Pleistocene, around 12,000–10,000 years ago, along with most large (megafaunal) mammals native to the Americas. The extinctions followed the arrival of humans in the Americas, which in South America occurred at least 14,500 years ago (as evidenced by Monte Verde II in Chile). The causes of the extinction have long been controversial, with human hunting and climatic change widely considered to be the most probable causes.
Several potential instances of human interaction with Macrauchenia have been recorded. A left mandible collected from somewhere in the Pampas region in the 19th century in the collections of the Museum national d’Histoire naturelle in France has been suggested to display cut marks caused by human butchery, probably to extract the tongue. At Arroyo Seco 2 near Tres Arroyos in the Pampas in Argentina, bones of Macrauchenia amongst those of other megafauna were found associated with human artifacts dating to approximately 14,782–11,142 calibrated years Before Present. While some megafauna remains at the site show clear evidence of exploitation, those of Macrauchenia do not, perhaps because post-depositional degradation of the bones may have erased cut marks. At the El Guanaco site in the Argentinean Pampas, remains of Macrauchenia, alongside those of the glyptodont Doedicurus, horses and rhea eggshells are associated with stone tools.
At the Paso Otero 5 site in the Pampas of northeast Argentina, burned bones of Macrauchenia alongside those of numerous other extinct megafauna species are associated with Fishtail points (a type of knapped stone spear point common across South America at the end of the Pleistocene, suggested to be used to hunt large mammals). The bones of the megafauna were probably deliberately burned as fuel. No cut marks are visible on the vast majority of bones at the site (with only one bone of a llama possibly displaying any butchery marks), which may be due to the burning degrading the bones.
| Biology and health sciences | Mammals: General | Animals |
3024615 | https://en.wikipedia.org/wiki/Finite%20potential%20well | Finite potential well | The finite potential well (also known as the finite square well) is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a "box", but one which has finite potential "walls". Unlike the infinite potential well, there is a probability associated with the particle being found outside the box. The quantum mechanical interpretation is unlike the classical interpretation, where if the total energy of the particle is less than the potential energy barrier of the walls it cannot be found outside the box. In the quantum interpretation, there is a non-zero probability of the particle being outside the box even when the energy of the particle is less than the potential energy barrier of the walls (cf quantum tunnelling).
Particle in a one-dimensional potential well
For the one-dimensional case on the x-axis, the time-independent Schrödinger equation can be written as:

$$-\frac{\hbar^2}{2m}\,\frac{d^2\psi(x)}{dx^2} + V(x)\,\psi(x) = E\,\psi(x) \qquad \text{(Equation 1)}$$
where
$\hbar$ is the reduced Planck constant,
$m$ is the mass of the particle,
$V(x)$ is the potential energy at each point x,
$\psi(x)$ is the (complex valued) wavefunction, or "eigenfunction", and
$E$ is the energy, a real number, sometimes called eigenenergy.
For the case of the particle in a one-dimensional box of length L, the potential is $V_0$ outside the box and zero for x between $-L/2$ and $+L/2$. The wavefunction is composed of different wavefunctions, depending on whether x is inside or outside of the box, such that:
Inside the box
For the region inside the box, V(x) = 0 and Equation 1 reduces to
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi,$$
resembling the time-independent free Schrödinger equation. Letting
$$k = \frac{\sqrt{2mE}}{\hbar},$$
the equation becomes
$$\frac{d^2\psi}{dx^2} = -k^2\psi,$$
with a general solution of
$$\psi = A\sin(kx) + B\cos(kx),$$
where A and B can be any complex numbers, and k can be any real number.
Outside the box
For the region outside of the box, since the potential is constant, $V(x) = V_0$, Equation 1 becomes:
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = (E - V_0)\psi.$$
There are two possible families of solutions, depending on whether E is less than $V_0$ (the particle is in a bound state) or E is greater than $V_0$ (the particle is in an unbounded state).
If we solve the time-independent Schrödinger equation for an energy , letting such that
then the solution has the same form as the inside-well case:
and, hence, will be oscillatory both inside and outside the well. Thus, the solution is never square integrable; that is, it is always a non-normalizable state. This does not mean, however, that it is impossible for a quantum particle to have energy greater than $V_0$; it merely means that the system has a continuous spectrum above $V_0$, i.e., the non-normalizable states still contribute to the continuous part of the spectrum as generalized eigenfunctions of an unbounded operator.
This analysis will focus on the bound state, where $E < V_0$. Letting
$$\alpha = \frac{\sqrt{2m(V_0 - E)}}{\hbar}$$
produces
$$\frac{d^2\psi}{dx^2} = \alpha^2\psi,$$
where the general solution is exponential:
Similarly, for the other region outside the box:
Now in order to find the specific solution for the problem at hand, we must specify the appropriate boundary conditions and find the values for A, B, F, G, H and I that satisfy those conditions.
Finding wavefunctions for the bound state
Solutions to the Schrödinger equation must be continuous, and continuously differentiable. These requirements are boundary conditions on the differential equations previously derived, that is, the matching conditions between the solutions inside and outside the well.
In this case, the finite potential well is symmetrical, so symmetry can be exploited to reduce the necessary calculations.
Summarizing the previous sections:
where we found , , and to be:
We see that as goes to , the term goes to infinity. Likewise, as goes to , the term goes to infinity. In order for the wave function to be square integrable, we must set , and we have:
and
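For orientation, the surviving bound-state wavefunction can be written in the standard textbook form below. The well is taken to be of width L centred at the origin, with k and α as defined earlier; the outside amplitude labels G and H are illustrative assumptions, not necessarily the letters used in the original derivation.

$$\psi(x) = \begin{cases} G\,e^{\alpha x}, & x < -L/2,\\ A\sin(kx) + B\cos(kx), & -L/2 \le x \le L/2,\\ H\,e^{-\alpha x}, & x > L/2. \end{cases}$$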
Next, we know that the overall function must be continuous and differentiable. In other words, the values of the functions and their derivatives must match up at the dividing points:
These equations have two sorts of solutions, symmetric, for which and , and antisymmetric, for which and . For the symmetric case we get
so taking the ratio gives
Similarly for the antisymmetric case we get
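In this notation, the two matching conditions take the standard forms (stated here as the textbook result, under the assumption that the symmetric solutions keep the cosine term and the antisymmetric solutions keep the sine term):

$$k\tan\!\left(\frac{kL}{2}\right) = \alpha \quad\text{(symmetric)}, \qquad -k\cot\!\left(\frac{kL}{2}\right) = \alpha \quad\text{(antisymmetric)}.$$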
Recall that both k and α depend on the energy. What we have found is that the continuity conditions cannot be satisfied for an arbitrary value of the energy, just as in the infinite potential well case. Thus, only certain energy values, which are solutions to one or other of these two equations, are allowed. Hence we find that the energy levels of the system below $V_0$ are discrete; the corresponding eigenfunctions are bound states. (By contrast, the energy levels above $V_0$ are continuous.)
The energy equations cannot be solved analytically. Nevertheless, we will see that in the symmetric case, there always exists at least one bound state, even if the well is very shallow.
Graphical or numerical solutions to the energy equations are aided by rewriting them a little; it should also be mentioned that a useful approximation method has been found by Lima which works for any pair of well parameters. If we introduce the dimensionless variables $u = \alpha L/2$ and $v = kL/2$, and note from the definitions of k and α that $u^2 + v^2 = u_0^2$, where $u_0 = \frac{L}{2\hbar}\sqrt{2mV_0}$, the master equations read $v\tan v = u$ (symmetric case) and $-v\cot v = u$ (antisymmetric case).
In the plot to the right, for , solutions exist where the blue semicircle intersects the purple or grey curves ( and ). Each purple or grey curve represents a possible solution, within the range . The total number of solutions, , (i.e., the number of purple/grey curves that are intersected by the blue circle) is therefore determined by dividing the radius of the blue circle, , by the range of each solution and using the floor or ceiling functions:
In this case there are exactly three solutions, since .
and , with the corresponding energies
If we want, we can go back and find the values of the constants in the equations now (we also need to impose the normalisation condition). On the right we show the energy levels and wave functions in this case (where ).
We note that however small $u_0$ is (however shallow or narrow the well), there is always at least one bound state.
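These energy conditions are also easy to solve numerically. The sketch below, in Python (assuming NumPy and SciPy are available), finds all bound-state energies from the dimensionless equations $v\tan v = u$ and $-v\cot v = u$ with $u^2 + v^2 = u_0^2$; the electron mass, 10 eV depth and 1 nm width are arbitrary illustrative parameters, not values taken from this article.

```python
import numpy as np
from scipy.optimize import brentq

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg (electron mass, used only as an example)
EV = 1.602176634e-19     # J per eV

def bound_states(V0_eV, L_nm, m=M_E, n_grid=20000):
    """Bound-state energies (in eV, measured from the bottom of the well) of a
    finite square well of depth V0 and width L centred on the origin, found from
        v*tan(v)  = sqrt(u0^2 - v^2)   (symmetric states)
       -v/tan(v)  = sqrt(u0^2 - v^2)   (antisymmetric states)
    where v = k*L/2 and u0 = (L/2)*sqrt(2*m*V0)/hbar."""
    V0, L = V0_eV * EV, L_nm * 1e-9
    u0 = np.sqrt(2.0 * m * V0) * L / (2.0 * HBAR)   # radius of the "semicircle"

    def f_sym(v):  return v * np.tan(v) - np.sqrt(max(u0**2 - v**2, 0.0))
    def f_anti(v): return -v / np.tan(v) - np.sqrt(max(u0**2 - v**2, 0.0))

    states = []
    vs = np.linspace(1e-9, u0 - 1e-12, n_grid)
    for f, parity in ((f_sym, "symmetric"), (f_anti, "antisymmetric")):
        vals = np.array([f(v) for v in vs])
        for i in range(len(vs) - 1):
            # Skip the tan/cot singularities, where the sign flips without a root.
            if abs(vals[i]) > 1e3 or abs(vals[i + 1]) > 1e3:
                continue
            if vals[i] * vals[i + 1] < 0:
                v_root = brentq(f, vs[i], vs[i + 1])
                E = (HBAR * 2.0 * v_root / L) ** 2 / (2.0 * m) / EV  # E = (hbar*k)^2 / 2m
                states.append((parity, E))
    return u0, sorted(states, key=lambda s: s[1])

u0, states = bound_states(V0_eV=10.0, L_nm=1.0)
print(f"u0 = {u0:.2f}; expected number of bound states = ceil(2*u0/pi) = {int(np.ceil(2 * u0 / np.pi))}")
for parity, E in states:
    print(f"  {parity:13s} state at E = {E:.3f} eV (well depth 10 eV)")
```

The printed count uses the same floor/ceiling argument described for the graphical construction: each branch of width π/2 in v that fits inside the semicircle of radius $u_0$ contributes one solution.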
Two special cases are worth noting. As the height of the potential becomes large, , the radius of the semicircle gets larger and the roots get closer and closer to the values , and we recover the case of the infinite square well.
The other case is that of a very narrow, deep well - specifically the case and with fixed. As it will tend to zero, and so there will only be one bound state. The approximate solution is then , and the energy tends to . But this is just the energy of the bound state of a Delta function potential of strength , as it should be.
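For reference, the standard form of this limit, writing the fixed product of depth and width as $\lambda = V_0 L$ (a label assumed here for illustration), is that the binding energy below the top of the well tends to

$$\frac{m (V_0 L)^2}{2\hbar^2} = \frac{m\lambda^2}{2\hbar^2},$$

which is exactly the magnitude of the single bound-state energy $E = -m\lambda^2/2\hbar^2$ of an attractive delta-function potential $V(x) = -\lambda\,\delta(x)$.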
A simpler graphical solution for the energy levels can be obtained by normalizing the potential and the energy through multiplication by . The normalized quantities are
giving directly the relation between the allowed couples as
for the even and odd parity wave functions, respectively. In the previous equations only the positive derivative parts of the functions have to be considered. The chart giving directly the allowed couples is reported in the figure.
Asymmetric well
Consider a one-dimensional asymmetric potential well given by the potential
with . The corresponding solution for the wave function with is found to be
and
The energy levels are determined once is solved as a root of the following transcendental equation
where . The existence of a root of the above equation is not always guaranteed; for example, one can always find a value of so small that, for given values of and , there exists no discrete energy level. The results for the symmetrical well are obtained from the above equation by setting .
Particle in a spherical potential well
Consider the following spherical potential well
where is the radius from the origin. The solution for the wavefunction with zero angular momentum ($\ell = 0$) and with an energy is given by
satisfying the condition
This equation does not always have a solution, indicating that in some cases there are no bound states. The minimum depth of the potential well for which the first bound state appears is given by
which increases with decreasing well radius . Thus, bound states are not possible if the well is sufficiently shallow and narrow. For well depth slightly exceeding the minimum value, i.e., for , the ground state energy (since we are considering case) is given by
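The standard textbook version of this threshold, writing the well depth as $V_0$ and its radius as $R$ (labels assumed here), follows from the $\ell = 0$ matching condition $k\cot(kR) = -\alpha$ and reads

$$V_{0,\min} = \frac{\pi^2\hbar^2}{8 m R^2},$$

so a spherical well supports no bound state unless $V_0 R^2$ exceeds $\pi^2\hbar^2/8m$.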
Spherically symmetric annular well
The results above can be used to show that, as in the one-dimensional case, bound states exist in a spherical cavity; because of the spherical symmetry, the problem depends only on the radial distance from the centre, not on direction.
The ground state (n = 1) of a spherically symmetric potential will always have zero orbital angular momentum (ℓ = n−1), and the reduced wave function satisfies the equation
where is the radial part of the wave function. Notice that for n = 1 the angular part is constant (ℓ = 0).
This is identical to the one-dimensional equation, except for the boundary conditions. As before,
The energy levels for
are determined once is solved as a root of the following transcendental equation
where
The existence of a root of the above equation is always guaranteed. The results always have spherical symmetry. The solution fulfils the condition that the wave encounters no potential inside the sphere: .
A different differential equation applies when ℓ ≠ 0, as follows:
The solution can be obtained by changes of variable and function that lead to a Bessel-like differential equation, whose solution is:
where , and are the spherical Bessel, Neumann and Hankel functions respectively, which can be rewritten in terms of the standard Bessel functions.
The energy levels for
are determined once is solved as a root of the following transcendental equation
where
The following two transcendental equations also yield solutions:
and also,
The existence of roots of the above equations is always guaranteed. The results always have spherical symmetry.
| Physical sciences | Quantum mechanics | Physics |
3025876 | https://en.wikipedia.org/wiki/Electric%20bicycle | Electric bicycle | An electric bicycle, e-bike, electrically assisted pedal cycle, or electrically power assisted cycle is a motorized bicycle with an integrated electric motor used to assist propulsion. Many kinds of e-bikes are available worldwide, but they generally fall into two broad categories: bikes that assist the rider's pedal-power (i.e. pedelecs) and bikes that add a throttle, integrating moped-style functionality. Both retain the ability to be pedaled by the rider and are therefore not electric motorcycles. E-bikes use rechargeable batteries and typically are motor-powered up to . High-powered varieties can often travel up to or more than .
Depending on local laws, many e-bikes (e.g., pedelecs) are legally classified as bicycles rather than mopeds or motorcycles. This exempts them from the more stringent laws regarding the certification and operation of more powerful two-wheelers which are often classed as electric motorcycles, such as licensing and mandatory safety equipment. E-bikes can also be defined separately and treated under distinct electric bicycle laws.
Bicycles, e-bikes, and e-scooters, alongside e-cargo bikes, are commonly classified as micro-mobility vehicles. When comparing bicycles, e-bikes, and e-scooters from active and inclusiveness perspectives, traditional bicycles, while promoting physical activity, are less accessible to certain demographics due to the need for greater physical exertion, which also limits the distances bicycles can cover compared to e-bikes and e-scooters. E-scooters, however, cannot be categorized as an active transport mode, as they require minimal physical effort and, therefore, offer no health benefits. Additionally, the substantial incidence of accidents and injuries involving e-scooters underscores the considerable safety concerns and perceived risks associated with their use in urban settings. E-bikes stand out as the only option that combines the benefits of active transport with inclusivity, as their electric-motor, pedal-assist feature helps riders cover greater distances. The motor helps users overcome obstacles such as steep inclines and the need for high physical effort, making e-bikes suitable for a wide variety of users. This feature also allows e-bikes to traverse distances that would typically necessitate the use of private cars or multi-modal travel, such as both a bicycle and local public transport, establishing them as not only an active and inclusive mode but also a standalone travel option.
History
1890s to 1980s
In the 1890s, electric bicycles were documented within various U.S. patents. For example, on 31 December 1895, Ogden Bolton Jr. was granted a patent for a battery-powered bicycle with "6-pole brush-and-commutator direct current (DC) hub motor mounted in the rear wheel" (). There were no gears and the motor could draw up to 100 amperes from a 10-volt battery.
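For scale, the peak electrical power implied by those figures is simply the product of voltage and current: $P = VI = 10\ \text{V} \times 100\ \text{A} = 1000\ \text{W}$, or roughly 1.3 horsepower, assuming the battery could actually sustain such a current.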
Two years later, in 1897, Hosea W. Libbey of Boston invented an electric bicycle () that was propelled by a "double electric motor". The motor was designed within the hub of the crankset axle. (This model was later re-invented and imitated in the late 1990s by Giant Lafree e-bikes.)
By 1898, a rear-wheel drive electric bicycle, which used a driving belt along the outside edge of the wheel, was patented by Mathew J. Steffens. An 1899 patent by John Schnepf () depicted an electric bicycle with a rear-wheel friction, "roller-wheel"-style drive. In 1969, Schnepf's invention was expanded by G.A. Wood Jr. (). Wood's device used four fractional horsepower motors connected through a series of gears.
Hub motors fell out of favor until the latter part of the first decade of the 2000s when they made a resurgence on inexpensive electric bicycles.
1990s to present day
From 1992, Vector Services Limited offered the Zike e-bike. The bicycle included nickel–cadmium (NiCad) batteries built into a frame member and an 850 g permanent-magnet motor.
Torque sensors and power controls were developed during the late 1990s. For example, a Japanese patent (6163148) was granted in 1997 to a team led by Yutaka Takada, for a "Sensor, drive force auxiliary device ... and torque sensor zero point adjusting mechanism".
American car executive Lee Iacocca founded EV Global Motors in 1997, a company that produced an electric bicycle model named E-bike SX, and it was one of the early efforts to popularize e-bikes in the US.
By 2007, e-bikes were thought to make up 10 to 20 percent of all two-wheeled vehicles on the streets of many major Chinese cities. A typical unit requires eight hours to charge the battery, which provides a range of , at a speed of around .
In the 2010s, electric bicycles gained considerable traction in Europe, led by government policies and environmental awareness encouraging sustainable technologies. Countries such as Germany and the Netherlands became significant e-bike markets, with the aim of reducing urban congestion and carbon emissions. The evolution of lithium-ion (Li-ion) battery technology also contributed to e-bike adoption, providing faster charging, lighter weight and longer range, making e-bikes more efficient and practical for daily use.
Classes
E-bikes are classed according to the power their electric motor can deliver and the control system, i.e., when and how the power from the motor is applied. Classification is also complicated because much of the definition rests on what legally constitutes a bicycle and what constitutes a moped or motorcycle, so the classification of e-bikes varies greatly across countries and local jurisdictions.
Despite these legal complications, the classification of e-bikes is mainly decided by whether the e-bike's motor assists the rider using a pedal-assist system or by a power-on-demand one. Definitions of these are as follows:
With pedal-assist, the electric motor is regulated by pedaling. The pedal-assist augments the efforts of the rider when they are pedaling. These e-bikes – called pedelecs – have a sensor to detect the pedaling speed, the pedaling force, or both. Brake activation is sensed to disable the motor as well.
With power-on-demand, the motor is activated by a throttle, usually handlebar-mounted just like on most motorcycles or scooters.
Therefore, very broadly, e-bikes can be classed as:
E-bikes with pedal-assist only: either pedelecs (legally classed as bicycles) or S-Pedelecs (often legally classed as mopeds)
Pedelecs: have pedal-assist only, motor assists only up to a decent but not excessive speed (usually ), motor power up to , often legally classed as bicycles
S-Pedelecs: have pedal-assist only, motor power can be greater than , can attain a higher speed (e.g., ) before the motor stops assisting, sometimes legally classed as a moped or motorcycle.
E-bikes with power-on-demand and pedal-assist
E-bikes with power-on-demand only frequently have more powerful motors than pedelecs. The more powerful of these are legally classed as mopeds or motorcycles, but may not meet the legal requirements for registration as street-legal motorcycles.
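To make the pedal-assist versus power-on-demand distinction concrete, the sketch below models a single control step of a hypothetical e-bike controller. It is not any manufacturer's firmware; the assist factor, speed cut-off and power cap are assumed illustrative values, since the actual limits vary by jurisdiction and product.

```python
def motor_power_w(rider_power_w: float, speed_kmh: float, throttle: float,
                  pedal_assist: bool, braking: bool,
                  assist_factor: float = 1.0,       # assumed assist ratio
                  cutoff_kmh: float = 25.0,         # assumed speed cut-off
                  max_power_w: float = 250.0) -> float:  # assumed rated power
    """Return the electric motor's output for one control step.

    Pedal-assist (pedelec) mode scales the rider's own pedaling power and
    cuts out above the speed limit or when the brakes are applied.
    Power-on-demand mode simply follows the throttle position (0.0-1.0).
    """
    if braking:
        return 0.0                        # brake sensor always disables the motor
    if pedal_assist:
        if rider_power_w <= 0 or speed_kmh >= cutoff_kmh:
            return 0.0                    # no pedaling, or over the cut-off: no assist
        return min(assist_factor * rider_power_w, max_power_w)
    # power-on-demand: motor output tracks the throttle, independent of pedaling
    return max(0.0, min(throttle, 1.0)) * max_power_w

# Example: a rider putting in 120 W at 18 km/h on a pedelec gets 120 W of assist.
print(motor_power_w(120, 18, throttle=0.0, pedal_assist=True, braking=False))
```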
Pedal-assist only
E-bikes with pedal-assist only are usually called pedelecs but can be broadly classified into pedelecs proper and the more powerful S-Pedelecs.
Pedelecs
The term "pedelec" (from pedal electric cycle) refers to a pedal-assist e-bike with a relatively low-powered electric motor and a decent but not excessive top speed. Pedelecs are legally classed as bicycles rather than low-powered motorcycles or mopeds.
The most influential definition of pedelecs comes from the EU. EU directive (EN15194 standard) for motor vehicles considers a bicycle to be a pedelec if:
The pedal-assist, i.e. the motorized assistance that only engages when the rider is pedaling, cuts out once is reached, and
the motor produces a maximum continuous rated power of not more than (n.b. the motor can produce more power for short periods, such as when the rider is struggling to get up a steep hill).
An e-bike conforming to these conditions is considered to be a pedelec in the EU and is legally classed as a bicycle. The EN15194 standard is valid across the whole of the EU and has been adopted by some non-EU European nations including the UK, and also some non-European jurisdictions (such as the state of Victoria in Australia).
Pedelecs are much like conventional bicycles in use and function—the electric motor only provides assistance, for example, when the rider is climbing or struggling against a headwind. Pedelecs are therefore especially useful for people in hilly areas where riding a bike would prove too strenuous for many to consider taking up cycling as a daily means of transport. They are also useful for riders who more generally need some assistance, e.g. for people with heart, leg muscle or knee joint issues.
S-Pedelecs
More powerful pedelecs which are not legally classed as bicycles are dubbed S-Pedelecs (short for Schnell-Pedelecs, i.e. Speedy-Pedelecs) in Germany. These have a motor more powerful than and less limited, or unlimited, pedal-assist, i.e. the motor does not stop assisting the rider once has been reached. S-Pedelec class e-bikes are therefore usually classified as mopeds or motorcycles rather than as bicycles and therefore may (depending on the jurisdiction) need to be registered and insured, the rider may need some sort of driver's license (either car or motorcycle) and motorcycle helmets may have to be worn. In the United States, many states have adopted S-Pedelecs into the Class 3 category, limited to not more than of power and speed. In Europe they are likely to be classed as mopeds requiring a registration plate and a licensed driver.
Power-on-demand and pedal-assist
Some newer electric bikes include a pedal assist system (PAS) with or without throttle, allowing riders to pedal while using the electric motor to increase range. There are electric propulsion conversion kits for ordinary bicycles.
Power-on-demand only
Some e-bikes have an electric motor that operates on a power-on-demand basis only; the motor is engaged and operated manually using a throttle, with control usually on the handgrip as on a motorbike or scooter. These sorts of e-bikes often, but not always, have more powerful motors than pedelecs.
With power-on-demand only e-bikes the rider can:
ride by pedal power alone, i.e. fully human-powered.
ride by electric motor alone by operating the throttle manually.
ride using both together at the same time.
Some power-on-demand only e-bikes are very different from, and cannot be classified as, bicycles. For example, the Noped is a term used by the Ministry of Transportation of Ontario for e-bikes which are not fitted with pedals.
Popularity
E-bike usage worldwide has experienced rapid growth since 1998. China is the world's leading producer of e-bikes. According to the data of the China Bicycle Association, a government-chartered industry group, in 2004 China's manufacturers sold 7.5 million e-bikes nationwide, which was almost twice the year 2003 sales; domestic sales reached 10 million in 2005, and 16 to 18 million in 2006. In 2016, approximately 210 million electric bikes were used daily in China.
According to trade umbrella body CONEBI, electric bike sales in the EU were over 5 million in 2021, up from 2 million e-bikes in 2016, up from 700,000 in 2010 and 200,000 in 2007. In 2019, the EU implemented a 79.3% protective tariff on imported Chinese e-bikes to protect EU producers. In 2022, electric bikes continued to grow market share in the EU, rising to 57% of bike sales in the Netherlands, 49% in Austria, 48% in Germany and 47% in Belgium.
Motors and drivetrains
DC motors are commonly used in electric bicycles, either brushed or brushless. Many configurations are available, varying in cost and complexity; direct-drive and geared motor units are both used. An electric power-assist system may be added to almost any pedal cycle using chain drive, belt drive, hub motors or friction drive.
Brushless hub motors are the most common in modern designs. The motor is built into the wheel hub itself: the stator is fixed solidly to the axle, and the magnets are attached to, and rotate with, the wheel; the bicycle wheel hub is the motor. The power levels of motors used are influenced by the available legal categories and are often, but not always, limited to under 750 watts. With a front-drive system the motor sits in the front hub, and with a rear-drive system it sits in the rear hub. Hub motors were common in 19th-century electric bicycle designs but fell out of favor until their resurgence in the 2000s.
Another type of electric assist motor is the mid-drive system, in which the electric motor is not built into the wheel but is usually mounted beside or under the bottom bracket shell. Propulsion is provided at the pedals rather than at the wheel and is applied to the wheel via the bicycle's standard drive train. A freewheel crank, that is, a freewheel in the bottom bracket, is a necessary part of mid-drive systems, allowing the electric motor to work within its optimal rotational speed range (r/min).
Because the power is applied through the chain and sprocket, power is typically limited to around 250–500 watts to protect against fast wear on the drivetrain. An electric mid-drive combined with an internal gear hub at the back hub may require care due to the lack of a clutch mechanism to soften the shock to the gears at the moment of re-engagement. A continuously variable transmission or a fully automatic internal gear hub may reduce the shocks due to the viscosity of oils used for liquid coupling instead of the mechanical couplings of the conventional internal gear hubs.
The main advantage mid-drive motors have over hub motors is that power is applied through the chain (or belt) and thus makes use of the existing rear gears (either external or internal). This allows the motor to operate more efficiently over a wider range of vehicle speeds. Because they cannot use the bicycle's gears, equivalent hub motors tend to be less effective at propelling the e-bike slowly up steep hills and at propelling it fast on the flat.
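A rough worked example may help illustrate why routing motor power through the drivetrain matters. The sketch below compares the driving force at the tyre for a hub motor (fixed 1:1 to the wheel) and a mid-drive motor working through different gear reductions; the torque, ratios and wheel size are assumed figures chosen only for illustration.

```python
# Minimal sketch of why a mid-drive benefits from the bicycle's gears.
# All figures (motor torque, gear ratios, wheel size) are illustrative
# assumptions, not specifications of any real drive system.

WHEEL_RADIUS_M = 0.34          # roughly a 700c wheel, assumed

def wheel_force_n(motor_torque_nm: float, reduction: float) -> float:
    """Driving force at the tyre for a given crank-to-wheel speed reduction."""
    wheel_torque = motor_torque_nm * reduction   # torque is multiplied by the reduction
    return wheel_torque / WHEEL_RADIUS_M

hub_motor = wheel_force_n(40.0, reduction=1.0)        # hub motor drives the wheel directly
mid_drive_low = wheel_force_n(40.0, reduction=2.5)    # mid-drive in a low (climbing) gear
mid_drive_high = wheel_force_n(40.0, reduction=0.9)   # mid-drive in a high (cruising) gear

print(f"hub motor:       {hub_motor:.0f} N at the tyre")
print(f"mid-drive, low:  {mid_drive_low:.0f} N (better for steep climbs)")
print(f"mid-drive, high: {mid_drive_high:.0f} N (motor can stay near its efficient rpm)")
```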
Batteries
E-bikes use rechargeable batteries in addition to electric motors and some form of control. Battery systems in use include sealed lead–acid (SLA), nickel–cadmium (NiCad), nickel–metal hydride (NiMH) or lithium-ion polymer (Li-ion). Batteries vary according to the voltage, total charge capacity (amp hours), weight, the number of charging cycles before performance degrades, and ability to handle over-voltage charging conditions. The energy costs of operating e-bikes are small, but there can be considerable battery replacement costs. The lifespan of a battery pack varies depending on the type of usage. Shallow discharge/recharge cycles help extend the overall battery life.
Range is a key consideration with e-bikes, and is affected by factors such as motor efficiency, battery capacity, efficiency of the driving electronics, aerodynamics, hills and the weight of the bike and rider. Some manufacturers, such as the Canadian BionX or American Vintage Electric Bikes, offer regenerative braking, in which the motor acts as a generator to slow the bike down before the brake pads engage. This is useful for extending the range and the life of brake pads and wheel rims. There are also experiments using fuel cells, e.g. the PHB.
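As a back-of-the-envelope illustration of how battery capacity translates into range, the sketch below multiplies pack voltage by amp-hours to get watt-hours and divides by an assumed consumption per kilometre. The pack size, usable fraction and consumption figures are assumptions, not data for any particular bike.

```python
# Rough range estimate for an e-bike battery.  The pack voltage, capacity and
# consumption figures below are illustrative assumptions; real-world range also
# depends on hills, wind, rider weight and assist level, as noted above.

def battery_energy_wh(voltage_v: float, capacity_ah: float) -> float:
    """Nominal stored energy: watt-hours = volts x amp-hours."""
    return voltage_v * capacity_ah

def estimated_range_km(energy_wh: float, consumption_wh_per_km: float,
                       usable_fraction: float = 0.9) -> float:
    """Range if only `usable_fraction` of the pack is drawn (limits deep discharge)."""
    return energy_wh * usable_fraction / consumption_wh_per_km

pack = battery_energy_wh(36.0, 10.0)   # assumed 36 V, 10 Ah pack -> 360 Wh
print(f"{pack:.0f} Wh, ~{estimated_range_km(pack, 8.0):.0f} km at 8 Wh/km (strong assist)")
print(f"{pack:.0f} Wh, ~{estimated_range_km(pack, 5.0):.0f} km at 5 Wh/km (light assist)")
```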
Some experiments have also been undertaken with supercapacitors to supplement or replace batteries in cars and some SUVs.
E-bikes developed in Switzerland in the late 1980s for the Tour de Sol solar vehicle race came with solar charging stations, but these were later fixed on roofs and connected so as to feed into the electric mains. The bicycles were then charged from the mains, as is common today. While e-bike batteries were produced mainly by bigger companies in the past, many small and medium-sized companies have started using new methods for creating more durable batteries.
Lithium-ion batteries used in e-bikes and related vehicles such as electric scooters have been under scrutiny since 2019 due to their susceptibility to overheating and catching fire. A rise in incidents in which e-bike batteries were implicated in fires has been attributed to increases in popularity and a lack of regulation. Lower-quality batteries are more likely to be manufactured with defects that can cause bulging or bursting; such issues are, however, rare among larger, more established manufacturers. In 2024, the world's largest electric bike maker, Giant Manufacturing, went on record to say that it had never experienced an issue with a single battery. Gig workers who rely on e-bikes to do their jobs may also be limited in their choice of vehicle and purchase a cheap or second-hand e-bike that is more prone to damage. Some jurisdictions, such as New York City and San Francisco, have passed laws requiring that all electric mobility devices sold have UL safety certifications.
Design variations
Not all e-bikes take the form of conventional push-bikes with an incorporated motor, such as the Cytronex bicycles which use a small battery disguised as a water bottle.
Some are designed to take the appearance of low-capacity motorcycles ("moto-style"), but smaller in size and using an electric motor rather than a petrol engine. For example, the Sakura e-bike incorporates a 200 W motor found on standard e-bikes, but also includes plastic cladding, front and rear lights, and a speedometer. It is styled as a modern moped ("moped-style"), and is often mistaken for one.
Converting a non-electric bicycle to its electric equivalent can be complicated but numerous 'replace a wheel' solutions are now available on the market.
An Electric Pusher Trailer is an e-bike design which incorporates a motor and battery into a trailer that pushes any bicycle. One such trailer is the two-wheeled Ridekick. Other, rarer designs include that of a 'chopper' styled e-bike, which are designed as more of a 'fun' or 'novelty' e-bike than as a purposeful mobility aid or mode of transport.
Electric cargo bikes allow the rider to carry large, heavy items which would be difficult to transport without electric power supplementing the human power input. These bikes can also allow for adults to continue biking into parenthood, enabling the transportation of children without using a car.
There are many e-bike design variations available, some with batteries attached to the frame and some with batteries housed within the tube. Some use fat tires for improved stability and off-road capability.
Various designs (including those mentioned above) are built to comply with most local laws, and the ones that have pedals can be used on roads in the United Kingdom, among other countries.
Folding e-bikes are also available.
Electric self-balancing unicycles do not conform to e-bike legislation in most countries and therefore cannot be used on the road, but may be legal to use on the sidewalk. They are the cheapest electric cycles and are used by last-mile commuters, for urban trips and in combination with public transport, including buses. They are not legal for use on the public highway (including footways and cycle paths) in the United Kingdom.
Tricycles
Electric trikes have also been produced that conform to e-bike legislation. These have the benefit of additional low-speed stability and are often favored by people with disabilities. Cargo-carrying tricycles are also gaining acceptance, with a small but growing number of couriers using them for package deliveries in city centers. The latest designs of these trikes resemble a cross between a pedal cycle and a small van.
Health effects
E-bike use has been shown to increase the amount of physical activity: e-bike users in seven European cities had 10% higher weekly energy expenditure than other cyclists because they cycled longer trips.
E-bikes can also provide a source of exercise for individuals who have trouble exercising for an extended time (due to injury or excessive weight, for example), as the bike allows the rider to take short breaks from pedaling and gives the rider confidence that they will be able to complete the selected route without becoming too fatigued or straining their knee joints (on some electric bikes, riders who need to protect their knee joints can adjust the level of motor assistance according to the terrain). A University of Tennessee study provides evidence that energy expenditure (EE) and oxygen consumption (VO2) for e-bikes are 24% lower than for conventional bicycles, and 64% lower than for walking. Further, the study notes that the difference between e-bikes and bicycles is most pronounced on the uphill segments.
There are individuals who claim to have lost considerable amounts of weight by using an electric bike. A recent prospective cohort study however found that people using e-bikes have a higher BMI than those using conventional bikes. By making the biking terrain less of an issue, people who would not otherwise consider biking can use the electric assistance when needed and otherwise pedal as they are able.
E-bikes can be a useful part of cardiac rehabilitation programs, since health professionals will often recommend a stationary bike be used in the early stages of these. Exercise-based cardiac rehabilitation programs can reduce deaths in people with coronary heart disease by around 27%.
Road traffic safety
Schleinitz et al. (2014) concluded that e-bike users in Germany were no more likely than conventional cyclists to be involved in "safety-critical situations". However, Dozza et al. (2015) concluded (from an analysis of Swedish cyclists) that e-bikers may be involved in more critical incidents but with "lower severity". Additionally, e-bikers were less likely to have dangerous interactions with motorized vehicles.
In the United States, the risk of accidents and injuries is a growing concern for e-bike users, parents, and drivers alike. According to the U.S. Consumer Product Safety Commission (CPSC), an estimated 53,200 e-bike-related emergency department visits occurred between 2017 and 2022. During this period, there were 104 e-bike fatalities, accounting for 45% of all micromobility-related deaths.
Environmental effects
E-bikes are zero-emissions vehicles, as they emit no combustion by-products, but the environmental effects of electricity generation and power distribution and of manufacturing and recycling batteries must be accounted for. E-bikes emit similar pollutants per kilometer as buses, with emission rates several times lower than motorcycles and cars. E-bikes are generally seen as environmentally desirable in an urban environment.
A 2018 study in England found that e-bikes, if used to replace car travel, have the capability to "cut car carbon dioxide (CO2) emissions in England by up to 50% (about 30 million tonnes per year)".
A 2020 study focusing on the Yorkshire region of England suggested that the greatest opportunities are in rural and sub-urban settings: city dwellers already have many low-carbon travel options, so the greatest impact would be on encouraging use outside urban areas. The study further suggested there may also be scope for e-bikes to help people who are most affected by rising transport costs.
The environmental effects involved in recharging the batteries can of course be reduced. The small size of the battery pack on an e-bike, relative to the larger pack used in an electric car, makes them very good candidates for charging via solar power or other renewable energy resources. Sanyo capitalized on this benefit when it set up "solar parking lots", in which e-bike riders can charge their vehicles while parked under photovoltaic panels.
The environmental credentials of e-bikes, and electric / human powered hybrids generally, have led some municipal authorities to use them, such as Little Rock, Arkansas, with their Wavecrest electric power-assisted bicycles, or the Cloverdale, California police with Zap e-bikes. China's e-bike manufacturers, such as Xinri, are now partnering with universities in a bid to improve their technology in line with international environmental standards, backed by the Chinese government, which is keen to improve the export potential of Chinese-manufactured e-bikes.
Both land management regulators and mountain bike trail access advocates have argued for bans of electric bicycles on outdoor trails that are accessible to mountain bikes, citing potential safety hazards as well as the potential for electric bikes to damage trails. A study conducted by the International Mountain Bicycling Association, however, found that the physical impacts of low-powered pedal-assist electric mountain bikes (eMTB) may be similar to traditional mountain bikes (MTB).
A recent study on the environmental impact of e-bikes versus other forms of transportation found that e-bikes are:
18 times more energy efficient than an SUV
13 times more energy efficient than a sedan
6 times more energy efficient than rail transit
Of about the same environmental impact as a conventional bicycle.
There are strict shipping regulations for lithium-ion batteries, due to safety concerns. In this regard, lithium iron phosphate batteries are safer than lithium cobalt oxide batteries.
Experience by country
China
China has experienced an explosive growth of sales of non-assisted e-bikes, including scooter types, with annual sales jumping from 56,000 units in 1998 to over 21 million in 2008, and reaching an estimated fleet of 120 million e-bikes in early 2010. This boom was triggered by Chinese local governments' efforts to restrict motorcycles in city centers to avoid traffic disruption and accidents. By late 2009, motorcycles were banned or restricted in over ninety major Chinese cities. Commuters began replacing traditional bicycles and motorcycles, and the e-bike became an alternative to commuting by car. Nevertheless, road safety concerns continue, as around 2,500 e-bike-related deaths were registered in 2007. By late 2009, ten cities had also banned or imposed restrictions on e-bikes on the same grounds as motorcycles, among them Guangzhou, Shenzhen, Changsha, Foshan, Changzhou, and Dongguan.
In April 2019, China's regulatory policies changed and new standards for electric bikes were introduced, governing a bicycle's weight, maximum speed and nominal voltage, among other factors. Vehicles that comply with the new standard, including its 25 km/h speed limit, are legally considered bicycles and do not require registration. E-bikes outside this standard are considered motorcycles and are subject to helmet and licensing regulations.
China is the world's leading manufacturer of e-bikes, with 22.2 million units produced in 2009. Some of the biggest manufacturers of e-bikes in the world are BYD and Geoby. Production is concentrated in five regions: Tianjin, Zhejiang, Jiangsu, Shandong, and Shanghai. China exported 370,000 e-bikes in 2009. In 2019, about 223,000 Chinese companies were in businesses related to the electric-bike industry.
The market was valued at US$13.98 billion in 2023 and is projected to reach US$34.61 billion by 2033, growing at a CAGR of 9.48% from 2024 to 2033.
Netherlands
The Netherlands has a fleet of 23 million bicycles for its population of 18 million (as of 2024). E-bikes had reached a market share of 10% by 2009, as e-bike sales quadrupled from 40,000 units to 153,000 between 2006 and 2009, and electric-powered models represented 25% of total bicycle sales revenue in that year. By early 2010 one in every eight bicycles sold in the country was electric-powered, despite the fact that on average an e-bike is three times more expensive than a regular bicycle. E-bike sales have since overtaken those of unpowered bikes, reaching 423,000 in 2019 and 547,000 in 2020.
A 2008 market survey showed that the average distance traveled in the Netherlands by commuters on a standard bicycle is while with an e-bike this distance increases to . This survey also showed that e-bike ownership is particularly popular among people aged 65 and over, but limited among commuters. The e-bike is used in particular for recreational bicycle trips, shopping and errands.
United States
In 2009 the U.S. had an estimated fleet of 200,000 e-bikes. In 2012 they were increasingly favored in New York as food-delivery vehicles. The North American Electric Bike Market is expected to grow at a CAGR of 10.13% from 2021 to 2028.
India
In India, the electric bicycle market was valued at US$1.14 million in 2021 and is expected to reach US$2.31 million by 2027, a CAGR of 12.69% over the forecast period.
Use in warfare
Ukraine is using e-bikes in the war against Russia. These donated bikes are used by snipers and for carrying anti-tank weapons. This echoes past use of bicycle infantry in wartime, particularly by Japanese forces.
| Technology | Human-powered transport | null |
8707643 | https://en.wikipedia.org/wiki/Electronic%20circuit | Electronic circuit | An electronic circuit is composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. It is a type of electrical circuit. For a circuit to be referred to as electronic, rather than electrical, generally at least one active component must be present. The combination of components and wires allows various simple and complex operations to be performed: signals can be amplified, computations can be performed, and data can be moved from one place to another.
Circuits can be constructed of discrete components connected by individual pieces of wire, but today it is much more common to create interconnections by photolithographic techniques on a laminated substrate (a printed circuit board or PCB) and solder the components to these interconnections to create a finished circuit. In an integrated circuit or IC, the components and interconnections are formed on the same substrate, typically a semiconductor such as doped silicon or (less commonly) gallium arsenide.
An electronic circuit can usually be categorized as an analog circuit, a digital circuit, or a mixed-signal circuit (a combination of analog circuits and digital circuits). The most widely used semiconductor device in electronic circuits is the MOSFET (metal–oxide–semiconductor field-effect transistor).
Analog circuits
Analog electronic circuits are those in which current or voltage may vary continuously with time to correspond to the information being represented.
The basic components of analog circuits are wires, resistors, capacitors, inductors, diodes, and transistors. Analog circuits are very commonly represented in schematic diagrams, in which wires are shown as lines, and each component has a unique symbol. Analog circuit analysis employs Kirchhoff's circuit laws: all the currents at a node (a place where wires meet) sum to zero, and the voltages around a closed loop of wires sum to zero. Wires are usually treated as ideal zero-voltage interconnections; any resistance or reactance is captured by explicitly adding a parasitic element, such as a discrete resistor or inductor. Active components such as transistors are often treated as controlled current or voltage sources: for example, a field-effect transistor can be modeled as a current source from the source to the drain, with the current controlled by the gate-source voltage.
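A small worked example may clarify how Kirchhoff's current law is applied in nodal analysis. The sketch below solves a generic two-node resistor chain; the source and resistor values are arbitrary illustrative choices, not taken from any circuit discussed here.

```python
import numpy as np

# Nodal analysis of a simple resistive circuit using Kirchhoff's current law.
# A 10 V source feeds node 1 through R1; R2 ties node 1 to node 2; R3 ties
# node 2 to ground.  Component values are arbitrary illustrative choices.
Vs, R1, R2, R3 = 10.0, 1000.0, 2000.0, 3000.0

# KCL at each node: the currents leaving the node through every branch sum to zero.
#   Node 1: (V1 - Vs)/R1 + (V1 - V2)/R2 = 0
#   Node 2: (V2 - V1)/R2 + V2/R3 = 0
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])   # conductance matrix
I = np.array([Vs/R1, 0.0])                    # source current injected into each node
V1, V2 = np.linalg.solve(G, I)
print(f"V1 = {V1:.3f} V, V2 = {V2:.3f} V")    # expect V2 = Vs*R3/(R1+R2+R3) = 5 V
```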
When the circuit size is comparable to a wavelength of the relevant signal frequency, a more sophisticated approach must be used, the distributed-element model. Wires are treated as transmission lines, with nominally constant characteristic impedance, and the impedances at the start and end determine transmitted and reflected waves on the line. Circuits designed according to this approach are distributed-element circuits. Such considerations typically become important for circuit boards at frequencies above a GHz; integrated circuits are smaller and can be treated as lumped elements for frequencies less than 10 GHz or so.
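The lumped-versus-distributed decision is essentially a comparison of physical size against wavelength. The sketch below computes the wavelength in a board material and applies a common one-tenth-of-a-wavelength rule of thumb; the permittivity and threshold are assumed typical values, not fixed standards.

```python
# When does a board trace need to be treated as a transmission line rather
# than a lumped wire?  A common rule of thumb compares its length to the
# signal wavelength; the 1/10 threshold used here is a typical assumption.

C0 = 3.0e8  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float, eff_permittivity: float = 4.0) -> float:
    """Wavelength in the board material (FR-4-like permittivity assumed)."""
    return C0 / (freq_hz * eff_permittivity ** 0.5)

def needs_distributed_model(trace_len_m: float, freq_hz: float,
                            threshold: float = 0.1) -> bool:
    return trace_len_m > threshold * wavelength_m(freq_hz)

for f in (10e6, 1e9, 10e9):
    lam = wavelength_m(f)
    print(f"{f/1e9:5.2f} GHz: wavelength ~{lam*100:6.1f} cm, "
          f"10 cm trace distributed? {needs_distributed_model(0.10, f)}")
```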
Digital circuits
In digital electronic circuits, electric signals take on discrete values, to represent logical and numeric values. These values represent the information that is being processed. In the vast majority of cases, binary encoding is used: one voltage (typically the more positive value) represents a binary '1' and another voltage (usually a value near the ground potential, 0 V) represents a binary '0'. Digital circuits make extensive use of transistors, interconnected to create logic gates that provide the functions of Boolean logic: AND, NAND, OR, NOR, XOR and combinations thereof. Transistors interconnected so as to provide positive feedback are used as latches and flip-flops, circuits that have two or more stable states and remain in one of these states until changed by an external input. Digital circuits therefore can provide logic and memory, enabling them to perform arbitrary computational functions. (Memory based on flip-flops is known as static random-access memory (SRAM). Memory based on the storage of charge in a capacitor, dynamic random-access memory (DRAM), is also widely used.)
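The claim that interconnected gates provide both logic and memory can be demonstrated in a few lines. The sketch below builds a NAND gate and a cross-coupled, active-low SR latch from it; this is a generic textbook construction, not a model of any particular device.

```python
# Boolean logic gates and a NAND-based SR latch, to illustrate how
# interconnected gates provide both logic and memory.

def NAND(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int, q_n: int, iterations: int = 4):
    """Active-low set/reset latch from two cross-coupled NAND gates.
    Iterates the feedback loop until the outputs settle."""
    for _ in range(iterations):
        q, q_n = NAND(s_n, q_n), NAND(r_n, q)
    return q, q_n

q, q_n = 0, 1
q, q_n = sr_latch(0, 1, q, q_n)   # assert set (active low): Q becomes 1
print("after set:  ", q, q_n)
q, q_n = sr_latch(1, 1, q, q_n)   # both inputs idle: latch holds its state
print("hold:       ", q, q_n)
q, q_n = sr_latch(1, 0, q, q_n)   # assert reset: Q returns to 0
print("after reset:", q, q_n)
```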
The design process for digital circuits is fundamentally different from the process for analog circuits. Each logic gate regenerates the binary signal, so the designer need not account for distortion, gain control, offset voltages, and other concerns faced in an analog design. As a consequence, extremely complex digital circuits, with billions of logic elements integrated on a single silicon chip, can be fabricated at low cost. Such digital integrated circuits are ubiquitous in modern electronic devices, such as calculators, mobile phone handsets, and computers. As digital circuits become more complex, issues of time delay, logic races, power dissipation, non-ideal switching, on-chip and inter-chip loading, and leakage currents, become limitations to circuit density, speed and performance.
Digital circuitry is used to create general-purpose computing chips, such as microprocessors, and custom-designed logic circuits, known as application-specific integrated circuits (ASICs). Field-programmable gate arrays (FPGAs), chips with logic circuitry whose configuration can be modified after fabrication, are also widely used in prototyping and development.
Mixed-signal circuits
Mixed-signal or hybrid circuits contain elements of both analog and digital circuits. Examples include comparators, timers, phase-locked loops, analog-to-digital converters, and digital-to-analog converters. Most modern radio and communications circuitry uses mixed signal circuits. For example, in a receiver, analog circuitry is used to amplify and frequency-convert signals so that they reach a suitable state to be converted into digital values, after which further signal processing can be performed in the digital domain.
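The analog-to-digital boundary mentioned above can be illustrated with an idealized uniform quantizer. The sketch below maps voltages to one of 2^N digital codes; the bit depth, input range and test tone are arbitrary assumptions for illustration.

```python
import math

# Minimal model of the analog-to-digital step in a mixed-signal system:
# an ideal N-bit converter uniformly quantizing a signal over a fixed range.

def adc_sample(voltage: float, n_bits: int = 8, v_min: float = -1.0,
               v_max: float = 1.0) -> int:
    """Map an analog voltage to the nearest of 2**n_bits digital codes."""
    levels = 2 ** n_bits
    v = min(max(voltage, v_min), v_max)              # clip to the input range
    code = round((v - v_min) / (v_max - v_min) * (levels - 1))
    return int(code)

# Digitize one cycle of a 1 kHz sine sampled at 8 kHz.
samples = [adc_sample(math.sin(2 * math.pi * 1000 * n / 8000)) for n in range(8)]
print(samples)   # eight digital codes representing one period of the tone
```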
Design
Prototyping
| Technology | Electronics: General | null |
8708317 | https://en.wikipedia.org/wiki/Saltasauridae | Saltasauridae | Saltasauridae (named after the Salta region of Argentina where they were first found) is a family of armored herbivorous sauropods from the Upper Cretaceous. They are known from fossils found in South America, Africa, Asia, North America, and Europe. They are characterized by their vertebrae and feet, which are similar to those of Saltasaurus, the first of the group to be discovered and the source of the name. The last and largest of the group and only one found in North America, Alamosaurus, was in length and one of the last sauropods to go extinct.
Most of the saltasaurids were smaller, around in length, and one, Rocasaurus, was only long. Like all sauropods, the saltasaurids were quadrupeds, their necks and tails were held almost parallel to the ground, and their small heads had only tiny, peg-like teeth. They were herbivorous, stripping leaves off of plants and digesting them in their enormous guts. Although large animals, they were smaller than other sauropods of their time, and many possessed distinctive additional defenses in the form of scutes along their backs.
Description
As sauropods, the Saltasauridae are herbivorous saurischians with the characteristic body plan of a small head, long neck, four erect legs, and a counterbalancing tail. Most sauropods are from the clade Neosauropoda, which is further split into the narrow-toothed Diplodocoidea and the broad-toothed Macronaria. The Macronarians emerged in the Jurassic and a subclade, the Titanosauria, survived into the Cretaceous and spread across the continents. Because of their diversity, wide distribution, and the fragmentary or incomplete nature of most specimens, little is known about the titanosaurs beyond their size and tendency to have scutes.
The saltasaurids, one of the several titanosaur families, are recognized by the convexities in certain caudal vertebrae and the markings on their coracoid bones. All saltasaurids have thirty-five or fewer caudal vertebrae, each of which is convex on both sides of its centrum, and the one closest to the tail is shorter than the others. Their coracoid bones have rectangular margins on the anteroventral side, as well as a lip where they meet the infraglenoid. The Opisthocoelicaudiinae, a subfamily of the saltasaurids, are unique in that they lack phalanges in their forelimbs. Although Saltasaurus is known to possess dorsal osteoderms, scutes have not been discovered in all saltasaurids, and it is unclear when and where the evolution of osteoderms occurred in saltasaurids and titanosaurs in general.
History of study
The first saltasaurid to be discovered was Alamosaurus, found by paleontologist Charles Gilmore in Utah in 1922. The next species would not be described until Opisthocoelicaudia was named by Magdalena Borsuk-Bialynicka from postcranial material found in Mongolia in 1977. In 1980, Jose Bonaparte and Jaime Powell discovered Saltasaurus in Argentina. This was the first sauropod to be discovered with armor and proved that sauropods had thrived in Cretaceous South America. Paul Sereno eventually recognized a cladistic relationship between Opisthocoelicaudia and Saltasaurus, creating the family Saltasauridae.
Classification
The group is defined by the characteristics that all members share with the two best-known members, Saltasaurus and Opisthocoelicaudia. Paleontologists J. Wilson and P. Upchurch defined the Saltasauridae in 2003 as the least inclusive clade containing Opisthocoelicaudia skarzynskii and Saltasaurus loricatus, that is, their most recent common ancestor and all of its descendants.
Taxonomy
This taxonomy is based on those of González Riga et al. (2009) and Curry Rogers & Wilson (2005).
Family Saltasauridae
Unclear Subfamilies
Petrobrasaurus puestohernandezi
Trigonosaurus pricei
Subfamily Opisthocoelicaudiinae
Alamosaurus sanjuanensis
Borealosaurus wimani
Opisthocoelicaudia skarzynskii
Subfamily Saltasaurinae
Bonatitan reigi
Microcoelus patagonicus
Neuquensaurus australis
Neuquensaurus robustus
Rocasaurus muniozi
Saltasaurus loricatus
Phylogeny
The family is then further divided into two subfamilies. Wilson and Upchurch defined Saltasaurinae in 2003 as the least-inclusive clade containing Saltasaurus but not Opisthocoelicaudia. The same paleontologists defined Opisthocoelicaudiinae as the inverse: the least-inclusive clade containing Opisthocoelicaudia but not Saltasaurus. Some species, due to the incompleteness of their skeletons, cannot yet be placed in either subfamily.
Saltasauridae in a cladogram after Navarro et al., 2022:
Paleobiology
Geographic range
Many fragmentary saltasaurids have been discovered since 1980, placing members of the family in territories as widely dispersed as today's Australia, Madagascar, and France, in addition to their earlier-known ranges in North and South America. Like the other titanosaurs, the saltasaurids were a widespread, successful group that colonized all continents in the Cretaceous.
Feeding habits
Like all titanosaurs, the saltasaurids possessed small, peg-like teeth that were not usable for chewing. Coprolites from an unidentified titanosaur found in India suggest a diet of conifers, cycads, and early species of grasses. Unable to chew and probably lacking gastroliths, sauropods survived by retaining plant matter in their stomachs for long periods of time, fermenting it to extract as many resources as possible. Their long necks allowed them to graze over a large area while standing, reducing energy use.
Osteoderms
The osteoderms of Saltasaurus consisted of numerous, large bony plates embedded in the dorsal skin, each surrounded by a pattern of smaller plates. The large osteoderms contained some hollow spaces for blood vessels and spongy trabecular bone, while the small ones were solid. Patches of skin from unidentified Cretaceous titanosaurs have revealed similar scale patterns in embryos (a large scale surrounded by ten smaller ones) but no bone or mineralized structure, suggesting that, like crocodiles, those saltasaurids that possessed armor only developed it some time after hatching. Analysis of the osteoderms of the titanosaur Rapetosaurus revealed that the bones were hollow in adults, while those of juveniles were solid pieces similar to those in crocodiles. Paleontologist Kristina Curry Rogers, who made this discovery, theorized that the adult animals used their hollow osteoderms to store minerals during lean times. It is unknown whether any of the Saltasauridae used their osteoderms in a similar manner.
Reproduction and development
The same Argentine dig site, Auca Mahuevo, that provided information on embryonic skin has also yielded information on the nesting habits of titanosaurs, though not of saltasaurids specifically. The nests were constructed on the surface by piling debris in a ring around the eggs, with the eggs themselves left uncovered. Each egg was porous and spherical, about 14 cm in diameter, and they were laid in clutches. The embryos show a smaller rostrum and nares positioned closer to the anterior portion of the face compared with adult titanosaurs, suggesting that the nostrils may have moved towards the back of the head as the animal grew.
| Biology and health sciences | Sauropods | Animals |
8712750 | https://en.wikipedia.org/wiki/E-reader | E-reader | An e-reader, also called an e-book reader or e-book device, is a mobile electronic device that is designed primarily for the purpose of reading digital e-books and periodicals.
Any device that can display text on a screen may act as an e-reader; however, specialized e-reader devices may optimize portability, readability, and battery life for this purpose. Their main advantage over printed books is portability: an e-reader is capable of holding thousands of books while weighing less than one. Another advantage is the convenience provided by add-on features.
Overview
An e-reader is a device designed as a convenient way to read e-books. It is similar in form factor to a tablet computer, but often features electronic paper ("e-ink") rather than an LCD screen. This yields much longer battery life — the battery can last for several weeks — and better readability, similar to that of paper even in sunlight. Drawbacks of this kind of display include a slow refresh rate and (usually) a grayscale-only display, which makes it unsuitable for sophisticated interactive applications such as those found on tablets. This may be perceived as an advantage, however, as the user may more easily focus on reading. The Sony Librie, released in 2004 and the precursor to the Sony Reader, was the first e-reader to use electronic paper.
Many e-readers can use the internet through Wi-Fi, and the built-in software can provide a link to a digital Open Publication Distribution System (OPDS) library or an e-book retailer, allowing the user to buy, borrow, and receive digital e-books. An e-reader may also download e-books from a computer or read them from a memory card. However, the use of memory cards is decreasing, as most 2010s-era e-readers lack a card slot.
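Because OPDS catalogs are ordinary Atom XML feeds, listing their contents needs nothing beyond a standard XML parser. The sketch below is a minimal, hypothetical client: the catalog URL is a placeholder, and real catalogs may require paging, authentication or additional link relations that this ignores.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Minimal sketch of reading an OPDS catalog.  OPDS catalogs are Atom XML
# feeds, so the standard library is enough to list what a shelf contains.
# The URL below is a placeholder, not a real catalog endpoint.
CATALOG_URL = "https://example.org/opds/catalog.xml"
ATOM = {"atom": "http://www.w3.org/2005/Atom"}

with urllib.request.urlopen(CATALOG_URL) as response:
    feed = ET.parse(response).getroot()

for entry in feed.findall("atom:entry", ATOM):
    title = entry.findtext("atom:title", default="(untitled)", namespaces=ATOM)
    author = entry.findtext("atom:author/atom:name", default="unknown", namespaces=ATOM)
    # Acquisition links tell the reader where the actual e-book file lives.
    links = [l.get("href") for l in entry.findall("atom:link", ATOM)
             if "acquisition" in (l.get("rel") or "")]
    print(f"{title} by {author}: {links}")
```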
History
An idea similar to that of an e-reader is described in a 1930 manifesto written by Bob Brown titled The Readies, which describes "a simple reading machine which I can carry or move around, attach to any old electric light plug and read hundred-thousand-word novels in 10 minutes". His hypothetical machine would use a microfilm-style ribbon of miniaturized text which could be scrolled past a magnifying glass, and would allow the reader to adjust the type size. He envisioned that eventually words could be "recorded directly on the palpitating ether".
The establishment of the E Ink Corporation in 1997 led to the development of electronic paper, a technology which allows a display screen to reflect light like ordinary paper without the need for a backlight. Among the first commercial e-readers were Sony's Data Discman (which was using Mini CDs with special caddies) and the Rocket eBook. Several others were introduced around 1998, but did not gain widespread acceptance. Electronic paper was incorporated first into the Sony Librie that was released in 2004 and Sony Reader in 2006, followed by the Amazon Kindle, a device which, upon its release in 2007, sold out within five and a half hours. The Kindle includes access to the Kindle Store for e-book sales and delivery.
In 2009, new marketing models for e-books were being developed and a new generation of reading hardware was produced. E-books (as opposed to e-readers) had yet to achieve global distribution. In the United States, the Amazon Kindle model and Sony's PRS-500 were the dominant e-reading devices. By March 2010, some reported that the Barnes & Noble Nook may have been selling more units than the Kindle in the US. The Ectaco jetBook Color was the first color e-reader on the market, but its muted colors were criticized. Since 2021, color E-ink readers have been introduced into the market.
Research released in March 2011 indicated that e-books and e-readers were more popular with the older generation than the younger generation in the UK. The survey, carried out by Silver Poll, found that around 6% of people over 55 owned an e-reader, compared with just 5% of 18- to 24-year-olds. According to an IDC study from March 2011, sales for all e-readers worldwide rose to 12.8 million in 2010; 48% of them were Amazon Kindles, followed by Barnes & Noble Nooks, Pandigital, and Sony Readers (about 800,000 units for 2010).
On January 27, 2010, Apple Inc. launched a multi-function tablet computer called the iPad and announced agreements with five of the six largest publishers that would allow Apple to distribute e-books. The iPad included a built-in app for e-book reading called iBooks and had the iBookstore for content sales and delivery. The iPad, the first commercially profitable tablet, was followed in 2011 by the release of the first Android-based tablets as well as LCD tablet versions of the Nook and Kindle. Unlike previous dedicated e-readers, tablet computers are multi-functional, utilize LCD touchscreen displays, and are more agnostic to e-book vendor apps, allowing for the installation of multiple e-book reading apps. Many Android tablets accept external media and allow uploading files directly onto the tablet's file system without resorting to online stores or cloud services. Many tablet-based and smartphone-based readers are capable of displaying PDF and DjVu files, which few of the dedicated e-book readers can handle. This opens a possibility to read publications originally published on paper and later scanned into a digital format. While these files may not be considered e-books in their strict sense, they preserve the original look of printed editions. The growth in general-purpose tablet use allowed for further growth in the popularity of e-books in the 2010s.
In 2012, there was a 26% decline in e-reader sales worldwide from a maximum of 23.2 million in 2011. The reason given for this "alarmingly precipitous decline" was the rise of more general-purpose tablets that provided e-book reading apps along with many other abilities in a similar form factor. In 2013, ABI Research claimed that the decline in the e-reader market was due to the aging of the customer base. In 2014, the industry reported e-reader sales worldwide to be around 12 million, with only Amazon.com and Kobo Inc. distributing e-readers globally and various regional distribution by Barnes & Noble (US/UK), Tolino (Germany), Icarus (Netherlands), PocketBook International (Eastern Europe and Russia) and Onyx Boox (China and Vietnam). At the end of 2015, eMarketer estimated that there were 83.4 million e-reader users in the US, with the number predicted to grow by 3.5% in 2016. In late 2014, PricewaterhouseCoopers predicted that by 2018 e-books would make up over 50% of total consumer publishing revenue in the U.S. and UK, while at that time e-books had over 30% of the share of the revenue.
Until late 2013, the use of an e-reader was not allowed on airplanes during takeoff and landing. In November 2013, the FAA allowed use of e-readers on airplanes at all times if set to Airplane Mode. European authorities followed this guidance the following month.
E-reader applications
Many of the major book retailers and third-party developers offer e-reader applications for desktops, tablets, and mobile devices, to allow the reading of e-books and other documents independent of dedicated e-book devices. E-reader applications are available for computers running Linux, MacOS, and Windows, as well as for smartphones running Android, iOS and Windows Phone.
Impact
The introduction of e-readers brought substantial changes to the publishing industry, also awakening fears and predictions about the possible disappearance of books and print periodicals.
Criticism
Disadvantages of epaper display
The graphical design of e-books is constrained by the format and technical limits of e-readers, because until recently the vast majority of E-ink readers did not support color displays and had limited resolution and size. As of 2024, however, colour e-readers are no longer rare, and there are many colour devices on the market, such as the BOOX Go Color 7 and the Kobo Libra Colour, both of which use Kaleido 3 epaper screens supporting up to 4,096 colours. The reading experience on epaper displays that are not illuminated depends on the ambient lighting conditions.
Closed ecosystems for retrieving ebooks and lack of freedom
E-readers are usually designed to offer access only to the online shop of one provider. This structure is referred to as a (digital) ecosystem and helps smaller companies (e.g. Kibano Digireader) to compete against multinational companies (like Amazon, Apple, etc.). On the other hand, customers can only purchase books from a limited selection of e-books in the online shop (accessible via the e-reader) and therefore do not have the possibility of purchasing e-books on the open market. Because of the use of ecosystems, companies are not forced to compete against each other, and therefore the cost of e-books does not decrease. With only the option of using an online shop, the social interaction of buying or borrowing a book disappears. There are, however, notable exceptions such as Onyx Boox and Meebook devices, which run an open Android system. Users of these devices can download and read e-books from whichever source they prefer, either by installing a bookstore app (e.g. Kindle, Kobo and the like), using a web browser, or downloading the e-book file directly. There are also e-book readers with an open Linux system; a notable example is the PineNote from Pine64. However, the software ecosystems of these e-readers are usually not as mature as the mainstream options on the market.
In the EU, media products, including paper books, often have a tax reduction, so the VAT for conventional books was often lower than that for e-books. In legal terms, e-books were considered a service, since the purchase was regarded as a temporary lease of the product. E-book prices were therefore often similar to paper book prices, even though e-books cost less to produce. In October 2018, the EU allowed its member countries to charge the same VAT for e-books as for paper books.
Richard Stallman has expressed concern about the perceived loss of freedom or privacy that comes with e-readers, namely the inability to read whatever a reader prefers without the possibility of being tracked.
Positive aspects
E-readers can hold thousands of books limited only by their memory and use the same physical space as a conventional book. Most E-ink displays are not back-illuminated and therefore seem to cause no more eye strain than a traditional book and less eye strain than LCD screens, with a longer battery life. Features such as the ability to adjust font size and spacing can help people who have difficulty reading or dyslexia. Some e-readers link to definitions or translations of key words. Amazon notes that 85% of its e-reader users look up a word while reading.
E-readers can instantly download content from supported public libraries by using apps like OverDrive.
Popular e-readers
Amazon (Global): Kindle, Kindle Paperwhite, Kindle Voyage, Kindle Oasis, Kindle Oasis 2, Kindle Scribe
Barnes & Noble (US/UK): Nook, Nook GlowLight, Nook GlowLight Plus
Bookeen (France): Cybook Opus, Cybook Orizon, Cybook Odyssey, Cybook Odyssey HD FrontLight
Kobo (Global): Kobo Touch, Kobo Glo, Kobo Mini, Kobo Aura, Kobo Aura HD
Onyx Boox (Europe, Russia, China and Vietnam): Onyx Boox Max2, Onyx Boox Note
PocketBook (Europe and Russia): PocketBook Touch, PocketBook Mini, PocketBook Touch Lux, PocketBook Color Lux, PocketBook Aqua
Tolino (Germany): Tolino Shine, Tolino Shine 2 HD, Tolino Vision, Tolino Vision 2
Alternative e-readers devices or platform
Apple: iPad and iPad Mini
Amazon: Fire Tablet
Android based tablet
| Technology | Printing | null |
5523577 | https://en.wikipedia.org/wiki/Foreland%20basin | Foreland basin | A foreland basin is a structural basin that develops adjacent and parallel to a mountain belt. Foreland basins form because the immense mass created by crustal thickening associated with the evolution of a mountain belt causes the lithosphere to bend, by a process known as lithospheric flexure. The width and depth of the foreland basin is determined by the flexural rigidity of the underlying lithosphere, and the characteristics of the mountain belt. The foreland basin receives sediment that is eroded off the adjacent mountain belt, filling with thick sedimentary successions that thin away from the mountain belt. Foreland basins represent an endmember basin type, the other being rift basins. Accommodation (the space available for sediments to be deposited) is provided by loading and downflexure to form foreland basins, in contrast to rift basins, where accommodation space is generated by lithospheric extension.
Types of foreland basin
Foreland basins can be divided into two categories:
Peripheral (Pro) foreland basins, which occur on the plate that is subducted or underthrust during plate collision (i.e. the outer arc of the orogen)
Examples include the North Alpine Foreland Basin of Europe, or the Ganges Basin of Asia
Retroarc (Retro) foreland basins, which occur on the plate that overrides during plate convergence or collision (i.e. situated behind the magmatic arc that is linked with the subduction of oceanic lithosphere)
Examples include the Andean basins, or Late Mesozoic to Cenozoic Rocky Mountain Basins of North America
Foreland basin system
DeCelles & Giles (1996) provide a thorough definition of the foreland basin system. Foreland basin systems comprise three characteristic properties:
An elongate region of potential sediment accommodation that forms on continental crust between a contractional orogenic belt and the adjacent craton, mainly in response to geodynamic processes related to subduction and the resulting peripheral or retroarc fold-thrust belt;
It consists of four discrete depozones, referred to as the wedge-top, foredeep, forebulge and back-bulge depozones (depositional zones) – which of these depozones a sediment particle occupies depends on its location at the time of deposition, rather than its ultimate geometric relationship with the thrust belt;
The longitudinal dimension of the foreland basin system is roughly equal to the length of the fold-thrust belt, and does not include sediment that spills into remnant ocean basins or continental rifts (impactogens).
Foreland basin systems: depozones
The wedge-top sits on top of the moving thrust sheets and contains all the sediments charging from the active tectonic thrust wedge. This is where piggyback basins form.
The foredeep is the thickest sedimentary zone and thickens toward the orogen. Sediments are deposited via distal fluvial, lacustrine, deltaic, and marine depositional systems.
The forebulge and backbulge are the thinnest and most distal zones and are not always present. When present, they are defined by regional unconformities as well as aeolian and shallow-marine deposits.
Sedimentation is most rapid near the moving thrust sheet. Sediment transport within the foredeep is generally parallel to the strike of the thrust fault and basin axis.
Plate motion and seismicity
The motion of the adjacent plates of the foreland basin can be determined by studying the active deformation zone with which it is connected. Today GPS measurements provide the rate at which one plate is moving relative to another. It is also important to consider that present day kinematics are unlikely to be the same as when deformation began. Thus, it is crucial to consider non-GPS models to determine the long-term evolution of continental collisions and in how it helped develop the adjacent foreland basins.
Comparing both modern GPS (Sella et al. 2002) and non-GPS models allows deformation rates to be calculated. Comparing these numbers to the geologic regime helps constrain the number of probable models as well as which model is more geologically accurate within a specific region.
Seismicity determines where active zones of seismic activity occur as well as measure the total fault displacements and the timing of the onset of deformation.
Formation of basins
Foreland basins form because as the mountain belt grows, it exerts a significant mass on the Earth's crust, which causes it to bend, or flex, downwards. This occurs so that the weight of the mountain belt can be compensated by isostasy at the upflex of the forebulge.
The plate tectonic evolution of a peripheral foreland basin involves three general stages. First, the passive margin stage with orogenic loading of previously stretched continental margin during the early stages of convergence. Second, the "early convergence stage defined by deep water conditions", and lastly a "later convergent stage during which a subaerial wedge is flanked with terrestrial or shallow marine foreland basins".
The temperature underneath the orogen is much higher and weakens the lithosphere. Thus, the thrust belt is mobile and the foreland basin system becomes deformed over time. Syntectonic unconformities demonstrate simultaneous subsidence and tectonic activity.
Foreland basins are filled with sediments which erode from the adjacent mountain belt. In the early stages, the foreland basin is said to be underfilled. During this stage, deep water and commonly marine sediments, known as flysch, are deposited. Eventually, the basin becomes completely filled. At this point, the basin enters the overfilled stage and deposition of terrestrial clastic sediments occurs. These are known as molasse. Sediment fill within the foredeep acts as an additional load on the continental lithosphere.
Lithospheric behavior
Although the degree to which the lithosphere relaxes over time is still controversial, most workers accept an elastic or visco-elastic rheology to describe the lithospheric deformation of the foreland basin. Allen & Allen (2005) describe a moving load system, one in which the deflection moves as a wave through the foreland plate ahead of the migrating load. The deflection shape is commonly described as an asymmetrical low close to the load along the foreland and a broader uplifted deflection along the forebulge. The transport rate or flux of erosion, as well as sedimentation, is a function of topographic relief.
For the loading model, the lithosphere is initially stiff, with the basin broad and shallow. Relaxation of the lithosphere then allows subsidence near the thrust, narrowing of the basin, and migration of the forebulge toward the thrust. During times of thrusting, the lithosphere is stiff and the forebulge broadens. The timing of the thrust deformation is opposite that of the relaxing of the lithosphere. The bending of the lithosphere under the orogenic load controls the drainage pattern of the foreland basin, through the flexural tilting of the basin and the sediment supply from the orogen.
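The deflection profile described above can be illustrated with the classical solution for a thin elastic plate flexed by a line load, a textbook end-member rather than any of the specific models cited here. In the sketch below the flexural rigidity, load magnitude and density contrast are arbitrary assumed values; the script reports the foredeep depth, the position of the basin edge, and the small forebulge uplift.

```python
import numpy as np

# Flexure of an infinite elastic plate under a line load V0 at x = 0:
# w(x) = w0 * exp(-x/a) * (cos(x/a) + sin(x/a)), with a the flexural parameter.
# Downward deflection is taken as positive. All input values are illustrative.
D = 1.0e23        # N m, flexural rigidity (assumed)
V0 = 1.0e12       # N per metre along strike, line load from the thrust belt (assumed)
drho = 600.0      # kg/m^3, mantle minus basin-fill density (assumed)
g = 9.81          # m/s^2

a = (4 * D / (drho * g)) ** 0.25          # flexural parameter (m)
w0 = V0 * a**3 / (8 * D)                  # deflection directly under the load (m)

x = np.linspace(0, 6 * a, 1000)
w = w0 * np.exp(-x / a) * (np.cos(x / a) + np.sin(x / a))

print(f"flexural parameter a = {a/1000:.0f} km")
print(f"foredeep depth under the load = {w0:.0f} m")
print(f"basin edge (first zero crossing) at x = {3*np.pi*a/4/1000:.0f} km")
print(f"forebulge crest at x = {np.pi*a/1000:.0f} km, "
      f"uplift = {abs(w[np.argmin(w)]):.1f} m "
      f"(~{100*np.exp(-np.pi):.1f}% of the foredeep depth)")
```

With these values the forebulge uplift is only a few percent of the foredeep depth, consistent with the broad, subdued forebulge described above.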
Lithospheric strength envelopes
Strength envelopes indicate that the rheological structure of the lithosphere underneath the foreland and the orogen are very different. The foreland basin typically shows a thermal and rheological structure similar to a rifted continental margin with three brittle layers above three ductile layers. The temperature underneath the orogen is much higher and thus greatly weakens the lithosphere. According to Zhou et al. (2003), "under compressional stress the lithosphere beneath the mountain range becomes ductile almost entirely, except a thin (about 6 km in the center) brittle layer near the surface and perhaps a thin brittle layer in the uppermost mantle." This lithospheric weakening underneath the orogenic belt may in part cause the regional lithospheric flexure behavior.
Thermal history
Foreland basins are considered to be hypothermal basins (cooler than normal), with low geothermal gradient and heat flow. Heat flow values average between 1 and 2 HFU (40–90 mW m−2). Rapid subsidence may be responsible for these low values.
Over time, sedimentary layers become buried and lose porosity. This can be due to sediment compaction or to physical and chemical changes such as pressure or cementation. Thermal maturation of sediments is a function of temperature and time and occurs at shallower depths due to past heat redistribution by migrating brines.
Vitrinite reflectance, which typically demonstrates an exponential evolution of organic matter as a function of time, is the best organic indicator for thermal maturation. Studies have shown that present day thermal measurements of heat flow and geothermal gradients closely correspond to a regime's tectonic origin and development as well as the lithospheric mechanics.
Fluid migration
Migrating fluids originate from the sediments of the foreland basin and migrate in response to deformation. As a result, brine can migrate over great distances. Evidence of long-range migration includes: 1) correlation of petroleum to distant source rocks, 2) ore bodies deposited from metal-bearing brines, 3) anomalous thermal histories for shallow sediments, 4) regional potassium metasomatism and 5) epigenetic dolomite cements in ore bodies and deep aquifers.
Fluid source
Fluids carrying heat, minerals, and petroleum, have a vast impact on the tectonic regime within the foreland basin. Before deformation, sediment layers are porous and full of fluids, such as water and hydrated minerals. Once these sediments are buried and compacted, the pores become smaller and some of the fluids, about , leave the pores. This fluid has to go somewhere. Within the foreland basin, these fluids potentially can heat and mineralize materials, as well as mix with the local hydrostatic head.
Major driving force for fluid migration
Orogen topography is the major driving force of fluid migration. The heat from the lower crust moves via conduction and groundwater advection. Local hydrothermal areas occur when deep fluid flow moves very quickly. This can also explain very high temperatures at shallow depths.
Other minor constraints include tectonic compression, thrusting, and sediment compaction. These are considered minor because they are limited by the slow rates of tectonic deformation, lithology and depositional rates, on the order of 0–10 cm yr−1, but more likely closer to 1 or less than 1 cm yr−1. Overpressured zones might allow for faster migration, when 1 kilometer or more of shaley sediments accumulate per 1 million years.
Bethke & Marshak (1990) state that "groundwater that recharges at high elevation migrates through the subsurface in response to its high potential energy toward areas where the water table is lower."
Hydrocarbon migration
Bethke & Marshak (1990) explain that petroleum migrates not only in response to the hydrodynamic forces that drive groundwater flow, but to the buoyancy and capillary effects of the petroleum moving through microscopic pores. Migration patterns flow away from the orogenic belt and into the cratonic interior. Frequently, natural gas is found closer to the orogen and oil is found further away.
Modern (Cenozoic) foreland basin systems
Asia
Ganges Basin
Pro-foreland to the south of the Himalaya, in northern India and Pakistan
Began to form 65 million years ago during the collision of India and Eurasia
Filled with a sedimentary succession more than 12 km thick
Northern Tarim Basin
Pro-foreland to the south of the Tian Shan
Formed initially during the Late Paleozoic, during the Carboniferous and Devonian
Rejuvenated during the Cenozoic as a result of far field stress associated with the India-Eurasia collision and the renewed uplift of the Tian Shan
Thickest sedimentary section is beneath Kashgar, where Cenozoic sediment is more than 10,000 metres thick
Southern Junggar Basin
Retro-foreland to the north of the Tian Shan
Formed initially during the Late Paleozoic and rejuvenated during the Cenozoic
Thickest sedimentary section is west of Urumqi, where Mesozoic sediment is more than 8,000 metres thick
Middle East
Persian Gulf
Foreland to the west of the Zagros mountains
Underfilled stage
Terrestrial part of the basin covers parts of Iraq and Kuwait
Europe
North Alpine Basin (the Molasse Basin)
Peripheral foreland basin to the north of the Alps, in Austria, Switzerland, Germany and France.
Formed during the Palaeocene to Neogene (65.5–2.6 Ma) convergence and collision between Eurasia and the Adriatic Plate
Complications arise in the formation of the Rhine Graben
Po Basin, northern Italy.
Retro-foreland basin of the Western and Central Southern Alps and pro-foreland of the Northern Apennines. It developed through extensional phases followed by compressional stages. Its compressional architecture is overprinted on the inherited extensional framework.
The compressional architecture "developed intermittently at the front of two different mountain chains, the Northern Apennines and the Southern Alps, progressively converging one towards the other."
There were two extensional cycles: a) eastward pre-rift extension culminating in the Anisian to Carnian (Middle to early Late Triassic, 247–227 Ma) cycle, with formation of the carbonate platform and basin system; b) Late Triassic–Liassic syn-rift extension phases related to the spreading of the Piedmont-Liguria and Ionian oceanic basins. After this, maximum basin widening and deepening was reached with progressive formation of the Lombardian, Belluno, and Adriatic carbonate basins.
Veneto-Friuli foreland basin, an alluvial plain in north-eastern Italy.
Developed as the result of superposition of three overlapping foreland systems which differed in age and tectonic movement direction as this plain is the foreland of three surrounding chains. These are: a) the External Dinarides to the East, with Late Palaeocene to the Middle Eocene WSW vergent main deformation phases; b) the Eastern Southern Alps to the north, with mostly Middle-Late Miocene (17–7 Ma) deformation and south-directed tectonic movement; c) the Northern Apennines to the southwest, with Plio-Pleistocene (5 Ma-Recent) NE-directed deformation.
It is separated from the Central Western Alps and its foreland (the Po foreland basin) by the Lessini and Berici Mountains and Euganei Hills structural high, a relatively undeformed foreland block.
Flexure began in the Late Cretaceous with a faint eastward bending due to the build-up of the External Dinaric thrust belt. There followed two main depositional/flexure cycles: a) the Chattian-Langhian cycle (Late Oligocene-Middle Miocene, 28–14 Ma), with a weak northward bending that accommodated sediments mainly from the uplifted and eroded axial sector of the Alps; b) the Serravallian-Early Messinian cycle (Middle to Late Miocene), with a prominent NNW-ward bending due to quick uplift of the Southern Alps. In the Pliocene-Pleistocene only the south-western-most part (southern part of the Veneto Basin) bent towards the SW as a result of the Northern Apennines build-up.
Central and Southern Adriatic basins
Located between Italy and the Balkan Peninsula. It includes the Adriatic Sea, Istria, the Gargano Promontory and the Apulian Peninsula.
Formed by two orogenies, the Dinarides orogeny (latest Cretaceous, 75–66 Ma, to Eocene, 56–34 Ma) and the Apennine orogeny (Miocene to Pliocene, 23–2.6 Ma). It is connected to the Po Basin.
Foreland basins of the Carpathian Mountains
Carpathian Foredeep
Continuation of North Alpine Molasse Basin to the Western Carpathians, located in southern Poland and western Ukraine.
East Carpathian Foreland Basin
The foreland basin of the Eastern Carpathians which extends through southern Poland, western Ukraine, Moldova and Romania and is 800 km long. In the late Miocene to early Pliocene it was an important sediment supplier to the Dacian Basin and the Black Sea.
Dacian Basin
This is a foreland basin by the Romanian section of the Eastern Carpathians and the Southern Carpathians (also in Romania). It is a post-collisional basin which developed in the Messinian to Pliocene (7–2.6 Ma). Initially the sedimentation from this basin was mostly just in a pre-existent foredeep area. Subsequently it extended southward over the northern part of the Moesian Platform and a part of the Scythian platform.
Ebro Basin
Peripheral foreland basin to the south of the Pyrenees, in northern Spain
Substantial deformation of the foreland basin has occurred in the north, exemplified by the foreland fold and thrust belt in the western Catalan province. The basin is well known for the spectacular exposures of syn- and post-tectonic sediment strata due to the peculiar drainage evolution of the basin.
Guadalquivir Basin
Formed during the Neogene north of the Betic Cordillera (southern Spain), on a Hercynian basement.
Aquitaine Basin
Retro-foreland basin to the north of the Pyrenees, in southern France
North America
Western Canadian Sedimentary Basin
Foreland to the east of the Rocky Mountains, Alberta
South America
Andean foreland basins
Caguán-Putumayo Basin
Cesar-Ranchería Basin
Llanos Basin
Magallanes Basin
Marañón Basin
Middle Magdalena Valley
Neuquén Basin
Oriente Basin
Ucayali Basin
Upper Magdalena Valley
Ancient foreland basin systems
Asia
Longmen Shan Basin
Foreland to the east of the Longmen Shan mountains
Peak evolution during the Triassic to Jurassic
Urals Foreland
Foreland to the west of the Ural Mountains, in Russia
Formed during the Paleozoic
Europe
Windermere Supergroup
Foreland basin caused by subduction of Iapetus ocean under Avalonia
Ordovician to Silurian in age
Underlies most of England
North America
Western Interior Basin
Foreland to the east of the Sevier orogenic belt
Covered most of the western and central Northwest Territories; western and central Alberta; central and eastern Montana; Wyoming; central and eastern Utah; Colorado; central and eastern New Mexico; western Texas; eastern Chihuahua; Coahuila; eastern Durango; northern Zacatecas; Aguascalientes; eastern and central Guanajuato; western San Luis Potosí; Querétaro; and all but the western edge of Michoacán
Evolved during the Cretaceous
Deepest parts of the basin filled with the Mancos Shale
Most of the Bighorn Basin filled with the Thermopolis Shale
Appalachian Basin
Foreland to the west of the Appalachian mountains, in Eastern United States
Bend Arch – Fort Worth Basin
Pro-Foreland to the east of the Ouachita orogenic belt
Formed during the Paleozoic
South America
Foreland to the east of the Central Andes orogenic belt – The Southern Chaco Foreland Basin in northern Argentina
| Physical sciences | Tectonics | Earth science |
5526595 | https://en.wikipedia.org/wiki/Power%20hammer | Power hammer | Power hammers are mechanical forging hammers that use an electrical power source or steam to raise the hammer preparatory to striking, and accelerate it onto the work being hammered. They are also called open die power forging hammers. They have been used by blacksmiths, bladesmiths, metalworkers, and manufacturers since the late 1880s, having replaced trip hammers.
Design and operation
A typical power hammer consists of a frame, an anvil, and a reciprocating ram holding a hammer head or die. The workpiece is placed on the lower anvil or die and the head or upper die strikes the workpiece. The power hammer is a direct descendant of the trip hammer, differing in that the power hammer stores potential energy in an arrangement of mechanical linkages and springs, in compressed air, or steam, and by the fact that it accelerates the ram on the downward stroke. This provides more force than simply allowing the weight to fall. Predecessors like trip hammers, steam drop hammers, board or strap hammers, used the power source to raise the ram or hammer head, but let it fall solely under gravity.
Power hammers are rated by weight of moving parts that act directly on the work piece. This includes the weight of the parts that may consist of upper die, ram, mechanical linkage arms and spring(s) or ram, piston, and associated connecting rod(s). Specific design elements are dictated by the power source. The largest power hammer was powered by steam and was rated at .
Types
Power hammers are generally categorized by their power source.
Steam
Steam hammers use steam to drive the hammer. These tended to be the largest models as the great energy of steam was needed to operate them. A locomotive works was one location where such large hammers were needed and the workpieces were sometimes so large it required an overhead crane and several men to position the piece in the hammer, and a man to operate the machine.
Mechanical
These hammers tended to be smaller and were operated by a single man both holding the workpiece and operating the machine. The majority of these mechanical linkage machines were powered by line shaft flat belt systems or later electric motors that rotated a crank on the machine that drove the ram.
Air
Air-power hammers use pneumatics to drive the hammer.
History
Steam and mechanical power hammers were made into the middle of the 20th century in the United States. At the end of the 19th century the mechanical power hammer became popular in smaller blacksmith and repair shops. These machines were typically rated between of falling weight. Many may still be seen in use in small manufacturing and artist-blacksmith shops today. In the middle of the 20th century power hammers driven by compressed air began to gain popularity and several manufacturers are currently producing these hammers today.
| Technology | Metallurgy | null |
5529757 | https://en.wikipedia.org/wiki/Fundamental%20thermodynamic%20relation | Fundamental thermodynamic relation | In thermodynamics, the fundamental thermodynamic relation comprises four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium in the following way:
dU = T dS − P dV.
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume.
This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as
dH = T dS + V dP,
in terms of the Helmholtz free energy F as
dF = −S dT − P dV,
and in terms of the Gibbs free energy G as
dG = −S dT + V dP.
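As an informal check of dU = T dS − P dV, the sketch below differentiates the internal energy of a monatomic ideal gas written as a function of S and V. The functional form U(S, V) ∝ V^(−2/3) exp(2S/(3N k_B)) is a standard textbook expression assumed here, not something stated in this article.

```python
import sympy as sp

S, V, N, kB, C = sp.symbols('S V N k_B C', positive=True)

# Monatomic ideal gas internal energy as a function of its natural variables S and V
# (the constant C absorbs particle mass and Planck's constant).
U = C * V**sp.Rational(-2, 3) * sp.exp(2 * S / (3 * N * kB))

T = sp.diff(U, S)       # from dU = T dS - P dV:  T = (dU/dS)_V
P = -sp.diff(U, V)      #                          P = -(dU/dV)_S

# The derivatives reproduce the familiar ideal-gas results.
print(sp.simplify(U - sp.Rational(3, 2) * N * kB * T))   # U = (3/2) N kB T  ->  0
print(sp.simplify(P * V - N * kB * T))                   # P V = N kB T      ->  0
```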
The first and second laws of thermodynamics
The first law of thermodynamics states that:
dU = δQ − δW,
where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.
According to the second law of thermodynamics we have for a reversible process:
dS = δQ / T.
Hence:
δQ = T dS.
By substituting this into the first law, we have:
dU = T dS − δW.
Letting δW = P dV be the reversible pressure-volume work done by the system on its surroundings,
we have:
dU = T dS − P dV.
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend on only the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:
dU = T dS − P dV + Σ_i μ_i dN_i.
The μ_i are the chemical potentials corresponding to particles of type i.
If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to
dU = T dS + Σ_j X_j dx_j.
Here the X_j are the generalized forces corresponding to the external parameters x_j. (The negative sign used with pressure is unusual and arises because pressure represents a compressive stress that tends to decrease volume. Other generalized forces tend to increase their conjugate displacements.)
Relationship to statistical mechanics
The fundamental thermodynamic relation and statistical mechanical principles can be derived from one another.
Derivation from statistical mechanical principles
The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.
However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy E is:
S = k log Ω(E),
where k is the Boltzmann constant and Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size δE.
Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:
dS = δQ / T.
The fundamental assumption of statistical mechanics is that all the Ω(E) states at a particular energy are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:
1/T ≡ dS/dE.
This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:
X = −dE_r/dx.
Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:
X = −⟨dE_r/dx⟩.
To evaluate the average, we partition the energy eigenstates by counting how many of them have a value for within a range between and . Calling this number , we have:
The average defining the generalized force can now be written:
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between and . Let's focus again on the energy eigenstates for which lies within the range between and . Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are
such energy eigenstates. If , all these energy eigenstates will move into the range between and and contribute to an increase in . The number of energy eigenstates that move from below to above is, of course, given by . The difference
is thus the net contribution to the increase in . Note that if Y dx is larger than there will be energy eigenstates that move from below to above . They are counted in both and , therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:
The logarithmic derivative of with respect to x is thus given by:
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:
(∂S/∂x)_E = X/T.
Combining this with
(∂S/∂E)_x = 1/T
gives:
dS = (∂S/∂E)_x dE + (∂S/∂x)_E dx = dE/T + (X/T) dx,
which we can write as:
dE = T dS − X dx.
Derivation of statistical mechanical principles from the fundamental thermodynamic relation
It has been shown that the fundamental thermodynamic relation together with the following three postulates
is sufficient to build the theory of statistical mechanics without the equal a priori probability postulate.
For example, in order to derive the Boltzmann distribution, we assume the probability density of microstate satisfies . The normalization factor (partition function) is therefore
The entropy is therefore given by
If we change the temperature by while keeping the volume of the system constant, the change of entropy satisfies
where
Considering that
we have
From the fundamental thermodynamic relation, we have
Since we kept constant when perturbing , we have . Combining the equations above, we have
Physics laws should be universal, i.e., the above equation must hold for arbitrary systems, and the only way for this to happen is
That is
It has been shown that the third postulate in the above formalism can be replaced by the following:
However, the mathematical derivation will be much more complicated.
| Physical sciences | Thermodynamics | Physics |
5530147 | https://en.wikipedia.org/wiki/Herbig%20Ae/Be%20star | Herbig Ae/Be star | A Herbig Ae/Be star (HAeBe) is a pre-main-sequence star – a young star of spectral types A or B. These stars are still embedded in gas-dust envelopes and are sometimes accompanied by circumstellar disks. Hydrogen and calcium emission lines are observed in their spectra. They are 2–8 solar mass objects, still existing in the star formation (gravitational contraction) stage and approaching the main sequence (i.e. they are not yet burning hydrogen in their center).
Description
In the Hertzsprung–Russell diagram, Herbig Ae/Be stars are located to the right of the main sequence. They are named after the American astronomer George Herbig, who first distinguished them from other stars in 1960.
The original Herbig criteria were:
Spectral type earlier than F0 (in order to exclude T Tauri stars),
Balmer emission lines in the stellar spectrum (in order to be similar to T Tauri stars),
Projected location within the boundaries of a dark interstellar cloud (in order to select really young stars near their birthplaces),
Illumination of a nearby bright reflection nebula (in order to guarantee physical link with star formation region).
There are now several known isolated Herbig Ae/Be stars (i.e. not connected with dark clouds or nebulae). Thus the most reliable criteria now can be:
Spectral type earlier than F0,
Balmer emission lines in the stellar spectrum,
Infrared radiation excess (in comparison with normal stars) due to circumstellar dust (in order to distinguish from classical Be stars, which have infrared excess due to free-free emission).
Sometimes Herbig Ae/Be stars show significant brightness variability. They are believed to be due to clumps (protoplanets and planetesimals) in the circumstellar disk. In the lowest brightness stage the radiation from the star becomes bluer and linearly polarized (when the clump obscures direct star light, scattered from disk light relatively increases – it is the same effect as the blue color of our sky).
Analogs of Herbig Ae/Be stars in the smaller mass range (<2 solar masses) – F, G, K, M spectral type pre-main-sequence stars – are called T Tauri stars. More massive (>8 solar masses) stars in the pre-main-sequence stage are not observed, because they evolve very quickly: by the time they become visible (i.e. once the surrounding circumstellar gas and dust cloud has dispersed), the hydrogen in the center is already burning and they are main-sequence objects.
Planets
Planets around Herbig Ae/Be stars include:
HD 95086 b around an A-type star
HD 100546 b around a B-type star
Gallery
| Physical sciences | Stellar astronomy | Astronomy |
7205947 | https://en.wikipedia.org/wiki/Young%20stellar%20object | Young stellar object | Young stellar object (YSO) denotes a star in its early stage of evolution. This class consists of two groups of objects: protostars and pre-main-sequence stars.
Classification by spectral energy distribution
A star forms by accumulation of material that falls in to a protostar from a circumstellar disk or envelope. Material in the disk is cooler than the surface of the protostar, so it radiates at longer wavelengths of light producing excess infrared emission. As material in the disk is depleted, the infrared excess decreases. Thus, YSOs are usually classified into evolutionary stages based on the slope of their spectral energy distribution in the mid-infrared, using a scheme introduced by Lada (1987). He proposed three classes (I, II and III), based on the values of intervals of the spectral index α:
α = d log(λ F_λ) / d log(λ).
Here λ is wavelength, and F_λ is flux density.
The index α is calculated in the wavelength interval of 2.2–20 μm (near- and mid-infrared region). Andre et al. (1993) discovered a class 0: objects with strong submillimeter emission, but very faint at infrared wavelengths. Greene et al. (1994) added a fifth class of "flat spectrum" sources. The class boundaries are listed below, and a short computational sketch of the classification follows the list.
Class 0 sources – undetectable at
Class I sources have α > 0.3
Flat spectrum sources have 0.3 > α > −0.3
Class II sources have −0.3 > α > −1.6
Class III sources have α < −1.6
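A minimal sketch of how the index α and the resulting class could be computed from two flux-density measurements. The boundary values 0.3, −0.3 and −1.6 are the commonly quoted Greene et al. (1994) limits, and the example fluxes below are invented for illustration.

```python
import math

def spectral_index(lam1_um, f1, lam2_um, f2):
    """Slope of log(lambda * F_lambda) versus log(lambda) between two wavelengths."""
    x1, x2 = math.log10(lam1_um), math.log10(lam2_um)
    y1, y2 = math.log10(lam1_um * f1), math.log10(lam2_um * f2)
    return (y2 - y1) / (x2 - x1)

def lada_class(alpha):
    # Commonly quoted boundaries, after Greene et al. (1994).
    if alpha >= 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "Flat spectrum"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"

# Hypothetical F_lambda values (arbitrary units) at 2.2 and 20 micrometres.
alpha = spectral_index(2.2, 1.0, 20.0, 0.8)
print(f"alpha = {alpha:+.2f} -> {lada_class(alpha)}")
```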
This classification schema roughly reflects evolutionary sequence. It is believed that most deeply embedded Class 0 sources evolve towards Class I stage, dissipating their circumstellar envelopes. Eventually they become optically visible on the stellar birthline as pre-main-sequence stars.
Class II objects have circumstellar disks and correspond roughly to classical T Tauri stars, while Class III stars have lost their disks and correspond approximately to weak-line T Tauri stars. An intermediate stage where disks can only be detected at longer wavelengths (e.g., at ) are known as transition-disk objects.
Characteristics
YSOs are also associated with early star evolution phenomena: jets and bipolar outflows, disk winds, masers, Herbig–Haro objects, and protoplanetary disks (circumstellar disks or proplyds).
Classification of YSOs by mass
These stars may be differentiated by mass: Massive YSOs, intermediate-mass YSOs, and brown dwarfs.
Gallery
| Physical sciences | Stellar astronomy | Astronomy |
1546365 | https://en.wikipedia.org/wiki/Barreleye | Barreleye | Barreleyes, also known as spook fish (a name also applied to several species of chimaera), are small deep-sea argentiniform fish comprising the family Opisthoproctidae found in tropical-to-temperate waters of the Atlantic, Pacific, and Indian Oceans.
These fish are named because of their barrel-shaped, tubular eyes, which are generally directed upwards to detect the silhouettes of available prey; however, the fish are capable of directing their eyes forward, as well. The family name Opisthoproctidae is derived from the Ancient Greek words opisthe 'behind' and proktos 'anus'.
Description
The morphology of the Opisthoproctidae varies between three main forms: the stout, deep-bodied barreleyes of the genera Opisthoproctus and Macropinna, the extremely slender and elongated spookfishes of the genera Dolichopteryx and Bathylychnops, and the intermediate fusiform spookfishes of the genera Rhynchohyalus and Winteria.
All species have large, telescoping eyes, which dominate and protrude from the head, but are enclosed within a large transparent dome of soft tissue. These eyes generally gaze upwards, but can also be directed forwards. The opisthoproctid eye has a large lens and a retina with an exceptionally high complement of rod cells and a high density of rhodopsin (the "visual purple" pigment); no cone cells are present. To better serve their vision, barreleyes have large, dome-shaped, transparent heads; this presumably allows the eyes to collect even more incident light and likely protects the sensitive eyes from the nematocysts (stinging cells) of the siphonophores, from which the barreleye is believed to steal food. It may also serve as an accessory lens (modulated by intrinsic or peripheral muscles), or refract light with an index very close to seawater. Dolichopteryx longipes is the only vertebrate known to use a mirror (as well as a lens) in its eyes for focusing images.
The toothless mouth is small and terminal, ending in a pointed snout. As in related families (e.g. Argentinidae), an epibranchial or crumenal organ is present behind the fourth gill arch. This organ—analogous to the gizzard—consists of a small diverticulum wherein the gill rakers insert and interdigitate for the purpose of grinding up ingested material. The living body of most species is a dark brown, covered in large, silvery imbricate scales, but these are absent in Dolichopteryx, leaving the body itself a transparent white. In all species, a variable number of dark melanophores colour the muzzle, ventral surface, and midline.
Also present in Dolichopteryx, Opisthoproctus, and Winteria species are a number of luminous organs; Dolichopteryx has several along the length of its belly, and Opisthoproctus has a single organ in the form of a rectal pouch. These organs glow with a weak light due to the presence of symbiotic bioluminescent bacteria, specifically, Photobacterium phosphoreum (family Vibrionaceae). The ventral surfaces of Opisthoproctus species are characterised by a flattened and projecting 'sole'; in the mirrorbelly (Opisthoproctus grimaldii) and Opisthoproctus soleatus, this sole may act as a reflector, by directing the emitted light downwards. The strains of P. phosphoreum present in the two Opisthoproctus species have been isolated and cultured in the lab. Through restriction fragment length polymorphism analysis, the two strains have been shown to differ only slightly.
In all species, the fins are spineless and fairly small; in Dolichopteryx however, the pectoral fins are greatly elongated and wing-like, extending about half the body's length, and are apparently used for stationkeeping in the water column. The pectoral fins are inserted low on the body, and in some species, the pelvic fins are inserted ventrolaterally rather than strictly ventrally. Several species also possess either a ventral or dorsal adipose fin, and the caudal fin is forked to emarginated. The anal fin is either present or greatly reduced, and may not be externally visible; it is strongly retrorse in Opisthoproctus. A single dorsal fin originates slightly before or directly over the anal fin. A perceptible hump in the back begins just behind the head. The gas bladder is absent in most species, and the lateral line is uninterrupted. The branchiostegal rays (bony rays supporting the gill membranes behind the lower jaw) number two to four. The javelin spookfish (Bathylychnops exilis) is by far the largest species at standard length; most other species are under .
Life cycle
Barreleyes inhabit moderate depths, from the mesopelagic to bathypelagic zone, circa 400–2,500 m deep. They are presumably solitary and do not undergo diel vertical migrations; instead, barreleyes remain just below the limit of light penetration and use their sensitive, upward-pointing tubular eyes—adapted for enhanced binocular vision at the expense of lateral vision—to survey the waters above. The high number of rods in their eyes' retinae allows barreleyes to resolve the silhouettes of objects overhead in the faintest of ambient light (and to accurately distinguish bioluminescent light from ambient light), and their binocular vision allows the fish to accurately track and home in on small zooplankton such as hydroids, copepods, and other pelagic crustaceans. The distribution of some species coincides with the isohaline and isotherm layers of the ocean; for example, in Opisthoproctus soleatus, upper distribution limits coincide with the 400-m isotherm for .
What little is known of barreleye reproduction indicates they are pelagic spawners; that is, eggs and sperm are released en masse directly into the water. The fertilized eggs are buoyant and planktonic; the larvae and juveniles drift with the currents—likely at much shallower depths than the adults—and upon metamorphosis into adult form, they descend to deeper waters. Dolichopteryx species are noted for their paedomorphic features, the result of neoteny (the retention of larval characteristics).
The bioluminescent organs of Dolichopteryx and Opisthoproctus, together with the reflective soles of the latter, may serve as camouflage in the form of counterillumination. This predator avoidance strategy involves the use of ventral light to break up the fishes' silhouettes, so that (when viewed from below) they blend in with the ambient light from above. Counterillumination is also seen in several other unrelated deep-sea families, which include the marine hatchetfish (Sternoptychidae). Also found in marine hatchetfish and other unrelated families are tubular eyes, such as telescopefish and tube-eye.
| Biology and health sciences | Osmeriformes and relatives | Animals |
1546516 | https://en.wikipedia.org/wiki/Sailfish | Sailfish | The sailfish is one of two species of marine fish in the genus Istiophorus, which belong to the family Istiophoridae (marlins). They are predominantly blue to gray in colour and have a characteristically large dorsal fin known as the sail, which often stretches the entire length of the back. Another notable characteristic is the elongated rostrum (bill) consistent with that of other marlins and the swordfish, which together constitute what are known as billfish in sport fishing circles. Sailfish live in warmer pelagic waters of all Earth's oceans, and hold the record for the highest speed of any marine animal.
Species
There is a dispute based on the taxonomy of the sailfish, and either one or two species have been recognized. No differences have been found in mtDNA, morphometrics or meristics between the two supposed species and most authorities now only recognize a single species, Istiophorus platypterus, found in warmer oceans around the world. FishBase continues to recognize two species:
Atlantic sailfish (I. albicans).
Indo-Pacific sailfish (I. platypterus).
Description
Considered by many scientists the fastest fish in the ocean, sailfish grow quickly, reaching in length in a single year, and feed on the surface or at middle depths on smaller pelagic forage fish and squid. Sailfish were previously estimated to reach maximum swimming speeds of , but research published in 2015 and 2016 indicates that sailfish do not exceed speeds between . During predator–prey interactions, sailfish reached burst speeds of and did not surpass .
Generally, sailfish do not grow to more than in length and rarely weigh over .
Some sources indicate that sailfish are capable of changing colours as a method of confusing prey, displaying emotion, and/or communicating with other sailfish.
Sailfish have been documented attacking humans in self-defense; a sailfish stabbed a woman in the groin when her party tried to catch it.
Hunting behaviour
Sailfish have been reported to use their bills for hitting schooling fish by tapping (short-range movement) or slashing (horizontal large-range movement) at them.
The sail is normally kept folded down when swimming and only raised when the sailfish attack their prey. The raised sail has been shown to reduce sideways oscillations of the head, which is likely to make the bill less detectable by prey fish. This strategy allows sailfish to put their bills close to fish schools or even into them without being noticed by the prey before hitting them.
Sailfish usually attack one at a time, and the small teeth on their bills inflict injuries on their prey fish in terms of scale and tissue removal. Typically, about two prey fish are injured during a sailfish attack, but only 24% of attacks result in capture. As a result, injured fish increase in number over time in a fish school under attack. Given that injured fish are easier to catch, sailfish benefit from the attacks of their conspecifics but only up to a particular group size. A mathematical model showed that sailfish in groups of up to 70 individuals should gain benefits in this way. The underlying mechanism was termed proto-cooperation because it does not require any spatial coordination of attacks and could be a precursor to more complex forms of group hunting.
The bill movement of sailfish during attacks on fish is usually either to the left or to the right side. Identification of individual sailfish based on the shape of their dorsal fins identified individual preferences for hitting to the right or left side. The strength of this side preference was positively correlated with capture success. These side-preferences are believed to be a form of behavioural specialization that improves performance. However, a possibility exists that sailfish with strong side preferences could become predictable to their prey because fish could learn after repeated interactions in which direction the predator will hit. Given that individuals with right- and left-sided preferences are about equally frequent in sailfish populations, living in groups possibly offers a way out of this predictability. The larger the sailfish group, the greater the possibility that individuals with right- and left-sided preferences are about equally frequent. Therefore, prey fish should find it hard to predict in which direction the next attack will take place. Taken together, these results suggest a potential novel benefit of group hunting which allows individual predators to specialize in their hunting strategy without becoming predictable to their prey.
The injuries that sailfish inflict on their prey appear to reduce their swimming speeds, with injured fish being more frequently found in the back (compared with the front) of the school than uninjured ones. When a sardine school is approached by a sailfish, the sardines usually turn away and flee in the opposite direction. As a result, the sailfish usually attacks sardine schools from behind, putting at risk those fish that are the rear of the school because of their reduced swimming speeds.
Habitat
The sailfish is an epipelagic and oceanic species and shows a strong tendency to approach continental coasts, islands and reefs in tropical and temperate waters of the Pacific and Indian oceans.
Sailfish in some areas are reliant on coral reefs as areas for feeding and breeding. As witnessed in the Persian Gulf, the disappearance of coral reefs in a sailfish's habitat may be followed by the disappearance of the species from that area.
Predators
When freshly hatched, sailfish are hunted by other fishes that mainly survive on eating plankton. The size of their predators increases as they grow, and adult sailfish are not eaten by anything other than larger predatory fish like open ocean shark species and orcas.
Timeline
| Biology and health sciences | Acanthomorpha | Animals |
1549177 | https://en.wikipedia.org/wiki/Chino%20cloth | Chino cloth | Chino cloth is a twill fabric originally made from pure cotton. The most common items made from it, trousers, are widely called chinos. Today it is also found in cotton-synthetic blends.
Developed in the mid-19th century for British and French military uniforms, it has since migrated into civilian wear. Trousers of such a fabric gained popularity in the U.S. when Spanish–American War veterans returned from the Philippines with their twill military trousers.
Etymology
It is unknown why American veterans called the trousers "chinos". It is theorized that the cloth or the trousers were made in China.
The American Heritage Dictionary says that the word is from American Spanish chino, literally "toasted", in reference to its usual color. But this is not a usual meaning of the Spanish word.
History
First designed to be used in the military, chino fabric was originally made to be simple, durable and comfortable for soldiers to wear; the use of natural earth-tone colors also began the move towards camouflage, instead of the brightly colored tunics used prior. The British and United States armies started wearing it as standard during the latter half of the 1800s.
The all-cotton fabric is widely used for trousers, referred to as chinos. The original khaki (light brown) is the traditional and most popular color, but chinos are made in many shades.
| Technology | Fabrics and fibers | null |
1551135 | https://en.wikipedia.org/wiki/Absorption%20band | Absorption band | In spectroscopy, an absorption band is a range of wavelengths, frequencies or energies in the electromagnetic spectrum that are characteristic of a particular transition from initial to final state in a substance.
According to quantum mechanics, atoms and molecules can only hold certain defined quantities of energy, or exist in specific states. When such quanta of electromagnetic radiation are emitted or absorbed by an atom or molecule, energy of the radiation changes the state of the atom or molecule from an initial state to a final state.
Overview
When electromagnetic radiation is absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. The number of states in a specific energy range is discrete for gaseous or diluted systems, with discrete energy levels. Condensed systems, like liquids or solids, have a continuous density of states distribution and often possess continuous energy bands. In order for a substance to change its energy it must do so in a series of "steps" by the absorption of a photon. This absorption process can move a particle, like an electron, from an occupied state to an empty or unoccupied state. It can also move a whole vibrating or rotating system, like a molecule, from one vibrational or rotational state to another or it can create a quasiparticle like a phonon or a plasmon in a solid.
Electromagnetic transitions
When a photon is absorbed, the electromagnetic field of the photon disappears as it initiates a change in the state of the system that absorbs the photon. Energy, momentum, angular momentum, magnetic dipole moment and electric dipole moment are transported from the photon to the system. Because there are conservation laws that have to be satisfied, the transition has to meet a series of constraints. This results in a series of selection rules: it is not possible to make just any transition that lies within the energy or frequency range that is observed.
The strength of an electromagnetic absorption process is mainly determined by two factors. First, transitions that only change the magnetic dipole moment of the system are much weaker than transitions that change the electric dipole moment, and transitions to higher-order moments, like quadrupole transitions, are weaker than dipole transitions. Second, not all transitions have the same transition matrix element, absorption coefficient or oscillator strength.
For some types of bands or spectroscopic disciplines temperature and statistical mechanics plays an important role. For (far) infrared, microwave and radio frequency ranges the temperature dependent occupation numbers of states and the difference between Bose-Einstein statistics and Fermi-Dirac statistics determines the intensity of observed absorptions. For other energy ranges thermal motion effects, like Doppler broadening may determine the linewidth.
Band and line shape
A wide variety of absorption band and line shapes exist, and the analysis of the band or line shape can be used to determine information about the system that causes it. In many cases it is convenient to assume that a narrow spectral line is a Lorentzian or Gaussian, depending respectively on the decay mechanism or temperature effects like Doppler broadening. Analysis of the spectral density and the intensities, width and shape of spectral lines sometimes can yield a lot of information about the observed system like it is done with Mössbauer spectra.
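For illustration, the two standard profiles mentioned above can be compared directly when normalised to unit area and given the same full width at half maximum; the width, centre and evaluation grid below are arbitrary choices.

```python
import numpy as np

def lorentzian(x, x0, fwhm):
    """Area-normalised Lorentzian profile."""
    gamma = fwhm / 2.0                       # half width at half maximum
    return (gamma / np.pi) / ((x - x0) ** 2 + gamma ** 2)

def gaussian(x, x0, fwhm):
    """Area-normalised Gaussian profile."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 20001)              # arbitrary frequency/energy axis
L, G = lorentzian(x, 0.0, 2.0), gaussian(x, 0.0, 2.0)

# Both integrate to roughly 1 on this window, but the Lorentzian has far heavier wings
# (part of its area lies outside the plotted range).
print(np.trapz(L, x), np.trapz(G, x))
print(L[-1] / G[-1])                         # wing intensity ratio at x = 10
```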
In systems with a very large number of states, like macromolecules and large conjugated systems, the separate energy levels can't always be distinguished in an absorption spectrum. If the line broadening mechanism is known and the shape of the spectral density is clearly visible in the spectrum, it is possible to get the desired data. Sometimes it is enough to know the lower or upper limits of the band or its position for an analysis.
For condensed matter and solids the shape of absorption bands are often determined by transitions between states in their continuous density of states distributions. For crystals, the electronic band structure determines the density of states. In fluids, glasses and amorphous solids, there is no long range correlation and the dispersion relations are isotropic. For charge-transfer complexes and conjugated systems, the band width is complicated by a variety of factors, compared to condensed matter.
Types
Electronic transitions
Electromagnetic transitions in atoms, molecules and condensed matter mainly take place at energies corresponding to the UV and visible part of the spectrum. Core electrons in atoms, and many other phenomena, are observed with different brands of XAS in the X-ray energy range. Electromagnetic transitions in atomic nuclei, as observed in Mössbauer spectroscopy, take place in the gamma ray part of the spectrum. The main factors that cause broadening of the spectral line into an absorption band of a molecular solid are the distributions of vibrational and rotational energies of the molecules in the sample (and also those of their excited states). In solid crystals the shape of absorption bands are determined by the density of states of initial and final states of electronic states or lattice vibrations, called phonons, in the crystal structure. In gas phase spectroscopy, the fine structure afforded by these factors can be discerned, but in solution-state spectroscopy, the differences in molecular micro environments further broaden the structure to give smooth bands. Electronic transition bands of molecules may be from tens to several hundred nanometers in breadth.
Vibrational transitions
Vibrational transitions and optical phonon transitions take place in the infrared part of the spectrum, at wavelengths of around 1-30 micrometres.
Rotational transitions
Rotational transitions take place in the far infrared and microwave regions.
Other transitions
Absorption bands in the radio frequency range are found in NMR spectroscopy. The frequency ranges and intensities are determined by the magnetic moment of the nuclei that are observed, the applied magnetic field and temperature occupation number differences of the magnetic states.
Applications
Materials with broad absorption bands are being applied in pigments, dyes and optical filters. Titanium dioxide, zinc oxide and chromophores are applied as UV absorbers and reflectors in sunscreen.
Absorption bands of interest to the atmospheric physicist
In oxygen:
the Hopfield bands, very strong, between about 67 and 100 nanometres in the ultraviolet (named after John J. Hopfield);
a diffuse system between 101.9 and 130 nanometres;
the Schumann–Runge continuum, very strong, between 135 and 176 nanometres;
the Schumann–Runge bands between 176 and 192.6 nanometres (named for Victor Schumann and Carl Runge);
the Herzberg bands between 240 and 260 nanometres (named after Gerhard Herzberg);
the atmospheric bands between 538 and 771 nanometres in the visible spectrum; including the oxygen δ (~580 nm), γ (~629 nm), B (~688 nm), and A-band (~759-771 nm)
a system in the infrared at about 1000 nanometres.
In ozone:
the Hartley bands between 200 and 300 nanometres in the ultraviolet, with a very intense maximum absorption at 255 nanometres (named after Walter Noel Hartley);
the Huggins bands, weak absorption between 320 and 360 nanometres (named after Sir William Huggins);
the Chappuis bands (sometimes misspelled "Chappius"), a weak diffuse system between 375 and 650 nanometres in the visible spectrum (named after J. Chappuis); and
the Wulf bands in the infrared beyond 700 nm, centered at 4,700, 9,600 and 14,100 nanometres, the latter being the most intense (named after Oliver R. Wulf).
In nitrogen:
The Lyman–Birge–Hopfield bands, sometimes known as the Birge–Hopfield bands, in the far ultraviolet: 140–170 nm (named after Theodore Lyman, Raymond T. Birge, and John J. Hopfield)
| Physical sciences | Electromagnetic radiation | Physics |
4668 | https://en.wikipedia.org/wiki/Binomial%20coefficient | Binomial coefficient | In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written C(n, k). It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n; this coefficient can be computed by the multiplicative formula
C(n, k) = [n × (n − 1) × ⋯ × (n − k + 1)] / [k × (k − 1) × ⋯ × 1],
which using factorial notation can be compactly expressed as
C(n, k) = n! / (k! (n − k)!).
For example, the fourth power of 1 + x is
(1 + x)^4 = 1 + 4x + 6x^2 + 4x^3 + x^4,
and the binomial coefficient C(4, 2) = (4 × 3) / (2 × 1) = 6 is the coefficient of the x^2 term.
Arranging the numbers C(n, 0), C(n, 1), …, C(n, n) in successive rows for n = 0, 1, 2, … gives a triangular array called Pascal's triangle, satisfying the recurrence relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k).
The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. In combinatorics the symbol C(n, k) is usually read as "n choose k" because there are C(n, k) ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are C(4, 2) = 6 ways to choose 2 elements from {1, 2, 3, 4}, namely {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, and {3, 4}.
The first form of the binomial coefficients can be generalized to C(z, k) for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form.
History and notation
Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī.
Alternative notations include C(n, k), nCk, and related forms, in all of which the C stands for combinations or choices; the notation means the number of ways to choose k out of n objects. Many calculators use variants of the C notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to the numbers of k-permutations of n, written as P(n, k), etc.
Definition and interpretations
For natural numbers n (taken to include 0) and k, the binomial coefficient C(n, k) can be defined as the coefficient of the monomial X^k in the expansion of (1 + X)^n. The same coefficient also occurs (if k ≤ n) in the binomial formula
(x + y)^n = Σ_{k=0}^{n} C(n, k) x^k y^(n−k)
(valid for any elements x, y of a commutative ring),
which explains the name "binomial coefficient".
Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that k objects can be chosen from among n objects; more formally, the number of k-element subsets (or k-combinations) of an n-element set. This number can be seen as equal to the one of the first definition, independently of any of the formulas below to compute it: if in each of the n factors of the power (1 + X)^n one temporarily labels the term X with an index i (running from 1 to n), then each subset of k indices gives after expansion a contribution X^k, and the coefficient of that monomial in the result will be the number of such subsets. This shows in particular that C(n, k) is a natural number for any natural numbers n and k. There are many other combinatorial interpretations of binomial coefficients (counting problems for which the answer is given by a binomial coefficient expression), for instance the number of words formed of n bits (digits 0 or 1) whose sum is k is given by C(n, k), while the number of ways to write k = a_1 + a_2 + ⋯ + a_n where every a_i is a nonnegative integer is given by C(n + k − 1, n − 1). Most of these interpretations can be shown to be equivalent to counting k-combinations.
Computing the value of binomial coefficients
Several methods exist to compute the value of C(n, k) without actually expanding a binomial power or counting k-combinations.
Recursive formula
One method uses the recursive, purely additive formula
C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
for all integers n, k such that 1 ≤ k ≤ n − 1,
with boundary values
C(n, 0) = C(n, n) = 1
for all integers n ≥ 0.
The formula follows from considering the set {1, 2, 3, …, n} and counting separately (a) the k-element groupings that include a particular set element, say "i", in every group (since "i" is already chosen to fill one spot in every group, we need only choose k − 1 from the remaining n − 1) and (b) all the k-groupings that don't include "i"; this enumerates all the possible k-combinations of n elements. It also follows from tracing the contributions to X^k in (1 + X)^(n−1)(1 + X). As there is zero X^(n+1) or X^(−1) in (1 + X)^n, one might extend the definition beyond the above boundaries to include C(n, k) = 0 when either k > n or k < 0. This recursive formula then allows the construction of Pascal's triangle, surrounded by white spaces where the zeros, or the trivial coefficients, would be.
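A direct transcription of the additive recurrence and its boundary values, extended (as described above) to return zero outside the triangle; memoisation avoids recomputing shared subproblems. This is a sketch rather than a production routine.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binomial(n: int, k: int) -> int:
    """C(n, k) via the additive recurrence C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    if k < 0 or k > n:
        return 0                      # extended definition outside the triangle
    if k == 0 or k == n:
        return 1                      # boundary values C(n, 0) = C(n, n) = 1
    return binomial(n - 1, k - 1) + binomial(n - 1, k)

# Row 5 of Pascal's triangle: 1 5 10 10 5 1
print([binomial(5, k) for k in range(6)])
```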
Multiplicative formula
A more efficient method to compute individual binomial coefficients is given by the formula
C(n, k) = [n × (n − 1) × ⋯ × (n − k + 1)] / [k × (k − 1) × ⋯ × 1] = ∏_{i=1}^{k} (n + 1 − i) / i,
where the numerator of the first fraction, n × (n − 1) × ⋯ × (n − k + 1), is a falling factorial.
This formula is easiest to understand for the combinatorial interpretation of binomial coefficients.
The numerator gives the number of ways to select a sequence of k distinct objects, retaining the order of selection, from a set of n objects. The denominator counts the number of distinct sequences that define the same k-combination when order is disregarded. This formula can also be stated in a recursive form. Using the "C" notation from above, C(n, k) = C(n, k − 1) × (n − k + 1) / k, where C(n, 0) = 1. It is readily derived by evaluating C(n, k) / C(n, k − 1) and can intuitively be understood as starting at the leftmost coefficient of the n-th row of Pascal's triangle, whose value is always 1, and recursively computing the next coefficient to its right until the k-th one is reached.
Due to the symmetry of the binomial coefficients with regard to k and n − k, calculation of the above product, as well as the recursive relation, may be optimised by setting its upper limit to the smaller of k and n − k.
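The multiplicative formula translates into an integer-only loop; the symmetry noted above keeps the number of steps to the smaller of k and n − k. This is a sketch, not a library routine (Python's built-in math.comb gives the same result).

```python
def binomial_mult(n: int, k: int) -> int:
    """C(n, k) by the multiplicative formula, using the symmetry C(n, k) = C(n, n - k)."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)                 # optimise via symmetry
    result = 1
    for i in range(1, k + 1):
        # C(n, i) = C(n, i - 1) * (n - i + 1) / i; the division is always exact.
        result = result * (n - i + 1) // i
    return result

print(binomial_mult(52, 5))           # 2598960, the number of 5-card poker hands
```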
Factorial formula
Finally, though computationally unsuitable, there is the compact form, often used in proofs and derivations, which makes repeated use of the familiar factorial function:
C(n, k) = n! / (k! (n − k)!),
where n! denotes the factorial of n. This formula follows from the multiplicative formula above by multiplying numerator and denominator by (n − k)!; as a consequence it involves many factors common to numerator and denominator. It is less practical for explicit computation (in the case that k is small and n is large) unless common factors are first cancelled (in particular since factorial values grow very rapidly). The formula does exhibit a symmetry that is less evident from the multiplicative formula (though it is from the definitions):
C(n, k) = C(n, n − k),
which leads to a more efficient multiplicative computational routine. Using the falling factorial notation,
C(n, k) = n(n − 1) ⋯ (n − k + 1) / k! if k ≤ n/2, and C(n, k) = n(n − 1) ⋯ (k + 1) / (n − k)! if k > n/2.
Generalization and connection to the binomial series
The multiplicative formula allows the definition of binomial coefficients to be extended by replacing n by an arbitrary number α (negative, real, complex) or even an element of any commutative ring in which all positive integers are invertible:
With this definition one has a generalization of the binomial formula (with one of the variables set to 1), which justifies still calling the $\binom{\alpha}{k}$ binomial coefficients:
$(1+X)^{\alpha} = \sum_{k=0}^{\infty}\binom{\alpha}{k}X^{k}.$
This formula is valid for all complex numbers α and X with |X| < 1. It can also be interpreted as an identity of formal power series in X, where it actually can serve as definition of arbitrary powers of power series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects for exponentiation, notably
$(1+X)^{\alpha}(1+X)^{\beta} = (1+X)^{\alpha+\beta} \quad \text{and} \quad \bigl((1+X)^{\alpha}\bigr)^{\beta} = (1+X)^{\alpha\beta}.$
If α is a nonnegative integer n, then all terms with k > n are zero, and the infinite series becomes a finite sum, thereby recovering the binomial formula. However, for other values of α, including negative integers and rational numbers, the series is really infinite.
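This extended definition lends itself to a direct numerical check. The following Python sketch (function names chosen for illustration only) computes the generalized coefficient for an arbitrary α and sums a truncated binomial series, which should approach (1 + x)^α for |x| < 1:

```python
def gen_binomial(alpha, k):
    """Generalized binomial coefficient: alpha*(alpha-1)*...*(alpha-k+1) / k!."""
    result = 1.0
    for i in range(k):
        result *= (alpha - i) / (i + 1)
    return result

def binomial_series(alpha, x, terms=50):
    """Partial sum of the series for (1 + x)**alpha, valid for |x| < 1."""
    return sum(gen_binomial(alpha, k) * x**k for k in range(terms))

print(binomial_series(0.5, 0.2))   # close to 1.2 ** 0.5 ≈ 1.0954
print(1.2 ** 0.5)
```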
Pascal's triangle
Pascal's rule is the important recurrence relation
$\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1},$
which can be used to prove by mathematical induction that $\binom{n}{k}$ is a natural number for all integer n ≥ 0 and all integer k, a fact that is not immediately obvious from formula (1). To the left and right of Pascal's triangle, the entries (shown as blanks) are all zero.
Pascal's rule also gives rise to Pascal's triangle:
Row number n contains the numbers $\binom{n}{k}$ for k = 0, ..., n. It is constructed by first placing 1s in the outermost positions, and then filling each inner position with the sum of the two numbers directly above. This method allows the quick calculation of binomial coefficients without the need for fractions or multiplications. For instance, by looking at row number 5 of the triangle, one can quickly read off that
$(x+y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5.$
Combinatorics and statistics
Binomial coefficients are of importance in combinatorics because they provide ready formulas for certain frequent counting problems:
There are $\binom{n}{k}$ ways to choose k elements from a set of n elements. See Combination.
There are $\binom{n+k-1}{k}$ ways to choose k elements from a set of n elements if repetitions are allowed. See Multiset.
There are $\binom{n+k}{k}$ strings containing k ones and n zeros.
There are $\binom{n+1}{k}$ strings consisting of k ones and n zeros such that no two ones are adjacent.
The Catalan numbers are $\frac{1}{n+1}\binom{2n}{n}.$
The binomial distribution in statistics is $\binom{n}{k}\,p^{k}(1-p)^{n-k}.$
Binomial coefficients as polynomials
For any nonnegative integer k, the expression $\binom{t}{k}$ can be written as a polynomial with denominator k!:
$\binom{t}{k} = \frac{t^{\underline{k}}}{k!} = \frac{t(t-1)(t-2)\cdots(t-k+1)}{k!};$
this presents a polynomial in t with rational coefficients.
As such, it can be evaluated at any real or complex number t to define binomial coefficients with such first arguments. These "generalized binomial coefficients" appear in Newton's generalized binomial theorem.
For each k, the polynomial $\binom{t}{k}$ can be characterized as the unique degree k polynomial p(t) satisfying p(0) = p(1) = ⋯ = p(k − 1) = 0 and p(k) = 1.
Its coefficients are expressible in terms of Stirling numbers of the first kind:
The derivative of can be calculated by logarithmic differentiation:
This can cause a problem when evaluated at integers from to , but using identities below we can compute the derivative as:
Binomial coefficients as a basis for the space of polynomials
Over any field of characteristic 0 (that is, any field that contains the rational numbers), each polynomial p(t) of degree at most d is uniquely expressible as a linear combination of binomial coefficients, because the binomial coefficients consist of one polynomial of each degree. The coefficient ak is the kth difference of the sequence p(0), p(1), ..., p(k). Explicitly,
Integer-valued polynomials
Each polynomial is integer-valued: it has an integer value at all integer inputs . (One way to prove this is by induction on k using Pascal's identity.) Therefore, any integer linear combination of binomial coefficient polynomials is integer-valued too. Conversely, () shows that any integer-valued polynomial is an integer linear combination of these binomial coefficient polynomials. More generally, for any subring R of a characteristic 0 field K, a polynomial in K[t] takes values in R at all integers if and only if it is an R-linear combination of binomial coefficient polynomials.
Example
The integer-valued polynomial can be rewritten as
Identities involving binomial coefficients
The factorial formula facilitates relating nearby binomial coefficients. For instance, if k is a positive integer and n is arbitrary, then
and, with a little more work,
We can also get
Moreover, the following may be useful:
For constant n, we have the following recurrence:
To sum up, we have
Sums of the binomial coefficients
The formula
$\sum_{k=0}^{n}\binom{n}{k} = 2^{n}$
says that the elements in the nth row of Pascal's triangle always add up to 2 raised to the nth power. This is obtained from the binomial theorem () by setting x = 1 and y = 1. The formula also has a natural combinatorial interpretation: the left side sums the number of subsets of {1, ..., n} of sizes k = 0, 1, ..., n, giving the total number of subsets. (That is, the left side counts the power set of {1, ..., n}.) However, these subsets can also be generated by successively choosing or excluding each element 1, ..., n; the n independent binary choices (bit-strings) allow a total of $2^n$ choices. The left and right sides are two ways to count the same collection of subsets, so they are equal.
The formulas
and
follow from the binomial theorem after differentiating with respect to (twice for the latter) and then substituting .
The Chu–Vandermonde identity, which holds for any complex values m and n and any non-negative integer k, is
$\sum_{j=0}^{k}\binom{m}{j}\binom{n-m}{k-j} = \binom{n}{k},$
and can be found by examination of the coefficient of $x^k$ in the expansion of $(1+x)^{m}(1+x)^{n-m} = (1+x)^{n}$ using equation (). When m = 1, equation () reduces to equation (). In the special case n = 2m, k = m, using (), the expansion () becomes (as seen in Pascal's triangle)
$\sum_{j=0}^{m}\binom{m}{j}^{2} = \binom{2m}{m},$
where the term on the right side is a central binomial coefficient.
Another form of the Chu–Vandermonde identity, which applies for any integers j, k, and n satisfying , is
The proof is similar, but uses the binomial series expansion () with negative integer exponents.
When , equation () gives the hockey-stick identity
and its relative
Let F(n) denote the n-th Fibonacci number.
Then
This can be proved by induction using () or by Zeckendorf's representation. A combinatorial proof is given below.
Multisections of sums
For integers s and t such that series multisection gives the following identity for the sum of binomial coefficients:
For small , these series have particularly nice forms; for example,
Partial sums
Although there is no closed formula for partial sums
of binomial coefficients, one can again use () and induction to show that for ,
with special case
for . This latter result is also a special case of the result from the theory of finite differences that for any polynomial P(x) of degree less than n,
Differentiating () k times and setting x = −1 yields this for
,
when 0 ≤ k < n,
and the general case follows by taking linear combinations of these.
When P(x) is of degree less than or equal to n,
where is the coefficient of degree n in P(x).
More generally for (),
where m and d are complex numbers. This follows immediately applying () to the polynomial instead of , and observing that still has degree less than or equal to n, and that its coefficient of degree n is dnan.
The series is convergent for k ≥ 2. This formula is used in the analysis of the German tank problem. It follows from which is proved by induction on M.
Identities with combinatorial proofs
Many identities involving binomial coefficients can be proved by combinatorial means. For example, for nonnegative integers , the identity
(which reduces to () when q = 1) can be given a double counting proof, as follows. The left side counts the number of ways of selecting a subset of [n] = {1, 2, ..., n} with at least q elements, and marking q elements among those selected. The right side counts the same thing, because there are $\binom{n}{q}$ ways of choosing a set of q elements to mark, and $2^{n-q}$ ways to choose which of the remaining n − q elements of [n] also belong to the subset.
In Pascal's identity
both sides count the number of k-element subsets of [n]: the two terms on the right side group them into those that contain element n and those that do not.
The identity () also has a combinatorial proof. The identity reads
Suppose you have 2n empty squares arranged in a row and you want to mark (select) n of them. There are $\binom{2n}{n}$ ways to do this. On the other hand, you may select your n squares by selecting k squares from among the first n and n − k squares from the remaining n squares; any k from 0 to n will work. This gives
$\sum_{k=0}^{n}\binom{n}{k}\binom{n}{n-k} = \binom{2n}{n}.$
Now apply () to get the result.
If one denotes by the sequence of Fibonacci numbers, indexed so that , then the identity
has the following combinatorial proof. One may show by induction that counts the number of ways that a strip of squares may be covered by and tiles. On the other hand, if such a tiling uses exactly of the tiles, then it uses of the tiles, and so uses tiles total. There are ways to order these tiles, and so summing this coefficient over all possible values of gives the identity.
Sum of coefficients row
The number of k-combinations for all k, $\sum_{0\le k\le n}\binom{n}{k} = 2^{n}$, is the sum of the nth row (counting from 0) of the binomial coefficients. These combinations are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to $2^{n}-1$, where each digit position is an item from the set of n.
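A short Python sketch (illustrative only) makes this correspondence concrete: every integer from 0 to 2^n − 1, read as a bit string, selects a subset, and grouping the integers by their number of 1 bits recovers the nth row of binomial coefficients.

```python
from math import comb

n = 5
counts = [0] * (n + 1)
for x in range(2 ** n):              # every subset of an n-element set, as a bit mask
    counts[bin(x).count("1")] += 1   # group by the number of chosen elements

print(counts)                              # [1, 5, 10, 10, 5, 1]
print([comb(n, k) for k in range(n + 1)])  # matches row n of Pascal's triangle
print(sum(counts))                         # 32 == 2 ** n
```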
Dixon's identity
Dixon's identity is
or, more generally,
where a, b, and c are non-negative integers.
Continuous identities
Certain trigonometric integrals have values expressible in terms of binomial coefficients: For any
These can be proved by using Euler's formula to convert trigonometric functions to complex exponentials, expanding using the binomial theorem, and integrating term by term.
Congruences
If n is prime, then for every k with
More generally, this remains true if n is any number and k is such that all the numbers between 1 and k are coprime to n.
Indeed, we have
Generating functions
Ordinary generating functions
For a fixed n, the ordinary generating function of the sequence $\binom{n}{0}, \binom{n}{1}, \binom{n}{2}, \ldots$ is
$\sum_{k=0}^{\infty}\binom{n}{k}x^{k} = (1+x)^{n}.$
For a fixed k, the ordinary generating function of the sequence $\binom{0}{k}, \binom{1}{k}, \binom{2}{k}, \ldots$ is
$\sum_{n=0}^{\infty}\binom{n}{k}y^{n} = \frac{y^{k}}{(1-y)^{k+1}}.$
The bivariate generating function of the binomial coefficients is
$\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}x^{k}y^{n} = \frac{1}{1-y-xy}.$
A symmetric bivariate generating function of the binomial coefficients is
$\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\binom{n+k}{k}x^{k}y^{n} = \frac{1}{1-x-y},$
which is the same as the previous generating function after the substitution $x \to x/y$.
Exponential generating function
A symmetric exponential bivariate generating function of the binomial coefficients is:
Divisibility properties
In 1852, Kummer proved that if m and n are nonnegative integers and p is a prime number, then the largest power of p dividing equals pc, where c is the number of carries when m and n are added in base p.
Equivalently, the exponent of a prime p in $\binom{n}{k}$
equals the number of nonnegative integers j such that the fractional part of $k/p^{j}$ is greater than the fractional part of $n/p^{j}$. It can be deduced from this that $\binom{n}{k}$ is divisible by n/gcd(n,k). In particular therefore it follows that p divides $\binom{p^{r}}{s}$ for all positive integers r and s such that $s < p^{r}$. However this is not true of higher powers of p: for example 9 does not divide $\binom{9}{6}$.
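Kummer's theorem is easy to check computationally. The sketch below (illustrative Python, not from the article) counts the carries when k and n − k are added in base p and compares the result with the exponent of p obtained by direct factorisation:

```python
from math import comb

def carries_in_base(a, b, p):
    """Number of carries when a and b are added in base p (Kummer's theorem)."""
    carries, carry = 0, 0
    while a or b or carry:
        carry = 1 if (a % p) + (b % p) + carry >= p else 0
        carries += carry
        a //= p
        b //= p
    return carries

def p_adic_valuation(x, p):
    """Exponent of the prime p in the factorisation of x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

n, k, p = 20, 7, 3
print(carries_in_base(k, n - k, p))     # carries when adding 7 and 13 in base 3: 1
print(p_adic_valuation(comb(n, k), p))  # same value, 1, by Kummer's theorem
```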
A somewhat surprising result by David Singmaster (1974) is that any integer divides almost all binomial coefficients. More precisely, fix an integer d and let f(N) denote the number of binomial coefficients $\binom{n}{k}$ with n < N such that d divides $\binom{n}{k}$. Then
$\lim_{N\to\infty}\frac{f(N)}{N(N+1)/2} = 1.$
Since the number of binomial coefficients with n < N is N(N + 1) / 2, this implies that the density of binomial coefficients divisible by d goes to 1.
Binomial coefficients have divisibility properties related to least common multiples of consecutive integers. For example:
divides .
is a multiple of .
Another fact:
An integer n ≥ 2 is prime if and only if
all the intermediate binomial coefficients
$\binom{n}{1}, \binom{n}{2}, \ldots, \binom{n}{n-1}$
are divisible by n.
Proof:
When p is prime, p divides
$\binom{p}{k} = \frac{p(p-1)\cdots(p-k+1)}{k(k-1)\cdots 1}$
for all 0 < k < p
because $\binom{p}{k}$ is a natural number and p divides the numerator but not the denominator.
When n is composite, let p be the smallest prime factor of n and let k = n/p. Then 0 < p < n and
$\binom{n}{p} = \frac{n(n-1)(n-2)\cdots(n-p+1)}{p!} = \frac{k(n-1)(n-2)\cdots(n-p+1)}{(p-1)!} \not\equiv 0 \pmod{n};$
otherwise the numerator k(n − 1)(n − 2)⋯(n − p + 1) would have to be divisible by n = k·p, and this can only be the case when (n − 1)(n − 2)⋯(n − p + 1) is divisible by p. But n is divisible by p, so p does not divide n − 1, n − 2, ..., n − p + 1, and because p is prime, we know that p does not divide (n − 1)(n − 2)⋯(n − p + 1), and so the numerator cannot be divisible by n.
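This characterisation can be used directly as a (slow) primality test. Here is a minimal Python sketch of the idea, for illustration only, not an efficient algorithm:

```python
from math import comb

def is_prime_by_binomials(n):
    """n >= 2 is prime iff n divides C(n, k) for every k with 0 < k < n."""
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))

print([m for m in range(2, 20) if is_prime_by_binomials(m)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```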
Bounds and asymptotic formulas
The following bounds for $\binom{n}{k}$ hold for all values of n and k such that 1 ≤ k ≤ n:
$\frac{n^{k}}{k^{k}} \le \binom{n}{k} \le \frac{n^{k}}{k!} < \left(\frac{n\,e}{k}\right)^{k}.$
The first inequality follows from the fact that
$\binom{n}{k} = \frac{n}{k}\cdot\frac{n-1}{k-1}\cdots\frac{n-k+1}{1}$
and each of these k terms in this product is $\ge \frac{n}{k}$. A similar argument can be made to show the second inequality. The final strict inequality is equivalent to $e^{k} > k^{k}/k!$, which is clear since the RHS is a term of the exponential series $e^{k} = \sum_{j=0}^{\infty} k^{j}/j!$.
From the divisibility properties we can infer that
where both equalities can be achieved.
The following bounds are useful in information theory:
$\frac{2^{nH(k/n)}}{n+1} \le \binom{n}{k} \le 2^{nH(k/n)},$
where $H(p) = -p\log_2 p - (1-p)\log_2(1-p)$ is the binary entropy function. It can be further tightened to
$\sqrt{\frac{n}{8k(n-k)}}\,2^{nH(k/n)} \le \binom{n}{k} \le \sqrt{\frac{n}{2\pi k(n-k)}}\,2^{nH(k/n)}$
for all $1 \le k \le n-1$.
Both n and k large
Stirling's approximation yields the following approximation, valid when both tend to infinity:
Because the inequality forms of Stirling's formula also bound the factorials, slight variants on the above asymptotic approximation give exact bounds.
In particular, when is sufficiently large, one has
and . More generally, for and (again, by applying Stirling's formula to the factorials in the binomial coefficient),
If n is large and k is linear in n, various precise asymptotic estimates exist for the binomial coefficient . For example, if then
where d = n − 2k.
n much larger than k
If is large and is (that is, if ), then
where again is the little o notation.
Sums of binomial coefficients
A simple and rough upper bound for the sum of binomial coefficients can be obtained using the binomial theorem:
More precise bounds are given by
valid for all integers with .
Generalized binomial coefficients
The infinite product formula for the gamma function also gives an expression for binomial coefficients
which yields the asymptotic formulas
as .
This asymptotic behaviour is contained in the approximation
as well. (Here is the k-th harmonic number and is the Euler–Mascheroni constant.)
Further, the asymptotic formula
hold true, whenever and for some complex number .
Generalizations
Generalization to multinomials
Binomial coefficients can be generalized to multinomial coefficients defined to be the number:
$\binom{n}{k_1, k_2, \ldots, k_r} = \frac{n!}{k_1!\,k_2!\cdots k_r!},$
where
$\sum_{i=1}^{r} k_i = n.$
While the binomial coefficients represent the coefficients of $(x+y)^n$, the multinomial coefficients
represent the coefficients of the polynomial
$(x_1 + x_2 + \cdots + x_r)^n.$
The case r = 2 gives binomial coefficients:
$\binom{n}{k_1, k_2} = \binom{n}{k_1, n-k_1} = \binom{n}{k_1} = \binom{n}{k_2}.$
The combinatorial interpretation of multinomial coefficients is distribution of n distinguishable elements over r (distinguishable) containers, each containing exactly ki elements, where i is the index of the container.
Multinomial coefficients have many properties similar to those of binomial coefficients, for example the recurrence relation:
and symmetry:
where is a permutation of (1, 2, ..., r).
Taylor series
Using Stirling numbers of the first kind the series expansion around any arbitrarily chosen point is
Binomial coefficient with n = 1/2
The definition of the binomial coefficients can be extended to the case where n is real and k is integer.
In particular, the following identity holds for any non-negative integer k:
$\binom{1/2}{k} = \binom{2k}{k}\frac{(-1)^{k+1}}{2^{2k}(2k-1)}.$
This shows up when expanding $\sqrt{1+x}$ into a power series using the Newton binomial series:
$\sqrt{1+x} = \sum_{k\ge 0}\binom{1/2}{k}x^{k}.$
Products of binomial coefficients
One can express the product of two binomial coefficients as a linear combination of binomial coefficients:
where the connection coefficients are multinomial coefficients. In terms of labelled combinatorial objects, the connection coefficients represent the number of ways to assign labels to a pair of labelled combinatorial objects—of weight m and n respectively—that have had their first k labels identified, or glued together to get a new labelled combinatorial object of weight . (That is, to separate the labels into three portions to apply to the glued part, the unglued part of the first object, and the unglued part of the second object.) In this regard, binomial coefficients are to exponential generating series what falling factorials are to ordinary generating series.
The product of all binomial coefficients in the nth row of the Pascal triangle is given by the formula:
Partial fraction decomposition
The partial fraction decomposition of the reciprocal is given by
Newton's binomial series
Newton's binomial series, named after Sir Isaac Newton, is a generalization of the binomial theorem to infinite series:
The identity can be obtained by showing that both sides satisfy the differential equation (1 + z) f′(z) = α f(z).
The radius of convergence of this series is 1. An alternative expression is
where the identity
is applied.
Multiset (rising) binomial coefficient
Binomial coefficients count subsets of prescribed size from a given set. A related combinatorial problem is to count multisets of prescribed size with elements drawn from a given set, that is, to count the number of ways to select a certain number of elements from a given set with the possibility of selecting the same element repeatedly. The resulting numbers are called multiset coefficients; the number of ways to "multichoose" (i.e., choose with replacement) k items from an n element set is denoted .
To avoid ambiguity and confusion with n's main denotation in this article, let and .
Multiset coefficients may be expressed in terms of binomial coefficients by the rule
$\left(\!\!\binom{n}{k}\!\!\right) = \binom{n+k-1}{k}.$
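The rule is easy to verify numerically. In the Python sketch below (illustrative only), multichoose counts multisets by brute-force enumeration and is compared with the binomial-coefficient expression:

```python
from math import comb
from itertools import combinations_with_replacement

def multichoose(n, k):
    """Number of multisets of size k drawn from an n-element set, by direct enumeration."""
    return sum(1 for _ in combinations_with_replacement(range(n), k))

for n in range(1, 6):
    for k in range(0, 5):
        assert multichoose(n, k) == comb(n + k - 1, k)

print(multichoose(4, 2), comb(4 + 2 - 1, 2))  # both 10
```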
One possible alternative characterization of this identity is as follows:
We may define the falling factorial as
and the corresponding rising factorial as
so, for example,
Then the binomial coefficients may be written as
while the corresponding multiset coefficient is defined by replacing the falling with the rising factorial:
Generalization to negative integers n
For any n,
$\binom{-n}{k} = \frac{-n\,(-n-1)(-n-2)\cdots(-n-k+1)}{k!} = (-1)^{k}\binom{n+k-1}{k} = (-1)^{k}\left(\!\!\binom{n}{k}\!\!\right).$
In particular, binomial coefficients evaluated at negative integers n are given by signed multiset coefficients. In the special case n = 1, this reduces to
$\binom{-1}{k} = (-1)^{k}.$
For example, if n = −4 and k = 7, then r = 4 and f = 10:
$\binom{-4}{7} = (-1)^{7}\binom{10}{7} = -\binom{10}{7} = -120.$
Two real or complex valued arguments
The binomial coefficient is generalized to two real or complex valued arguments using the gamma function or beta function via
This definition inherits these following additional properties from :
moreover,
The resulting function has been little-studied, apparently first being graphed in . Notably, many binomial identities fail: $\binom{n}{m} = \binom{n}{n-m}$ but $\binom{-n}{m} \neq \binom{-n}{-n-m}$ for n positive (so −n negative). The behavior is quite complex, and markedly different in various octants (that is, with respect to the x and y axes and the line y = x), with the behavior for negative x having singularities at negative integer values and a checkerboard of positive and negative regions:
in the octant it is a smoothly interpolated form of the usual binomial, with a ridge ("Pascal's ridge").
in the octant and in the quadrant the function is close to zero.
in the quadrant the function is alternatingly very large positive and negative on the parallelograms with vertices
in the octant the behavior is again alternatingly very large positive and negative, but on a square grid.
in the octant it is close to zero, except for near the singularities.
Generalization to q-series
The binomial coefficient has a q-analog generalization known as the Gaussian binomial coefficient.
Generalization to infinite cardinals
The definition of the binomial coefficient can be generalized to infinite cardinals by defining:
where is some set with cardinality . One can show that the generalized binomial coefficient is well-defined, in the sense that no matter what set we choose to represent the cardinal number , will remain the same. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient.
Assuming the Axiom of Choice, one can show that for any infinite cardinal .
| Mathematics | Combinatorics | null |
4674 | https://en.wikipedia.org/wiki/B-tree | B-tree | In computer science, a B-tree is a self-balancing tree data structure that maintains sorted data and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree generalizes the binary search tree, allowing for nodes with more than two children. Unlike other self-balancing binary search trees, the B-tree is well suited for storage systems that read and write relatively large blocks of data, such as databases and file systems.
History
B-trees were invented by Rudolf Bayer and Edward M. McCreight while working at Boeing Research Labs to efficiently manage index pages for large random-access files. The basic assumption was that indices would be so voluminous that only small chunks of the tree could fit in main memory. Bayer and McCreight's paper Organization and maintenance of large ordered indices was first circulated in July 1970 and later published in Acta Informatica.
Bayer and McCreight never explained what, if anything, the B stands for; Boeing, balanced, between, broad, bushy, and Bayer have been suggested. McCreight, when asked "I want to know what B in B-Tree stands for," answered: Everybody does!
So you just have no idea what a lunchtime conversation can turn into. So there we were, Rudy and I, at lunch. We had to give the thing a name.... We were working for Boeing at the time, but we couldn't use the name without talking to the lawyers. So there's a B.
It has to do with Balance. There's another B.
Rudy was the senior author. Rudy (Bayer) was several years older than I am, and had ... many more publications than I did. So there's another B.
And so at the lunch table, we never did resolve whether there was one of those that made more sense than the rest.
What Rudy likes to say is, the more you think about what the B in B-Tree means, the better you understand B-Trees!
Definition
According to Knuth's definition, a B-tree of order m is a tree which satisfies the following properties:
Every node has at most m children.
Every node, except for the root and the leaves, has at least ⌈m/2⌉ children.
The root node has at least two children unless it is a leaf.
All leaves appear on the same level.
A non-leaf node with k children contains k−1 keys.
Each internal node's keys act as separation values which divide its subtrees. For example, if an internal node has 3 child nodes (or subtrees) then it must have 2 keys: a1 and a2. All values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2.
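To make the separation-value idea concrete, here is a hypothetical node layout in Python (field names are illustrative; real implementations store nodes in fixed-size disk pages):

```python
class BTreeNode:
    """A node holding sorted keys and, for internal nodes, one more child than keys."""
    def __init__(self, keys=None, children=None):
        self.keys = keys or []          # k - 1 separation values for k children
        self.children = children or []  # empty list means this node is a leaf

    @property
    def is_leaf(self):
        return not self.children

# An internal node with 3 children and 2 separator keys:
node = BTreeNode(keys=[10, 20],
                 children=[BTreeNode([1, 5]), BTreeNode([12, 17]), BTreeNode([25, 30])])
# Everything in children[0] is < 10, children[1] lies between 10 and 20, children[2] is > 20.
```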
Internal nodes
Internal nodes (also known as inner nodes) are all nodes except for leaf nodes and the root node. They are usually represented as an ordered set of elements and child pointers. Every internal node contains a maximum of U children and a minimum of L children. Thus, the number of elements is always 1 less than the number of child pointers (the number of elements is between L−1 and U−1). U must be either 2L or 2L−1; therefore each internal node is at least half full. The relationship between U and L implies that two half-full nodes can be joined to make a legal node, and one full node can be split into two legal nodes (if there's room to push one element up into the parent). These properties make it possible to delete and insert new values into a B-tree and adjust the tree to preserve the B-tree properties.
The root node
The root node's number of children has the same upper limit as internal nodes, but has no lower limit. For example, when there are fewer than L−1 elements in the entire tree, the root will be the only node in the tree with no children at all.
Leaf nodes
In Knuth's terminology, the "leaf" nodes are the actual data objects / chunks. The internal nodes that are one level above these leaves are what would be called the "leaves" by other authors: these nodes only store keys (at most m-1, and at least m/2-1 if they are not the root) and pointers (one for each key) to nodes carrying the data objects / chunks.
A B-tree of depth n+1 can hold about U times as many items as a B-tree of depth n, but the cost of search, insert, and delete operations grows with the depth of the tree. As with any balanced tree, the cost grows much more slowly than the number of elements.
Some balanced trees store values only at leaf nodes, and use different kinds of nodes for leaf nodes and internal nodes. B-trees keep values in every node in the tree except leaf nodes.
Differences in terminology
The literature on B-trees is not uniform in its terminology.
Bayer and McCreight (1972), Comer (1979), and others define the order of B-tree as the minimum number of keys in a non-root node. Folk and Zoellick points out that terminology is ambiguous because the maximum number of keys is not clear. An order 3 B-tree might hold a maximum of 6 keys or a maximum of 7 keys. Knuth (1998) avoids the problem by defining the order to be the maximum number of children (which is one more than the maximum number of keys).
The term leaf is also inconsistent. Bayer and McCreight (1972) considered the leaf level to be the lowest level of keys, but Knuth considered the leaf level to be one level below the lowest keys. There are many possible implementation choices. In some designs, the leaves may hold the entire data record; in other designs, the leaves may only hold pointers to the data record. Those choices are not fundamental to the idea of a B-tree.
For simplicity, most authors assume there are a fixed number of keys that fit in a node. The basic assumption is the key size is fixed and the node size is fixed. In practice, variable length keys may be employed.
Informal description
Node structure
As with other trees, B-trees can be represented as a collection of three types of nodes: root, internal (a.k.a. interior), and leaf.
Note the following variable definitions:
: Maximum number of potential search keys for each node in a B-tree. (this value is constant over the entire tree).
: The pointer to a child node which starts a sub-tree.
: The pointer to a record which stores the data.
: The search key at the zero-based node index .
In B-trees, the following properties are maintained for these nodes:
If exists in any node in a B tree, then exists in that node where .
All leaf nodes have the same number of ancestors (i.e., they are all at the same depth).
Each internal node in a B-tree has the following format:
Each leaf node in a B-tree has the following format:
The node bounds are summarized in the table below:
Insertion and deletion
To maintain the predefined range of child nodes, internal nodes may be joined or split.
Usually, the number of keys is chosen to vary between $d$ and $2d$, where $d$ is the minimum number of keys, and $d+1$ is the minimum degree or branching factor of the tree. The factor of 2 will guarantee that nodes can be split or combined.
If an internal node has $2d$ keys, then adding a key to that node can be accomplished by splitting the hypothetical $2d+1$ key node into two $d$ key nodes and moving the key that would have been in the middle to the parent node. Each split node has the required minimum number of keys. Similarly, if an internal node and its neighbor each have $d$ keys, then a key may be deleted from the internal node by combining it with its neighbor. Deleting the key would make the internal node have $d-1$ keys; joining the neighbor would add $d$ keys plus one more key brought down from the neighbor's parent. The result is an entirely full node of $2d$ keys.
A B-tree is kept balanced after insertion by splitting a would-be overfilled node, of $2d+1$ keys, into two $d$-key siblings and inserting the mid-value key into the parent. Depth only increases when the root is split, maintaining balance. Similarly, a B-tree is kept balanced after deletion by merging or redistributing keys among siblings to maintain the $d$-key minimum for non-root nodes. A merger reduces the number of keys in the parent, potentially forcing it to merge or redistribute keys with its siblings, and so on. The only change in depth occurs when the root has two children, of $d$ and (transitionally) $d-1$ keys, in which case the two siblings and parent are merged, reducing the depth by one.
This depth will increase slowly as elements are added to the tree, but an increase in the overall depth is infrequent, and results in all leaf nodes being one more node farther away from the root.
Comparison to other trees
Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees, but may waste some space, since nodes are not entirely full.
B-trees have substantial advantages over alternative implementations when the time to access the data of a node greatly exceeds the time spent processing that data, because then the cost of accessing the node may be amortized over multiple operations within the node. This usually occurs when the node data are in secondary storage such as disk drives. By maximizing the number of keys within each internal node, the height of the tree decreases and the number of expensive node accesses is reduced. In addition, rebalancing of the tree occurs less often. The maximum number of child nodes depends on the information that must be stored for each child node and the size of a full disk block or an analogous size in secondary storage. While 2–3 B-trees are easier to explain, practical B-trees using secondary storage need a large number of child nodes to improve performance.
Variants
The term B-tree may refer to a specific design or a general class of designs. In the narrow sense, a B-tree stores keys in its internal nodes but need not store those keys in the records at the leaves. The general class includes variations such as the B+ tree, the B* tree and the B*+ tree.
In the B+ tree, the internal nodes do not store any pointers to records, thus all pointers to records are stored in the leaf nodes. In addition, a leaf node may include a pointer to the next leaf node to speed up sequential access. Because B+ tree internal nodes have fewer pointers, each node can hold more keys, causing the tree to be shallower and thus faster to search.
The B* tree balances more neighboring internal nodes to keep the internal nodes more densely packed. This variant ensures non-root nodes are at least 2/3 full instead of 1/2. As the most costly part of operation of inserting the node in B-tree is splitting the node, B*-trees are created to postpone splitting operation as long as they can. To maintain this, instead of immediately splitting up a node when it gets full, its keys are shared with a node next to it. This spill operation is less costly to do than split, because it requires only shifting the keys between existing nodes, not allocating memory for a new one. For inserting, first it is checked whether the node has some free space in it, and if so, the new key is just inserted in the node. However, if the node is full (it has keys, where is the order of the tree as maximum number of pointers to subtrees from one node), it needs to be checked whether the right sibling exists and has some free space. If the right sibling has keys, then keys are redistributed between the two sibling nodes as evenly as possible. For this purpose, keys from the current node, the new key inserted, one key from the parent node and keys from the sibling node are seen as an ordered array of keys. The array becomes split by half, so that lowest keys stay in the current node, the next (middle) key is inserted in the parent and the rest go to the right sibling. (The newly inserted key might end up in any of the three places.) The situation when right sibling is full, and left isn't is analogous. When both the sibling nodes are full, then the two nodes (current node and a sibling) are split into three and one more key is shifted up the tree, to the parent node. If the parent is full, then spill/split operation propagates towards the root node. Deleting nodes is somewhat more complex than inserting however.
The B*+ tree combines the main B+ tree and B* tree features together.
B-trees can be turned into order statistic trees to allow rapid searches for the Nth record in key order, or counting the number of records between any two records, and various other related operations.
B-tree usage in databases
Time to search a sorted file
Sorting and searching algorithms can be characterized by the number of comparison operations that must be performed using order notation. A binary search of a sorted table with N records, for example, can be done in roughly $\lceil \log_2 N \rceil$ comparisons. If the table had 1,000,000 records, then a specific record could be located with at most 20 comparisons: $\lceil \log_2 1{,}000{,}000 \rceil = 20$.
Large databases have historically been kept on disk drives. The time to read a record on a disk drive far exceeds the time needed to compare keys once the record is available due to seek time and a rotational delay. The seek time may be 0 to 20 or more milliseconds, and the rotational delay averages about half the rotation period. For a 7200 RPM drive, the rotation period is 8.33 milliseconds. For a drive such as the Seagate ST3500320NS, the track-to-track seek time is 0.8 milliseconds and the average reading seek time is 8.5 milliseconds. For simplicity, assume reading from disk takes about 10 milliseconds.
The time to locate one record out of a million in the example above would take 20 disk reads times 10 milliseconds per disk read, which is 0.2 seconds.
The search time is reduced because individual records are grouped together in a disk block. A disk block might be 16 kilobytes. If each record is 160 bytes, then 100 records could be stored in each block. The disk read time above was actually for an entire block. Once the disk head is in position, one or more disk blocks can be read with little delay. With 100 records per block, the last 6 or so comparisons don't need to do any disk reads—the comparisons are all within the last disk block read.
To speed up the search further, the time to do the first 13 to 14 comparisons (which each required a disk access) must be reduced.
An index speeds the search
A B-tree index can be used to improve performance. A B-tree index creates a multi-level tree structure that breaks a database down into fixed-size blocks or pages. Each level of this tree can be used to link those pages via an address location, allowing one page (known as a node, or internal page) to refer to another with leaf pages at the lowest level. One page is typically the starting point of the tree, or the "root". This is where the search for a particular key would begin, traversing a path that terminates in a leaf. Most pages in this structure will be leaf pages which refer to specific table rows.
Because each node (or internal page) can have more than two children, a B-tree index will usually have a shorter height (the distance from the root to the farthest leaf) than a Binary Search Tree. In the example above, initial disk reads narrowed the search range by a factor of two. That can be improved by creating an auxiliary index that contains the first record in each disk block (sometimes called a sparse index). This auxiliary index would be 1% of the size of the original database, but it can be searched quickly. Finding an entry in the auxiliary index would tell us which block to search in the main database; after searching the auxiliary index, we would have to search only that one block of the main database—at a cost of one more disk read.
In the above example the index would hold 10,000 entries and would take at most 14 comparisons to return a result. Like the main database, the last six or so comparisons in the auxiliary index would be on the same disk block. The index could be searched in about eight disk reads, and the desired record could be accessed in 9 disk reads.
Creating an auxiliary index can be repeated to make an auxiliary index to the auxiliary index. That would make an aux-aux index that would need only 100 entries and would fit in one disk block.
Instead of reading 14 disk blocks to find the desired record, we only need to read 3 blocks. This blocking is the core idea behind the creation of the B-tree, where the disk blocks fill-out a hierarchy of levels to make up the index. Reading and searching the first (and only) block of the aux-aux index which is the root of the tree identifies the relevant block in aux-index in the level below. Reading and searching that aux-index block identifies the relevant block to read, until the final level, known as the leaf level, identifies a record in the main database. Instead of 150 milliseconds, we need only 30 milliseconds to get the record.
The auxiliary indices have turned the search problem from a binary search requiring roughly $\log_2 N$ disk reads to one requiring only $\log_b N$ disk reads, where b is the blocking factor (the number of entries per block: b = 100 entries per block in our example; $\log_{100} 1{,}000{,}000 = 3$ reads).
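The arithmetic of this example can be reproduced in a few lines. The Python snippet below (illustrative only) compares the two read counts for the figures used above:

```python
import math

records = 1_000_000
per_block = 100  # blocking factor b from the example

binary_search_reads = math.ceil(math.log2(records))            # about 20
blocked_index_reads = math.ceil(math.log(records, per_block))  # about 3

print(binary_search_reads, blocked_index_reads)  # 20 3
```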
In practice, if the main database is being frequently searched, the aux-aux index and much of the aux index may reside in a disk cache, so they would not incur a disk read. The B-tree remains the standard index implementation in almost all relational databases, and many nonrelational databases use them too.
Insertions and deletions
If the database does not change, then compiling the index is simple to do, and the index need never be changed. If there are changes, managing the database and its index require additional computation.
Deleting records from a database is relatively easy. The index can stay the same, and the record can just be marked as deleted. The database remains in sorted order. If there are a large number of lazy deletions, then searching and storage become less efficient.
Insertions can be very slow in a sorted sequential file because room for the inserted record must be made. Inserting a record before the first record requires shifting all of the records down one. Such an operation is just too expensive to be practical. One solution is to leave some spaces. Instead of densely packing all the records in a block, the block can have some free space to allow for subsequent insertions. Those spaces would be marked as if they were "deleted" records.
Both insertions and deletions are fast as long as space is available on a block. If an insertion won't fit on the block, then some free space on some nearby block must be found and the auxiliary indices adjusted. The best case is that enough space is available nearby so that the amount of block reorganization can be minimized. Alternatively, some out-of-sequence disk blocks may be used.
Advantages of B-tree usage for databases
The B-tree uses all of the ideas described above. In particular, a B-tree:
keeps keys in sorted order for sequential traversing
uses a hierarchical index to minimize the number of disk reads
uses partially full blocks to speed up insertions and deletions
keeps the index balanced with a recursive algorithm
In addition, a B-tree minimizes waste by making sure the interior nodes are at least half full. A B-tree can handle an arbitrary number of insertions and deletions.
Best case and worst case heights
Let h be the height of the classic B-tree (see for the tree height definition). Let n be the number of entries in the tree. Let m be the maximum number of children a node can have. Each node can have at most m − 1 keys.
It can be shown (by induction for example) that a B-tree of height h with all its nodes completely filled has $n = m^{h+1} - 1$ entries. Hence, the best case height (i.e. the minimum height) of a B-tree is:
$h_{\min} = \lceil \log_{m}(n+1) \rceil - 1.$
Let $d$ be the minimum number of children an internal (non-root) node must have. For an ordinary B-tree, $d = \lceil m/2 \rceil$.
Comer (1979) and Cormen et al. (2001) give the worst case height (the maximum height) of a B-tree as
$h_{\max} = \left\lfloor \log_{d}\frac{n+1}{2} \right\rfloor.$
Algorithms
Search
Searching is similar to searching a binary search tree. Starting at the root, the tree is recursively traversed from top to bottom. At each level, the search reduces its field of view to the child pointer (subtree) whose range includes the search value. A subtree's range is defined by the values, or keys, contained in its parent node. These limiting values are also known as separation values.
Binary search is typically (but not necessarily) used within nodes to find the separation values and child tree of interest.
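A compact sketch of this search, assuming the BTreeNode layout from the earlier illustrative example and using Python's bisect module for the within-node binary search, might look like this (not taken from the article):

```python
from bisect import bisect_left

def btree_search(node, key):
    """Return True if key is stored somewhere in the subtree rooted at node."""
    while node is not None:
        i = bisect_left(node.keys, key)   # binary search within the node
        if i < len(node.keys) and node.keys[i] == key:
            return True
        if node.is_leaf:
            return False
        node = node.children[i]           # descend into the subtree whose range contains key
    return False
```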
Insertion
All insertions start at a leaf node. To insert a new element, search the tree to find the leaf node where the new element should be added. Insert the new element into that node with the following steps:
If the node contains fewer than the maximum allowed number of elements, then there is room for the new element. Insert the new element in the node, keeping the node's elements ordered.
Otherwise the node is full, evenly split it into two nodes so:
A single median is chosen from among the leaf's elements and the new element that is being inserted.
Values less than the median are put in the new left node and values greater than the median are put in the new right node, with the median acting as a separation value.
The separation value is inserted in the node's parent, which may cause it to be split, and so on. If the node has no parent (i.e., the node was the root), create a new root above this node (increasing the height of the tree).
If the splitting goes all the way up to the root, it creates a new root with a single separator value and two children, which is why the lower bound on the size of internal nodes does not apply to the root. The maximum number of elements per node is U−1. When a node is split, one element moves to the parent, but one element is added. So, it must be possible to divide the maximum number U−1 of elements into two legal nodes. If this number is odd, then U=2L and one of the new nodes contains (U−2)/2 = L−1 elements, and hence is a legal node, and the other contains one more element, and hence it is legal too. If U−1 is even, then U=2L−1, so there are 2L−2 elements in the node. Half of this number is L−1, which is the minimum number of elements allowed per node.
An alternative algorithm supports a single pass down the tree from the root to the node where the insertion will take place, splitting any full nodes encountered on the way pre-emptively. This prevents the need to recall the parent nodes into memory, which may be expensive if the nodes are on secondary storage. However, to use this algorithm, we must be able to send one element to the parent and split the remaining U−2 elements into two legal nodes, without adding a new element. This requires U = 2L rather than U = 2L−1, which accounts for why some textbooks impose this requirement in defining B-trees.
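The split step described above can be sketched in a few lines. The following hypothetical Python fragment (leaf-only, reusing the BTreeNode layout from the earlier sketches, with at most max_keys = U − 1 keys per node) shows how an overfull leaf is divided around its median:

```python
from bisect import insort

def split_leaf(leaf):
    """Split an overfull leaf; return (median, new_right_node) for the parent to absorb."""
    mid = len(leaf.keys) // 2
    median = leaf.keys[mid]
    right = BTreeNode(keys=leaf.keys[mid + 1:])   # keys greater than the median
    leaf.keys = leaf.keys[:mid]                   # keys less than the median stay on the left
    return median, right

def insert_into_leaf(leaf, key, max_keys):
    insort(leaf.keys, key)                        # keep the node's keys ordered
    if len(leaf.keys) > max_keys:                 # node overflowed: split around the median
        return split_leaf(leaf)                   # caller pushes the median into the parent
    return None
```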
Deletion
There are two popular strategies for deletion from a B-tree.
Locate and delete the item, then restructure the tree to retain its invariants, OR
Do a single pass down the tree, but before entering (visiting) a node, restructure the tree so that once the key to be deleted is encountered, it can be deleted without triggering the need for any further restructuring
The algorithm below uses the former strategy.
There are two special cases to consider when deleting an element:
The element in an internal node is a separator for its child nodes
Deleting an element may put its node under the minimum number of elements and children
The procedures for these cases are in order below.
Deletion from a leaf node
Search for the value to delete.
If the value is in a leaf node, simply delete it from the node.
If underflow happens, rebalance the tree as described in section "Rebalancing after deletion" below.
Deletion from an internal node
Each element in an internal node acts as a separation value for two subtrees, therefore we need to find a replacement for separation. Note that the largest element in the left subtree is still less than the separator. Likewise, the smallest element in the right subtree is still greater than the separator. Both of those elements are in leaf nodes, and either one can be the new separator for the two subtrees. Algorithmically described below:
Choose a new separator (either the largest element in the left subtree or the smallest element in the right subtree), remove it from the leaf node it is in, and replace the element to be deleted with the new separator.
The previous step deleted an element (the new separator) from a leaf node. If that leaf node is now deficient (has fewer than the required number of nodes), then rebalance the tree starting from the leaf node.
Rebalancing after deletion
Rebalancing starts from a leaf and proceeds toward the root until the tree is balanced. If deleting an element from a node has brought it under the minimum size, then some elements must be redistributed to bring all nodes up to the minimum. Usually, the redistribution involves moving an element from a sibling node that has more than the minimum number of nodes. That redistribution operation is called a rotation. If no sibling can spare an element, then the deficient node must be merged with a sibling. The merge causes the parent to lose a separator element, so the parent may become deficient and need rebalancing. The merging and rebalancing may continue all the way to the root. Since the minimum element count doesn't apply to the root, making the root be the only deficient node is not a problem. The algorithm to rebalance the tree is as follows:
If the deficient node's right sibling exists and has more than the minimum number of elements, then rotate left
Copy the separator from the parent to the end of the deficient node (the separator moves down; the deficient node now has the minimum number of elements)
Replace the separator in the parent with the first element of the right sibling (right sibling loses one node but still has at least the minimum number of elements)
The tree is now balanced
Otherwise, if the deficient node's left sibling exists and has more than the minimum number of elements, then rotate right
Copy the separator from the parent to the start of the deficient node (the separator moves down; deficient node now has the minimum number of elements)
Replace the separator in the parent with the last element of the left sibling (left sibling loses one node but still has at least the minimum number of elements)
The tree is now balanced
Otherwise, if both immediate siblings have only the minimum number of elements, then merge with a sibling sandwiching their separator taken off from their parent
Copy the separator to the end of the left node (the left node may be the deficient node or it may be the sibling with the minimum number of elements)
Move all elements from the right node to the left node (the left node now has the maximum number of elements, and the right node – empty)
Remove the separator from the parent along with its empty right child (the parent loses an element)
If the parent is the root and now has no elements, then free it and make the merged node the new root (tree becomes shallower)
Otherwise, if the parent has fewer than the required number of elements, then rebalance the parent
Note: The rebalancing operations are different for B+ trees (e.g., rotation is different because parent has copy of the key) and B*-tree (e.g., three siblings are merged into two siblings).
Sequential access
While freshly loaded databases tend to have good sequential behaviour, this behaviour becomes increasingly difficult to maintain as a database grows, resulting in more random I/O and performance challenges.
Initial construction
A common special case is adding a large amount of pre-sorted data into an initially empty B-tree. While it is quite possible to simply perform a series of successive inserts, inserting sorted data results in a tree composed almost entirely of half-full nodes. Instead, a special "bulk loading" algorithm can be used to produce a more efficient tree with a higher branching factor.
When the input is sorted, all insertions are at the rightmost edge of the tree, and in particular any time a node is split, we are guaranteed that no more insertions will take place in the left half. When bulk loading, we take advantage of this, and instead of splitting overfull nodes evenly, split them as unevenly as possible: leave the left node completely full and create a right node with zero keys and one child (in violation of the usual B-tree rules).
At the end of bulk loading, the tree is composed almost entirely of completely full nodes; only the rightmost node on each level may be less than full. Because those nodes may also be less than half full, to re-establish the normal B-tree rules, combine such nodes with their (guaranteed full) left siblings and divide the keys to produce two nodes at least half full. The only node which lacks a full left sibling is the root, which is permitted to be less than half full.
In filesystems
In addition to its use in databases, the B-tree (or ) is also used in filesystems to allow quick random access to an arbitrary block in a particular file. The basic problem is turning the file block address into a disk block address.
Some operating systems require the user to allocate the maximum size of the file when the file is created. The file can then be allocated as contiguous disk blocks. In that case, to convert the file block address into a disk block address, the operating system simply adds the file block address to the address of the first disk block constituting the file. The scheme is simple, but the file cannot exceed its created size.
Other operating systems allow a file to grow. The resulting disk blocks may not be contiguous, so mapping logical blocks to physical blocks is more involved.
MS-DOS, for example, used a simple File Allocation Table (FAT). The FAT has an entry for each disk block, and that entry identifies whether its block is used by a file and if so, which block (if any) is the next disk block of the same file. So, the allocation of each file is represented as a linked list in the table. In order to find the disk address of file block , the operating system (or disk utility) must sequentially follow the file's linked list in the FAT. Worse, to find a free disk block, it must sequentially scan the FAT. For MS-DOS, that was not a huge penalty because the disks and files were small and the FAT had few entries and relatively short file chains. In the FAT12 filesystem (used on floppy disks and early hard disks), there were no more than 4,080 entries, and the FAT would usually be resident in memory. As disks got bigger, the FAT architecture began to confront penalties. On a large disk using FAT, it may be necessary to perform disk reads to learn the disk location of a file block to be read or written.
TOPS-20 (and possibly TENEX) used a 0 to 2 level tree that has similarities to a B-tree. A disk block was 512 36-bit words. If the file fit in a 512 (2^9) word block, then the file directory would point to that physical disk block. If the file fit in 2^18 words, then the directory would point to an aux index; the 512 words of that index would either be NULL (the block isn't allocated) or point to the physical address of the block. If the file fit in 2^27 words, then the directory would point to a block holding an aux-aux index; each entry would either be NULL or point to an aux index. Consequently, the physical disk block for a 2^27 word file could be located in two disk reads and read on the third.
Apple's filesystem HFS+ and APFS, Microsoft's NTFS, AIX (jfs2) and some Linux filesystems, such as Bcachefs, Btrfs and ext4, use B-trees.
B*-trees are used in the HFS and Reiser4 file systems.
DragonFly BSD's HAMMER file system uses a modified B+-tree.
Performance
The depth of a B-tree grows much more slowly with the amount of data than the length of a linked list, which grows linearly. Compared to a skip list, both structures have the same asymptotic performance, but the B-tree scales better for growing n. A T-tree, used in main memory database systems, is similar but more compact.
Variations
Access concurrency
Lehman and Yao showed that all the read locks could be avoided (and thus concurrent access greatly improved) by linking the tree blocks at each level together with a "next" pointer. This results in a tree structure where both insertion and search operations descend from the root to the leaf. Write locks are only required as a tree block is modified. This maximizes access concurrency by multiple users, an important consideration for databases and/or other B-tree-based ISAM storage methods. The cost associated with this improvement is that empty pages cannot be removed from the btree during normal operations. (However, see for various strategies to implement node merging, and source code at.)
United States Patent 5283894, granted in 1994, appears to show a way to use a 'Meta Access Method' to allow concurrent B+ tree access and modification without locks. The technique accesses the tree 'upwards' for both searches and updates by means of additional in-memory indexes that point at the blocks in each level in the block cache. No reorganization for deletes is needed and there are no 'next' pointers in each block as in Lehman and Yao.
Parallel algorithms
Since B-trees are similar in structure to red-black trees, parallel algorithms for red-black trees can be applied to B-trees as well.
Maple tree
A Maple tree is a B-tree developed for use in the Linux kernel to reduce lock contention in virtual memory management.
(a,b)-tree
(a,b)-trees are generalizations of B-trees. B-trees require that each internal node have a minimum of children and a maximum of children, for some preset value of . In contrast, an (a,b)-tree allows the minimum number of children for an internal node to be set arbitrarily low. In an (a,b)-tree, each internal node has between and children, for some preset values of and .
| Mathematics | Data structures and types | null |
4677 | https://en.wikipedia.org/wiki/Binomial%20theorem | Binomial theorem | In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, the power $(x+y)^n$ expands into a polynomial with terms of the form $a x^k y^m$, where the exponents k and m are nonnegative integers satisfying $k + m = n$ and the coefficient a of each term is a specific positive integer depending on n and k. For example, for n = 4,
$(x+y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4.$
The coefficient in each term is known as the binomial coefficient or (the two have the same value). These coefficients for varying and can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where gives the number of different combinations (i.e. subsets) of elements that can be chosen from an -element set. Therefore is usually pronounced as " choose ".
Statement
According to the theorem, the expansion of any nonnegative integer power n of the binomial x + y is a sum of the form
$(x+y)^{n} = \binom{n}{0}x^{n}y^{0} + \binom{n}{1}x^{n-1}y^{1} + \binom{n}{2}x^{n-2}y^{2} + \cdots + \binom{n}{n}x^{0}y^{n},$
where each $\binom{n}{k}$ is a positive integer known as a binomial coefficient, defined as
$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$
This formula is also referred to as the binomial formula or the binomial identity. Using summation notation, it can be written more concisely as
$(x+y)^{n} = \sum_{k=0}^{n}\binom{n}{k}x^{n-k}y^{k} = \sum_{k=0}^{n}\binom{n}{k}x^{k}y^{n-k}.$
The final expression follows from the previous one by the symmetry of and in the first expression, and by comparison it follows that the sequence of binomial coefficients in the formula is symmetrical,
A simple variant of the binomial formula is obtained by substituting 1 for y, so that it involves only a single variable. In this form, the formula reads
$(1+x)^{n} = \sum_{k=0}^{n}\binom{n}{k}x^{k}.$
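A quick computational check of the statement (illustrative Python, not part of the article) expands (x + y)^n term by term with math.comb and compares the result against direct exponentiation at sample values:

```python
from math import comb

def binomial_expand(x, y, n):
    """Evaluate (x + y)**n term by term via the binomial theorem."""
    return sum(comb(n, k) * x**(n - k) * y**k for k in range(n + 1))

print(binomial_expand(3, 5, 4))   # 4096
print((3 + 5) ** 4)               # 4096, the same value
```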
Examples
The first few cases of the binomial theorem are:
In general, for the expansion of $(x+y)^n$ on the right side in the nth row (numbered so that the top row is the 0th row):
the exponents of x in the terms are n, n−1, ..., 2, 1, 0 (the last term implicitly contains $x^0 = 1$);
the exponents of y in the terms are 0, 1, 2, ..., n−1, n (the first term implicitly contains $y^0 = 1$);
the coefficients form the nth row of Pascal's triangle;
before combining like terms, there are $2^n$ terms in the expansion (not shown);
after combining like terms, there are n + 1 terms, and their coefficients sum to $2^n$.
An example illustrating the last two points: with .
A simple example with a specific positive value of :
A simple example with a specific negative value of :
Geometric explanation
For positive values of a and b, the binomial theorem with n = 2 is the geometrically evident fact that a square of side a + b can be cut into a square of side a, a square of side b, and two rectangles with sides a and b. With n = 3, the theorem states that a cube of side a + b can be cut into a cube of side a, a cube of side b, three a×a×b rectangular boxes, and three a×b×b rectangular boxes.
In calculus, this picture also gives a geometric proof of the derivative if one sets and interpreting as an infinitesimal change in , then this picture shows the infinitesimal change in the volume of an -dimensional hypercube, where the coefficient of the linear term (in ) is the area of the faces, each of dimension :
Substituting this into the definition of the derivative via a difference quotient and taking limits means that the higher order terms, and higher, become negligible, and yields the formula interpreted as
"the infinitesimal rate of change in volume of an -cube as side length varies is the area of of its -dimensional faces".
If one integrates this picture, which corresponds to applying the fundamental theorem of calculus, one obtains Cavalieri's quadrature formula, the integral – see proof of Cavalieri's quadrature formula for details.
Binomial coefficients
The coefficients that appear in the binomial expansion are called binomial coefficients. These are usually written and pronounced " choose ".
Formulas
The coefficient of is given by the formula
which is defined in terms of the factorial function . Equivalently, this formula can be written
with factors in both the numerator and denominator of the fraction. Although this formula involves a fraction, the binomial coefficient is actually an integer.
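Both forms of the formula are easy to compare in code; the product form uses k factors in the numerator and avoids the large factorial of n. A hedged sketch using the standard library:

import math

def binom_factorial(n, k):
    return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

def binom_product(n, k):
    """k factors in the numerator (n, n-1, ..., n-k+1) over k factors (k!)."""
    num = 1
    for i in range(k):
        num *= n - i
    return num // math.factorial(k)

for n in range(10):
    for k in range(n + 1):
        assert binom_factorial(n, k) == binom_product(n, k) == math.comb(n, k)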
Combinatorial interpretation
The binomial coefficient can be interpreted as the number of ways to choose elements from an -element set (a combination). This is related to binomials for the following reason: if we write as a product
then, according to the distributive law, there will be one term in the expansion for each choice of either or from each of the binomials of the product. For example, there will only be one term , corresponding to choosing from each binomial. However, there will be several terms of the form , one for each way of choosing exactly two binomials to contribute a . Therefore, after combining like terms, the coefficient of will be equal to the number of ways to choose exactly elements from an -element set.
Proofs
Combinatorial proof
Expanding yields the sum of the products of the form where each is or . Rearranging factors shows that each product equals for some between and . For a given , the following are proved equal in succession:
the number of terms equal to in the expansion
the number of -character strings having in exactly positions
the number of -element subsets of
either by definition, or by a short combinatorial argument if one is defining as
This proves the binomial theorem.
Example
The coefficient of in
equals because there are three strings of length 3 with exactly two 's, namely,
corresponding to the three 2-element subsets of , namely,
where each subset specifies the positions of the in a corresponding string.
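The counting argument can be replayed mechanically: enumerate one choice of x or y per factor, group the resulting strings by the number of y's, and the counts are exactly the binomial coefficients. A small sketch (the exponent 3 mirrors the example above):

import itertools, math
from collections import Counter

n = 3
# One term per way of choosing 'x' or 'y' from each of the n binomial factors.
terms = Counter(s.count('y') for s in map(''.join, itertools.product('xy', repeat=n)))

for k in range(n + 1):
    assert terms[k] == math.comb(n, k)   # coefficient of x^(n-k) y^k
print(dict(terms))   # {0: 1, 1: 3, 2: 3, 3: 1}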
Inductive proof
Induction yields another proof of the binomial theorem. When , both sides equal , since and Now suppose that the equality holds for a given ; we will prove it for . For , let denote the coefficient of in the polynomial . By the inductive hypothesis, is a polynomial in and such that is if , and otherwise. The identity
shows that is also a polynomial in and , and
since if , then and . Now, the right hand side is
by Pascal's identity. On the other hand, if , then and , so we get . Thus
which is the inductive hypothesis with substituted for and so completes the inductive step.
Generalizations
Newton's generalized binomial theorem
Around 1665, Isaac Newton generalized the binomial theorem to allow real exponents other than nonnegative integers. (The same generalization also applies to complex exponents.) In this generalization, the finite sum is replaced by an infinite series. In order to do this, one needs to give meaning to binomial coefficients with an arbitrary upper index, which cannot be done using the usual formula with factorials. However, for an arbitrary number , one can define
where is the Pochhammer symbol, here standing for a falling factorial. This agrees with the usual definitions when is a nonnegative integer. Then, if and are real numbers with , and is any complex number, one has
When is a nonnegative integer, the binomial coefficients for are zero, so this equation reduces to the usual binomial theorem, and there are at most nonzero terms. For other values of , the series typically has infinitely many nonzero terms.
For example, gives the following series for the square root:
Taking , the generalized binomial series gives the geometric series formula, valid for :
More generally, with , we have for :
So, for instance, when ,
Replacing with yields:
So, for instance, when , we have for :
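The generalized coefficients and the resulting series are easy to evaluate numerically; inside the radius of convergence a modest truncation already matches the target function closely. A hedged sketch (the exponent 1/2, the point x = 0.2, and the 40-term truncation are arbitrary choices):

import math

def gen_binom(r, k):
    """Generalized binomial coefficient r(r-1)...(r-k+1) / k! for arbitrary real r."""
    num = 1.0
    for i in range(k):
        num *= r - i
    return num / math.factorial(k)

def binom_series(r, x, terms=40):
    """Partial sum of the generalized binomial series for (1 + x)^r, |x| < 1."""
    return sum(gen_binom(r, k) * x**k for k in range(terms))

x = 0.2
assert abs(binom_series(0.5, x) - math.sqrt(1 + x)) < 1e-12    # square-root series
assert abs(binom_series(-1.0, x) - 1 / (1 + x)) < 1e-12        # geometric-type series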
Further generalizations
The generalized binomial theorem can be extended to the case where and are complex numbers. For this version, one should again assume and define the powers of and using a holomorphic branch of log defined on an open disk of radius centered at . The generalized binomial theorem is valid also for elements and of a Banach algebra as long as , and is invertible, and .
A version of the binomial theorem is valid for the following Pochhammer symbol-like family of polynomials: for a given real constant , define and
for Then
The case recovers the usual binomial theorem.
More generally, a sequence of polynomials is said to be of binomial type if
for all ,
, and
for all , , and .
An operator on the space of polynomials is said to be the basis operator of the sequence if and for all . A sequence is binomial if and only if its basis operator is a Delta operator. Writing for the shift by operator, the Delta operators corresponding to the above "Pochhammer" families of polynomials are the backward difference for , the ordinary derivative for , and the forward difference for .
Multinomial theorem
The binomial theorem can be generalized to include powers of sums with more than two terms. The general version is
where the summation is taken over all sequences of nonnegative integer indices through such that the sum of all is . (For each term in the expansion, the exponents must add up to ). The coefficients are known as multinomial coefficients, and can be computed by the formula
Combinatorially, the multinomial coefficient counts the number of different ways to partition an -element set into disjoint subsets of sizes .
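The multinomial coefficient can be computed from the factorial formula or, equivalently, as a product of ordinary binomial coefficients (choose the first block, then the next from what remains, and so on). A short sketch illustrating both (the sample block sizes 2, 3, 4 are arbitrary):

import math

def multinomial(*ks):
    """n! / (k1! k2! ... km!) where n = k1 + ... + km."""
    n = sum(ks)
    coeff = math.factorial(n)
    for k in ks:
        coeff //= math.factorial(k)
    return coeff

def multinomial_via_binomials(*ks):
    """Equivalent product of binomial coefficients C(n, k1) C(n-k1, k2) ..."""
    remaining, coeff = sum(ks), 1
    for k in ks:
        coeff *= math.comb(remaining, k)
        remaining -= k
    return coeff

assert multinomial(2, 3, 4) == multinomial_via_binomials(2, 3, 4) == 1260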
Multi-binomial theorem
When working in more dimensions, it is often useful to deal with products of binomial expressions. By the binomial theorem this is equal to
This may be written more concisely, by multi-index notation, as
General Leibniz rule
The general Leibniz rule gives the th derivative of a product of two functions in a form similar to that of the binomial theorem:
Here, the superscript indicates the th derivative of a function, . If one sets and , cancelling the common factor of from each term gives the ordinary binomial theorem.
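The rule is straightforward to verify symbolically for particular functions. A hedged sketch using the third-party sympy library (the choice of exp(x)·sin(x) and the order n = 4 are arbitrary):

import sympy as sp

x = sp.symbols('x')
f, g, n = sp.exp(x), sp.sin(x), 4

# Right-hand side of the Leibniz rule: sum of C(n, k) f^(k) g^(n-k).
leibniz = sum(sp.binomial(n, k) * sp.diff(f, x, k) * sp.diff(g, x, n - k)
              for k in range(n + 1))

# It agrees with the n-th derivative of the product.
assert sp.simplify(leibniz - sp.diff(f * g, x, n)) == 0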
History
Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent . Greek mathematician Diophantus cubed various binomials, including . Indian mathematician Aryabhata's method for finding cube roots, from around 510 AD, suggests that he knew the binomial formula for exponent .
Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting objects out of without replacement (combinations), were of interest to ancient Indian mathematicians. The Jain Bhagavati Sutra (c. 300 BC) describes the number of combinations of philosophical categories, senses, or other things, with correct results up through (probably obtained by listing all possibilities and counting them) and a suggestion that higher combinations could likewise be found. The Chandaḥśāstra by the Indian lyricist Piṅgala (3rd or 2nd century BC) somewhat cryptically describes a method of arranging two types of syllables to form metres of various lengths and counting them; as interpreted and elaborated by Piṅgala's 10th-century commentator Halāyudha, his "method of pyramidal expansion" (meru-prastāra) for counting metres is equivalent to Pascal's triangle. Varāhamihira (6th century AD) describes another method for computing combination counts by adding numbers in columns. By the 9th century at the latest, Indian mathematicians learned to express this as a product of fractions , and clear statements of this rule can be found in Śrīdhara's Pāṭīgaṇita (8th–9th century), Mahāvīra's Gaṇita-sāra-saṅgraha (c. 850), and Bhāskara II's Līlāvatī (12th century).
The Persian mathematician al-Karajī (953–1029) wrote a now-lost book containing the binomial theorem and a table of binomial coefficients, often credited as their first appearance.
An explicit statement of the binomial theorem appears in al-Samawʾal's al-Bāhir (12th century), there credited to al-Karajī. Al-Samawʾal algebraically expanded the square, cube, and fourth power of a binomial, each in terms of the previous power, and noted that similar proofs could be provided for higher powers, an early form of mathematical induction. He then provided al-Karajī's table of binomial coefficients (Pascal's triangle turned on its side) up to and a rule for generating them equivalent to the recurrence relation . The Persian poet and mathematician Omar Khayyam was probably familiar with the formula to higher orders, although many of his mathematical works are lost. The binomial expansions of small degrees were known in the 13th century mathematical works of Yang Hui and also Chu Shih-Chieh. Yang Hui attributes the method to a much earlier 11th century text of Jia Xian, although those writings are now also lost.
In Europe, descriptions of the construction of Pascal's triangle can be found as early as Jordanus de Nemore's De arithmetica (13th century). In 1544, Michael Stifel introduced the term "binomial coefficient" and showed how to use them to express in terms of , via "Pascal's triangle". Other 16th century mathematicians including Niccolò Fontana Tartaglia and Simon Stevin also knew of it. 17th-century mathematician Blaise Pascal studied the eponymous triangle comprehensively in his Traité du triangle arithmétique.
By the early 17th century, some specific cases of the generalized binomial theorem, such as for , can be found in the work of Henry Briggs' Arithmetica Logarithmica (1624). Isaac Newton is generally credited with discovering the generalized binomial theorem, valid for any real exponent, in 1665, inspired by John Wallis's Arithmetica Infinitorum and his method of interpolation. A logarithmic version of the theorem for fractional exponents was discovered independently by James Gregory, who wrote down his formula in 1670.
Applications
Multiple-angle identities
For the complex numbers the binomial theorem can be combined with de Moivre's formula to yield multiple-angle formulas for the sine and cosine. According to De Moivre's formula,
Using the binomial theorem, the expression on the right can be expanded, and then the real and imaginary parts can be taken to yield formulas for and . For example, since
But De Moivre's formula identifies the left side with , so
which are the usual double-angle identities. Similarly, since
De Moivre's formula yields
In general,
and
There are also similar formulas using Chebyshev polynomials.
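These identities can be verified numerically by expanding (cos θ + i sin θ)^n with the binomial theorem and comparing the real and imaginary parts with cos(nθ) and sin(nθ). A small sketch (the angle and the range of n are arbitrary):

import math

theta = 0.7
for n in range(1, 6):
    # Binomial expansion of (cos(theta) + i*sin(theta))^n.
    z = sum(math.comb(n, k) * math.cos(theta) ** (n - k) * (1j * math.sin(theta)) ** k
            for k in range(n + 1))
    # De Moivre: the real part is cos(n*theta), the imaginary part is sin(n*theta).
    assert math.isclose(z.real, math.cos(n * theta), abs_tol=1e-12)
    assert math.isclose(z.imag, math.sin(n * theta), abs_tol=1e-12)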
Series for e
The number is often defined by the formula
Applying the binomial theorem to this expression yields the usual infinite series for . In particular:
The th term of this sum is
As , the rational expression on the right approaches , and therefore
This indicates that can be written as a series:
Indeed, since each term of the binomial expansion is an increasing function of , it follows from the monotone convergence theorem for series that the sum of this infinite series is equal to .
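Numerically, the partial sums of the series converge to e far faster than the defining limit does. A small sketch (the cut-offs are arbitrary):

import math

limit_estimate = (1 + 1 / 1_000_000) ** 1_000_000                # slow: error about e/(2n)
series_estimate = sum(1 / math.factorial(k) for k in range(20))  # fast: error below 1/20!

assert abs(series_estimate - math.e) < 1e-12
assert abs(limit_estimate - math.e) < 1e-5
print(limit_estimate, series_estimate, math.e)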
Probability
The binomial theorem is closely related to the probability mass function of the negative binomial distribution. The probability of a (countable) collection of independent Bernoulli trials with probability of success all not happening is
An upper bound for this quantity is
In abstract algebra
The binomial theorem is valid more generally for two elements and in a ring, or even a semiring, provided that . For example, it holds for two matrices, provided that those matrices commute; this is useful in computing powers of a matrix.
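Commutativity is exactly what allows the cross terms to be collected, and it can be demonstrated directly for matrices, for instance with a matrix A and a polynomial in A (which always commutes with A). A hedged sketch using numpy (the sample matrix and exponent are arbitrary):

import numpy as np
from math import comb
from numpy.linalg import matrix_power

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2 * A + A @ A            # a polynomial in A, so A @ B == B @ A

n = 3
lhs = matrix_power(A + B, n)
rhs = sum(comb(n, k) * matrix_power(A, k) @ matrix_power(B, n - k) for k in range(n + 1))

assert np.allclose(A @ B, B @ A)   # the commutation hypothesis
assert np.allclose(lhs, rhs)       # binomial theorem for commuting matrices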
The binomial theorem can be stated by saying that the polynomial sequence is of binomial type.
| Mathematics | Elementary algebra | null |
4700 | https://en.wikipedia.org/wiki/Bessel%20function | Bessel function | Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions of Bessel's differential equation
for an arbitrary complex number , which represents the order of the Bessel function. Although and produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of .
The most important cases are when is an integer or half-integer. Bessel functions for integer are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer are obtained when solving the Helmholtz equation in spherical coordinates.
Applications of Bessel functions
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (); in spherical problems, one obtains half-integer orders (). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the radial Schrödinger equation (in spherical and cylindrical coordinates) for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analyzing of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
Definitions
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by and , respectively, rather than and .
Bessel functions of the first kind:
Bessel functions of the first kind, denoted as , are solutions of Bessel's differential equation. For integer or positive , Bessel functions of the first kind are finite at the origin (); while for negative non-integer , Bessel functions of the first kind diverge as approaches zero. It is possible to define the function by times a Maclaurin series (note that need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
where is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by in ; this definition is not used in this article. The Bessel function of the first kind is an entire function if is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large . (The series indicates that is the derivative of , much like is the derivative of ; more generally, the derivative of can be expressed in terms of by the identities below.)
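The series definition translates directly into code, and a modest truncation gives good approximations for small arguments. A minimal sketch using math.gamma (the orders, the argument x = 1, and the 40-term truncation are arbitrary; large arguments would need more terms or the asymptotic forms):

import math

def bessel_j(alpha, x, terms=40):
    """Truncated series: sum over m of (-1)^m / (m! * Gamma(m+alpha+1)) * (x/2)^(2m+alpha)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha)
               for m in range(terms))

# J_0(1) ~ 0.7651976866 and J_1(1) ~ 0.4400505857 (standard tabulated values).
assert abs(bessel_j(0, 1.0) - 0.7651976866) < 1e-9
assert abs(bessel_j(1, 1.0) - 0.4400505857) < 1e-9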
For non-integer , the functions and are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order , the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
Bessel's integrals
Another definition of the Bessel function, for integer values of , is possible using an integral representation:
which is also called the Hansen–Bessel formula.
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for :
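For integer order, the integral representation can be evaluated with elementary numerical quadrature and compared against tabulated values. A hedged sketch using the trapezoidal rule (the orders, the argument, and the grid size are arbitrary):

import math

def bessel_j_integral(n, x, steps=20000):
    """Hansen–Bessel formula: J_n(x) = (1/pi) * integral from 0 to pi of cos(n*t - x*sin(t)) dt."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0   # trapezoidal end-point weights
        total += weight * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

# Compare against tabulated values J_0(1) ~ 0.7651976866 and J_2(1) ~ 0.1149034849.
assert abs(bessel_j_integral(0, 1.0) - 0.7651976866) < 1e-6
assert abs(bessel_j_integral(2, 1.0) - 0.1149034849) < 1e-6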
Relation to hypergeometric series
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
Relation to Laguerre polynomials
In terms of the Laguerre polynomials and arbitrarily chosen parameter , the Bessel function can be expressed as
Bessel functions of the second kind:
The Bessel functions of the second kind, denoted by , occasionally denoted instead by , are solutions of the Bessel differential equation that have a singularity at the origin () and are multivalued. These are sometimes called Weber functions, as they were introduced by , and also Neumann functions after Carl Neumann.
For non-integer , is related to by
In the case of integer order , the function is defined by taking the limit as a non-integer tends to :
If is a nonnegative integer, we have the series
where is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for ):
In the case where : (with being Euler's constant)
is necessary as the second linearly independent solution of the Bessel's equation when is an integer. But has more meaning than that. It can be considered as a "natural" partner of . | Mathematics | Specific functions | null |
4715 | https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem | Boolean satisfiability problem | In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) asks whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is called satisfiable, else unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
SAT is the first problem that was proven to be NP-complete—this is the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists, but this belief has not been proven mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing.
Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving.
Definitions
A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence.
Conjunctive normal form
A literal is either a variable (in which case it is called a positive literal) or the negation of a variable (called a negative literal). A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause).
For example, is a positive literal, is a negative literal, and is a clause. The formula is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively.
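Satisfiability of small formulas like these can be checked by brute force over all 2^n assignments. A hedged sketch in which clauses are lists of signed integers (a positive i for xi, a negative i for ¬xi); the three-clause instance below is the satisfiable example discussed above, as reconstructed from the substitution shown:

import itertools

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments; return a satisfying one as a dict, or None."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1)  -- satisfiable
print(brute_force_sat([[1, -2], [-1, 2, 3], [-1]], 3))
# (a) AND (NOT a)  -- unsatisfiable
print(brute_force_sat([[1], [-1]], 1))   # None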
For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form for some Boolean function R and (ordinary) literals . Different sets of allowed Boolean functions lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is.
Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields
;
while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2n clauses of n variables.
However, with use of the Tseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula.
Complexity
SAT was the first problem known to be NP-complete, as proved by Stephen Cook at the University of Toronto in 1971 and independently by Leonid Levin at the Russian Academy of Sciences in 1973. Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, then the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments.
NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See §Algorithms for solving SAT below.
3-satisfiability
Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause to a conjunction of clauses
where are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original; that is, the length growth is polynomial.
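The clause-splitting transformation is mechanical: a long clause is chained through fresh variables so that every resulting clause has at most three literals. A hedged sketch (literals are signed integers, and the numbering of fresh variables is an implementation choice):

def to_3sat(clauses, n_vars):
    """Split every clause with more than 3 literals into an equisatisfiable chain."""
    out, next_var = [], n_vars
    for clause in clauses:
        while len(clause) > 3:
            next_var += 1                        # fresh variable z
            out.append(clause[:2] + [next_var])  # (l1 OR l2 OR z)
            clause = [-next_var] + clause[2:]    # (NOT z OR l3 OR ... OR lk)
        out.append(clause)
    return out, next_var

clauses, n = to_3sat([[1, 2, 3, 4, 5]], 5)
print(clauses)   # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]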
3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting literals from different clauses; see the picture. The graph has a c-clique if and only if the formula is satisfiable.
There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)n where n is the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT.
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any ) in time (that is, fundamentally faster than exponential in n).
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm. They identified a phase transition region from almost-certainly-satisfiable to almost-certainly-unsatisfiable formulas at a clauses-to-variables ratio of about 4.26.
3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, for any k ≥ 3 this problem can be neither easier than 3-SAT nor harder than SAT; since the latter two are NP-complete, so is k-SAT.
Some authors restrict k-SAT to CNF formulas with exactly k literals. This does not lead to a different complexity class either, as each clause with j < k literals can be padded with fixed dummy variables to . After padding all clauses, 2k–1 extra clauses must be appended to ensure that only can lead to a satisfying assignment. Since k does not depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in .
Special cases of SAT
Conjunctive normal form
Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form.
Disjunctive normal form
SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; to obtain an example, exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms.
Exactly-1 3-satisfiability
A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT.
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.
Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE, see picture (left). Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n + 6m variables. Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z), see picture (right).
Not-all-equal 3-satisfiability
Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.
Linear SAT
A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete.
2-satisfiability
SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in literals are changed to XOR operations, then the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L.
Horn-satisfiability
The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time.
Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y; that is, if x1,...,xn are all TRUE, then y must be TRUE as well.
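Unit propagation on Horn formulas amounts to repeatedly firing these implications: start from the facts, mark each forced variable TRUE, and finally check that no clause without a positive literal has had all of its variables forced. A hedged sketch (clauses are represented here as a head variable, or None, plus a list of negated body variables; it returns the minimal model when one exists):

def horn_sat(clauses):
    """clauses: list of (head, body) with head a variable or None, and body the list of
    variables that appear negated.  Returns the set of variables forced TRUE (the minimal
    model), or None if the formula is unsatisfiable."""
    true = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head is not None and head not in true and all(v in true for v in body):
                true.add(head)      # the implication body -> head fires
                changed = True
    for head, body in clauses:
        if head is None and all(v in true for v in body):
            return None             # an all-negative clause is violated
    return true

# (x1) AND (NOT x1 OR x2) AND (NOT x2 OR NOT x3): minimal model is {x1, x2}
print(horn_sat([(1, []), (2, [1]), (None, [2, 3])]))   # {1, 2}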
A generalization of the class of Horn formulas is that of renameable-Horn formulae, which is the set of formulas that can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.
XOR-satisfiability
Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators. This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination; see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT; see the picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable.
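The recasting as linear algebra is direct: each XOR clause becomes one equation over GF(2), with every negated literal flipping the right-hand side, and Gaussian elimination either produces an assignment or exposes an inconsistent row. A hedged sketch (clauses are lists of signed integers asserting that their literals XOR to TRUE; free variables default to FALSE):

def xor_sat(clauses, n_vars):
    """Solve XOR-SAT by Gaussian elimination over GF(2)."""
    rows = []
    for clause in clauses:
        coeffs, rhs = [0] * n_vars, 1
        for lit in clause:
            coeffs[abs(lit) - 1] ^= 1
            if lit < 0:
                rhs ^= 1            # NOT x = x XOR 1 moves a 1 to the right-hand side
        rows.append((coeffs, rhs))

    pivot_of_col = {}
    for coeffs, rhs in rows:
        coeffs = coeffs[:]
        for col, (pc, pr) in pivot_of_col.items():
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                rhs ^= pr
        if any(coeffs):
            pivot_of_col[coeffs.index(1)] = (coeffs, rhs)
        elif rhs:
            return None              # reduced to 0 = 1: unsatisfiable

    assignment = [False] * n_vars
    # Back-substitute pivots in reverse column order; free variables stay FALSE.
    for col in sorted(pivot_of_col, reverse=True):
        coeffs, rhs = pivot_of_col[col]
        val = rhs
        for j in range(col + 1, n_vars):
            if coeffs[j]:
                val ^= assignment[j]
        assignment[col] = bool(val)
    return assignment

# (x1 XOR x2 XOR NOT x3) AND (x2 XOR x3)
print(xor_sat([[1, 2, -3], [2, 3]], 3))   # [True, True, False] satisfies both clauses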
Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT.
Schaefer's dichotomy theorem
The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulas; each restriction states a specific form for all subformulas: for example, only binary clauses can be subformulas in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulas, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.
The following table summarizes some common variants of SAT.
Extensions of SAT
An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints.
The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ; it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If any number of both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments:
MAJ-SAT asks if at least half of all assignments make the formula TRUE. It is known to be complete for PP, a probabilistic class. Surprisingly, MAJ-kSAT is demonstrated to be in P for every finite integer k.
#SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a decision problem, and is #P-complete.
UNIQUE SAT is the problem of determining whether a formula has exactly one assignment. It is complete for US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine that accepts when there is exactly one nondeterministic accepting path and rejects otherwise.
UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas having at most one satisfying assignment. The problem is also called USAT. A solving algorithm for UNAMBIGUOUS-SAT is allowed to exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although this problem seems easier, Valiant and Vazirani have shown that if there is a practical (i.e. randomized polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily.
MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP.
WMSAT is the problem of finding an assignment of minimum weight that satisfies a monotone Boolean formula (i.e. a formula without any negation). Weights of propositional variables are given in the input of the problem. The weight of an assignment is the sum of the weights of its true variables. That problem is NP-complete (see Th. 1 of ).
Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming.
Finding a satisfying assignment
While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, that is, Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ.
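This self-reduction is easy to demonstrate with any decision procedure standing in for the SAT oracle; below, a brute-force test plays that role. A hedged sketch (clauses are lists of signed integers, as in the earlier examples):

import itertools

def is_satisfiable(clauses, n_vars):
    """Decision oracle (brute force here; any correct SAT decider would do)."""
    return any(all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause) for clause in clauses)
               for bits in itertools.product([False, True], repeat=n_vars))

def substitute(clauses, var, value):
    """Simplify the clauses under var := value; satisfied clauses drop out."""
    new = []
    for clause in clauses:
        if (var if value else -var) in clause:
            continue                                   # clause already satisfied
        new.append([lit for lit in clause if abs(lit) != var])
    return new

def find_assignment(clauses, n_vars):
    if not is_satisfiable(clauses, n_vars):
        return None
    assignment = {}
    for var in range(1, n_vars + 1):                   # n + 1 oracle calls in total
        trial = substitute(clauses, var, True)
        if is_satisfiable(trial, n_vars):
            assignment[var], clauses = True, trial
        else:
            assignment[var], clauses = False, substitute(clauses, var, False)
    return assignment

print(find_assignment([[1, -2], [-1, 2, 3], [-1]], 3))   # {1: False, 2: False, 3: True}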
This property is used in several theorems in complexity theory:
NP ⊆ P/poly ⇒ PH = Σ2 (Karp–Lipton theorem)
NP ⊆ BPP ⇒ NP = RP
P = NP ⇒ FP = FNP
Algorithms for solving SAT
Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses). Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, planning, and scheduling problems, and so on. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox.
Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (or DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques.
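The core of DPLL fits in a few lines: propagate unit clauses, then branch on a remaining variable and backtrack on conflict. A hedged teaching sketch only (no clause learning, heuristics, or time-outs, unlike production solvers):

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None.
    Clauses are lists of signed integers."""
    assignment = dict(assignment or {})

    # Unit propagation: repeatedly assign the single literal of any unit clause.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                       # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                    # conflict: empty clause
            new_clauses.append(reduced)
        clauses = new_clauses

    if not clauses:
        return assignment                      # all clauses satisfied
    var = abs(clauses[0][0])                   # naive branching choice
    for value in (True, False):
        lit = var if value else -var
        result = dpll(clauses + [[lit]], assignment)
        if result is not None:
            return result
    return None

print(dpll([[1, -2], [-1, 2, 3], [-1]]))       # {1: False, 2: False} (x3 is free)
print(dpll([[1], [-1]]))                       # None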
SAT solvers are developed and compared in SAT-solving contests. Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others.
| Mathematics | Complexity theory | null |
4746 | https://en.wikipedia.org/wiki/Plague%20%28disease%29 | Plague (disease) | Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die.
The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum.
Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%.
Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe.
Signs and symptoms
There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph nodes if they have bubonic plague. For those with pneumonic plague, symptoms may (or may not) include a cough, pain in the chest, and haemoptysis.
Bubonic plague
When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. Y. pestis can reproduce inside cells, so even if phagocytosed, they can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade.
Y. pestis spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic.
If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing secondary septicemic plague and if the lungs are seeded, it can cause secondary pneumonic plague.
Septicemic plague
Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), causing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation/perfusion to that tissue) from the clots. DIC results in depletion of the body's clotting resources so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause red and/or black patchy rash and hemoptysis/hematemesis (coughing up/ vomiting of blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes white in the centre. Untreated, the septicemic plague is usually fatal. Early treatment with antibiotics reduces the mortality rate to between 4 and 15 per cent.
Pneumonic plague
The pneumonic form of plague arises from infection of the lungs. It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from several other respiratory illnesses; they include headache, weakness, and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days; in untreated cases, mortality is nearly 100%.
Cause
Transmission of Y. pestis to an uninfected individual is possible by any of the following means:
droplet contact – coughing or sneezing on another person
direct physical contact – touching an infected person, including sexual contact
indirect contact – usually by touching soil contamination or a contaminated surface
airborne transmission – if the microorganism can remain in the air for long periods
fecal-oral transmission – usually from contaminated food or water sources
vector borne transmission – carried by insects or other animals.
Yersinia pestis circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55° N and 40° S.
Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease in the fleas (Xenopsylla cheopis) that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that itself has been infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently, the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents or a rise in the rodent population.
A 21st-century study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales – which isolated itself during the outbreak, facilitating modern study – found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much larger proportion than previously thought.
Diagnosis
Symptoms of plague are usually non-specific, so laboratory testing is required to definitively diagnose plague. Y. pestis can be identified both under a microscope and by culturing a sample, and culture is used as the reference standard to confirm that a person has plague. The sample can be obtained from blood, mucus (sputum), or aspirate extracted from inflamed lymph nodes (buboes). If a person receives antibiotics before a sample is taken, if there is a delay in transporting the sample to a laboratory, or if the sample is poorly stored, false negative results are possible.
Polymerase chain reaction (PCR) may also be used to diagnose plague, by detecting the presence of bacterial genes such as the pla gene (plasminogen activator) and the caf1 gene (F1 capsule antigen). PCR testing requires a very small sample and is effective for both live and dead bacteria. For this reason, if a person receives antibiotics before a sample is collected for laboratory testing, they may have a false negative culture but a positive PCR result.
Blood tests to detect antibodies against Y. pestis can also be used to diagnose plague, however, this requires taking blood samples at different periods to detect differences between the acute and convalescent phases of F1 antibody titres.
In 2020, a study was released on rapid diagnostic tests that detect the F1 capsule antigen (F1RDT) in samples of sputum or bubo aspirate. Results show the F1RDT rapid diagnostic test can be used for people with suspected pneumonic or bubonic plague but not for asymptomatic people. F1RDT may be useful in providing a fast result for prompt treatment and a fast public health response, as studies suggest that it is highly sensitive for both pneumonic and bubonic plague. However, both positive and negative rapid-test results need to be confirmed to establish or reject the diagnosis of a confirmed case of plague, and the result needs to be interpreted within the epidemiological context: study findings indicate that although 40 out of 40 people who had the plague in a population of 1000 were correctly diagnosed, 317 people were falsely diagnosed as positive.
Prevention
Vaccination
Bacteriologist Waldemar Haffkine developed the first plague vaccine in 1897. He conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay between 1897 and 1925, reducing the plague mortality by 50–85%.
Since human plague is rare in most parts of the world as of 2023, routine vaccination is not needed other than for those at particularly high risk of exposure, nor for people living in areas with enzootic plague, meaning it occurs at regular, predictable rates in populations and specific areas, such as the western United States. It is not even indicated for most travellers to countries with known recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The United States CDC thus only recommends vaccination for (1) all laboratory and field personnel who are working with Y. pestis organisms resistant to antimicrobials; (2) people engaged in aerosol experiments with Y. pestis; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine.
Early diagnosis
Diagnosing plague early leads to a decrease in transmission or spread of the disease.
Prophylaxis
Pre-exposure prophylaxis for first responders and health care providers who will care for patients with pneumonic plague is not considered necessary as long as standard and droplet precautions can be maintained. In cases of surgical mask shortages, patient overcrowding, poor ventilation in hospital wards, or other crises, pre-exposure prophylaxis might be warranted if sufficient supplies of antimicrobials are available.
Postexposure prophylaxis should be considered for people who had close (<6 feet), sustained contact with a patient with pneumonic plague and were not wearing adequate personal protective equipment. Antimicrobial postexposure prophylaxis also can be considered for laboratory workers accidentally exposed to infectious materials and people who had close (<6 feet) or direct contact with infected animals, such as veterinary staff, pet owners, and hunters.
Specific recommendations on pre- and post-exposure prophylaxis are available in the clinical guidelines on treatment and prophylaxis of plague published in 2021.
Treatments
If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. Guidelines on treatment and prophylaxis of plague were published by the Centers for Disease Control and Prevention in 2021.
The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995. Further outbreaks in Madagascar were reported in November 2014 and October 2017.
Epidemiology
Globally about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century which resulted in more than 50 million dead. In recent years, cases have been distributed between small seasonal outbreaks which occur primarily in Madagascar, and sporadic outbreaks or isolated cases in endemic areas.
In 2022, the possible origin of all modern strains of Yersinia pestis was identified in DNA from human remains in three graves in Kyrgyzstan, dated to 1338 and 1339. The siege of Caffa in Crimea in 1346 is known to have been the first outbreak involving the descendant strains, which later spread across Europe. Comparing the sequenced DNA with other ancient and modern strains yields a family tree of the bacterium. The bacteria affecting marmots in Kyrgyzstan today are closest to the strain found in the graves, suggesting this is also the location where plague transferred from animals to humans.
Biological weapon
The plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe detail the use of infected animal carcasses, such as cows or horses, and human carcasses, by the Xiongnu/Huns, Mongols, Turks and other groups, to contaminate enemy water supplies. Han dynasty general Huo Qubing is recorded to have died of such contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege.
In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Jani Beg. After a protracted siege during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the Black Death via their ships into the south of Europe, possibly explaining its rapid spread.
During World War II, the Japanese Army developed weaponized plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied, some by dissection and others by vivisection while still conscious. Members of the unit such as Shiro Ishii were exempted from the Tokyo tribunal by Douglas MacArthur, but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949, during which some admitted to having spread bubonic plague within a radius around the city of Changde.
Ishii developed bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes; using a ceramic rather than metal casing for the warhead overcame the problem of the explosive killing the infected animals and insects. While no records survive of actual use of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII.
After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat.
The plague can be easily treated with antibiotics. Some countries, such as the United States, have large supplies on hand if such an attack should occur, making the threat less severe.
| Biology and health sciences | Bacterial infections | Health |
4751 | https://en.wikipedia.org/wiki/Bacillus | Bacillus | Bacillus (Latin "stick") is a genus of Gram-positive, rod-shaped bacteria, a member of the phylum Bacillota, with 266 named species. The term is also used to describe the shape (rod) of other so-shaped bacteria; and the plural Bacilli is the name of the class of bacteria to which this genus belongs. Bacillus species can be either obligate aerobes which are dependent on oxygen, or facultative anaerobes which can survive in the absence of oxygen. Cultured Bacillus species test positive for the enzyme catalase if oxygen has been used or is present.
Bacillus species can reduce themselves to oval endospores and can remain in this dormant state for years. The endospore of one species from Morocco is reported to have survived being heated to 420 °C. Endospore formation is usually triggered by a lack of nutrients: the bacterium divides within its cell wall, and one side then engulfs the other. Endospores are not true spores (i.e., not offspring). Endospore formation originally defined the genus, but not all such species are closely related, and many species have been moved to other genera of the Bacillota. Only one endospore is formed per cell. The spores are resistant to heat, cold, radiation, desiccation, and disinfectants. Bacillus anthracis needs oxygen to sporulate; this constraint has important consequences for epidemiology and control. In vivo, B. anthracis produces a polypeptide (polyglutamic acid) capsule that protects it from phagocytosis. The genera Bacillus and Clostridium constitute the family Bacillaceae. Species are identified by using morphologic and biochemical criteria. Because the spores of many Bacillus species are resistant to heat, radiation, disinfectants, and desiccation, they are difficult to eliminate from medical and pharmaceutical materials and are a frequent cause of contamination. The spores are also resistant to chemicals such as antibiotics, and this resistance allows them to persist for many years, especially in controlled environments. Bacillus species are well known in the food industries as troublesome spoilage organisms.
Ubiquitous in nature, Bacillus includes symbiotic (sometimes referred to as endophytes) as well as independent species. Two species are medically significant: B. anthracis causes anthrax; and B. cereus causes food poisoning.
Many species of Bacillus can produce copious amounts of enzymes, which are used in various industries, such as in the production of alpha amylase used in starch hydrolysis and the protease subtilisin used in detergents. B. subtilis is a valuable model for bacterial research. Some Bacillus species can synthesize and secrete lipopeptides, in particular surfactins and mycosubtilins. Bacillus species are also found in marine sponges. Marine sponge-associated Bacillus subtilis strains WS1A and YBS29 can synthesize several antimicrobial peptides, and these strains can confer disease resistance on Labeo rohita.
Structure
Cell wall
The cell wall of Bacillus is a structure on the outside of the cell that forms the second barrier between the bacterium and the environment, and at the same time maintains the rod shape and withstands the pressure generated by the cell's turgor. The cell wall is made of teichoic and teichuronic acids. B. subtilis is the first bacterium for which the role of an actin-like cytoskeleton in cell shape determination and peptidoglycan synthesis was identified and for which the entire set of peptidoglycan-synthesizing enzymes was localized. The role of the cytoskeleton in shape generation and maintenance is important.
Bacillus species are rod-shaped, endospore-forming aerobic or facultatively anaerobic, Gram-positive bacteria; in some species cultures may turn Gram-negative with age. The many species of the genus exhibit a wide range of physiologic abilities that allow them to live in every natural environment. Only one endospore is formed per cell. The spores are resistant to heat, cold, radiation, desiccation, and disinfectants.
Origin of name
The genus Bacillus was named in 1835 by Christian Gottfried Ehrenberg, to contain rod-shaped (bacillus) bacteria. He had seven years earlier named the genus Bacterium. Bacillus was later amended by Ferdinand Cohn to further describe them as spore-forming, Gram-positive, aerobic or facultatively anaerobic bacteria. Like other genera associated with the early history of microbiology, such as Pseudomonas and Vibrio, the 266 species of Bacillus are ubiquitous. The genus has a very large ribosomal 16S diversity.
Isolation and identification
Established methods for isolating Bacillus species for culture primarily involve suspending sampled soil in distilled water, heat-shocking the suspension to kill vegetative cells while leaving mostly viable spores, and culturing on agar plates, with further tests to confirm the identity of the cultured colonies. Alternatively, colonies exhibiting characteristics typical of Bacillus can be picked for testing from a culture of an environmental sample that has been significantly diluted after heat shock or hot-air drying.
Cultured colonies are usually large, spreading, and irregularly shaped. Under the microscope, the Bacillus cells appear as rods, and a substantial portion of the cells usually contain oval endospores at one end, making them bulge.
Characteristics of Bacillus spp.
S.I. Paul et al. (2021) isolated and identified multiple strains of Bacillus subtilis (strains WS1A, YBS29, KSP163A, OA122, ISP161A, OI6, WS11, KSP151E, and S8) from marine sponges of the Saint Martin's Island area of the Bay of Bengal, Bangladesh. Based on their study, the colony, morphological, physiological, and biochemical characteristics of Bacillus spp. are shown in the table below.
Note: + = Positive, – = Negative, O = Oxidative, F = Fermentative
Phylogeny
It has long been known that the (pre-2020) definition of Bacillus is overly vague.
Xu and Côté (2003) used 16S and ITS rRNA regions to divide the genus Bacillus into 10 groups, including the nested genera Paenibacillus, Brevibacillus, Geobacillus, Marinibacillus and Virgibacillus.
Ash and Carol (2008) also used 16S rRNA and found extensive "phylogenetic heterogeneity".
'The All-Species Living Tree' Project, which has been in operation since 2008, also maintains a 16S (and 23S, if available) tree of all validated species. In this tree, the genus Bacillus contains a very large number of nested taxa in both the 16S and 23S analyses. It is paraphyletic to the Lactobacillales (Lactobacillus, Streptococcus, Staphylococcus, Listeria, etc.), due to Bacillus coahuilensis and others.
Alcaraz et al. (2010) present a gene concatenation study, which found results similar to the All-Species Living Tree, but based on a much more limited number of species and groups. (This scheme used Listeria as an outgroup, so in light of the ARB tree, it may be "inside-out".)
Gupta et al. (2020) and Patel et al. (2020) use phylogenomics and comparative genomics to resolve the structure of Bacillus sensu lato. They propose (and validly publish) a number of new genus names, thereby restricting Bacillus to species closely related to Bacillus subtilis and Bacillus cereus. (This does not make the genus monophyletic, however: a number of nested genera persist between the two groups.) The newly created genera are: Peribacillus, Cytobacillus, Mesobacillus, Neobacillus, Metabacillus, Alkalihalobacillus, Alteribacter, Ectobacillus, Evansella, Ferdinandcohnia, Gottfriedia, Heyndrickxia, Lederbergia, Litchfieldia, Margalitia, Niallia, Priestia, Robertmurraya, Rossellomorea, Schinkia, Siminovitchia, Sutcliffiella and Weizmannia.
Nikolaidis et al. (2022) studied 1104 Bacillus proteomes using a gene concatenation based on 114 core proteins and delineated the relationships among the various species defined as Bacillus in the NCBI taxonomy. The various strains were clustered into species based on average nucleotide identity (ANI) values, with a species cutoff of 95%.
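The species-delineation step described here, in which strains are grouped into the same species whenever their pairwise ANI reaches the 95% cutoff, amounts to single-linkage clustering over an ANI matrix. The following sketch illustrates that idea only; the strain names and ANI values are invented for illustration and are not from the study, which computed ANI from whole-genome data.

```python
# Minimal sketch: single-linkage clustering of strains into putative species
# using a 95% ANI cutoff. Strain names and ANI values below are invented;
# real pipelines derive ANI from genome assemblies (e.g., with fastANI).

ani = {  # symmetric pairwise average nucleotide identity values (%)
    ("strainA", "strainB"): 98.7,
    ("strainA", "strainC"): 91.2,
    ("strainB", "strainC"): 90.8,
    ("strainC", "strainD"): 96.4,
}
strains = {"strainA", "strainB", "strainC", "strainD"}

parent = {s: s for s in strains}  # union-find forest, one tree per cluster

def find(s):
    while parent[s] != s:
        parent[s] = parent[parent[s]]  # path compression
        s = parent[s]
    return s

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), value in ani.items():
    if value >= 95.0:  # species cutoff used for delineation
        union(a, b)

clusters = {}
for s in strains:
    clusters.setdefault(find(s), set()).add(s)

for members in clusters.values():
    print(sorted(members))  # e.g. ['strainA', 'strainB'] and ['strainC', 'strainD']
```

Pairs missing from the table are simply treated as falling below the cutoff, which is consistent with single-linkage grouping on a sparse similarity matrix.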
One clade, formed by Bacillus anthracis, Bacillus cereus, Bacillus mycoides, Bacillus pseudomycoides, Bacillus thuringiensis, and Bacillus weihenstephanensis under the 2011 classification standards, should be a single species (within 97% 16S identity), but for medical reasons, they are considered separate species (an issue also present for four species of Shigella and Escherichia coli).
Species
B. acidicola
B. acidiproducens
B. acidocaldarius
B. acidoterrestris
B. aeolius
B. aerius
B. aerophilus
B. agaradhaerens
B. agri
B. aidingensis
B. akibai
B. albus
B. alcalophilus
B. algicola
B. alginolyticus
B. alkalidiazotrophicus
B. alkalinitrilicus
B. alkalisediminis
B. alkalitelluris
B. altitudinis
B. alveayuensis
B. alvei
B. amyloliquefaciens
B. a. subsp. amyloliquefaciens
B. a. subsp. plantarum
B. aminovorans
B. amylolyticus
B. andreesenii
B. aneurinilyticus
B. anthracis
B. aquimaris
B. arenosi
B. arseniciselenatis
B. arsenicus
B. aurantiacus
B. arvi
B. aryabhattai
B. asahii
B. atrophaeus
B. axarquiensis
B. azotofixans
B. azotoformans
B. badius
B. barbaricus
B. bataviensis
B. beijingensis
B. benzoevorans
B. beringensis
B. berkeleyi
B. beveridgei
B. bogoriensis
B. boroniphilus
B. borstelensis
B. brevis
B. butanolivorans
B. canaveralius
B. carboniphilus
B. cecembensis
B. cellulosilyticus
B. centrosporus
B. cereus
B. chagannorensis
B. chitinolyticus
B. chondroitinus
B. choshinensis
B. chungangensis
B. cibi
B. circulans
B. clarkii
B. clausii
B. coagulans
B. coahuilensis
B. cohnii
B. composti
B. curdlanolyticus
B. cycloheptanicus
B. cytotoxicus
B. daliensis
B. decisifrondis
B. decolorationis
B. deserti
B. dipsosauri
B. drentensis
B. edaphicus
B. ehimensis
B. eiseniae
B. enclensis
B. endophyticus
B. endoradicis
B. farraginis
B. fastidiosus
B. fengqiuensis
B. filobacterium rodentium
B. firmus
B. flexus
B. foraminis
B. fordii
B. formosus
B. fortis
B. fumarioli
B. funiculus
B. fusiformis
B. gaemokensis
B. galactophilus
B. galactosidilyticus
B. galliciensis
B. gelatini
B. gibsonii
B. ginsengi
B. ginsengihumi
B. ginsengisoli
B. glucanolyticus
B. gordonae
B. gottheilii
B. graminis
B. halmapalus
B. haloalkaliphilus
B. halochares
B. halodenitrificans
B. halodurans
B. halophilus
B. halosaccharovorans
B. haynesii
B. hemicellulosilyticus
B. hemicentroti
B. herbersteinensis
B. horikoshii
B. horneckiae
B. horti
B. huizhouensis
B. humi
B. hwajinpoensis
B. idriensis
B. indicus
B. infantis
B. infernus
B. insolitus
B. invictae
B. iranensis
B. isabeliae
B. isronensis
B. jeotgali
B. kaustophilus
B. kobensis
B. kochii
B. kokeshiiformis
B. koreensis
B. korlensis
B. kribbensis
B. krulwichiae
B. laevolacticus
B. larvae
B. laterosporus
B. lautus
B. lehensis
B. lentimorbus
B. lentus
B. licheniformis
B. ligniniphilus
B. litoralis
B. locisalis
B. luciferensis
B. luteolus
B. luteus
B. macauensis
B. macerans
B. macquariensis
B. macyae
B. malacitensis
B. mannanilyticus
B. marisflavi
B. marismortui
B. marmarensis
B. massiliensis
B. megaterium
"B. mesentericus"
B. mesonae
B. methanolicus
B. methylotrophicus
B. migulanus
B. mojavensis
B. mucilaginosus
B. muralis
B. murimartini
B. mycoides
B. naganoensis
B. nanhaiensis
B. nanhaiisediminis
B. nealsonii
B. neidei
B. neizhouensis
B. niabensis
B. niacini
B. novalis
B. oceanisediminis
B. odysseyi
B. okhensis
B. okuhidensis
B. oleronius
B. oryzaecorticis
B. oshimensis
B. pabuli
B. pakistanensis
B. pallidus
B. panacisoli
B. panaciterrae
B. pantothenticus
B. parabrevis
B. paraflexus
B. pasteurii
B. patagoniensis
B. peoriae
B. persepolensis
B. persicus
B. pervagus
B. plakortidis
B. pocheonensis
B. polygoni
B. polymyxa
B. popilliae
B. pseudalcalophilus
B. pseudofirmus
B. pseudomycoides
B. psychrodurans
B. psychrophilus
B. psychrosaccharolyticus
B. psychrotolerans
B. pulvifaciens
B. pumilus
B. purgationiresistens
B. pycnus
B. qingdaonensis
B. qingshengii
B. reuszeri
B. rhizosphaerae
B. rigui
B. ruris
B. safensis
B. salarius
B. salexigens
B. saliphilus
B. schlegelii
B. sediminis
B. selenatarsenatis
B. selenitireducens
B. seohaeanensis
B. shacheensis
B. shackletonii
B. siamensis
B. silvestris
B. simplex
B. siralis
B. smithii
B. soli
B. solimangrovi
B. solisalsi
B. songklensis
B. sonorensis
B. sphaericus
B. sporothermodurans
B. stearothermophilus
B. stratosphericus
B. subterraneus
B. subtilis
B. s. subsp. inaquosorum
B. s. subsp. spizizenii
B. s. subsp. subtilis
B. taeanensis
B. tequilensis
B. thermantarcticus
B. thermoaerophilus
B. thermoamylovorans
B. thermocatenulatus
B. thermocloacae
B. thermocopriae
B. thermodenitrificans
B. thermoglucosidasius
B. thermolactis
B. thermoleovorans
B. thermophilus
B. thermoproteolyticus
B. thermoruber
B. thermosphaericus
B. thiaminolyticus
B. thioparans
B. thuringiensis
B. tianshenii
B. toyonensis
B. trypoxylicola
B. tusciae
B. validus
B. vallismortis
B. vedderi
B. velezensis
B. vietnamensis
B. vireti
B. vulcani
B. wakoensis
B. xiamenensis
B. xiaoxiensis
B. zanthoxyli
B. zhanjiangensis
Ecological and clinical significance
Bacillus species are ubiquitous in nature, e.g. in soil. They can occur in extreme environments such as high pH (B. alcalophilus), high temperature (B. thermophilus), and high salt concentrations (B. halodurans). They are also very commonly found as endophytes in plants, where they can play a critical role in the plant's immune system, nutrient absorption and nitrogen-fixing capabilities. B. thuringiensis produces a toxin that can kill insects and thus has been used as an insecticide. B. siamensis produces antimicrobial compounds that inhibit plant pathogens, such as the fungi Rhizoctonia solani and Botrytis cinerea, and promotes plant growth through volatile emissions. Some species of Bacillus are naturally competent for DNA uptake by transformation.
Two Bacillus species are medically significant: B. anthracis, which causes anthrax; and B. cereus, which causes food poisoning, with symptoms similar to that caused by Staphylococcus.
B. cereus produces toxins which cause two different sets of symptoms:
an emetic toxin, which causes nausea and vomiting
a diarrheal toxin, which causes diarrhea
B. thuringiensis is an important insect pathogen, and is sometimes used to control insect pests.
B. subtilis is an important model organism. It is also a notable food spoiler, causing ropiness in bread and related food.
B. subtilis can also produce and secrete antibiotics.
Some environmental and commercial strains of B. coagulans may play a role in food spoilage of highly acidic, tomato-based products.
Industrial significance
Many Bacillus species are able to secrete large quantities of enzymes. Bacillus amyloliquefaciens is the source of a natural antibiotic protein barnase (a ribonuclease), alpha amylase used in starch hydrolysis, the protease subtilisin used with detergents, and the BamH1 restriction enzyme used in DNA research.
A portion of the Bacillus thuringiensis genome has been incorporated into corn and cotton crops; the resulting plants are resistant to some insect pests.
Bacillus subtilis (natto) is the key microbial participant in the production of natto, a traditional soya-based fermented food, and some Bacillus species, such as Bacillus cereus, are on the Food and Drug Administration's GRAS (generally regarded as safe) list.
The capacity of selected Bacillus strains to produce and secrete large quantities (20–25 g/L) of extracellular enzymes has placed them among the most important industrial enzyme producers. The ability of different species to ferment in the acid, neutral, and alkaline pH ranges, combined with the presence of thermophiles in the genus, has led to the development of a variety of new commercial enzyme products with the desired temperature, pH activity, and stability properties to address a variety of specific applications. Classical mutation and (or) selection techniques, together with advanced cloning and protein engineering strategies, have been exploited to develop these products.
Efforts to produce and secrete high yields of foreign recombinant proteins in Bacillus hosts initially appeared to be hampered by the degradation of the products by the host proteases. Recent studies have revealed that the slow folding of heterologous proteins at the membrane-cell wall interface of Gram-positive bacteria renders them vulnerable to attack by wall-associated proteases. In addition, the presence of thiol-disulphide oxidoreductases in B. subtilis may be beneficial in the secretion of disulphide-bond-containing proteins. Such developments in our understanding of the complex protein translocation machinery of Gram-positive bacteria should allow the resolution of current secretion challenges and make Bacillus species preeminent hosts for heterologous protein production.
Bacillus strains have also been developed and engineered as industrial producers of nucleotides, the vitamin riboflavin, the flavor agent ribose, and the supplement poly-gamma-glutamic acid. With the recent characterization of the genome of B. subtilis 168 and of some related strains, Bacillus species are poised to become the preferred hosts for the production of many new and improved products as we move through the genomic and proteomic era.
Use as model organism
Bacillus subtilis is one of the best understood prokaryotes in terms of molecular and cellular biology. Its superb genetic amenability and relatively large size have provided the powerful tools required to investigate a bacterium from all possible aspects. Recent improvements in fluorescence microscopy techniques have provided novel insight into the dynamic structure of a single-celled organism. Research on B. subtilis has been at the forefront of bacterial molecular biology and cytology, and the organism is a model for differentiation, gene/protein regulation, and cell cycle events in bacteria.
| Biology and health sciences | Gram-positive bacteria | Plants |
4781 | https://en.wikipedia.org/wiki/Benzodiazepine | Benzodiazepine | Benzodiazepines (BZD, BDZ, BZs), colloquially known as "benzos", are a class of depressant drugs whose core chemical structure is the fusion of a benzene ring and a diazepine ring. They are prescribed to treat conditions such as anxiety disorders, insomnia, and seizures. The first benzodiazepine, chlordiazepoxide (Librium), was discovered accidentally by Leo Sternbach in 1955, and was made available in 1960 by Hoffmann–La Roche, which followed with the development of diazepam (Valium) three years later, in 1963. By 1977, benzodiazepines were the most prescribed medications globally; the introduction of selective serotonin reuptake inhibitors (SSRIs), among other factors, decreased rates of prescription, but they remain frequently used worldwide.
Benzodiazepines are depressants that enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at the GABAA receptor, resulting in sedative, hypnotic (sleep-inducing), anxiolytic (anti-anxiety), anticonvulsant, and muscle relaxant properties. High doses of many shorter-acting benzodiazepines may also cause anterograde amnesia and dissociation. These properties make benzodiazepines useful in treating anxiety, panic disorder, insomnia, agitation, seizures, muscle spasms, alcohol withdrawal and as a premedication for medical or dental procedures. Benzodiazepines are categorized as short, intermediate, or long-acting. Short- and intermediate-acting benzodiazepines are preferred for the treatment of insomnia; longer-acting benzodiazepines are recommended for the treatment of anxiety.
Benzodiazepines are generally viewed as safe and effective for short-term use of two to four weeks, although cognitive impairment and paradoxical effects such as aggression or behavioral disinhibition can occur. According to the Government of Victoria's (Australia) Department of Health, long-term use can cause "impaired thinking or memory loss, anxiety and depression, irritability, paranoia, aggression, etc." A minority of people have paradoxical reactions after taking benzodiazepines such as worsened agitation or panic.
Benzodiazepines are associated with an increased risk of suicide due to aggression, impulsivity, and negative withdrawal effects. Long-term use is controversial because of concerns about decreasing effectiveness, physical dependence, benzodiazepine withdrawal syndrome, and an increased risk of dementia and cancer. The elderly are at an increased risk of both short- and long-term adverse effects, and as a result, all benzodiazepines are listed in the Beers List of inappropriate medications for older adults. There is controversy concerning the safety of benzodiazepines in pregnancy. While they are not major teratogens, uncertainty remains as to whether they cause cleft palate in a small number of babies and whether neurobehavioural effects occur as a result of prenatal exposure; they are known to cause withdrawal symptoms in the newborn.
In an overdose, benzodiazepines can cause dangerous deep unconsciousness, but are less toxic than their predecessors, the barbiturates, and death rarely results when a benzodiazepine is the only drug taken. Combined with other central nervous system (CNS) depressants such as alcohol and opioids, the potential for toxicity and fatal overdose increases significantly. Benzodiazepines are commonly used recreationally and also often taken in combination with other addictive substances, and are controlled in most countries.
Medical uses
Benzodiazepines possess psycholeptic, sedative, hypnotic, anxiolytic, anticonvulsant, muscle relaxant, and amnesic actions, which are useful in a variety of indications such as alcohol dependence, seizures, anxiety disorders, panic, agitation, and insomnia. Most are administered orally; however, they can also be given intravenously, intramuscularly, or rectally. In general, benzodiazepines are well tolerated and are safe and effective drugs in the short term for a wide range of conditions. Tolerance can develop to their effects and there is also a risk of dependence, and upon discontinuation a withdrawal syndrome may occur. These factors, combined with other possible secondary effects after prolonged use such as psychomotor, cognitive, or memory impairments, limit their long-term applicability. The effects of long-term use or misuse include the tendency to cause or worsen cognitive deficits, depression, and anxiety. The College of Physicians and Surgeons of British Columbia recommends discontinuing the usage of benzodiazepines in those on opioids and those who have used them long term. Benzodiazepines can have serious adverse health outcomes, and these findings support clinical and regulatory efforts to reduce usage, especially in combination with non-benzodiazepine receptor agonists.
Panic disorder
Because of their effectiveness, tolerability, and rapid onset of anxiolytic action, benzodiazepines are frequently used for the treatment of anxiety associated with panic disorder. However, there is disagreement among expert bodies regarding the long-term use of benzodiazepines for panic disorder. The views range from those holding benzodiazepines are not effective long-term and should be reserved for treatment-resistant cases to those holding they are as effective in the long term as selective serotonin reuptake inhibitors (SSRIs).
American Psychiatric Association (APA) guidelines, published in January 2009, note that, in general, benzodiazepines are well tolerated, and their use for the initial treatment for panic disorder is strongly supported by numerous controlled trials. APA states that there is insufficient evidence to recommend any of the established panic disorder treatments over another. The choice of treatment between benzodiazepines, SSRIs, serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants, and psychotherapy should be based on the patient's history, preference, and other individual characteristics. Selective serotonin reuptake inhibitors are likely to be the best choice of pharmacotherapy for many patients with panic disorder, but benzodiazepines are also often used, and some studies suggest that these medications are still used with greater frequency than the SSRIs. One advantage of benzodiazepines is that they alleviate the anxiety symptoms much faster than antidepressants, and therefore may be preferred in patients for whom rapid symptom control is critical. However, this advantage is offset by the possibility of developing benzodiazepine dependence. APA does not recommend benzodiazepines for persons with depressive symptoms or a recent history of substance use disorder. APA guidelines state that, in general, pharmacotherapy of panic disorder should be continued for at least a year, and that clinical experience supports continuing benzodiazepine treatment to prevent recurrence. Although major concerns about benzodiazepine tolerance and withdrawal have been raised, there is no evidence for significant dose escalation in patients using benzodiazepines long-term. For many such patients, stable doses of benzodiazepines retain their efficacy over several years.
The UK-based National Institute for Health and Clinical Excellence (NICE) carried out a systematic review using different methodology and came to a different conclusion. NICE questioned the accuracy of studies that were not placebo-controlled and, based on the findings of placebo-controlled studies, does not recommend use of benzodiazepines beyond two to four weeks, as tolerance and physical dependence develop rapidly, with withdrawal symptoms including rebound anxiety occurring after six weeks or more of use. Nevertheless, benzodiazepines are still prescribed for long-term treatment of anxiety disorders, although specific antidepressants and psychological therapies are recommended as the first-line treatment options, with the anticonvulsant drug pregabalin indicated as a second- or third-line treatment and suitable for long-term use. NICE stated that long-term use of benzodiazepines for panic disorder with or without agoraphobia is an unlicensed indication, does not have long-term efficacy, and is, therefore, not recommended by clinical guidelines. Psychological therapies such as cognitive behavioural therapy are recommended as a first-line therapy for panic disorder; benzodiazepine use has been found to interfere with therapeutic gains from these therapies.
Benzodiazepines are usually administered orally; however, very occasionally lorazepam or diazepam may be given intravenously for the treatment of panic attacks.
Generalized anxiety disorder
Benzodiazepines have robust efficacy in the short-term management of generalized anxiety disorder (GAD), but were not shown effective in producing long-term improvement overall. According to National Institute for Health and Clinical Excellence (NICE), benzodiazepines can be used in the immediate management of GAD, if necessary. However, they should not usually be given for longer than 2–4 weeks. The only medications NICE recommends for the longer term management of GAD are antidepressants.
Likewise, the Canadian Psychiatric Association (CPA) recommends the benzodiazepines alprazolam, bromazepam, lorazepam, and diazepam only as a second-line choice, if treatment with two different antidepressants has been unsuccessful. Although they are second-line agents, benzodiazepines can be used for a limited time to relieve severe anxiety and agitation. CPA guidelines note that after 4–6 weeks the effect of benzodiazepines may decrease to the level of placebo, and that benzodiazepines are less effective than antidepressants in alleviating ruminative worry, the core symptom of GAD. However, in some cases, prolonged treatment with benzodiazepines as an add-on to an antidepressant may be justified.
A 2015 review found a larger effect with medications than talk therapy. Medications with benefit include serotonin-noradrenaline reuptake inhibitors, benzodiazepines, and selective serotonin reuptake inhibitors.
Anxiety
Benzodiazepines are sometimes used in the treatment of acute anxiety, since they result in rapid and marked relief of symptoms in most individuals; however, they are not recommended beyond 2–4 weeks of use due to risks of tolerance and dependence and a lack of long-term effectiveness. As for insomnia, they may also be used on an irregular/"as-needed" basis, such as in cases where said anxiety is at its worst. Compared to other pharmacological treatments, benzodiazepines are twice as likely to lead to a relapse of the underlying condition upon discontinuation. Psychological therapies and other pharmacological therapies are recommended for the long-term treatment of generalized anxiety disorder. Antidepressants have higher remission rates and are, in general, safe and effective in the short and long term.
Insomnia
Benzodiazepines can be useful for short-term treatment of insomnia. Their use beyond 2 to 4 weeks is not recommended due to the risk of dependence. The Committee on Safety of Medicines report recommended that where long-term use of benzodiazepines for insomnia is indicated then treatment should be intermittent wherever possible. It is preferred that benzodiazepines be taken intermittently and at the lowest effective dose. They improve sleep-related problems by shortening the time spent in bed before falling asleep, prolonging the sleep time, and, in general, reducing wakefulness. However, they worsen sleep quality by increasing light sleep and decreasing deep sleep. Other drawbacks of hypnotics, including benzodiazepines, are possible tolerance to their effects, rebound insomnia, and reduced slow-wave sleep and a withdrawal period typified by rebound insomnia and a prolonged period of anxiety and agitation.
The list of benzodiazepines approved for the treatment of insomnia is fairly similar among most countries, but which benzodiazepines are officially designated as first-line hypnotics prescribed for the treatment of insomnia varies between countries. Longer-acting benzodiazepines such as nitrazepam and diazepam have residual effects that may persist into the next day and are, in general, not recommended.
Since the release of nonbenzodiazepines (also known as Z-drugs) in 1992 in response to safety concerns, individuals with insomnia and other sleep disorders have increasingly been prescribed nonbenzodiazepines (2.3% of Americans in 1993 to 13.7% in 2010) and less often prescribed benzodiazepines (23.5% in 1993 to 10.8% in 2010). It is not clear whether the newer nonbenzodiazepine hypnotics (Z-drugs) are better than the short-acting benzodiazepines; the efficacy of these two groups of medications is similar. According to the US Agency for Healthcare Research and Quality, indirect comparison indicates that side-effects from benzodiazepines may be about twice as frequent as from nonbenzodiazepines. Some experts suggest using nonbenzodiazepines preferentially as a first-line long-term treatment of insomnia. However, the UK National Institute for Health and Clinical Excellence did not find any convincing evidence in favor of Z-drugs. A NICE review pointed out that short-acting Z-drugs were inappropriately compared in clinical trials with long-acting benzodiazepines, and that there have been no trials comparing short-acting Z-drugs with appropriate doses of short-acting benzodiazepines. Based on this, NICE recommended choosing the hypnotic based on cost and the patient's preference.
Older adults should not use benzodiazepines to treat insomnia unless other treatments have failed. When benzodiazepines are used, patients, their caretakers, and their physician should discuss the increased risk of harms, including evidence that shows twice the incidence of traffic collisions among driving patients, and falls and hip fracture for older patients.
Seizures
Prolonged convulsive epileptic seizures are a medical emergency that can usually be dealt with effectively by administering fast-acting benzodiazepines, which are potent anticonvulsants. In a hospital environment, intravenous clonazepam, lorazepam, and diazepam are first-line choices. In the community, intravenous administration is not practical and so rectal diazepam or buccal midazolam are used, with a preference for midazolam as its administration is easier and more socially acceptable.
When benzodiazepines were first introduced, they were enthusiastically adopted for treating all forms of epilepsy. However, drowsiness and tolerance become problems with continued use and none are now considered first-line choices for long-term epilepsy therapy. Clobazam is widely used by specialist epilepsy clinics worldwide and clonazepam is popular in the Netherlands, Belgium and France. Clobazam was approved for use in the United States in 2011. In the UK, both clobazam and clonazepam are second-line choices for treating many forms of epilepsy. Clobazam also has a useful role for very short-term seizure prophylaxis and in catamenial epilepsy. Discontinuation after long-term use in epilepsy requires additional caution because of the risks of rebound seizures. Therefore, the dose is slowly tapered over a period of up to six months or longer.
Alcohol withdrawal
Chlordiazepoxide is the most commonly used benzodiazepine for alcohol detoxification, but diazepam may be used as an alternative. Both are used in the detoxification of individuals who are motivated to stop drinking, and are prescribed for a short period of time to reduce the risks of developing tolerance and dependence to the benzodiazepine medication itself. The benzodiazepines with a longer half-life make detoxification more tolerable, and dangerous (and potentially lethal) alcohol withdrawal effects are less likely to occur. On the other hand, short-acting benzodiazepines may lead to breakthrough seizures, and are, therefore, not recommended for detoxification in an outpatient setting. Oxazepam and lorazepam are often used in patients at risk of drug accumulation, in particular, the elderly and those with cirrhosis, because they are metabolized differently from other benzodiazepines, through conjugation.
Benzodiazepines are the preferred choice in the management of alcohol withdrawal syndrome, in particular, for the prevention and treatment of the dangerous complication of seizures and in subduing severe delirium. Lorazepam is the only benzodiazepine with predictable intramuscular absorption and it is the most effective in preventing and controlling acute seizures.
Other indications
Benzodiazepines are often prescribed for a wide range of conditions:
They can sedate patients receiving mechanical ventilation or those in extreme distress. Caution is exercised in this situation due to the risk of respiratory depression, and it is recommended that benzodiazepine overdose treatment facilities should be available. They have also been found to increase the likelihood of later PTSD after people have been removed from ventilators.
Benzodiazepines are indicated in the management of breathlessness (shortness of breath) in advanced diseases, in particular where other treatments have failed to adequately control symptoms.
Benzodiazepines are effective as medication given a couple of hours before surgery to relieve anxiety. They also produce amnesia, which can be useful, as patients may not remember unpleasantness from the procedure. They are also used in patients with dental phobia as well as some ophthalmic procedures like refractive surgery; although such use is controversial and only recommended for those who are very anxious. Midazolam is the most commonly prescribed for this use because of its strong sedative actions and fast recovery time, as well as its water solubility, which reduces pain upon injection. Diazepam and lorazepam are sometimes used. Lorazepam has particularly marked amnesic properties that may make it more effective when amnesia is the desired effect.
Benzodiazepines are well known for their strong muscle-relaxing properties and can be useful in the treatment of muscle spasms, although tolerance often develops to their muscle relaxant effects. Baclofen or tizanidine are sometimes used as an alternative to benzodiazepines. Tizanidine has been found to have superior tolerability compared to diazepam and baclofen.
Benzodiazepines are also used to treat the acute panic caused by hallucinogen intoxication. Benzodiazepines are also used to calm the acutely agitated individual and can, if required, be given via an intramuscular injection. They can sometimes be effective in the short-term treatment of psychiatric emergencies such as acute psychosis as in schizophrenia or mania, bringing about rapid tranquillization and sedation until the effects of lithium or neuroleptics (antipsychotics) take effect. Lorazepam is most commonly used but clonazepam is sometimes prescribed for acute psychosis or mania; their long-term use is not recommended due to risks of dependence. Further research investigating the use of benzodiazepines alone and in combination with antipsychotic medications for treating acute psychosis is warranted.
Clonazepam, a benzodiazepine, is used to treat many forms of parasomnia. Rapid eye movement sleep behavior disorder responds well to low doses of clonazepam. Restless legs syndrome can be treated with clonazepam as a third-line option, though its use for this indication is still investigational.
Benzodiazepines are sometimes used for obsessive–compulsive disorder (OCD), although they are generally believed to be ineffective for this indication; effectiveness was, however, found in one small study. Benzodiazepines can be considered as a treatment option in treatment-resistant cases.
Antipsychotics are generally a first-line treatment for delirium; however, when delirium is caused by alcohol or sedative hypnotic withdrawal, benzodiazepines are a first-line treatment.
There is some evidence that low doses of benzodiazepines reduce adverse effects of electroconvulsive therapy.
Contraindications
Benzodiazepines require special precaution if used in the elderly, during pregnancy, in children, alcohol or drug-dependent individuals and individuals with comorbid psychiatric disorders.
Because of their muscle relaxant action, benzodiazepines may cause respiratory depression in susceptible individuals. For that reason, they are contraindicated in people with myasthenia gravis, sleep apnea, bronchitis, and COPD. Caution is required when benzodiazepines are used in people with personality disorders or intellectual disability because of frequent paradoxical reactions. In major depression, they may precipitate suicidal tendencies and are sometimes used for suicidal overdoses. Individuals with a history of excessive alcohol use or non-medical use of opioids or barbiturates should avoid benzodiazepines, as there is a risk of life-threatening interactions with these drugs.
Pregnancy
In the United States, the Food and Drug Administration has categorized benzodiazepines into either category D or X, meaning that potential for harm to the unborn has been demonstrated.
Exposure to benzodiazepines during pregnancy has been associated with a slightly increased (from 0.06 to 0.07%) risk of cleft palate in newborns, a controversial conclusion as some studies find no association between benzodiazepines and cleft palate. Their use by expectant mothers shortly before the delivery may result in a floppy infant syndrome. Newborns with this condition tend to have hypotonia, hypothermia, lethargy, and breathing and feeding difficulties. Cases of neonatal withdrawal syndrome have been described in infants chronically exposed to benzodiazepines in utero. This syndrome may be hard to recognize, as it starts several days after delivery, for example, as late as 21 days for chlordiazepoxide. The symptoms include tremors, hypertonia, hyperreflexia, hyperactivity, and vomiting and may last for up to three to six months. Tapering down the dose during pregnancy may lessen its severity. If used in pregnancy, those benzodiazepines with a better and longer safety record, such as diazepam or chlordiazepoxide, are recommended over potentially more harmful benzodiazepines, such as temazepam or triazolam. Using the lowest effective dose for the shortest period of time minimizes the risks to the unborn child.
Elderly
The benefits of benzodiazepines are least and the risks are greatest in the elderly. They are listed as a potentially inappropriate medication for older adults by the American Geriatrics Society. The elderly are at an increased risk of dependence and are more sensitive to adverse effects such as memory problems, daytime sedation, impaired motor coordination, and an increased risk of motor vehicle accidents, falls, and hip fractures. The long-term effects of benzodiazepines and benzodiazepine dependence in the elderly can resemble dementia, depression, or anxiety syndromes, and progressively worsen over time. Adverse effects on cognition can be mistaken for the effects of old age. The benefits of withdrawal include improved cognition, alertness, mobility, reduced risk of incontinence, and a reduced risk of falls and fractures. The success of gradually tapering benzodiazepines is as great in the elderly as in younger people. Benzodiazepines should be prescribed to the elderly only with caution and only for a short period at low doses. Short- to intermediate-acting benzodiazepines, such as oxazepam and temazepam, are preferred in the elderly. The high-potency benzodiazepines alprazolam and triazolam and long-acting benzodiazepines are not recommended in the elderly due to increased adverse effects. Nonbenzodiazepines such as zaleplon and zolpidem and low doses of sedating antidepressants are sometimes used as alternatives to benzodiazepines.
Long-term use of benzodiazepines is associated with increased risk of cognitive impairment and dementia, and reduction in prescribing levels is likely to reduce dementia risk. The association of a history of benzodiazepine use and cognitive decline is unclear, with some studies reporting a lower risk of cognitive decline in former users, some finding no association and some indicating an increased risk of cognitive decline.
Benzodiazepines are sometimes prescribed to treat behavioral symptoms of dementia. However, like antidepressants, they have little evidence of effectiveness, although antipsychotics have shown some benefit. Cognitive impairing effects of benzodiazepines that occur frequently in the elderly can also worsen dementia.
Adverse effects
The most common side-effects of benzodiazepines are related to their sedating and muscle-relaxing action. They include drowsiness, dizziness, and decreased alertness and concentration. Lack of coordination may result in falls and injuries particularly in the elderly. Another result is impairment of driving skills and increased likelihood of road traffic accidents. Decreased libido and erection problems are a common side effect. Depression and disinhibition may emerge. Hypotension and suppressed breathing (hypoventilation) may be encountered with intravenous use. Less common side effects include nausea and changes in appetite, blurred vision, confusion, euphoria, depersonalization and nightmares. Cases of liver toxicity have been described but are very rare.
The long-term effects of benzodiazepine use can include cognitive impairment as well as affective and behavioural problems. Feelings of turmoil, difficulty in thinking constructively, loss of sex-drive, agoraphobia and social phobia, increasing anxiety and depression, loss of interest in leisure pursuits and interests, and an inability to experience or express feelings can also occur. Not everyone, however, experiences problems with long-term use. Additionally, an altered perception of self, environment and relationships may occur. A study published in 2020 found that long-term use of prescription benzodiazepines is associated with an increase in all-cause mortality among those age 65 or younger, but not those older than 65. The study also found that all-cause mortality was increased further in cases in which benzodiazepines are co-prescribed with opioids, relative to cases in which benzodiazepines are prescribed without opioids, but again only in those age 65 or younger.
Compared to other sedative-hypnotics, visits to the hospital involving benzodiazepines had 66% greater odds of a serious adverse health outcome, defined as hospitalization, patient transfer, or death; visits involving a combination of benzodiazepines and non-benzodiazepine receptor agonists had almost four times the odds of a serious health outcome.
In September 2020, the US Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Cognitive effects
The short-term use of benzodiazepines adversely affects multiple areas of cognition, the most notable one being that it interferes with the formation and consolidation of memories of new material and may induce complete anterograde amnesia. However, researchers hold contrary opinions regarding the effects of long-term administration. One view is that many of the short-term effects continue into the long-term and may even worsen, and are not resolved after stopping benzodiazepine usage. Another view maintains that cognitive deficits in chronic benzodiazepine users occur only for a short period after the dose, or that the anxiety disorder is the cause of these deficits.
While the definitive studies are lacking, the former view received support from a 2004 meta-analysis of 13 small studies. This meta-analysis found that long-term use of benzodiazepines was associated with moderate to large adverse effects on all areas of cognition, with visuospatial memory being the most commonly detected impairment. Some of the other impairments reported were decreased IQ, visuomotor coordination, information processing, verbal learning and concentration. The authors of the meta-analysis and a later reviewer noted that the applicability of this meta-analysis is limited because the subjects were taken mostly from withdrawal clinics; the coexisting drug, alcohol use, and psychiatric disorders were not defined; and several of the included studies conducted the cognitive measurements during the withdrawal period.
Paradoxical effects
Paradoxical reactions, such as increased seizures in epileptics, aggression, violence, impulsivity, irritability and suicidal behavior sometimes occur. These reactions have been explained as consequences of disinhibition and the subsequent loss of control over socially unacceptable behavior. Paradoxical reactions are rare in the general population, with an incidence rate below 1% and similar to placebo. However, they occur with greater frequency in recreational abusers, individuals with borderline personality disorder, children, and patients on high-dosage regimes. In these groups, impulse control problems are perhaps the most important risk factor for disinhibition; learning disabilities and neurological disorders are also significant risks. Most reports of disinhibition involve high doses of high-potency benzodiazepines. Paradoxical effects may also appear after chronic use of benzodiazepines.
Long-term worsening of psychiatric symptoms
While benzodiazepines may have short-term benefits for anxiety, sleep and agitation in some patients, long-term (i.e., greater than 2–4 weeks) use can result in a worsening of the very symptoms the medications are meant to treat. Potential explanations include exacerbating cognitive problems that are already common in anxiety disorders, causing or worsening depression and suicidality, disrupting sleep architecture by inhibiting deep stage sleep, withdrawal symptoms or rebound symptoms in between doses mimicking or exacerbating underlying anxiety or sleep disorders, inhibiting the benefits of psychotherapy by inhibiting memory consolidation and reducing fear extinction, and reducing coping with trauma/stress and increasing vulnerability to future stress. The latter two explanations may be why benzodiazepines are ineffective and/or potentially harmful in PTSD and phobias. Anxiety, insomnia and irritability may be temporarily exacerbated during withdrawal, but psychiatric symptoms after discontinuation are usually less than even while taking benzodiazepines. Functioning significantly improves within 1 year of discontinuation.
Physical dependence, withdrawal and post-withdrawal syndromes
Tolerance
The main problem of the chronic use of benzodiazepines is the development of tolerance and dependence. Tolerance manifests itself as diminished pharmacological effect and develops relatively quickly to the sedative, hypnotic, anticonvulsant, and muscle relaxant actions of benzodiazepines. Tolerance to anti-anxiety effects develops more slowly with little evidence of continued effectiveness beyond four to six months of continued use. In general, tolerance to the amnesic effects does not occur. However, controversy exists as to tolerance to the anxiolytic effects with some evidence that benzodiazepines retain efficacy and opposing evidence from a systematic review of the literature that tolerance frequently occurs and some evidence that anxiety may worsen with long-term use. The question of tolerance to the amnesic effects of benzodiazepines is, likewise, unclear. Some evidence suggests that partial tolerance does develop, and that, "memory impairment is limited to a narrow window within 90 minutes after each dose".
A major disadvantage of benzodiazepines is that tolerance to therapeutic effects develops relatively quickly while many adverse effects persist. Tolerance develops to hypnotic and myorelaxant effects within days to weeks, and to anticonvulsant and anxiolytic effects within weeks to months. Therefore, benzodiazepines are unlikely to be effective long-term treatments for sleep and anxiety. While BZD therapeutic effects disappear with tolerance, depression and impulsivity with high suicidal risk commonly persist. Several studies have confirmed that long-term benzodiazepines are not significantly different from placebo for sleep or anxiety. This may explain why patients commonly increase doses over time and many eventually take more than one type of benzodiazepine after the first loses effectiveness. Additionally, because tolerance to benzodiazepine sedating effects develops more quickly than does tolerance to brainstem depressant effects, those taking more benzodiazepines to achieve desired effects may experience sudden respiratory depression, hypotension or death. Most patients with anxiety disorders and PTSD have symptoms that persist for at least several months, making tolerance to therapeutic effects a distinct problem for them and necessitating the need for more effective long-term treatment (e.g., psychotherapy, serotonergic antidepressants).
Withdrawal symptoms and management
Discontinuation of benzodiazepines or abrupt reduction of the dose, even after a relatively short course of treatment (two to four weeks), may result in two groups of symptoms, rebound and withdrawal. Rebound symptoms are the return of the symptoms for which the patient was treated but worse than before. Withdrawal symptoms are the new symptoms that occur when the benzodiazepine is stopped. They are the main sign of physical dependence.
The most frequent symptoms of withdrawal from benzodiazepines are insomnia, gastric problems, tremors, agitation, fearfulness, and muscle spasms. The less frequent effects are irritability, sweating, depersonalization, derealization, hypersensitivity to stimuli, depression, suicidal behavior, psychosis, seizures, and delirium tremens. Severe symptoms usually occur as a result of abrupt or over-rapid withdrawal. Abrupt withdrawal can be dangerous and lead to excitotoxicity, causing damage and even death to nerve cells as a result of excessive levels of the excitatory neurotransmitter glutamate. Increased glutamatergic activity is thought to be part of a compensatory mechanism to chronic GABAergic inhibition from benzodiazepines. Therefore, a gradual reduction regimen is recommended.
Symptoms may also occur during a gradual dosage reduction, but are typically less severe and may persist as part of a protracted withdrawal syndrome for months after cessation of benzodiazepines. Approximately 10% of patients experience a notable protracted withdrawal syndrome, which can persist for many months or in some cases a year or longer. Protracted symptoms tend to resemble those seen during the first couple of months of withdrawal but usually are of a sub-acute level of severity. Such symptoms do gradually lessen over time, eventually disappearing altogether.
Benzodiazepines have a reputation with patients and doctors for causing a severe and traumatic withdrawal; however, this is in large part due to the withdrawal process being poorly managed. Over-rapid withdrawal from benzodiazepines increases the severity of the withdrawal syndrome and increases the failure rate. A slow and gradual withdrawal customised to the individual and, if indicated, psychological support is the most effective way of managing the withdrawal. Opinion as to the time needed to complete withdrawal ranges from four weeks to several years. A goal of less than six months has been suggested, but due to factors such as dosage and type of benzodiazepine, reasons for prescription, lifestyle, personality, environmental stresses, and amount of available support, a year or more may be needed to withdraw.
Withdrawal is best managed by transferring the physically dependent patient to an equivalent dose of diazepam because it has the longest half-life of all of the benzodiazepines, is metabolised into long-acting active metabolites and is available in low-potency tablets, which can be quartered for smaller doses. A further benefit is that it is available in liquid form, which allows for even smaller reductions. Chlordiazepoxide, which also has a long half-life and long-acting active metabolites, can be used as an alternative.
Nonbenzodiazepines are contraindicated during benzodiazepine withdrawal as they are cross-tolerant with benzodiazepines and can induce dependence. Alcohol is also cross-tolerant with benzodiazepines and more toxic, so caution is needed to avoid replacing one dependence with another. During withdrawal, fluoroquinolone-based antibiotics are best avoided if possible; they displace benzodiazepines from their binding site and reduce GABA function and, thus, may aggravate withdrawal symptoms. Antipsychotics are not recommended for benzodiazepine withdrawal (or other CNS depressant withdrawal states), especially clozapine, olanzapine or low-potency phenothiazines such as chlorpromazine, as they lower the seizure threshold and can worsen withdrawal effects; if used, extreme caution is required.
Withdrawal from long term benzodiazepines is beneficial for most individuals. Withdrawal of benzodiazepines from long-term users, in general, leads to improved physical and mental health particularly in the elderly; although some long term users report continued benefit from taking benzodiazepines, this may be the result of suppression of withdrawal effects.
Controversial associations
Beyond the well-established link between benzodiazepines and psychomotor impairment resulting in motor vehicle accidents and falls leading to fracture, research in the 2000s and 2010s has raised possible associations between benzodiazepines (and Z-drugs) and other, as yet unproven, adverse effects including dementia, cancer, infections, pancreatitis and respiratory disease exacerbations.
Dementia
A number of studies have drawn an association between long-term benzodiazepine use and neuro-degenerative disease, particularly Alzheimer's disease. It has been determined that long-term use of benzodiazepines is associated with increased dementia risk, even after controlling for protopathic bias.
Infections
Some observational studies have detected significant associations between benzodiazepines and respiratory infections such as pneumonia, whereas others have not. A large meta-analysis of pre-marketing randomized controlled trials on the pharmacologically related Z-drugs suggests a small increase in infection risk as well. An immunodeficiency effect from the action of benzodiazepines on GABA-A receptors has been postulated from animal studies.
Cancer
A meta-analysis of observational studies has determined an association between benzodiazepine use and cancer, though the risk across different agents and different cancers varied significantly. In terms of experimental basic science evidence, an analysis of carcinogenetic and genotoxicity data for various benzodiazepines has suggested a small possibility of carcinogenesis for a small number of benzodiazepines.
Pancreatitis
The evidence suggesting a link between benzodiazepines (and Z-Drugs) and pancreatic inflammation is very sparse and limited to a few observational studies from Taiwan. A criticism of confounding can be applied to these findings as with the other controversial associations above. Further well-designed research from other populations as well as a biologically plausible mechanism is required to confirm this association.
Overdose
Although benzodiazepines are much safer in overdose than their predecessors, the barbiturates, they can still cause problems in overdose. Taken alone, they rarely cause severe complications; statistics in England showed that benzodiazepines were responsible for 3.8% of all deaths by poisoning from a single drug. However, combining these drugs with alcohol, opiates or tricyclic antidepressants markedly raises the toxicity. The elderly are more sensitive to the side effects of benzodiazepines, and poisoning may even occur from their long-term use. The various benzodiazepines differ in their toxicity; temazepam appears most toxic in overdose and when used with other drugs. The symptoms of a benzodiazepine overdose may include drowsiness, slurred speech, nystagmus, hypotension, ataxia, coma, respiratory depression, and cardiorespiratory arrest.
A reversal agent for benzodiazepines exists, flumazenil (Anexate), itself belonging to the chemical class of benzodiazepines. Its use as an antidote is not routinely recommended because of the high risk of resedation and seizures. In a double-blind, placebo-controlled trial of 326 people, 4 people had serious adverse events and 61% became resedated following the use of flumazenil. Numerous contraindications to its use exist. It is contraindicated in people with a history of long-term use of benzodiazepines, those having ingested a substance that lowers the seizure threshold or may cause an arrhythmia, and in those with abnormal vital signs. One study found that only 10% of the people presenting with a benzodiazepine overdose are suitable candidates for treatment with flumazenil.
Interactions
Individual benzodiazepines may have different interactions with certain drugs. Depending on their metabolism pathway, benzodiazepines can be divided roughly into two groups. The largest group consists of those that are metabolized by cytochrome P450 (CYP450) enzymes and possess significant potential for interactions with other drugs. The other group comprises those that are metabolized through glucuronidation, such as lorazepam, oxazepam, and temazepam, and, in general, have few drug interactions.
Many drugs, including oral contraceptives, some antibiotics, antidepressants, and antifungal agents, inhibit cytochrome enzymes in the liver. They reduce the rate of elimination of the benzodiazepines that are metabolized by CYP450, leading to possibly excessive drug accumulation and increased side-effects. In contrast, drugs that induce cytochrome P450 enzymes, such as St John's wort, the antibiotic rifampicin, and the anticonvulsants carbamazepine and phenytoin, accelerate elimination of many benzodiazepines and decrease their action. Taking benzodiazepines with alcohol, opioids and other central nervous system depressants potentiates their action. This often results in increased sedation, impaired motor coordination, suppressed breathing, and other adverse effects that have potential to be lethal. Antacids can slow down absorption of some benzodiazepines; however, this effect is marginal and inconsistent.
Pharmacology
Pharmacodynamics
Benzodiazepines work by increasing the effectiveness of the endogenous chemical, GABA, to decrease the excitability of neurons. This reduces the communication between neurons and, therefore, has a calming effect on many of the functions of the brain.
GABA controls the excitability of neurons by binding to the GABAA receptor. The GABAA receptor is a protein complex located in the synapses between neurons. All GABAA receptors contain an ion channel that conducts chloride ions across neuronal cell membranes and two binding sites for the neurotransmitter gamma-aminobutyric acid (GABA), while a subset of GABAA receptor complexes also contain a single binding site for benzodiazepines. Binding of benzodiazepines to this receptor complex does not alter binding of GABA. Unlike other positive allosteric modulators that increase ligand binding, benzodiazepine binding acts as a positive allosteric modulator by increasing the total conduction of chloride ions across the neuronal cell membrane when GABA is already bound to its receptor. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely.
Different GABAA receptor subtypes have varying distributions within different regions of the brain and, therefore, control distinct neuronal circuits. Hence, activation of different GABAA receptor subtypes by benzodiazepines may result in distinct pharmacological actions. In terms of the mechanism of action of benzodiazepines, their similarities are too great to separate them into individual categories such as anxiolytic or hypnotic. For example, a hypnotic administered in low doses produces anxiety-relieving effects, whereas a benzodiazepine marketed as an anti-anxiety drug at higher doses induces sleep.
The subset of GABAA receptors that also bind benzodiazepines are referred to as benzodiazepine receptors (BzR). The GABAA receptor is a heteromer composed of five subunits, the most common ones being two αs, two βs, and one γ (α2β2γ1). For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABAA receptors that are made up of different combinations of subunit subtypes have different properties, different distributions in the brain and different activities relative to pharmacological and clinical effects. Benzodiazepines bind at the interface of the α and γ subunits on the GABAA receptor. Binding also requires that alpha subunits contain a histidine amino acid residue, (i.e., α1, α2, α3, and α5 containing GABAA receptors). For this reason, benzodiazepines show no affinity for GABAA receptors containing α4 and α6 subunits with an arginine instead of a histidine residue. Once bound to the benzodiazepine receptor, the benzodiazepine ligand locks the benzodiazepine receptor into a conformation in which it has a greater affinity for the GABA neurotransmitter. This increases the frequency of the opening of the associated chloride ion channel and hyperpolarizes the membrane of the associated neuron. The inhibitory effect of the available GABA is potentiated, leading to sedative and anxiolytic effects. For instance, those ligands with high activity at the α1 are associated with stronger hypnotic effects, whereas those with higher affinity for GABAA receptors containing α2 and/or α3 subunits have good anti-anxiety activity.
GABAA receptors participate in the regulation of synaptic pruning by prompting microglial spine engulfment. Benzodiazepines have been shown to upregulate microglial spine engulfment and prompt overzealous eradication of synaptic connections. This mechanism may help explain the increased risk of dementia associated with long-term benzodiazepine treatment.
The benzodiazepine class of drugs also interacts with peripheral benzodiazepine receptors. Peripheral benzodiazepine receptors are present in peripheral nervous system tissues, glial cells, and, to a lesser extent, the central nervous system. These peripheral receptors are not structurally related or coupled to GABAA receptors. They modulate the immune system and are involved in the body's response to injury. Benzodiazepines also function as weak adenosine reuptake inhibitors. It has been suggested that some of their anticonvulsant, anxiolytic, and muscle relaxant effects may be in part mediated by this action. Benzodiazepines have binding sites in the periphery; however, their effects on muscle tone are not mediated through these peripheral receptors. The peripheral binding sites for benzodiazepines are present in immune cells and the gastrointestinal tract.
Pharmacokinetics
A benzodiazepine can be placed into one of three groups by its elimination half-life, the time it takes for the body to eliminate half of the dose. Some benzodiazepines, such as diazepam and chlordiazepoxide, have long-acting active metabolites: both are metabolised into desmethyldiazepam, which has a half-life of 36–200 hours, while flurazepam's main active metabolite, desalkylflurazepam, has a half-life of 40–250 hours. These long-acting metabolites are partial agonists.
Short-acting compounds have a median half-life of 1–12 hours. They have few residual effects if taken before bedtime; rebound insomnia may occur upon discontinuation, and they might cause daytime withdrawal symptoms, such as next-day rebound anxiety, with prolonged usage. Examples are brotizolam, midazolam, and triazolam.
Intermediate-acting compounds have a median half-life of 12–40 hours. They may have some residual effects in the first half of the day if used as a hypnotic. Rebound insomnia, however, is more common upon discontinuation of intermediate-acting benzodiazepines than longer-acting benzodiazepines. Examples are alprazolam, estazolam, flunitrazepam, clonazepam, lormetazepam, lorazepam, nitrazepam, and temazepam.
Long-acting compounds have a half-life of 40–250 hours. They have a risk of accumulation in the elderly and in individuals with severely impaired liver function, but they have a reduced severity of rebound effects and withdrawal. Examples are diazepam, clorazepate, chlordiazepoxide, and flurazepam.
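The half-life groupings above can be illustrated with a small sketch. The example below is hypothetical and not taken from the cited sources: it classifies a compound using the cut-offs quoted in this section and, assuming simple first-order elimination, estimates how much of a single dose remains after 24 hours.

```python
# Hypothetical sketch of the half-life groupings described above.
# The cut-offs (1-12 h, 12-40 h, 40-250 h) come from the text; the example
# half-lives and the 24-hour time point are illustrative values only.

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a single dose still present, assuming first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

def duration_group(half_life_hours: float) -> str:
    """Assign a compound to a group by its elimination half-life (hours)."""
    if half_life_hours <= 12:
        return "short-acting"
    if half_life_hours <= 40:
        return "intermediate-acting"
    return "long-acting"

for t_half in (3, 20, 100):  # illustrative half-lives in hours
    print(f"t1/2 = {t_half} h: {duration_group(t_half)}, "
          f"{fraction_remaining(24, t_half):.0%} of the dose left after 24 h")
```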
Chemistry
Benzodiazepines share a similar chemical structure, and their effects in humans are mainly produced by the allosteric modification of a specific kind of neurotransmitter receptor, the GABAA receptor, which increases the overall conductance of these inhibitory channels; this results in the various therapeutic effects as well as adverse effects of benzodiazepines. Other less important modes of action are also known.
The term benzodiazepine is the chemical name for the heterocyclic ring system (see figure to the right), which is a fusion between the benzene and diazepine ring systems. Under Hantzsch–Widman nomenclature, a diazepine is a heterocycle with two nitrogen atoms, five carbon atoms and the maximum possible number of cumulative double bonds. The "benzo" prefix indicates the benzene ring fused onto the diazepine ring.
Benzodiazepine drugs are substituted 1,4-benzodiazepines, although the chemical term can refer to many other compounds that do not have useful pharmacological properties. Different benzodiazepine drugs have different side groups attached to this central structure. The different side groups affect the binding of the molecule to the GABAA receptor and so modulate the pharmacological properties. Many of the pharmacologically active "classical" benzodiazepine drugs contain the 5-phenyl-1H-benzo[e] [1,4]diazepin-2(3H)-one substructure (see figure to the right). Benzodiazepines have been found to structurally mimic protein reverse turns, which in many cases enables their biological activity.
Nonbenzodiazepines also bind to the benzodiazepine binding site on the GABAA receptor and possess similar pharmacological properties. While the nonbenzodiazepines are by definition structurally unrelated to the benzodiazepines, both classes of drugs possess a common pharmacophore (see figure to the lower-right), which explains their binding to a common receptor site.
Types
2-keto compounds:
clorazepate, diazepam, flurazepam, halazepam, prazepam, and others
3-hydroxy compounds:
lorazepam, lormetazepam, oxazepam, temazepam
7-nitro compounds:
clonazepam, flunitrazepam, nimetazepam, nitrazepam
Triazolo compounds:
adinazolam, alprazolam, estazolam, triazolam
Imidazo compounds:
climazolam, loprazolam, midazolam
1,5-benzodiazepines:
clobazam
History
The first benzodiazepine, chlordiazepoxide (Librium), was synthesized in 1955 by Leo Sternbach while working at Hoffmann–La Roche on the development of tranquilizers. The pharmacological properties of the compounds prepared initially were disappointing, and Sternbach abandoned the project. Two years later, in April 1957, co-worker Earl Reeder noticed a "nicely crystalline" compound left over from the discontinued project while spring-cleaning in the lab. This compound, later named chlordiazepoxide, had not been tested in 1955 because of Sternbach's focus on other issues. Expecting pharmacology results to be negative, and hoping to publish the chemistry-related findings, researchers submitted it for a standard battery of animal tests. The compound showed very strong sedative, anticonvulsant, and muscle relaxant effects. These impressive clinical findings led to its speedy introduction throughout the world in 1960 under the brand name Librium. Following chlordiazepoxide, diazepam was marketed by Hoffmann–La Roche under the brand name Valium in 1963, and for a while the two were the most commercially successful drugs. The introduction of benzodiazepines led to a decrease in the prescription of barbiturates, and by the 1970s they had largely replaced the older drugs for sedative and hypnotic uses.
The new group of drugs was initially greeted with optimism by the medical profession, but gradually concerns arose; in particular, the risk of dependence became evident in the 1980s. Benzodiazepines have a unique history in that they were responsible for the largest-ever class-action lawsuit against drug manufacturers in the United Kingdom, involving 14,000 patients and 1,800 law firms that alleged the manufacturers knew of the dependence potential but intentionally withheld this information from doctors. At the same time, 117 general practitioners and 50 health authorities were sued by patients to recover damages for the harmful effects of dependence and withdrawal. This led some doctors to require a signed consent form from their patients and to recommend that all patients be adequately warned of the risks of dependence and withdrawal before starting treatment with benzodiazepines. The court case against the drug manufacturers never reached a verdict; legal aid had been withdrawn and there were allegations that the consultant psychiatrists, the expert witnesses, had a conflict of interest. The court case fell through at a cost of £30 million and led to more cautious funding of future cases through legal aid. This made future class-action lawsuits less likely to succeed, given the high cost of financing a smaller number of cases and the increased charges each person involved would face if the case were lost.
Although antidepressants with anxiolytic properties have been introduced, and there is increasing awareness of the adverse effects of benzodiazepines, prescriptions for short-term anxiety relief have not significantly dropped. For treatment of insomnia, benzodiazepines are now less popular than nonbenzodiazepines, which include zolpidem, zaleplon and eszopiclone. Nonbenzodiazepines are molecularly distinct, but nonetheless, they work on the same benzodiazepine receptors and produce similar sedative effects.
Benzodiazepines have been detected in plant specimens and brain samples of animals not exposed to synthetic sources, including a human brain from the 1940s. However, it is unclear whether these compounds are biosynthesized by microbes or by plants and animals themselves. A microbial biosynthetic pathway has been proposed.
Society and culture
Legal status
In the United States, benzodiazepines are Schedule IV drugs under the Federal Controlled Substances Act, even when not on the market (for example, flunitrazepam), with the exception of flualprazolam, etizolam, clonazolam, flubromazolam, and diclazepam which are placed in Schedule I.
In Canada, possession of benzodiazepines is legal for personal use. All benzodiazepines are categorized as Schedule IV substances under the Controlled Drugs and Substances Act.
In the United Kingdom, benzodiazepines are Class C controlled drugs, carrying the maximum penalty of 7 years imprisonment, an unlimited fine or both for possession and a maximum penalty of 14 years imprisonment, an unlimited fine or both for supplying benzodiazepines to others.
In the Netherlands, since October 1993, benzodiazepines, including formulations containing less than 20 mg of temazepam, are all placed on List 2 of the Opium Law. A prescription is needed for possession of all benzodiazepines. Temazepam formulations containing 20 mg or greater of the drug are placed on List 1, thus requiring doctors to write prescriptions in the List 1 format.
In East Asia and Southeast Asia, temazepam and nimetazepam are often heavily controlled and restricted. In certain countries, triazolam, flunitrazepam, flutoprazepam and midazolam are also restricted or controlled to certain degrees. In Hong Kong, all benzodiazepines are regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. Previously only brotizolam, flunitrazepam and triazolam were classed as dangerous drugs.
Internationally, benzodiazepines are categorized as Schedule IV controlled drugs, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances.
Recreational use
Benzodiazepines are considered major addictive substances. Non-medical benzodiazepine use is mostly limited to individuals who use other substances, i.e., people who engage in polysubstance use. On the international scene, benzodiazepines are categorized as Schedule IV controlled drugs by the INCB, apart from flunitrazepam, which is a Schedule III drug under the Convention on Psychotropic Substances. Some variation in drug scheduling exists in individual countries; for example, in the United Kingdom, midazolam and temazepam are Schedule III controlled drugs.
British law requires that temazepam (but not midazolam) be stored in safe custody. Safe custody requirements ensure that pharmacists and doctors holding stock of temazepam must store it in securely fixed double-locked steel safety cabinets and maintain a written register, which must be bound, contain separate entries for temazepam, and be written in ink with no use of correction fluid (although a written register is not required for temazepam in the United Kingdom). Disposal of expired stock must be witnessed by a designated inspector (either a local drug-enforcement police officer or an official from the health authority). Benzodiazepine use ranges from occasional binges on large doses to chronic and compulsive use of high doses.
Benzodiazepines are commonly used recreationally by poly-drug users. Mortality is higher among poly-drug users who also use benzodiazepines. Heavy alcohol use also increases mortality among poly-drug users. Polydrug use involving benzodiazepines and alcohol can result in an increased risk of blackouts, risk-taking behaviours, seizures, and overdose. Dependence on and tolerance to benzodiazepines, often coupled with dosage escalation, can develop rapidly among people who misuse drugs; withdrawal syndrome may appear after as little as three weeks of continuous use. Long-term use has the potential to cause both physical and psychological dependence and severe withdrawal symptoms such as depression, anxiety (often to the point of panic attacks), and agoraphobia. Benzodiazepines and, in particular, temazepam are sometimes used intravenously, which, if done incorrectly or in an unsterile manner, can lead to medical complications including abscesses, cellulitis, thrombophlebitis, arterial puncture, deep vein thrombosis, and gangrene. Sharing syringes and needles for this purpose also brings up the possibility of transmission of hepatitis, HIV, and other diseases. Benzodiazepines are also misused intranasally, which may have additional health consequences. Once benzodiazepine dependence has been established, a clinician usually converts the patient to an equivalent dose of diazepam before beginning a gradual reduction program.
A 1999–2005 Australian police survey of detainees reported preliminary findings that self-reported users of benzodiazepines were less likely than non-user detainees to work full-time and more likely to receive government benefits, use methamphetamine or heroin, and be arrested or imprisoned. Benzodiazepines are sometimes used for criminal purposes; they serve to incapacitate a victim in cases of drug assisted rape or robbery.
Overall, anecdotal evidence suggests that temazepam may be the most psychologically habit-forming (addictive) benzodiazepine. Non-medical temazepam use reached epidemic proportions in some parts of the world, in particular, in Europe and Australia, and is a major addictive substance in many Southeast Asian countries. This led authorities of various countries to place temazepam under a more restrictive legal status. Some countries, such as Sweden, banned the drug outright. Temazepam also has certain pharmacokinetic properties of absorption, distribution, elimination, and clearance that make it more apt to non-medical use compared to many other benzodiazepines.
Veterinary use
Benzodiazepines are used in veterinary practice in the treatment of various disorders and conditions. As in humans, they are used in the first-line management of seizures, status epilepticus, and tetanus, and as maintenance therapy in epilepsy (in particular, in cats). They are widely used in small and large animals (including horses, swine, cattle and exotic and wild animals) for their anxiolytic and sedative effects, as pre-medication before surgery, for induction of anesthesia and as adjuncts to anesthesia.
| Biology and health sciences | Drugs and pharmacology | null |
4788 | https://en.wikipedia.org/wiki/Body%20mass%20index | Body mass index | Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is expressed in units of kg/m2, resulting from mass in kilograms (kg) and height in metres (m).
The BMI may be determined first by measuring its components by means of a weighing scale and a stadiometer. The multiplication and division may be carried out directly, by hand or using a calculator, or indirectly using a lookup table (or chart). The table displays BMI as a function of mass and height and may show other units of measurement (converted to metric units for the calculation). The table may also show contour lines or colours for different BMI categories.
The BMI is a convenient rule of thumb used to broadly categorize a person as underweight, normal weight, overweight, or obese based on tissue mass (muscle, fat, and bone) and height. Major adult BMI classifications are underweight (under 18.5 kg/m2), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (30 or more). When used to predict an individual's health, rather than as a statistical measurement for groups, the BMI has limitations that can make it less useful than some of the alternatives, especially when applied to individuals with abdominal obesity, short stature, or high muscle mass.
BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range.
History
Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". Quetelet himself never intended for the index, then called the Quetelet Index, to be used as a means of medical assessment. Instead, it was a component of his study of l'homme moyen, or the average man. Quetelet thought of the average man as a social ideal, and developed the body mass index as a means of discovering the socially ideal human person. According to Lars Grue and Arvid Heiberg in the Scandinavian Journal of Disability Research, Quetelet's idealization of the average man would be elaborated upon by Francis Galton a decade later in the development of Eugenics.
The modern term "body mass index" (BMI) for the ratio of human body weight to squared height was coined in a paper published in the July 1972 edition of the Journal of Chronic Diseases by Ancel Keys and others. In this paper, Keys argued that what he termed the BMI was "if not fully satisfactory, at least as good as any other relative weight index as an indicator of relative obesity".
The interest in an index that measures body fat came with observed increasing obesity in prosperous Western societies. Keys explicitly judged BMI as appropriate for population studies and inappropriate for individual evaluation. Nevertheless, due to its simplicity, it has come to be widely used for preliminary diagnoses. Additional metrics, such as waist circumference, can be more useful.
The BMI is expressed in kg/m2, resulting from mass in kilograms and height in metres. If pounds and inches are used, a conversion factor of 703 (kg/m2)/(lb/in2) is applied. (If pounds and feet are used, a conversion factor of 4.88 is used.) When the term BMI is used informally, the units are usually omitted.
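As a minimal sketch of the formula and conversion factor described above (values illustrative only):

```python
# BMI = mass / height^2 in metric units; with pounds and inches the
# conversion factor of 703 (kg/m^2)/(lb/in^2) mentioned above is applied.

def bmi_metric(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 2

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    return 703 * weight_lb / height_in ** 2

print(round(bmi_metric(70, 1.75), 1))    # 22.9
print(round(bmi_imperial(154, 69), 1))   # roughly the same person: 22.7
```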
BMI provides a simple numeric measure of a person's thickness or thinness, allowing health professionals to discuss weight problems more objectively with their patients. BMI was designed to be used as a simple means of classifying average sedentary (physically inactive) populations, with an average body composition. For such individuals, the BMI value recommendations are as follows: 18.5 to 24.9 kg/m2 may indicate optimal weight, lower than 18.5 may indicate underweight, 25 to 29.9 may indicate overweight, and 30 or more may indicate obese. Lean male athletes often have a high muscle-to-fat ratio and therefore a BMI that is misleadingly high relative to their body-fat percentage.
Categories
A common use of the BMI is to assess how far an individual's body weight departs from what is normal for a person's height. The weight excess or deficiency may, in part, be accounted for by body fat (adipose tissue) although other factors such as muscularity also affect BMI significantly (see discussion below and overweight).
The WHO regards an adult BMI of less than 18.5 as underweight and possibly indicative of malnutrition, an eating disorder, or other health problems, while a BMI of 25 or more is considered overweight and 30 or more is considered obese. In addition to the principal international WHO BMI cut-off points (16, 17, 18.5, 25, 30, 35 and 40), four additional cut-off points for at-risk Asians were identified (23, 27.5, 32.5 and 37.5). These ranges of BMI values are valid only as statistical categories.
Children and youth
BMI is used differently for people aged 2 to 20. It is calculated in the same way as for adults but then compared to typical values for other children or youth of the same age. Instead of comparison against fixed thresholds for underweight and overweight, the BMI is compared against the percentiles for children of the same sex and age.
A BMI that is less than the 5th percentile is considered underweight and above the 95th percentile is considered obese. Children with a BMI between the 85th and 95th percentile are considered to be overweight.
Studies in Britain from 2013 have indicated that females between the ages 12 and 16 had a higher BMI than males of the same age by 1.0 kg/m2 on average.
International variations
These recommended distinctions along the linear scale may vary from time to time and country to country, making global, longitudinal surveys problematic. People from different populations and descent have different associations between BMI, percentage of body fat, and health risks, with a higher risk of type 2 diabetes mellitus and atherosclerotic cardiovascular disease at BMIs lower than the WHO cut-off point for overweight, 25 kg/m2, although the cut-off for observed risk varies among different populations. The cut-off for observed risk varies based on populations and subpopulations in Europe, Asia and Africa.
Hong Kong
The Hospital Authority of Hong Kong recommends the use of the following BMI ranges:
Japan
A 2000 study from the Japan Society for the Study of Obesity (JASSO) presents the following table of BMI categories:
Singapore
In Singapore, the BMI cut-off figures were revised in 2005 by the Health Promotion Board (HPB), motivated by studies showing that many Asian populations, including Singaporeans, have a higher proportion of body fat and increased risk for cardiovascular diseases and diabetes mellitus, compared with general BMI recommendations in other countries. The BMI cut-offs are presented with an emphasis on health risk rather than weight.
United Kingdom
In the UK, NICE guidance recommends prevention of type 2 diabetes should start at a BMI of 30 in White and 27.5 in Black African, African-Caribbean, South Asian, and Chinese populations.
Research since 2021 based on a large sample of almost 1.5 million people in England found that some ethnic groups would benefit from prevention at or above a BMI of (rounded):
30 in White
28 in Black
just below 30 in Black British
29 in Black African
27 in Black Other
26 in Black Caribbean
27 in Arab and Chinese
24 in South Asian
24 in Pakistani, Indian and Nepali
23 in Tamil and Sri Lankan
21 in Bangladeshi
United States
In 1998, the U.S. National Institutes of Health brought U.S. definitions in line with World Health Organization guidelines, lowering the normal/overweight cut-off from a BMI of 27.8 (men) and 27.3 (women) to a BMI of 25. This had the effect of redefining approximately 25 million Americans, previously considered of healthy weight, as overweight.
This can partially explain the increase in the overweight diagnosis in the past 20 years, and the increase in sales of weight loss products during the same time. WHO also recommends lowering the normal/overweight threshold for southeast Asian body types to around BMI 23, and expects further revisions to emerge from clinical studies of different body types.
A survey in 2007 showed 63% of Americans were then overweight or obese, with 26% in the obese category (a BMI of 30 or more). By 2014, 37.7% of adults in the United States were obese, 35.0% of men and 40.4% of women; class 3 obesity (BMI over 40) values were 7.7% for men and 9.9% for women. The U.S. National Health and Nutrition Examination Survey of 2015–2016 showed that 71.6% of American men and women had BMIs over 25. Obesity—a BMI of 30 or more—was found in 39.8% of the US adults.
Consequences of elevated level in adults
The BMI ranges are based on the relationship between body weight and disease and death. Overweight and obese individuals are at an increased risk for the following diseases:
Coronary artery disease
Dyslipidemia
Type 2 diabetes
Gallbladder disease
Hypertension
Osteoarthritis
Sleep apnea
Stroke
Infertility
At least 10 cancers, including endometrial, breast, and colon cancer
Epidural lipomatosis
Among people who have never smoked, overweight/obesity is associated with a 51% increase in mortality compared with people who have always been a normal weight.
Applications
Public health
The BMI is generally used as a means of correlation between groups related by general mass and can serve as a vague means of estimating adiposity. The duality of the BMI is that, while it is easy to use as a general calculation, it is limited as to how accurate and pertinent the data obtained from it can be. Generally, the index is suitable for recognizing trends within sedentary or overweight individuals because there is a smaller margin of error. The BMI has been used by the WHO as the standard for recording obesity statistics since the early 1980s.
This general correlation is particularly useful for consensus data regarding obesity or various other conditions because it can be used to build a semi-accurate representation from which a solution can be stipulated, or the RDA for a group can be calculated. Similarly, this is becoming more and more pertinent to the growth of children, since the majority of children are sedentary.
Cross-sectional studies indicated that sedentary people can decrease BMI by becoming more physically active. Smaller effects are seen in prospective cohort studies which lend to support active mobility as a means to prevent a further increase in BMI.
Legislation
In France, Italy, and Spain, legislation has been introduced banning the usage of fashion show models having a BMI below 18. In Israel, a model with BMI below 18.5 is banned. This is done to fight anorexia among models and people interested in fashion.
Relationship to health
A study published by Journal of the American Medical Association (JAMA) in 2005 showed that overweight people had a death rate similar to normal weight people as defined by BMI, while underweight and obese people had a higher death rate.
A study published by The Lancet in 2009 involving 900,000 adults showed that overweight and underweight people both had a mortality rate higher than normal weight people as defined by BMI. The optimal BMI was found to be in the range of 22.5–25. The average BMI of athletes is 22.4 for women and 23.6 for men.
High BMI is associated with type 2 diabetes only in people with high serum gamma-glutamyl transpeptidase.
In an analysis of 40 studies involving 250,000 people, patients with coronary artery disease with normal BMIs were at higher risk of death from cardiovascular disease than people whose BMIs put them in the overweight range (BMI 25–29.9).
One study found that BMI had a good general correlation with body fat percentage, and noted that obesity has overtaken smoking as the world's number one cause of death. But it also notes that in the study 50% of men and 62% of women were obese according to body fat defined obesity, while only 21% of men and 31% of women were obese according to BMI, meaning that BMI was found to underestimate the number of obese subjects.
A 2010 study that followed 11,000 subjects for up to eight years concluded that BMI is not the most appropriate measure for the risk of heart attack, stroke or death. A better measure was found to be the waist-to-height ratio. A 2011 study that followed 60,000 participants for up to 13 years found that waist–hip ratio was a better predictor of ischaemic heart disease mortality.
Limitations
The medical establishment and statistical community have both highlighted the limitations of BMI.
Racial and gender differences
Part of the statistical limitations of the BMI scale is the result of Quetelet's original sampling methods. As noted in his primary work, A Treatise on Man and the Development of His Faculties, the data from which Quetelet derived his formula was taken mostly from Scottish Highland soldiers and French Gendarmerie. The BMI was always designed as a metric for European men. For women, and people of non-European origin, the scale is often biased. As noted by sociologist Sabrina Strings, the BMI is largely inaccurate for black people especially, disproportionately labelling them as overweight even for healthy individuals. A 2012 study of BMI in an ethnically diverse population showed that "adult overweight and obesity were associated with an increased risk of mortality ... across the five racial/ethnic groups".
Scaling
The exponent in the denominator of the formula for BMI is arbitrary. The BMI depends upon weight and the square of height. Since mass increases to the third power of linear dimensions, taller individuals with exactly the same body shape and relative composition have a larger BMI. BMI is proportional to the mass and inversely proportional to the square of the height. So, if all body dimensions double, and mass scales naturally with the cube of the height, then BMI doubles instead of remaining the same. This results in taller people having a reported BMI that is uncharacteristically high, compared to their actual body fat levels. In comparison, the Ponderal index is based on the natural scaling of mass with the third power of the height.
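A short numerical check of this scaling argument (illustrative values only): doubling every linear dimension multiplies mass by eight but the squared height by only four, so the reported BMI doubles even though shape and composition are unchanged.

```python
# Illustrative check of the scaling argument: double all linear dimensions.
height, mass = 1.70, 65.0        # arbitrary starting values (m, kg)
bmi = mass / height ** 2

height_2x = 2 * height           # every linear dimension doubled
mass_2x = 8 * mass               # mass scales with volume, i.e. with height cubed
bmi_2x = mass_2x / height_2x ** 2

print(round(bmi, 1), round(bmi_2x, 1), round(bmi_2x / bmi, 2))  # 22.5, 45.0, 2.0
```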
However, many taller people are not just "scaled up" short people but tend to have narrower frames in proportion to their height. Carl Lavie has written that "The B.M.I. tables are excellent for identifying obesity and body fat in large populations, but they are far less reliable for determining fatness in individuals."
For US adults, exponent estimates range from 1.92 to 1.96 for males and from 1.45 to 1.95 for females.
Physical characteristics
The BMI overestimates roughly 10% for a large (or tall) frame and underestimates roughly 10% for a smaller frame (short stature). In other words, people with small frames would be carrying more fat than optimal, but their BMI indicates that they are normal. Conversely, large framed (or tall) individuals may be quite healthy, with a fairly low body fat percentage, but be classified as overweight by BMI.
For example, a height/weight chart may say the ideal weight (BMI 21.5) for a man is . But if that man has a slender build (small frame), he may be overweight at and should reduce by 10% to roughly (BMI 19.4). In the reverse, the man with a larger frame and more solid build should increase by 10%, to roughly (BMI 23.7). If one teeters on the edge of small/medium or medium/large, common sense should be used in calculating one's ideal weight. However, falling into one's ideal weight range for height and build is still not as accurate in determining health risk factors as waist-to-height ratio and actual body fat percentage.
Accurate frame size calculators use several measurements (wrist circumference, elbow width, neck circumference, and others) to determine what category an individual falls into for a given height. The BMI also fails to take into account loss of height through ageing. In this situation, BMI will increase without any corresponding increase in weight.
Muscle versus fat
Assumptions about the distribution between muscle mass and fat mass are inexact. BMI generally overestimates adiposity on those with leaner body mass (e.g., athletes) and underestimates excess adiposity on those with fattier body mass.
A study in June 2008 by Romero-Corral et al. examined 13,601 subjects from the United States' third National Health and Nutrition Examination Survey (NHANES III) and found that BMI-defined obesity (BMI ≥ 30) was present in 21% of men and 31% of women. Body fat-defined obesity was found in 50% of men and 62% of women. While BMI-defined obesity showed high specificity (95% for men and 99% for women), BMI showed poor sensitivity (36% for men and 49% for women). In other words, the BMI will be mostly correct when determining a person to be obese, but can err quite frequently when determining a person not to be. Despite this undercounting of obesity by BMI, BMI values in the intermediate BMI range of 20–30 were found to be associated with a wide range of body fat percentages. For men with a BMI of 25, about 20% have a body fat percentage below 20% and about 10% have body fat percentage above 30%.
Body composition for athletes is often better calculated using measures of body fat, as determined by such techniques as skinfold measurements or underwater weighing and the limitations of manual measurement have also led to alternative methods to measure obesity, such as the body volume indicator.
Variation in definitions of categories
It is not clear where on the BMI scale the threshold for overweight and obese should be set. Because of this, the standards have varied over the past few decades. Between 1980 and 2000 the U.S. Dietary Guidelines have defined overweight at a variety of levels ranging from a BMI of 24.9 to 27.1. In 1985, the National Institutes of Health (NIH) consensus conference recommended that overweight BMI be set at a BMI of 27.8 for men and 27.3 for women.
In 1998, an NIH report concluded that a BMI over 25 is overweight and a BMI over 30 is obese. In the 1990s the World Health Organization (WHO) decided that a BMI of 25 to 30 should be considered overweight and a BMI over 30 is obese, the standards the NIH set. This became the definitive guide for determining if someone is overweight.
One study found that the vast majority of people labelled 'overweight' and 'obese' according to current definitions do not in fact face any meaningful increased risk for early death. In a quantitative analysis of several studies, involving more than 600,000 men and women, the lowest mortality rates were found for people with BMIs between 23 and 29; most of the 25–30 range considered 'overweight' was not associated with higher risk.
Alternatives
Corpulence index (exponent of 3)
The corpulence index uses an exponent of 3 rather than 2. The corpulence index yields valid results even for very short and very tall people, which is a problem with BMI. For example, a tall person at an ideal body weight of gives a normal BMI of 20.74 and CI of 13.6, while a tall person with a weight of gives a BMI of 24.84, very close to an overweight BMI of 25, and a CI of 12.4, very close to a normal CI of 12.
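A minimal sketch under the usual definition of the corpulence index, mass in kilograms divided by the cube of height in metres (which yields values in the 12–14 range quoted above): two people with identical body proportions but different heights get the same corpulence index, while the taller person's BMI is higher.

```python
# Corpulence (Ponderal) index: mass / height^3, assuming kg and m.
def corpulence_index(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 3

def bmi(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 2

# Illustrative values: person B is person A scaled up by 20% in every dimension,
# so B's mass is 1.2**3 times A's.
a_height, a_mass = 1.60, 52.0
b_height, b_mass = 1.60 * 1.2, 52.0 * 1.2 ** 3

print(round(bmi(a_mass, a_height), 1), round(corpulence_index(a_mass, a_height), 1))  # 20.3, 12.7
print(round(bmi(b_mass, b_height), 1), round(corpulence_index(b_mass, b_height), 1))  # 24.4, 12.7
```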
New BMI (exponent of 2.5)
A study found that the best exponent E for predicting the fat percent would be between 2 and 2.5 in an index of the form mass divided by height raised to the power E.
An exponent of 5/2 or 2.5 was proposed by Quetelet in the 19th century:
In general, we do not err much when we assume that during development the squares of the weight at different ages are as the fifth powers of the height
This exponent of 2.5 is used in a revised formula for Body Mass Index, proposed by Nick Trefethen, Professor of numerical analysis at the University of Oxford, which minimizes the distortions for shorter and taller individuals resulting from the use of an exponent of 2 in the traditional BMI formula:
The scaling factor of 1.3 was determined to make the proposed new BMI formula align with the traditional BMI formula for adults of average height, while the exponent of 2.5 is a compromise between the exponent of 2 in the traditional formula for BMI and the exponent of 3 that would be expected for the scaling of weight (which at constant density would theoretically scale with volume, i.e., as the cube of the height) with height. In Trefethen's analysis, an exponent of 2.5 was found to fit empirical data more closely with less distortion than either an exponent of 2 or 3.
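A minimal sketch of the revised formula as described above (1.3 × mass / height^2.5, with mass in kilograms and height in metres), compared with the traditional formula for a fixed mass at three illustrative heights:

```python
# Trefethen's proposed formula versus the traditional one, as described above.
def bmi_traditional(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 2

def bmi_new(mass_kg: float, height_m: float) -> float:
    return 1.3 * mass_kg / height_m ** 2.5

# Illustrative values: the two formulas nearly coincide at average height
# (1.69 m here) and diverge for shorter and taller people.
for height in (1.52, 1.69, 1.90):
    print(height, round(bmi_traditional(70, height), 1), round(bmi_new(70, height), 1))
```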
BMI prime (exponent of 2, normalization factor)
BMI Prime, a modification of the BMI system, is the ratio of actual BMI to the upper-limit optimal BMI (currently defined as 25 kg/m2), i.e., the actual BMI expressed as a proportion of the upper-limit optimal. BMI Prime is a dimensionless number independent of units. Individuals with a BMI Prime less than 0.74 are underweight; those between 0.74 and 1.00 have optimal weight; and those at 1.00 or greater are overweight. BMI Prime is useful clinically because it shows by what ratio (e.g. 1.36) or percentage (e.g. 136%, or 36% above) a person deviates from the maximum optimal BMI.
For instance, a person with BMI 34 kg/m2 has a BMI Prime of 34/25 = 1.36, and is 36% over their upper mass limit. In South East Asian and South Chinese populations (see § international variations), BMI Prime should be calculated using an upper limit BMI of 23 in the denominator instead of 25. BMI Prime allows easy comparison between populations whose upper-limit optimal BMI values differ.
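A minimal sketch of the BMI Prime calculation described above, with the default upper limit of 25 kg/m2 and the 23 kg/m2 limit noted for some Asian populations:

```python
# BMI Prime: actual BMI divided by the upper-limit optimal BMI.
def bmi_prime(bmi: float, upper_limit: float = 25.0) -> float:
    return bmi / upper_limit

print(bmi_prime(34))                  # 1.36 -> 36% over the upper mass limit
print(bmi_prime(18.5))                # 0.74 -> lower bound of the optimal range
print(bmi_prime(24, upper_limit=23))  # ~1.04 using the 23 kg/m2 limit
```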
Waist circumference
Waist circumference is a good indicator of visceral fat, which poses more health risks than fat elsewhere. According to the U.S. National Institutes of Health (NIH), waist circumference in excess of for men and for (non-pregnant) women is considered to imply a high risk for type 2 diabetes, dyslipidemia, hypertension, and cardiovascular disease (CVD). Waist circumference can be a better indicator of obesity-related disease risk than BMI. For example, this is the case in populations of Asian descent and older people. for men and for women has been stated to pose "higher risk", with the NIH figures "even higher".
Waist-to-hip circumference ratio has also been used, but has been found to be no better than waist circumference alone, and more complicated to measure.
A related indicator is waist circumference divided by height. A 2013 study identified critical threshold values for waist-to-height ratio according to age, with consequent significant reduction in life expectancy if exceeded. These are: 0.5 for people under 40 years of age, 0.5 to 0.6 for people aged 40–50, and 0.6 for people over 50 years of age.
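The age-dependent thresholds above can be sketched as a small lookup; how the 2013 study treated ages between 40 and 50 is not specified here, so the linear rise used below is purely an assumption for illustration.

```python
# Critical waist-to-height ratio by age, per the thresholds quoted above.
# The linear interpolation between ages 40 and 50 is an assumption made for
# this sketch, not a detail taken from the cited study.

def critical_whtr(age_years: float) -> float:
    if age_years < 40:
        return 0.5
    if age_years <= 50:
        return 0.5 + 0.01 * (age_years - 40)  # assumed linear rise from 0.5 to 0.6
    return 0.6

def exceeds_threshold(waist_cm: float, height_cm: float, age_years: float) -> bool:
    return waist_cm / height_cm > critical_whtr(age_years)

print(exceeds_threshold(94, 178, 35))   # 0.53 > 0.50 -> True
print(exceeds_threshold(94, 178, 55))   # 0.53 > 0.60 -> False
```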
Surface-based body shape index
The Surface-based Body Shape Index (SBSI) is far more rigorous and is based upon four key measurements: the body surface area (BSA), vertical trunk circumference (VTC), waist circumference (WC) and height (H). Data on 11,808 subjects from the National Health and Nutrition Examination Surveys (NHANES) 1999–2004 showed that SBSI outperformed BMI, waist circumference, and A Body Shape Index (ABSI), an alternative to BMI.
A simplified, dimensionless form of SBSI, known as SBSI*, has also been developed.
Modified body mass index
Within some medical contexts, such as familial amyloid polyneuropathy, serum albumin is factored in to produce a modified body mass index (mBMI). The mBMI can be obtained by multiplying the BMI by serum albumin, in grams per litre.
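A one-line sketch of the modified index as described above (BMI multiplied by serum albumin in grams per litre; values illustrative):

```python
# Modified BMI (mBMI) = BMI * serum albumin (g/L), as described above.
def modified_bmi(bmi: float, serum_albumin_g_per_l: float) -> float:
    return bmi * serum_albumin_g_per_l

print(modified_bmi(22.0, 40.0))  # illustrative: BMI 22 and albumin 40 g/L give 880
```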
| Biology and health sciences | Health and fitness | null |
4802 | https://en.wikipedia.org/wiki/Biome | Biome | A biome is a distinct geographical region with specific climate, vegetation, and animal life. It consists of a biological community that has formed in response to its physical environment and regional climate. Biomes may span more than one continent. A biome encompasses multiple ecosystems within its boundaries. It can also comprise a variety of habitats.
While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body.
A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere.
Terminology
The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome.
However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly to biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional terminology—irrespective of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil).
In the Brazilian literature, the term biome is sometimes used as a synonym of biogeographic province, an area based on species composition (the term floristic province being used when plant species are considered), or also as synonym of the "morphoclimatic and phytogeographical domain" of Ab'Sáber, a geographic space with subcontinental dimensions, with the predominance of similar geomorphologic and climatic characteristics, and of a certain vegetation form. Both include many biomes in fact.
Classifications
To divide the world into a few ecological zones is difficult, notably because of the small-scale variations that exist everywhere on earth and because of the gradual changeover from one biome to the other. Their boundaries must therefore be drawn arbitrarily and their characterization made according to the average conditions that predominate in them.
A 1978 study on North American grasslands found a positive logistic correlation between evapotranspiration in mm/yr and above-ground net primary production in g/m2/yr. The general results from the study were that precipitation and water use led to above-ground primary production, while solar irradiation and temperature led to below-ground primary production (roots), and temperature and water led to cool and warm season growth habit. These findings help explain the categories used in Holdridge's bioclassification scheme (see below), which were later simplified by Whittaker. The number of classification schemes and the variety of determinants used in those schemes, however, should be taken as strong indicators that biomes do not fit perfectly into the classification schemes created.
Holdridge (1947, 1964) life zones
In 1947, the American botanist and climatologist Leslie Holdridge classified climates based on the biological effects of temperature and rainfall on vegetation under the assumption that these two abiotic factors are the largest determinants of the types of vegetation found in a habitat. Holdridge uses the four axes to define 30 so-called "humidity provinces", which are clearly visible in his diagram. While this scheme largely ignores soil and sun exposure, Holdridge acknowledged that these were important.
Allee (1949) biome-types
The principal biome-types by Allee (1949):
Tundra
Taiga
Deciduous forest
Grasslands
Desert
High plateaus
Tropical forest
Minor terrestrial biomes
Kendeigh (1961) biomes
The principal biomes of the world by Kendeigh (1961):
Terrestrial
Temperate deciduous forest
Coniferous forest
Woodland
Chaparral
Tundra
Grassland
Desert
Tropical savanna
Tropical forest
Marine
Oceanic plankton and nekton
Balanoid-gastropod-thallophyte
Pelecypod-annelid
Coral reef
Whittaker (1962, 1970, 1975) biome-types
Whittaker classified biomes using two abiotic factors: precipitation and temperature. His scheme can be seen as a simplification of Holdridge's; more readily accessible, but missing Holdridge's greater specificity.
Whittaker based his approach on theoretical assertions and empirical sampling. He had previously compiled a review of biome classifications.
Key definitions for understanding Whittaker's scheme
Physiognomy: sometimes referring to the plants' appearance, or to the apparent characteristics, outward features, or appearance of ecological communities or species, including plants.
Biome: a grouping of terrestrial ecosystems on a given continent that is similar in vegetation structure, physiognomy, features of the environment and characteristics of their animal communities.
Formation: a major kind of community of plants on a given continent.
Biome-type: grouping of convergent biomes or formations of different continents, defined by physiognomy.
Formation-type: a grouping of convergent formations.
Whittaker's distinction between biome and formation can be simplified: formation is used when applied to plant communities only, while biome is used when concerned with both plants and animals. Whittaker's convention of biome-type or formation-type is a broader method to categorize similar communities.
Whittaker's parameters for classifying biome-types
Whittaker used what he called "gradient analysis" of ecocline patterns to relate communities to climate on a worldwide scale. Whittaker considered four main ecoclines in the terrestrial realm.
Intertidal levels: The wetness gradient of areas that are exposed to alternating water and dryness with intensities that vary by location from high to low tide
Climatic moisture gradient
Temperature gradient by altitude
Temperature gradient by latitude
Along these gradients, Whittaker noted several trends that allowed him to qualitatively establish biome-types:
The gradient runs from favorable to the extreme, with corresponding changes in productivity.
Changes in physiognomic complexity vary with how favorable the environment is (decreasing community structure and reduction of stratal differentiation as the environment becomes less favorable).
Trends in the diversity of structure follow trends in species diversity; alpha and beta species diversities decrease from favorable to extreme environments.
Each growth-form (i.e. grasses, shrubs, etc.) has its characteristic place of maximum importance along the ecoclines.
The same growth forms may be dominant in similar environments in widely different parts of the world.
Whittaker summed the effects of gradients (3) and (4) to get an overall temperature gradient and combined this with a gradient (2), the moisture gradient, to express the above conclusions in what is known as the Whittaker classification scheme. The scheme graphs average annual precipitation (x-axis) versus average annual temperature (y-axis) to classify biome-types.
Biome-types
Tropical rainforest
Tropical seasonal rainforest
deciduous
semideciduous
Temperate giant rainforest
Montane rainforest
Temperate deciduous forest
Temperate evergreen forest
needleleaf
sclerophyll
Subarctic-subalpine needle-leaved forests (taiga)
Elfin woodland
Thorn forest
Thorn scrub
Temperate woodland
Temperate shrublands
deciduous
heath
sclerophyll
subalpine-needleleaf
subalpine-broadleaf
Savanna
Temperate grassland
Alpine grasslands
Tundra
Tropical desert
Warm-temperate desert
Cool temperate desert scrub
Arctic-alpine desert
Bog
Tropical fresh-water swamp forest
Temperate fresh-water swamp forest
Mangrove swamp
Salt marsh
Wetland
Goodall (1974–) ecosystem types
The multi-authored series Ecosystems of the World, edited by David W. Goodall, provides a comprehensive coverage of the major "ecosystem types or biomes" on Earth:
Walter (1976, 2002) zonobiomes
The eponymously named Heinrich Walter classification scheme considers the seasonality of temperature and precipitation. The system, also assessing precipitation and temperature, finds nine major biome types, with the important climate traits and vegetation types. The boundaries of each biome correlate to the conditions of moisture and cold stress that are strong determinants of plant form, and therefore the vegetation that defines the region. Extreme conditions, such as flooding in a swamp, can create different kinds of communities within the same biome.
Schultz (1988) eco-zones
Schultz (1988, 2005) defined nine ecozones (his concept of ecozone is more similar to the concept of biome than to the concept of ecozone of BBC):
polar/subpolar zone
boreal zone
humid mid-latitudes
dry mid-latitudes
subtropics with winter rain
subtropics with year-round rain
dry tropics and subtropics
tropics with summer rain
tropics with year-round rain
Bailey (1989) ecoregions
Robert G. Bailey developed a biogeographical classification system of ecoregions for the United States in a map published in 1976. He subsequently expanded the system to include the rest of North America in 1981, and the world in 1989. The Bailey system, based on climate, is divided into four domains (polar, humid temperate, dry, and humid tropical), with further divisions based on other climate characteristics (subarctic, warm temperate, hot temperate, and subtropical; marine and continental; lowland and mountain).
100 Polar Domain
120 Tundra Division (Köppen: Ft)
M120 Tundra Division – Mountain Provinces
130 Subarctic Division (Köppen: E)
M130 Subarctic Division – Mountain Provinces
200 Humid Temperate Domain
210 Warm Continental Division (Köppen: portion of Dcb)
M210 Warm Continental Division – Mountain Provinces
220 Hot Continental Division (Köppen: portion of Dca)
M220 Hot Continental Division – Mountain Provinces
230 Subtropical Division (Köppen: portion of Cf)
M230 Subtropical Division – Mountain Provinces
240 Marine Division (Köppen: Do)
M240 Marine Division – Mountain Provinces
250 Prairie Division (Köppen: arid portions of Cf, Dca, Dcb)
260 Mediterranean Division (Köppen: Cs)
M260 Mediterranean Division – Mountain Provinces
300 Dry Domain
310 Tropical/Subtropical Steppe Division
M310 Tropical/Subtropical Steppe Division – Mountain Provinces
320 Tropical/Subtropical Desert Division
330 Temperate Steppe Division
340 Temperate Desert Division
400 Humid Tropical Domain
410 Savanna Division
420 Rainforest Division
Olson & Dinerstein (1998) biomes for WWF / Global 200
A team of biologists convened by the World Wildlife Fund (WWF) developed a scheme that divided the world's land area into biogeographic realms (called "ecozones" in a BBC scheme), and these into ecoregions (Olson & Dinerstein, 1998, etc.). Each ecoregion is characterized by a main biome (also called major habitat type).
This classification is used to define the Global 200 list of ecoregions identified by the WWF as priorities for conservation.
For the terrestrial ecoregions, there is a specific EcoID, format XXnnNN (XX is the biogeographic realm, nn is the biome number, NN is the individual number).
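A small sketch of the EcoID layout described above (format XXnnNN), using the example PA0418 that appears later in this article; the parser is illustrative, not an official WWF tool.

```python
# Parse an EcoID of the form XXnnNN: realm code, biome number, ecoregion number.
def parse_ecoid(ecoid: str) -> dict:
    if len(ecoid) != 6:
        raise ValueError("expected the form XXnnNN, e.g. 'PA0418'")
    return {
        "realm": ecoid[:2],         # biogeographic realm, e.g. "PA" (Palearctic)
        "biome": int(ecoid[2:4]),   # biome / major habitat type number
        "number": int(ecoid[4:6]),  # individual ecoregion number
    }

print(parse_ecoid("PA0418"))  # {'realm': 'PA', 'biome': 4, 'number': 18}
```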
Biogeographic realms (terrestrial and freshwater)
NA: Nearctic
PA: Palearctic
AT: Afrotropic
IM: Indomalaya
AA: Australasia
NT: Neotropic
OC: Oceania
AN: Antarctic
The applicability of the realms scheme above (based on Udvardy, 1975) to most freshwater taxa is unresolved.
Biogeographic realms (marine)
Arctic
Temperate Northern Atlantic
Temperate Northern Pacific
Tropical Atlantic
Western Indo-Pacific
Central Indo-Pacific
Eastern Indo-Pacific
Tropical Eastern Pacific
Temperate South America
Temperate Southern Africa
Temperate Australasia
Southern Ocean
Biomes (terrestrial)
Tropical and subtropical moist broadleaf forests (tropical and subtropical, humid)
Tropical and subtropical dry broadleaf forests (tropical and subtropical, semihumid)
Tropical and subtropical coniferous forests (tropical and subtropical, semihumid)
Temperate broadleaf and mixed forests (temperate, humid)
Temperate coniferous forests (temperate, humid to semihumid)
Boreal forests/taiga (subarctic, humid)
Tropical and subtropical grasslands, savannas, and shrublands (tropical and subtropical, semiarid)
Temperate grasslands, savannas, and shrublands (temperate, semiarid)
Flooded grasslands and savannas (temperate to tropical, fresh or brackish water inundated)
Montane grasslands and shrublands (alpine or montane climate)
Tundra (Arctic)
Mediterranean forests, woodlands, and scrub or sclerophyll forests (temperate warm, semihumid to semiarid with winter rainfall)
Deserts and xeric shrublands (temperate to tropical, arid)
Mangrove (subtropical and tropical, salt water inundated)
Biomes (freshwater)
According to the WWF, the following are classified as freshwater biomes:
Large lakes
Large river deltas
Polar freshwaters
Montane freshwaters
Temperate coastal rivers
Temperate floodplain rivers and wetlands
Temperate upland rivers
Tropical and subtropical coastal rivers
Tropical and subtropical floodplain rivers and wetlands
Tropical and subtropical upland rivers
Xeric freshwaters and endorheic basins
Oceanic islands
Biomes (marine)
Biomes of the coastal and continental shelf areas (neritic zone):
Polar
Temperate shelves and sea
Temperate upwelling
Tropical upwelling
Tropical coral
Summary of the scheme
Biosphere
Biogeographic realms (terrestrial) (8)
Ecoregions (867), each characterized by a biome, a major habitat type (14)
Ecosystems (biotopes)
Biosphere
Biogeographic realms (freshwater) (8)
Ecoregions (426), each characterized by a biome, a major habitat type (12)
Ecosystems (biotopes)
Biosphere
Biogeographic realms (marine) (12)
(Marine provinces) (62)
Ecoregions (232), each characterized by a biome, a major habitat type (5)
Ecosystems (biotopes)
Example:
Biosphere
Biogeographic realm: Palearctic
Ecoregion: Dinaric Mountains mixed forests (PA0418); biome type: temperate broadleaf and mixed forests
Ecosystem: Orjen, vegetation belt between 1,100 and 1,450 m, Oromediterranean zone, nemoral zone (temperate zone)
Biotope: Oreoherzogio-Abietetum illyricae Fuk. (Plant list)
Plant: Silver fir (Abies alba)
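The example above can also be read as one path through the summary hierarchy (biosphere, realm, ecoregion, ecosystem, biotope). The sketch below encodes that path with simple data classes; the class and field names are illustrative assumptions for this example, not WWF terminology.

```python
# Illustrative encoding of the worked example above (names are assumptions).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Ecoregion:
    name: str
    ecoid: str
    biome: str                                      # major habitat type
    ecosystems: List[str] = field(default_factory=list)

@dataclass
class Realm:
    name: str
    ecoregions: List[Ecoregion] = field(default_factory=list)

palearctic = Realm(
    name="Palearctic",
    ecoregions=[
        Ecoregion(
            name="Dinaric Mountains mixed forests",
            ecoid="PA0418",
            biome="Temperate broadleaf and mixed forests",
            ecosystems=["Orjen, vegetation belt between 1,100 and 1,450 m"],
        )
    ],
)
print(palearctic.ecoregions[0].biome)
```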
Other biomes
Marine biomes
Pruvot (1896) zones or "systems":
Littoral zone
Pelagic zone
Abyssal zone
Longhurst (1998) biomes:
Coastal
Polar
Trade wind
Westerly
Other marine habitat types (not covered yet by the Global 200/WWF scheme):
Open sea
Deep sea
Hydrothermal vents
Cold seeps
Benthic zone
Pelagic zone (trades and westerlies)
Abyssal
Hadal (ocean trench)
Littoral/Intertidal zone
Salt marsh
Estuaries
Coastal lagoons/Atoll lagoons
Kelp forest
Pack ice
Anthropogenic biomes
Humans have altered global patterns of biodiversity and ecosystem processes. As a result, vegetation forms predicted by conventional biome systems can no longer be observed across much of Earth's land surface as they have been replaced by crops and rangelands or cities. Anthropogenic biomes provide an alternative view of the terrestrial biosphere based on global patterns of sustained direct human interaction with ecosystems, including agriculture, human settlements, urbanization, forestry and other uses of land. Anthropogenic biomes offer a way to recognize the irreversible coupling of human and ecological systems at global scales and manage Earth's biosphere and anthropogenic biomes.
Major anthropogenic biomes:
Dense settlements
Croplands
Rangelands
Forested
Indoor
Microbial biomes
Endolithic biomes
The endolithic biome, consisting entirely of microscopic life in rock pores and cracks, kilometers beneath the surface, has only recently been discovered, and does not fit well into most classification schemes.
Effects of climate change
Anthropogenic climate change has the potential to greatly alter the distribution of Earth's biomes; that is, biomes around the world could change so much that they would effectively become new biomes. More specifically, between 22% and 54% of global land area is projected to experience climates that correspond to other biomes, and 3.6% of land area is projected to experience climates that are completely new or unusual. An example of a biome shift is woody plant encroachment, which can change grass savanna into shrub savanna.
Average temperatures have risen more than twice as much as the global average in Arctic and mountainous biomes, which leads to the conclusion that these biomes are currently the most vulnerable to climate change. South American terrestrial biomes are predicted to go through the same temperature trends as Arctic and mountainous biomes. With annual average temperatures continuing to increase, the moisture currently held in forest biomes is expected to dry up.
| Biology and health sciences | Ecology | null |
4816 | https://en.wikipedia.org/wiki/Biosphere | Biosphere | The biosphere (), also called the ecosphere (), is the worldwide sum of all ecosystems. It can also be termed the zone of life on the Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences.
Narrow definition
Geochemists define the biosphere as being the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being geosphere, hydrosphere, and atmosphere. When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both biological and physical components of the planet.
The Second International Conference on Closed Life Systems defined biospherics as the science and technology of analogs and models of Earth's biosphere; i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics.
Earth's biosphere
Overview
Currently, the total number of living cells on the Earth is estimated to be 10^30; the total number since the beginning of Earth, as 10^40, and the total number for the entire time of a habitable planet Earth as 10^41. This is much larger than the total number of estimated stars (and Earth-like planets) in the observable universe, 10^24, a number which is more than all the grains of beach sand on planet Earth; but less than the total number of atoms estimated in the observable universe, 10^82; and the estimated total number of stars in an inflationary universe (observed and unobserved), 10^100.
Age
The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe."
Extent
Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on Earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench.
There are more extreme examples for life on the planet: Rüppell's vulture has been found at altitudes of ; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs.
Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, and at least high in the atmosphere. Marine life under many forms has been found in the deepest reaches of the world ocean while much of the deep sea remains to be explored.
Under certain test conditions, microorganisms have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10^17 g. The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at more than a depth of in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, by the Challenger Deep, at depths of . Other researchers reported related studies that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including the type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is (Methanopyrus kandleri Strain 116). It is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth. On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica.
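As a rough back-of-the-envelope check (my own arithmetic, assuming metric tonnes; not a figure from the cited studies), the two carbon estimates quoted above come out at the same order of magnitude:

```python
# Rough consistency check of the two quoted figures (assumes metric tonnes).
soil_subsurface_carbon_g = 5e17        # 5 x 10^17 g of soil/subsurface bacterial carbon
prokaryote_carbon_g = 0.8e12 * 1e6     # 0.8 trillion tonnes of carbon; 1 tonne = 10^6 g
print(soil_subsurface_carbon_g, prokaryote_carbon_g)   # 5e+17 and 8e+17: same order of magnitude
```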
Earth's biosphere is divided into several biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life. In contrast, most of the more populous biomes lie near the equator.
Annual variation
Artificial biospheres
Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. These include spacecraft and the following terrestrial laboratories:
Biosphere 2 in Arizona, United States, 3.15 acres (13,000 m2).
BIOS-1, BIOS-2 and BIOS-3 at the Institute of Biophysics in Krasnoyarsk, Siberia, in what was then the Soviet Union.
Biosphere J (CEEF, Closed Ecology Experiment Facilities), an experiment in Japan.
Micro-Ecological Life Support System Alternative (MELiSSA) at Autonomous University of Barcelona
Extraterrestrial biospheres
No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres.
Based on observations by the Kepler Space Telescope team, it has been calculated that provided the probability of abiogenesis is higher than 1 to 1000, the closest alien biosphere should be within 100 light-years from the Earth.
It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars.
| Biology and health sciences | Ecology | null |
4817 | https://en.wikipedia.org/wiki/Biological%20membrane | Biological membrane | A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates the interior of a cell from the external environment or creates intracellular compartments by serving as a boundary between one part of the cell and another. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded, integral and peripheral proteins used in communication and transportation of chemicals and ions. The bulk of lipids in a cell membrane provides a fluid matrix for proteins to rotate and laterally diffuse for physiological functioning. Proteins are adapted to high membrane fluidity environment of the lipid bilayer with the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. The cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes.
Composition
Asymmetry
The lipid bilayer consists of two layers: an outer leaflet and an inner leaflet. The components of bilayers are distributed unequally between the two surfaces to create asymmetry between the outer and inner surfaces. This asymmetric organization is important for cell functions such as cell signaling. The asymmetry of the biological membrane reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer leaflet and inner leaflet of the membrane are asymmetrical in their composition. Certain proteins and lipids rest only on one surface of the membrane and not the other.
Both the plasma membrane and internal membranes have cytosolic and exoplasmic faces.
This orientation is maintained during membrane trafficking – proteins, lipids, glycoconjugates facing the lumen of the ER and Golgi get expressed on the extracellular side of the plasma membrane. In eukaryotic cells, new phospholipids are manufactured by enzymes bound to the part of the endoplasmic reticulum membrane that faces the cytosol. These enzymes, which use free fatty acids as substrates, deposit all newly made phospholipids into the cytosolic half of the bilayer. To enable the membrane as a whole to grow evenly, half of the new phospholipid molecules then have to be transferred to the opposite monolayer. This transfer is catalyzed by enzymes called flippases. In the plasma membrane, flippases transfer specific phospholipids selectively, so that different types become concentrated in each monolayer.
Using selective flippases is not the only way to produce asymmetry in lipid bilayers, however. In particular, a different mechanism operates for glycolipids—the lipids that show the most striking and consistent asymmetric distribution in animal cells.
Lipids
The biological membrane is made up of lipids with hydrophobic tails and hydrophilic heads. The hydrophobic tails are hydrocarbon tails whose length and saturation is important in characterizing the cell. Lipid rafts occur when lipid species and proteins aggregate in domains in the membrane. These help organize membrane components into localized areas that are involved in specific processes, such as signal transduction.
Red blood cells, or erythrocytes, have a unique lipid composition. The bilayer of red blood cells is composed of cholesterol and phospholipids in equal proportions by weight. The erythrocyte membrane plays a crucial role in blood clotting. The bilayer of red blood cells also contains phosphatidylserine, which is usually located on the cytoplasmic side of the membrane but is flipped to the outer leaflet to be used during blood clotting.
Proteins
Phospholipid bilayers contain different proteins. These membrane proteins have various functions and characteristics and catalyze different chemical reactions. Integral proteins span the membranes with different domains on either side. Integral proteins hold strong association with the lipid bilayer and cannot easily become detached. They will dissociate only with chemical treatment that breaks the membrane. Peripheral proteins are unlike integral proteins in that they hold weak interactions with the surface of the bilayer and can easily become dissociated from the membrane. Peripheral proteins are located on only one face of a membrane and create membrane asymmetry.
Oligosaccharides
Oligosaccharides are sugar-containing polymers. In the membrane, they can be covalently bound to lipids to form glycolipids or covalently bound to proteins to form glycoproteins. Membranes contain sugar-containing lipid molecules known as glycolipids. In the bilayer, the sugar groups of glycolipids are exposed at the cell surface, where they can form hydrogen bonds. Glycolipids provide the most extreme example of asymmetry in the lipid bilayer. Glycolipids perform a vast number of functions in the biological membrane that are mainly communicative, including cell recognition and cell-cell adhesion. Glycoproteins are integral proteins. They play an important role in the immune response and protection.
Formation
The phospholipid bilayer is formed due to the aggregation of membrane lipids in aqueous solutions. Aggregation is caused by the hydrophobic effect, where hydrophobic ends come into contact with each other and are sequestered away from water. This arrangement maximises hydrogen bonding between hydrophilic heads and water while minimising unfavorable contact between hydrophobic tails and water. The increase in available hydrogen bonding increases the entropy of the system, creating a spontaneous process.
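In standard thermodynamic terms (a textbook relation included here for illustration, not a statement specific to this article), the self-assembly described above is spontaneous because the free energy change of the process is negative:

```latex
% Spontaneity criterion for bilayer self-assembly (standard thermodynamics):
% the entropy gain described above drives \Delta G negative.
\[
  \Delta G \;=\; \Delta H \;-\; T\,\Delta S \;<\; 0
\]
```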
Function
Membrane lipids are amphiphilic or amphipathic, i.e. simultaneously hydrophobic and hydrophilic. The phospholipid bilayer contains charged hydrophilic headgroups, which interact with polar water. The layers also contain hydrophobic tails, which meet the hydrophobic tails of the complementary layer. The hydrophobic tails are usually fatty acids that differ in length. The interactions of lipids, especially of the hydrophobic tails, determine the physical properties of the lipid bilayer, such as fluidity.
Membranes in cells typically define enclosed spaces or compartments in which cells may maintain a chemical or biochemical environment that differs from the outside. For example, the membrane around peroxisomes shields the rest of the cell from peroxides, chemicals that can be toxic to the cell, and the cell membrane separates a cell from its surrounding medium. Peroxisomes are one form of vacuole found in the cell that contain by-products of chemical reactions within the cell. Most organelles are defined by such membranes, and are called membrane-bound organelles.
Selective permeability
Probably the most important feature of a biomembrane is that it is a selectively permeable structure. This means that the size, charge, and other chemical properties of the atoms and molecules attempting to cross it will determine whether they succeed in doing so. Selective permeability is essential for effective separation of a cell or organelle from its surroundings. Biological membranes also have certain mechanical or elastic properties that allow them to change shape and move as required.
Generally, small hydrophobic molecules can readily cross phospholipid bilayers by simple diffusion.
Particles that are required for cellular function but are unable to diffuse freely across a membrane enter through a membrane transport protein or are taken in by means of endocytosis, where the membrane allows for a vacuole to join onto it and push its contents into the cell. Many types of specialized plasma membranes can separate cell from external environment: apical, basolateral, presynaptic and postsynaptic ones, membranes of flagella, cilia, microvillus, filopodia and lamellipodia, the sarcolemma of muscle cells, as well as specialized myelin and dendritic spine membranes of neurons. Plasma membranes can also form different types of "supramembrane" structures such as caveolae, postsynaptic density, podosome, invadopodium, desmosome, hemidesmosome, focal adhesion, and cell junctions. These types of membranes differ in lipid and protein composition.
Distinct types of membranes also create intracellular organelles: endosome; smooth and rough endoplasmic reticulum; sarcoplasmic reticulum; Golgi apparatus; lysosome; mitochondrion (inner and outer membranes); nucleus (inner and outer membranes); peroxisome; vacuole; cytoplasmic granules; cell vesicles (phagosome, autophagosome, clathrin-coated vesicles, COPI-coated and COPII-coated vesicles) and secretory vesicles (including synaptosome, acrosomes, melanosomes, and chromaffin granules).
Different types of biological membranes have diverse lipid and protein compositions. The content of membranes defines their physical and biological properties. Some components of membranes play a key role in medicine, such as the efflux pumps that pump drugs out of a cell.
Fluidity
The hydrophobic core of the phospholipid bilayer is constantly in motion because of rotations around the bonds of lipid tails. Hydrophobic tails of a bilayer bend and lock together. However, because of hydrogen bonding with water, the hydrophilic head groups exhibit less movement as their rotation and mobility are constrained. This results in increasing viscosity of the lipid bilayer closer to the hydrophilic heads.
Below its transition temperature, a lipid bilayer loses fluidity as the highly mobile lipids exhibit less movement and the bilayer becomes a gel-like solid. The transition temperature depends on such components of the lipid bilayer as the hydrocarbon chain length and the saturation of its fatty acids. Temperature-dependent fluidity constitutes an important physiological attribute for bacteria and cold-blooded organisms. These organisms maintain a constant fluidity by modifying membrane lipid fatty acid composition in accordance with differing temperatures.
In animal cells, membrane fluidity is modulated by the inclusion of the sterol cholesterol. This molecule is present in especially large amounts in the plasma membrane, where it constitutes approximately 20% of the lipids in the membrane by weight. Because cholesterol molecules are short and rigid, they fill the spaces between neighboring phospholipid molecules left by the kinks in their unsaturated hydrocarbon tails. In this way, cholesterol tends to stiffen the bilayer, making it more rigid and less permeable.
For all cells, membrane fluidity is important for many reasons. It enables membrane proteins to diffuse rapidly in the plane of the bilayer and to interact with one another, as is crucial, for example, in cell signaling. It permits membrane lipids and proteins to diffuse from sites where they are inserted into the bilayer after their synthesis to other regions of the cell. It allows membranes to fuse with one another and mix their molecules, and it ensures that membrane molecules are distributed evenly between daughter cells when a cell divides. If biological membranes were not fluid, it is hard to imagine how cells could live, grow, and reproduce.
The fluidity property is at the center of the Helfrich model which allows for calculating the energy cost of an elastic deformation to the membrane.
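One common form of the Helfrich bending energy, quoted from the general membrane-physics literature as an illustration (the symbols follow the usual conventions and are not taken from this article), is:

```latex
% Helfrich curvature energy of a membrane surface A (one common convention):
%   H = mean curvature, K = Gaussian curvature, c_0 = spontaneous curvature,
%   k_c = bending rigidity, \bar{k} = Gaussian (saddle-splay) modulus.
\[
  E_{\mathrm{bend}} \;=\; \int_{A} \left[ \frac{k_c}{2}\,\bigl(2H - c_0\bigr)^{2}
    \;+\; \bar{k}\,K \right] \mathrm{d}A
\]
```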
| Biology and health sciences | Cell parts | Biology |
4827 | https://en.wikipedia.org/wiki/Biomedical%20engineering | Biomedical engineering | Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME also integrates the logical sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
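As a toy illustration of the kind of analysis mentioned above (a hedged sketch, not any specific bioinformatics pipeline; the sequences and function name are made up), the snippet below reports positions where two aligned sequences differ, which is the simplest form of candidate SNP identification:

```python
# Toy sketch: report mismatch positions between two aligned DNA sequences.
# Real SNP calling works on sequencing reads with quality scores and
# statistical models; this only illustrates the underlying comparison.
def candidate_snps(reference: str, sample: str):
    """Yield (position, ref_base, sample_base) for every mismatch."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    for pos, (ref_base, alt_base) in enumerate(zip(reference, sample)):
        if ref_base != alt_base:
            yield pos, ref_base, alt_base

reference = "ACGTACGTAC"
sample    = "ACGTTCGTAA"
print(list(candidate_snps(reference, sample)))
# [(4, 'A', 'T'), (9, 'C', 'A')]
```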
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients who need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions
in the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation):
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA) or premarket notification (510k), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years, biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray imaging to monitor lower extremity trauma. The sensors monitor the dielectric properties of tissue and can thus detect changes in tissue (bone, muscle, fat, etc.) under the skin, so that when measurements are taken at different times during the healing process, the sensor response changes as the trauma heals.
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically structured with a manager, a supervisor, engineers, and technicians, with a common staffing ratio of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a Rehab' Engineer in the UK is possible via a University BSc Honours Degree course such as Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices such as Walking aids intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory requirements have constantly increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death"
Regardless of country-specific legislation, the main regulatory objectives coincide worldwide. For example, in medical device regulations, a product must be: 1) safe, 2) effective, and 3) consistent in these respects across all manufactured devices.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards (death, injuries, ...) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks at an acceptable level if compared with the benefit derived from the use of it.
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance to performance standards or demonstrations of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are safety and effectiveness of healthcare products that have to be assured through a quality system in place as specified under 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510K "clearance" (typically for Class 2 devices) or pre-market "approval" (typically for drugs and class 3 devices).
In the European context, safety effectiveness and quality is ensured through the "Conformity Assessment" which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for Certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliveries such as the risk management file, the technical file, and the quality system deliveries. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced at an acceptable level with respect to the benefits expected for the patients for the use of the device. The technical file contains all the documentation data and records supporting medical device certification. FDA technical file has similar content although organized in a different structure. The Quality System deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2 is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
IEC 60601
The International Standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point of care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital). The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements including, procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., or MD-PhD) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc, an MD/PhD, and a PhD in Biomedical engineering. The first Canadian undergraduate BME program was offered at University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but in the US such a license is not required for most engineering employment in industry (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been only to require the practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions—does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently not an option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there were about 19,700 jobs in the field, with average pay of around $100,730 per year (roughly $48.43 an hour), and employment is expected to increase by 7% from 2023 to 2033 (even faster than the earlier projection).
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers; received a National Medal of Technology in 2006 from President George W. Bush for his more than 50 years of contributions, which spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University.
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell, coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
| Technology | Disciplines | null |
4831 | https://en.wikipedia.org/wiki/Bohr%20model | Bohr model | In atomic physics, the Bohr model or Rutherford–Bohr model was the first successful model of the atom. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J J Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values).
In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics.
The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.
The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory.
Background
Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.
Planetary models
In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.
These models faced a significant constraint.
In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would continuously radiate energy, making such orbits unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel one another. Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.
Thomson's atom model
When Bohr began his work on a new atomic theory in the summer of 1912, the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available. Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively charged, spherical volume. Thomson showed by lengthy calculations that this model was mechanically stable, and that it was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to the chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.
However Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.
Rutherford nuclear model
In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model.
In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom.
Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete.
Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons.
Atomic spectra
By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulas by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulas, which depended directly on the frequency.
In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series. Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom.
The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element. Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.
Haas atomic model
In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy on a sphere of radius $a$ to equal the frequency $\nu$ of the electron's orbit on the sphere times the Planck constant:
$$\frac{e^2}{a} = h\nu,$$
where $e$ represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force:
$$\frac{e^2}{a^2} = 4\pi^2 \nu^2 m a,$$
where $m$ is the mass of the electron. This combination relates the radius of the sphere to the Planck constant:
$$a = \frac{h^2}{4\pi^2 m e^2}.$$
Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom.
Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict the radius of the electron orbit in the ground state of the hydrogen atom. This value is now called the Bohr radius.
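To make this inversion concrete, here is a minimal numerical sketch (not taken from Haas's or Bohr's papers; the constants are standard reference values) that evaluates $a = h^2/(4\pi^2 m e^2)$ in Gaussian units and recovers the accepted Bohr radius.

```python
import math

# Physical constants (CGS-Gaussian units, approximate standard values)
h = 6.62607e-27      # Planck constant, erg*s
m_e = 9.10938e-28    # electron mass, g
e = 4.80320e-10      # elementary charge, statcoulomb (esu)

# Haas/Bohr relation: a = h^2 / (4 * pi^2 * m_e * e^2)
a = h**2 / (4 * math.pi**2 * m_e * e**2)   # radius in cm

print(f"Predicted radius: {a:.3e} cm  (~{a * 1e8:.4f} angstrom)")
# Prints about 5.29e-9 cm, i.e. 0.529 angstrom = 0.0529 nm,
# which matches the accepted Bohr radius.
```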
Influence of the Solvay Conference
The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including
Ernest Rutherford, Bohr's mentor.
Bohr did not attend but he read the Solvay reports and discussed them with Rutherford.
The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators.
Planck's lecture at the conference ended with comments about atoms and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done or the Planck constant could be taken as determining the size of atoms. Bohr would adopt the second path.
The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics. While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories. Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom.
Nicholson atom theory
In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant.
Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency.
This new concept gave the Planck constant an atomic meaning for the first time. In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom.
The other critical influence of Nicholson's work was his detailed analysis of spectra. Before Nicholson's work, Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.
Nicholson's model was based on classical electrodynamics along the lines of J. J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid the immediate collapse of this system he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit. By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair. Bohr's atomic model would abandon classical electrodynamics.
Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency.
Bohr's previous work
Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals, which James Clerk Maxwell noted in 1875: every additional degree of freedom in a theory of metals, such as subatomic electrons, causes more disagreement with experiment. The second was that the classical theory could not explain magnetism.
After his PhD, Bohr worked briefly in the lab of J. J. Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly, this allowed Bohr to conclude that hydrogen atoms have a single electron.
Development
Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula.
After this, Bohr declared, "everything became clear".
In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model:
The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones.
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: $m_e v r = n\hbar$, where $n$ is called the principal quantum number and $\hbar = h/2\pi$. The lowest value of $n$ is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which Bohr acknowledges was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation.
Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency $\nu$ determined by the energy difference of the levels according to the Planck relation: $\Delta E = E_2 - E_1 = h\nu$, where $h$ is the Planck constant.
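As an illustration of these postulates, the following sketch (not part of Bohr's papers; the Rydberg energy and Planck constant are standard reference values) computes the hydrogen level energies $E_n = -13.6\,\text{eV}/n^2$ and applies the Planck relation to a sample jump.

```python
# Hydrogen energy levels and a sample quantum jump in the Bohr model.

RYDBERG_EV = 13.605693   # Rydberg energy in eV (standard value)
H_EV_S = 4.135667696e-15 # Planck constant in eV*s
C = 2.99792458e8         # speed of light, m/s

def energy_level(n: int) -> float:
    """Bohr energy of hydrogen level n, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def transition(n_upper: int, n_lower: int):
    """Photon energy (eV), frequency (Hz) and wavelength (nm) for a jump."""
    dE = energy_level(n_upper) - energy_level(n_lower)  # energy released, eV
    nu = dE / H_EV_S                                    # Planck relation E = h*nu
    lam_nm = C / nu * 1e9
    return dE, nu, lam_nm

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"E_{n} = {energy_level(n):+.2f} eV")
    dE, nu, lam = transition(3, 2)           # first Balmer line (H-alpha)
    print(f"3 -> 2: dE = {dE:.3f} eV, nu = {nu:.3e} Hz, lambda = {lam:.1f} nm")
    # Gives about 1.89 eV and ~656 nm, the red H-alpha line.
```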
Other points are:
Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons.
According to the Maxwell theory the frequency $\nu$ of classical radiation is equal to the rotation frequency $\nu_{\text{rot}}$ of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels $E_n$ and $E_{n-k}$ when $k$ is much smaller than $n$. These jumps reproduce the frequency of the $k$-th harmonic of orbit $n$. For sufficiently large values of $n$ (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small $n$ (or large $k$), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers; a small numerical check of this agreement is sketched after these points.
The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average.
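The numerical check mentioned above: a small illustrative sketch using the standard Bohr-model formulas (constants are reference values, not quoted in the article) comparing the photon frequency of an $n \to n-1$ jump with the classical orbital frequency of level $n$.

```python
# Correspondence principle check: for an n -> n-1 jump in the Bohr model,
# the emitted photon frequency approaches the classical orbital frequency
# of the n-th orbit as n grows.

RYDBERG_EV = 13.605693     # Rydberg energy, eV
H_EV_S = 4.135667696e-15   # Planck constant, eV*s

def transition_freq(n: int) -> float:
    """Photon frequency (Hz) for the n -> n-1 jump."""
    dE = RYDBERG_EV * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)
    return dE / H_EV_S

def orbital_freq(n: int) -> float:
    """Classical revolution frequency (Hz) of the n-th Bohr orbit."""
    return 2.0 * RYDBERG_EV / (H_EV_S * n ** 3)

for n in (2, 10, 100, 1000):
    ratio = transition_freq(n) / orbital_freq(n)
    print(f"n = {n:5d}: photon/orbital frequency ratio = {ratio:.4f}")
# The ratio is 3.0 at n = 2 but approaches 1 for large n
# (about 1.17 at n = 10, 1.015 at n = 100, 1.0015 at n = 1000).
```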
Bohr's condition, that the angular momentum be an integer multiple of $\hbar$, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit:
$$n\lambda = 2\pi r.$$
According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is
$$\lambda = \frac{h}{m_e v},$$
which implies that
$$\frac{n h}{m_e v} = 2\pi r,$$
or
$$\frac{n h}{2\pi} = m_e v r,$$
where $m_e v r$ is the angular momentum of the orbiting electron. Writing $L$ for this angular momentum, the previous equation becomes
$$L = \frac{n h}{2\pi} = n\hbar,$$
which is Bohr's second postulate.
Bohr described the angular momentum of the electron orbit as an integer multiple of $h/2\pi$, while de Broglie's wavelength $\lambda = h/p$ described $h$ divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected.
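A quick sanity check of this standing-wave picture (an illustrative sketch; the constants are standard reference values, not figures from the article): for each $n$, the de Broglie wavelength of the electron in the corresponding Bohr orbit fits around the circumference exactly $n$ times.

```python
import math

# Standard SI constants (approximate reference values)
H = 6.62607015e-34        # Planck constant, J*s
HBAR = H / (2 * math.pi)
M_E = 9.1093837015e-31    # electron mass, kg
E = 1.602176634e-19       # elementary charge, C
K_E = 8.9875517923e9      # Coulomb constant, N*m^2/C^2

for n in (1, 2, 3, 5):
    r = n**2 * HBAR**2 / (K_E * E**2 * M_E)   # Bohr orbit radius for hydrogen (Z=1)
    v = n * HBAR / (M_E * r)                  # speed from L = m*v*r = n*hbar
    lam = H / (M_E * v)                       # de Broglie wavelength
    waves = 2 * math.pi * r / lam             # wavelengths around the circumference
    print(f"n = {n}: circumference / wavelength = {waves:.6f}")
# Each printed value equals n (to rounding), i.e. n*lambda = 2*pi*r.
```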
In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge.
Electron energy levels
The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons.
Calculation of the orbits requires two assumptions.
Classical mechanics
The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force:
$$\frac{m_e v^2}{r} = \frac{Z k_e e^2}{r^2},$$
where $m_e$ is the electron's mass, $e$ is the elementary charge, $k_e$ is the Coulomb constant and $Z$ is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius:
$$v = \sqrt{\frac{Z k_e e^2}{m_e r}}.$$
It also determines the electron's total energy at any radius:
$$E = \tfrac{1}{2} m_e v^2 - \frac{Z k_e e^2}{r} = -\frac{Z k_e e^2}{2r}.$$
The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem.
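These two relations can be evaluated directly; the sketch below (constants are standard reference values, and the choice of the Bohr radius as the sample radius is illustrative, not mandated by the text) computes the orbital speed and total energy for hydrogen at a given radius.

```python
import math

# Standard SI constants
M_E = 9.1093837015e-31    # electron mass, kg
E_CH = 1.602176634e-19    # elementary charge, C
K_E = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
EV = 1.602176634e-19      # joules per electronvolt

def orbit(r: float, Z: int = 1):
    """Speed (m/s) and total energy (eV) of a classical circular orbit of radius r."""
    v = math.sqrt(Z * K_E * E_CH**2 / (M_E * r))            # from force balance
    energy = 0.5 * M_E * v**2 - Z * K_E * E_CH**2 / r       # kinetic + potential
    return v, energy / EV

# Evaluate at the Bohr radius: expect v ~ 2.19e6 m/s and E ~ -13.6 eV.
a0 = 5.29177210903e-11
v, energy_ev = orbit(a0)
print(f"v = {v:.3e} m/s, E = {energy_ev:.2f} eV")
```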
A quantum rule
The angular momentum is an integer multiple of ħ:
$$m_e v r = n\hbar.$$
Derivation
In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T.
However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation.
Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula.
Denoting the total energy as $E$, the negative electron charge as $e$, the positive nucleus charge as $K = Z|e|$, the electron mass as $m_e$, and half the major axis of the ellipse as $a$, he starts with these equations:
$$-E = \frac{eK}{2a}, \qquad (1a)$$
$$f = \frac{\sqrt{2}}{\pi}\,\frac{(-E)^{3/2}}{eK\sqrt{m_e}}, \qquad (1b)$$
where $f$ is the frequency of revolution of the electron in its orbit.
$E$ is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance.
Eq. (1a) is obtained from equating the centripetal force to the Coulombian force acting between the nucleus and the electron, considering that $\overline{T} = -\overline{U}/2$ (where $\overline{T}$ is the average kinetic energy and $\overline{U}$ the average electrostatic potential), and that for Kepler's second law, the average separation between the electron and the nucleus is $a$.
Eq. (1b) is obtained from the same premises of eq. (1a) plus the virial theorem, stating that, for an elliptical orbit,
$$\overline{T} = -E. \qquad (1c)$$
Then Bohr assumes that $-E$ is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency, i.e.:
$$-E = n\,\frac{hf}{2}. \qquad (2)$$
From eq. (1a,1b,2), it descends:
$$-E = \frac{2\pi^2 m_e e^2 K^2}{n^2 h^2}, \qquad f = \frac{4\pi^2 m_e e^2 K^2}{n^3 h^3}, \qquad 2a = \frac{n^2 h^2}{2\pi^2 m_e e K}. \qquad (3)$$
He further assumes that the orbit is circular, i.e. that the minor axis equals the major axis ($b = a$), and, denoting the angular momentum of the electron as $L$, introduces the equation:
$$L = \frac{\overline{T}}{\pi f}. \qquad (4)$$
Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution.
From eq. (1c,2,4), it stems:
$$L = \frac{\overline{T}}{\pi f} = \frac{-E}{\pi f} = \frac{n\,h f/2}{\pi f} = n\,\frac{h}{2\pi},$$
where:
$$\hbar \equiv \frac{h}{2\pi},$$
that is:
$$L = n\hbar.$$
This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.
Substituting the expression for the velocity gives an equation for $r$ in terms of $n$:
$$\sqrt{Z k_e e^2 m_e r} = n\hbar,$$
so that the allowed orbit radius at any $n$ is
$$r_n = \frac{n^2 \hbar^2}{Z k_e e^2 m_e}.$$
The smallest possible value of $r$ in the hydrogen atom ($Z = 1$, $n = 1$) is called the Bohr radius and is equal to:
$$a_0 = \frac{\hbar^2}{k_e e^2 m_e} \approx 5.29 \times 10^{-11}\ \text{m} = 0.0529\ \text{nm}.$$
The energy of the $n$-th level for any atom is determined by the radius and quantum number:
$$E_n = -\frac{Z k_e e^2}{2 r_n} = -\frac{Z^2 (k_e e^2)^2 m_e}{2 \hbar^2 n^2} \approx -\frac{13.6\,Z^2}{n^2}\ \text{eV}.$$
An electron in the lowest energy level of hydrogen ($n = 1$) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level ($n = 2$) is −3.4 eV. The third ($n = 3$) is −1.51 eV, and so on. For larger values of $n$, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product.
The combination of natural constants in the energy formula is called the Rydberg energy ($R_E$):
$$R_E = \frac{(k_e e^2)^2 m_e}{2 \hbar^2}.$$
This expression is clarified by interpreting it in combinations that form more natural units:
$m_e c^2$ is the rest mass energy of the electron (511 keV),
$\alpha = \dfrac{k_e e^2}{\hbar c} \approx \dfrac{1}{137}$ is the fine-structure constant,
$R_E = \tfrac{1}{2} (m_e c^2)\,\alpha^2$.
Since this derivation assumes that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge $q = Ze$, where $Z$ is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with $Z$ protons, the energy levels are (to a rough approximation):
$$E_n = -\frac{Z^2 R_E}{n^2}.$$
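A rough illustration of this $Z^2$ scaling (a sketch using the standard 13.6 eV Rydberg energy; real multi-electron ions deviate from these values) for hydrogen, singly ionized helium, and doubly ionized lithium:

```python
RYDBERG_EV = 13.605693  # Rydberg energy, eV (standard value)

def hydrogenic_level(Z: int, n: int) -> float:
    """Bohr energy level (eV) of a one-electron ion with nuclear charge Z."""
    return -RYDBERG_EV * Z**2 / n**2

for name, Z in [("H", 1), ("He+", 2), ("Li2+", 3)]:
    print(f"{name:4s} ground state: {hydrogenic_level(Z, 1):8.1f} eV")
# Prints about -13.6 eV, -54.4 eV and -122.5 eV; the approximation is only
# reliable for true one-electron ions, as the following paragraph explains.
```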
The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force.
When $Z = 1/\alpha$ ($Z \approx 137$), the motion becomes highly relativistic, and $Z^2$ cancels the $\alpha^2$ in $R_E$; the orbit energy begins to be comparable to the rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei.
The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron:
$$\mu = \frac{m_e m_p}{m_e + m_p} = \frac{m_e}{1 + m_e/m_p}.$$
However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1+1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4.
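This reduced-mass correction is easy to check numerically; in the sketch below (mass ratios are standard reference values, not figures from the article) the He+/H line-frequency ratio comes out slightly above 4, as described.

```python
# Mass ratios relative to the electron mass (approximate standard values)
M_P = 1836.15    # proton
M_HE = 7294.30   # helium-4 nucleus (alpha particle)

def reduced_mass_factor(m_nucleus: float) -> float:
    """Reduced mass of the electron-nucleus system, in units of the electron mass."""
    return m_nucleus / (1.0 + m_nucleus)

mu_h = reduced_mass_factor(M_P)
mu_he = reduced_mass_factor(M_HE)

print(f"Hydrogen reduced-mass factor : {mu_h:.6f}")
print(f"Helium   reduced-mass factor : {mu_he:.6f}")
print(f"He+/H line-frequency ratio   : {4 * mu_he / mu_h:.5f}  (naive value: 4)")
# The corrected ratio (~4.0016), rather than exactly 4, is what matched the
# observed helium spectrum and helped convince Rutherford.
```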
For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus.
$$E_n = -\frac{R_E}{2 n^2} \qquad \text{(positronium)}.$$
Rydberg formula
Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines.
Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, $R_H$, now known as the Rydberg constant, and a pair of integers indexing the lines:
$$\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right).$$
Despite many attempts, no theory of the atom could reproduce these relatively simple formulas.
Bohr's theory, describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by $n$:
$$W_n = \frac{R_E}{n^2}.$$
The energy difference between two such levels is then:
$$\Delta E = W_{n_1} - W_{n_2} = R_E\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right).$$
Therefore, Bohr's theory gives the Rydberg formula and moreover the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:
$$R_H = \frac{R_E}{h c} = \frac{2\pi^2 m_e k_e^2 e^4}{h^3 c}.$$
Since the energy of a photon is
$$E = \frac{h c}{\lambda},$$
these results can be expressed in terms of the wavelength of the photon given off:
$$\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right).$$
Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman ($n_1 = 1$), Balmer ($n_1 = 2$), and Paschen ($n_1 = 3$) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.
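The Rydberg formula can be evaluated directly; the sketch below (using a standard reference value for the hydrogen Rydberg constant, not one quoted in the text) prints the first few Balmer-series wavelengths, which fall in or near the visible range.

```python
R_H = 1.0967758e7   # Rydberg constant for hydrogen, 1/m (standard value)

def wavelength_nm(n_low: int, n_high: int) -> float:
    """Wavelength (nm) of the photon emitted in the n_high -> n_low transition."""
    inv_lambda = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda

# Balmer series: transitions ending on n = 2
for n_high in range(3, 8):
    print(f"{n_high} -> 2 : {wavelength_nm(2, n_high):7.1f} nm")
# Prints roughly 656, 486, 434, 410 and 397 nm, matching the observed Balmer lines.
```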
To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing $Z$ with $Z - b$, or $n$ with $n - d$, where $b$ and $d$ are constants representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model.
Shell model (heavier atoms)
Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
In Part III of Bohr's 1913 paper, called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane, and he then reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail, and John William Nicholson was one of the first to prove in 1914 that it couldn't work for lithium, but it was an attractive theory for hydrogen and ionized helium.
In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporaneously with the identical work of chemist Charles Rugeley Bury.
Bohr's partner in research during 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells, with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, once an orbit is full, the next level has to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit.
This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit.
For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer.
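The screening estimate for the inner electrons can be expressed as a one-line calculation; in the sketch below, the effective charge of 2 is the crude estimate from the paragraph above, not an exact value, and the radius formula is the ordinary Bohr-model scaling $r \approx n^2 a_0 / Z_{\text{eff}}$.

```python
A0_NM = 0.0529  # Bohr radius, nm

def screened_radius(n: int, z_eff: float) -> float:
    """Approximate Bohr-model orbit radius (nm) with effective nuclear charge z_eff."""
    return n**2 * A0_NM / z_eff

# Innermost (1s) electrons of lithium: nuclear charge 3 screened by ~1 -> Z_eff ~ 2
r_inner = screened_radius(n=1, z_eff=2.0)
print(f"Li 1s radius ~ {r_inner:.4f} nm, i.e. about {r_inner / A0_NM:.2f} Bohr radii")
# Prints ~0.026 nm, about half a Bohr radius, as described in the text.
```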
The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas).
In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.
Moseley's law and calculation (K-alpha X-ray emission lines)
Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley."
In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line) and their atomic number $Z$. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). Two additional assumptions were needed: [1] that this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to $Z - 1$.
Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation.
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each only contain two electrons each, and these were arranged in "equidistant layers".
In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z, and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines,
$$E = h\nu = R_E\,(Z-1)^2\left(\frac{1}{1^2} - \frac{1}{2^2}\right),$$
or
$$\nu = R_v\left(\frac{3}{4}\right)(Z-1)^2.$$
Here, $R_v = R_E/h$ is the Rydberg constant expressed in terms of frequency, equal to 3.28 × 10^15 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge while Bohr's atomic model was not published until July 1913.
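As a numerical illustration of Moseley's law (a sketch; the screening value of 1 is the approximation discussed above, the Rydberg energy is a standard reference value, and the listed elements are arbitrary examples), the snippet below estimates K-alpha photon energies:

```python
RYDBERG_EV = 13.605693  # Rydberg energy, eV

def k_alpha_energy_ev(Z: int) -> float:
    """Approximate K-alpha photon energy (eV) from Moseley's law (screening = 1)."""
    return RYDBERG_EV * (Z - 1) ** 2 * (1.0 - 1.0 / 4.0)   # transition n=2 -> n=1

for element, Z in [("Al", 13), ("Ca", 20), ("Cu", 29)]:
    e_kev = k_alpha_energy_ev(Z) / 1000.0
    print(f"{element}: Z = {Z:2d}, K-alpha ~ {e_kev:5.2f} keV")
# Prints roughly 1.47, 3.68 and 8.00 keV, close to the measured K-alpha energies
# (about 1.49, 3.69 and 8.05 keV), within the accuracy of this screening picture.
```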
The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation.
Shortcomings
The Bohr model gives an incorrect value for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero from experiment. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum may be thought of as not revolving "around" the nucleus at all, but merely going tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction.
In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics).
The Bohr model also failed to explain:
Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom.
The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect).
The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin.
The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields.
Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together.
Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium.
Refinements
Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition
$$\oint_0^T p_r \, dq_r = n h,$$
where $p_r$ is the radial momentum canonically conjugate to the coordinate $q_r$, which is the radial position, and $T$ is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.
The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.
However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.
The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells".
Model of the chemical bond
Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
Symbolism of planetary atomic models
Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms.
The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy). Examples of its use over the past century include but are not limited to:
The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular.
The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches.
The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A".
A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general.
The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model.
The television show The Big Bang Theory uses a planetary-like image in its print logo.
The JavaScript library React uses a planetary-like image as its logo.
On maps, it is generally used to indicate a nuclear power installation.
| Physical sciences | Atomic physics | null |
4882 | https://en.wikipedia.org/wiki/Background%20radiation | Background radiation | Background radiation is a measure of the level of ionizing radiation present in the environment at a particular location which is not due to the deliberate introduction of radiation sources.
Background radiation originates from a variety of sources, both natural and artificial. These include both cosmic radiation and environmental radioactivity from naturally occurring radioactive materials (such as radon and radium), as well as man-made medical X-rays, fallout from nuclear weapons testing and nuclear accidents.
Definition
Background radiation is defined by the International Atomic Energy Agency as "Dose or the dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified". A distinction is thus made between the dose which is already in a location, which is defined here as being "background", and the dose due to a deliberately introduced and specified source. This is important where radiation measurements are taken of a specified radiation source, where the existing background may affect this measurement. An example would be measurement of radioactive contamination in a gamma radiation background, which could increase the total reading above that expected from the contamination alone.
However, if no radiation source is specified as being of concern, then the total radiation dose measurement at a location is generally called the background radiation, and this is usually the case where an ambient dose rate is measured for environmental purposes.
Background dose rate examples
Background radiation varies with location and time, and the following table gives examples:
Natural background radiation
Radioactive material is found throughout nature. Detectable amounts occur naturally in soil, rocks, water, air, and vegetation, from which it is inhaled and ingested into the body. In addition to this internal exposure, humans also receive external exposure from radioactive materials that remain outside the body and from cosmic radiation from space. The worldwide average natural dose to humans is about 2.4 mSv per year. This is four times the worldwide average artificial radiation exposure, which in 2008 amounted to about 0.6 mSv per year. In some developed countries, like the US and Japan, artificial exposure is, on average, greater than the natural exposure, due to greater access to medical imaging. In Europe, average natural background exposure varies by country, from comparatively low annual doses in the United Kingdom to several times higher annual doses for some groups of people in Finland.
The International Atomic Energy Agency states:
"Exposure to radiation from natural sources is an inescapable feature of everyday life in both working and public environments. This exposure is in most cases of little or no concern to society, but in certain situations the introduction of health protection measures needs to be considered, for example when working with uranium and thorium ores and other Naturally Occurring Radioactive Material (NORM). These situations have become the focus of greater attention by the Agency in recent years."
Terrestrial sources
Terrestrial background radiation, for the purpose of the table above, only includes sources that remain external to the body. The major radionuclides of concern are potassium, uranium and thorium and their decay products, some of which, like radium and radon are intensely radioactive but occur in low concentrations. Most of these sources have been decreasing, due to radioactive decay since the formation of the Earth, because there is no significant amount currently transported to the Earth. Thus, the present activity on Earth from uranium-238 is only half as much as it originally was because of its 4.5 billion year half-life, and potassium-40 (half-life 1.25 billion years) is only at about 8% of original activity. But during the time that humans have existed the amount of radiation has decreased very little.
Many shorter half-life (and thus more intensely radioactive) isotopes have not decayed out of the terrestrial environment because of their on-going natural production. Examples of these are radium-226 (decay product of thorium-230 in decay chain of uranium-238) and radon-222 (a decay product of radium-226 in said chain).
Thorium and uranium (and their daughters) primarily undergo alpha and beta decay, and are not easily detectable. However, many of their daughter products are strong gamma emitters. Thorium-232 is detectable via a 239 keV peak from lead-212, 511, 583 and 2614 keV from thallium-208, and 911 and 969 keV from actinium-228. Uranium-238 manifests as 609, 1120, and 1764 keV peaks of bismuth-214 (cf. the same peak for atmospheric radon). Potassium-40 is detectable directly via its 1461 keV gamma peak.
The level over the sea and other large bodies of water tends to be about a tenth of the terrestrial background. Conversely, coastal areas (and areas by the side of fresh water) may have an additional contribution from dispersed sediment.
Airborne sources
The biggest source of natural background radiation is airborne radon, a radioactive gas that emanates from the ground. Radon and its isotopes, parent radionuclides, and decay products all contribute to an average inhaled dose of 1.26 mSv/a (millisievert per year). Radon is unevenly distributed and varies with weather, such that much higher doses apply to many areas of the world, where it represents a significant health hazard. Concentrations over 500 times the world average have been found inside buildings in Scandinavia, the United States, Iran, and the Czech Republic. Radon is a decay product of uranium, which is relatively common in the Earth's crust, but more concentrated in ore-bearing rocks scattered around the world. Radon seeps out of these ores into the atmosphere or into ground water or infiltrates into buildings. It can be inhaled into the lungs, along with its decay products, where they will reside for a period of time after exposure.
Although radon is naturally occurring, exposure can be enhanced or diminished by human activity, notably house construction. A poorly sealed dwelling floor, or poor basement ventilation, in an otherwise well insulated house can result in the accumulation of radon within the dwelling, exposing its residents to high concentrations. The widespread construction of well insulated and sealed homes in the northern industrialized world has led to radon becoming the primary source of background radiation in some localities in northern North America and Europe. Basement sealing and suction ventilation reduce exposure. Some building materials, for example lightweight concrete with alum shale, phosphogypsum and Italian tuff, may emanate radon if they contain radium and are porous to gas.
Radiation exposure from radon is indirect. Radon has a short half-life (4 days) and decays into other solid particulate radium-series radioactive nuclides. These radioactive particles are inhaled and remain lodged in the lungs, causing continued exposure. Radon is thus considered the second leading cause of lung cancer after smoking, and accounts for 15,000 to 22,000 cancer deaths per year in the US alone. However, discussion of conflicting experimental results is still ongoing.
About 100,000 Bq/m3 of radon was found in Stanley Watras's basement in 1984. He and his neighbours in Boyertown, Pennsylvania, United States may hold the record for the most radioactive dwellings in the world. International radiation protection organizations estimate that a committed dose may be calculated by multiplying the equilibrium equivalent concentration (EEC) of radon by a factor of 8 to 9, and the EEC of thoron by a factor of 40.
Most of the atmospheric background is caused by radon and its decay products. The gamma spectrum shows prominent peaks at 609, 1120, and 1764 keV, belonging to bismuth-214, a radon decay product. The atmospheric background varies greatly with wind direction and meteorological conditions. Radon also can be released from the ground in bursts and then form "radon clouds" capable of traveling tens of kilometers.
Cosmic radiation
The Earth and all living things on it are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions from protons to iron and larger nuclei derived from outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary radiation, including X-rays, muons, protons, alpha particles, pions, electrons, and neutrons. The immediate dose from cosmic radiation is largely from muons, neutrons, and electrons, and this dose varies in different parts of the world based largely on the geomagnetic field and altitude. For example, the city of Denver in the United States (at 1650 meters elevation) receives a cosmic ray dose roughly twice that of a location at sea level. This radiation is much more intense in the upper troposphere, around 10 km altitude, and is thus of particular concern for airline crews and frequent passengers, who spend many hours per year in this environment. During their flights, airline crews typically receive an additional occupational dose of up to about 2.19 mSv per year, according to various studies.
Similarly, cosmic rays cause higher background exposure in astronauts than in humans on the surface of Earth. Astronauts in low orbits, such as on the International Space Station or the Space Shuttle, are partially shielded by the magnetic field of the Earth, but they are also exposed to the Van Allen radiation belt, a region of trapped charged particles created by the Earth's magnetic field. Outside low Earth orbit, as experienced by the Apollo astronauts who traveled to the Moon, this background radiation is much more intense and represents a considerable obstacle to potential future long-term human exploration of the Moon or Mars.
Cosmic rays also cause elemental transmutation in the atmosphere, in which secondary radiation generated by the cosmic rays combines with atomic nuclei in the atmosphere to generate different nuclides. Many so-called cosmogenic nuclides can be produced, but probably the most notable is carbon-14, which is produced by interactions with nitrogen atoms. These cosmogenic nuclides eventually reach the Earth's surface and can be incorporated into living organisms. The production of these nuclides varies slightly with short-term variations in solar cosmic ray flux, but is considered practically constant over long scales of thousands to millions of years. The constant production, incorporation into organisms and relatively short half-life of carbon-14 are the principles used in radiocarbon dating of ancient biological materials, such as wooden artifacts or human remains.
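Because 14C production is treated as practically constant, the age of once-living material can be estimated from the fraction of its original 14C that remains. A minimal illustrative sketch of that decay-law calculation (not a calibrated dating procedure), using the standard 5,730-year half-life:

```python
import math

# Illustrative sketch of the radiocarbon-dating principle: once an organism
# dies it stops taking up 14C, so the remaining fraction follows simple
# exponential decay.

C14_HALF_LIFE_YEARS = 5730.0

def radiocarbon_age(remaining_fraction):
    """Estimate age in years from the fraction of original 14C remaining."""
    decay_constant = math.log(2) / C14_HALF_LIFE_YEARS
    return math.log(1.0 / remaining_fraction) / decay_constant

# A sample retaining half its original 14C is one half-life old:
print(round(radiocarbon_age(0.5)))    # 5730
# A sample retaining 25% is two half-lives (~11,460 years) old:
print(round(radiocarbon_age(0.25)))   # 11460
```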
The cosmic radiation at sea level usually manifests as 511 keV gamma rays from the annihilation of positrons created by nuclear reactions of high-energy particles and gamma rays. At higher altitudes there is also the contribution of a continuous bremsstrahlung spectrum.
Food and water
Two of the essential elements that make up the human body, namely potassium and carbon, have radioactive isotopes that add significantly to our background radiation dose. An average human contains about 17 milligrams of potassium-40 (40K) and about 24 nanograms (10−9 g) of carbon-14 (14C), (half-life 5,730 years). Excluding internal contamination by external radioactive material, these two are the largest components of internal radiation exposure from biologically functional components of the human body. About 4,000 nuclei of 40K decay per second, and a similar number of 14C. The energy of beta particles produced by 40K is about 10 times that from the beta particles from 14C decay.
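The figure of about 4,000 decays per second follows from the quoted 17 milligrams of 40K via the standard activity relation A = λN; the 40K half-life of roughly 1.25 billion years used below is not stated in the text above and is supplied here as an assumption:

```python
import math

# Sketch: activity of the body's potassium-40 from A = (ln 2 / T_half) * N.
AVOGADRO = 6.022e23
K40_MOLAR_MASS_G = 40.0          # g/mol, approximate
K40_HALF_LIFE_YEARS = 1.25e9     # assumed value; not given in the text above
SECONDS_PER_YEAR = 3.156e7

mass_g = 0.017                   # ~17 mg of 40K in an average human body
atoms = mass_g / K40_MOLAR_MASS_G * AVOGADRO
decay_constant = math.log(2) / (K40_HALF_LIFE_YEARS * SECONDS_PER_YEAR)
activity_bq = decay_constant * atoms

print(round(activity_bq))  # ~4,500 decays per second, i.e. "about 4,000"
```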
14C is present in the human body at a level of about 3700 Bq (0.1 μCi) with a biological half-life of 40 days. This means that about 3700 beta particles per second are produced by the decay of 14C. However, a 14C atom is present in the genetic information of about half the cells, while potassium is not a component of DNA. The decay of a 14C atom inside DNA happens about 50 times per second per person, changing a carbon atom into a nitrogen atom.
The global average internal dose from radionuclides other than radon and its decay products is 0.29 mSv/a, of which 0.17 mSv/a comes from 40K, 0.12 mSv/a comes from the uranium and thorium series, and 12 μSv/a comes from 14C.
Areas with high natural background radiation
Some areas have greater dosage than the country-wide averages. In the world in general, exceptionally high natural background locales include Ramsar in Iran, Guarapari in Brazil, Karunagappalli in India, Arkaroola in Australia and Yangjiang in China.
The highest level of purely natural radiation ever recorded on the Earth's surface was 90 μGy/h on a Brazilian black beach (areia preta in Portuguese) composed of monazite. This rate would convert to 0.8 Gy/a for year-round continuous exposure, but in fact the levels vary seasonally and are much lower in the nearest residences. The record measurement has not been duplicated and is omitted from UNSCEAR's latest reports. Nearby tourist beaches in Guarapari and Cumuruxatiba were later evaluated at 14 and 15 μGy/h. Note that the values quoted here are in Grays. To convert to Sieverts (Sv) a radiation weighting factor is required; these weighting factors vary from 1 (beta & gamma) to 20 (alpha particles).
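As a worked check of the conversion described above, the following sketch annualizes the 90 μGy/h record rate and applies a radiation weighting factor; the factor of 1 assumes the beach field is dominated by beta and gamma radiation, which is an assumption rather than a stated fact:

```python
# Sketch: convert a dose rate in microgray per hour to an annual absorbed
# dose, then to an equivalent dose using a radiation weighting factor.
HOURS_PER_YEAR = 8766            # average year including leap days

dose_rate_ugy_per_h = 90         # record surface measurement quoted above
annual_dose_gy = dose_rate_ugy_per_h * 1e-6 * HOURS_PER_YEAR
print(round(annual_dose_gy, 2), "Gy/a")   # ~0.79, i.e. roughly 0.8 Gy/a

weighting_factor = 1             # ASSUMPTION: mostly beta/gamma (factor 1);
                                 # alpha particles would use a factor of 20
annual_equiv_sv = annual_dose_gy * weighting_factor
print(round(annual_equiv_sv, 2), "Sv/a")
```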
The highest background radiation in an inhabited area is found in Ramsar, primarily due to the use of local naturally radioactive limestone as a building material. The 1,000 most exposed residents receive an average external effective radiation dose of 6 mSv per year, six times the ICRP recommended limit for exposure to the public from artificial sources. They additionally receive a substantial internal dose from radon. Record radiation levels were found in a house where the combined effective dose from ambient radiation fields and the internal committed dose from radon was over 80 times higher than the world average natural human exposure to radiation.
Epidemiological studies are underway to identify health effects associated with the high radiation levels in Ramsar. It is much too early to draw unambiguous statistically significant conclusions. While so far support for beneficial effects of chronic radiation (such as longer lifespan) has been observed in only a few places, a protective and adaptive effect is suggested by at least one study whose authors nonetheless caution that data from Ramsar are not yet sufficiently strong to relax existing regulatory dose limits. However, recent statistical analyses indicate that there is no correlation between the risk of negative health effects and elevated levels of natural background radiation.
Photoelectric
Background radiation doses in the immediate vicinity of particles of high atomic number materials, within the human body, have a small enhancement due to the photoelectric effect.
Neutron background
Most of the natural neutron background is a product of cosmic rays interacting with the atmosphere. The neutron energy peaks at around 1 MeV and drops rapidly above that. At sea level, the production of neutrons is about 20 neutrons per second per kilogram of material interacting with the cosmic rays (or about 100–300 neutrons per square meter per second). The flux is dependent on geomagnetic latitude, with a maximum near the magnetic poles. At solar minimum, due to lower solar magnetic field shielding, the flux is about twice as high as at solar maximum. It also increases dramatically during solar flares. In the vicinity of larger, heavier objects, e.g. buildings or ships, the neutron flux measures higher; this is known as the "cosmic ray induced neutron signature", or "ship effect", as it was first detected with ships at sea.
Artificial background radiation
Atmospheric nuclear testing
Frequent above-ground nuclear explosions between the 1940s and 1960s scattered a substantial amount of radioactive contamination. Some of this contamination is local, rendering the immediate surroundings highly radioactive, while some of it is carried longer distances as nuclear fallout; some of this material is dispersed worldwide. The increase in background radiation due to these tests peaked in 1963 at about 0.15 mSv per year worldwide, or about 7% of the average background dose from all sources. The Limited Test Ban Treaty of 1963 prohibited above-ground tests; by the year 2000 the worldwide dose from these tests had decreased to only 0.005 mSv per year.
This global fallout is estimated to have caused up to 2.4 million deaths by 2020.
Occupational exposure
The International Commission on Radiological Protection recommends limiting occupational radiation exposure to 50 mSv (5 rem) per year, and 100 mSv (10 rem) in 5 years.
However, background radiation for occupational doses includes radiation that is not measured by radiation dose instruments in potential occupational exposure conditions. This includes both offsite "natural background radiation" and any medical radiation doses. This value is not typically measured or known from surveys, such that variations in the total dose to individual workers are not known. This can be a significant confounding factor in assessing radiation exposure effects in a population of workers who may have significantly different natural background and medical radiation doses. It is most significant when the occupational doses are very low.
At an IAEA conference in 2002, it was recommended that occupational doses below 1–2 mSv per year do not warrant regulatory scrutiny.
Nuclear accidents
Under normal circumstances, nuclear reactors release small amounts of radioactive gases, which cause small radiation exposures to the public. Events classified on the International Nuclear Event Scale as incidents typically do not release any additional radioactive substances into the environment. Large releases of radioactivity from nuclear reactors are extremely rare. To date, there have been two major civilian accidents – the Chernobyl accident and the Fukushima I nuclear accident – which caused substantial contamination. The Chernobyl accident was the only one to cause immediate deaths.
Total doses from the Chernobyl accident ranged from 10 to 50 mSv over 20 years for the inhabitants of the affected areas, with most of the dose received in the first years after the disaster, and over 100 mSv for liquidators. There were 28 deaths from acute radiation syndrome.
Total doses from the Fukushima I accidents were between 1 and 15 mSv for the inhabitants of the affected areas. Thyroid doses for children were below 50 mSv. 167 cleanup workers received doses above 100 mSv, with 6 of them receiving more than 250 mSv (the Japanese exposure limit for emergency response workers).
The average dose from the Three Mile Island accident was 0.01 mSv.
Non-civilian: In addition to the civilian accidents described above, several accidents at early nuclear weapons facilities – such as the Windscale fire, the contamination of the Techa River by the nuclear waste from the Mayak compound, and the Kyshtym disaster at the same compound – released substantial radioactivity into the environment. The Windscale fire resulted in thyroid doses of 5–20 mSv for adults and 10–60 mSv for children. The doses from the accidents at Mayak are unknown.
Nuclear fuel cycle
The Nuclear Regulatory Commission, the United States Environmental Protection Agency, and other U.S. and international agencies, require that licensees limit radiation exposure to individual members of the public to 1 mSv (100 mrem) per year.
Energy sources
Per the UNECE life-cycle assessment, nearly all sources of energy result in some level of occupational and public exposure to radionuclides as a result of their manufacturing or operations; such collective exposures are typically expressed in man·sievert per GW-annum.
Coal burning
Coal plants emit radiation in the form of radioactive fly ash which is inhaled and ingested by neighbours, and incorporated into crops. A 1978 paper from Oak Ridge National Laboratory estimated that coal-fired power plants of that time may contribute a whole-body committed dose of 19 μSv/a to their immediate neighbours in a radius of 500 m. The United Nations Scientific Committee on the Effects of Atomic Radiation's 1988 report estimated the committed dose 1 km away to be 20 μSv/a for older plants or 1 μSv/a for newer plants with improved fly ash capture, but was unable to confirm these numbers by test. When coal is burned, uranium, thorium and all the uranium daughters accumulated by disintegration – radium, radon, polonium – are released. Radioactive materials previously buried underground in coal deposits are released as fly ash or, if fly ash is captured, may be incorporated into concrete manufactured with fly ash.
Other sources of dose uptake
Medical
The global average human exposure to artificial radiation is 0.6 mSv/a, primarily from medical imaging. This medical component can range much higher, with an average of 3 mSv per year across the USA population. Other human contributors include smoking, air travel, radioactive building materials, historical nuclear weapons testing, nuclear power accidents and nuclear industry operation.
A typical chest x-ray delivers 20 μSv (2 mrem) of effective dose. A dental x-ray delivers a dose of 5 to 10 μSv. A CT scan delivers an effective dose to the whole body ranging from 1 to 20 mSv (100 to 2000 mrem). The average American receives about 3 mSv of diagnostic medical dose per year; countries with the lowest levels of health care receive almost none. Radiation treatment for various diseases also accounts for some dose, both in individuals and in those around them.
Consumer items
Cigarettes contain polonium-210, originating from the decay products of radon, which stick to tobacco leaves. Heavy smoking results in a radiation dose of 160 mSv/year to localized spots at the bifurcations of segmental bronchi in the lungs from the decay of polonium-210. This dose is not readily comparable to the radiation protection limits, since the latter deal with whole body doses, while the dose from smoking is delivered to a very small portion of the body.
Radiation metrology
In a radiation metrology laboratory, background radiation refers to the measured value from any incidental sources that affect an instrument when a specific radiation source sample is being measured. This background contribution, which is established as a stable value by multiple measurements, usually before and after sample measurement, is subtracted from the rate measured when the sample is being measured.
This is in accordance with the International Atomic Energy Agency definition of background as being "Dose or dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified".
The same issue occurs with radiation protection instruments, where a reading from an instrument may be affected by the background radiation. An example of this is a scintillation detector used for surface contamination monitoring. In an elevated gamma background the scintillator material will be affected by the background gamma, which will add to the reading obtained from any contamination which is being monitored. In extreme cases it will make the instrument unusable as the background swamps the lower level of radiation from the contamination. In such instruments the background can be continually monitored in the "Ready" state, and subtracted from any reading obtained when being used in "Measuring" mode.
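A minimal sketch of the background-subtraction logic described above, using hypothetical count rates (the function name and numbers are illustrative, not taken from any particular instrument):

```python
from math import sqrt

def net_count_rate(gross_cps, background_cps):
    """Subtract a previously established background rate from the gross rate.

    Returns the net rate and a rough 1-sigma counting uncertainty, assuming
    Poisson statistics and a 1-second normalisation (illustrative only).
    """
    net = gross_cps - background_cps
    uncertainty = sqrt(gross_cps + background_cps)
    return net, uncertainty

# Hypothetical example: 35 counts/s measured with the sample present,
# against a stable background of 12 counts/s established beforehand.
net, sigma = net_count_rate(35.0, 12.0)
print(f"net rate = {net:.1f} +/- {sigma:.1f} counts/s")
```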
Regular radiation measurement is carried out at multiple levels. Government agencies compile radiation readings as part of environmental monitoring mandates, often making the readings available to the public and sometimes in near-real-time. Collaborative groups and private individuals may also make real-time readings available to the public. Instruments used for radiation measurement include the Geiger–Müller tube and the scintillation detector. The former is usually more compact and affordable and reacts to several radiation types, while the latter is more complex and can detect specific radiation energies and types. Readings indicate radiation levels from all sources including background, and real-time readings are generally unvalidated, but correlation between independent detectors increases confidence in measured levels.
List of near-real-time government radiation measurement sites, employing multiple instrument types:
Europe and Canada: European Radiological Data Exchange Platform (EURDEP) Simple map of Gamma Dose Rates
USA: EPA Radnet near-real-time and laboratory data by state
List of international near-real-time collaborative/private measurement sites, employing primarily Geiger–Müller detectors:
GMC map: http://www.gmcmap.com/ (mix of old-data detector stations and some near-real-time ones)
Netc: http://www.netc.com/
Radmon: http://www.radmon.org/
Radiation Network: http://radiationnetwork.com/
Radioactive@Home: http://radioactiveathome.org/map/
Safecast: http://safecast.org/tilemap (the green circles are real-time detectors)
uRad Monitor: http://www.uradmonitor.com/
| Physical sciences | Physics basics: General | Physics |
4910 | https://en.wikipedia.org/wiki/Beryl | Beryl | Beryl is a mineral composed of beryllium aluminium silicate with the chemical formula Be3Al2Si6O18. Well-known varieties of beryl include emerald and aquamarine. Naturally occurring hexagonal crystals of beryl can be up to several meters in size, but terminated crystals are relatively rare. Pure beryl is colorless, but it is frequently tinted by impurities; possible colors are green, blue, yellow, pink, and red (the rarest). It is an ore source of beryllium.
Etymology
The word beryl is borrowed, via Old French and Latin, from Ancient Greek βήρυλλος bḗryllos, which referred to a 'precious blue-green color-of-sea-water stone'; from Prakrit veruḷiya, veḷuriya 'beryl', which is ultimately of Dravidian origin, maybe from the name of Belur or Velur, a town in Karnataka, southern India. The term was later adopted for the mineral beryl more exclusively.
When the first eyeglasses were constructed in 13th-century Italy, the lenses were made of beryl (or of rock crystal) as glass could not be made clear enough. Consequently, glasses were named Brille in German (bril in Dutch and briller in Danish).
Deposits
Beryl is a common mineral, and it is widely distributed in nature. It is found most commonly in granitic pegmatites, but also occurs in mica schists, such as those of the Ural Mountains, and in limestone in Colombia. It is less common in ordinary granite and is only infrequently found in nepheline syenite. Beryl is often associated with tin and tungsten ore bodies formed as high-temperature hydrothermal veins. In granitic pegmatites, beryl is found in association with quartz, potassium feldspar, albite, muscovite, biotite, and tourmaline. Beryl is sometimes found in metasomatic contacts of igneous intrusions with gneiss, schist, or carbonate rocks. Common beryl, mined as beryllium ore, is found in small deposits in many countries, but the main producers are Russia, Brazil, and the United States.
New England's pegmatites have produced some of the largest beryls found, including one massive crystal from the Bumpus Quarry in Albany, Maine; beryl is New Hampshire's state mineral. The world's largest known naturally occurring crystal of any mineral is a crystal of beryl from Malakialina, Madagascar.
Crystal habit and structure
Beryl belongs to the hexagonal crystal system. Normally beryl forms hexagonal columns but can also occur in massive habits. As a cyclosilicate, beryl incorporates rings of silicate tetrahedra that are arranged in columns along the c axis and as parallel layers perpendicular to the c axis, forming channels along the c axis. These channels permit a variety of ions, neutral atoms, and molecules to be incorporated into the crystal, disrupting the overall charge of the crystal and permitting further substitutions in aluminium, silicon, and beryllium sites in the crystal structure. These impurities give rise to the variety of colors in which beryl can be found. Increasing alkali content within the silicate ring channels increases the refractive indices and birefringence.
Human health impact
Beryl is a beryllium compound that is a known carcinogen with acute toxic effects leading to pneumonitis when inhaled. Care must thus be used when mining, handling, and refining these gems.
Varieties
Aquamarine and maxixe
Aquamarine (from , "sea water") is a blue or cyan variety of beryl. It occurs at most localities which yield ordinary beryl. The gem-gravel placer deposits of Sri Lanka contain aquamarine. Green-yellow beryl, such as that occurring in Brazil, is sometimes called chrysolite aquamarine. The deep blue version of aquamarine is called maxixe (pronounced mah-she-she). Its color results from a radiation-induced color center.
The pale blue color of aquamarine is attributed to Fe2+. Fe3+ ions produce golden-yellow color, and when both Fe2+ and Fe3+ are present, the color is a darker blue as in maxixe. Decoloration of maxixe by light or heat thus may be due to the charge transfer between Fe3+ and Fe2+.
In the United States, aquamarines can be found at the summit of Mount Antero in the Sawatch Range in central Colorado, and in the New England and North Carolina pegmatites. Aquamarine is also present in the state of Wyoming, where it has been discovered in the Big Horn Mountains, near Powder River Pass. Another location within the United States is the Sawtooth Range near Stanley, Idaho, although the minerals are within a wilderness area which prevents collecting. In Brazil, there are mines in the states of Minas Gerais, Espírito Santo, and Bahia, and minorly in Rio Grande do Norte. The mines of Colombia, Skardu in Pakistan, Madagascar, Russia, Namibia, Zambia, Malawi, Tanzania, and Kenya also produce aquamarine.
Emerald
Emerald is green beryl, colored by around 2% chromium and sometimes vanadium. Most emeralds are highly included, so their toughness (resistance to breakage) is classified as generally poor.
The modern English word "emerald" comes via Middle English emeraude, imported from modern French via Old French ésmeraude and Medieval Latin, ultimately from Greek smaragdos meaning 'green gem'.
Emeralds in antiquity were mined by the Egyptians and in what is now Austria, as well as Swat in contemporary Pakistan. A rare type of emerald known as a trapiche emerald is occasionally found in the mines of Colombia. A trapiche emerald exhibits a "star" pattern; it has raylike spokes of dark carbon impurities that give the emerald a six-pointed radial pattern. It is named for the trapiche, a grinding wheel used to process sugarcane in the region. Colombian emeralds are generally the most prized due to their transparency and fire. Some of the rarest emeralds come from the two main emerald belts in the Eastern Ranges of the Colombian Andes: Muzo and Coscuez west of the Altiplano Cundiboyacense, and Chivor and Somondoco to the east. Fine emeralds are also found in other countries, such as Zambia, Brazil, Zimbabwe, Madagascar, Pakistan, India, Afghanistan and Russia. In the US, emeralds can be found in Hiddenite, North Carolina. In 1998, emeralds were discovered in Yukon.
Emerald is a rare and valuable gemstone and, as such, it has provided the incentive for developing synthetic emeralds. Both hydrothermal and flux-growth synthetics have been produced. The first commercially successful emerald synthesis process was that of Carroll Chatham. The other large producer of flux emeralds was Pierre Gilson Sr., whose products have been on the market since 1964. Gilson's emeralds are usually grown on natural colorless beryl seeds which become coated on both sides. Growth occurs at a rate of about 1 mm per month, a typical seven-month growth run producing emerald crystals 7 mm in thickness. The green color of emeralds is widely attributed to the presence of Cr3+ ions. Intensely green beryls from Brazil, Zimbabwe and elsewhere in which the color is attributed to vanadium have also been sold and certified as emeralds.
Golden beryl and heliodor
Golden beryl can range in colors from pale yellow to a brilliant gold. Unlike emerald, golden beryl generally has very few flaws. The term "golden beryl" is sometimes synonymous with heliodor (from Greek hēlios – ἥλιος "sun" + dōron – δῶρον "gift") but golden beryl refers to pure yellow or golden yellow shades, while heliodor refers to the greenish-yellow shades. The golden yellow color is attributed to Fe3+ ions. Both golden beryl and heliodor are used as gems. Probably the largest cut golden beryl is the flawless stone on display in the Hall of Gems, Washington, D.C., United States.
Goshenite
Colorless beryl is called goshenite. The name originates from Goshen, Massachusetts, where it was originally discovered. In the past, goshenite was used for manufacturing eyeglasses and lenses owing to its transparency. Nowadays, it is most commonly used for gemstone purposes.
The gem value of goshenite is relatively low. However, goshenite can be colored yellow, green, pink, blue and in intermediate colors by irradiating it with high-energy particles. The resulting color depends on the content of Ca, Sc, Ti, V, Fe, and Co impurities.
Morganite
Morganite, also known as "pink beryl", "rose beryl", "pink emerald" (which is not a legal term according to the new Federal Trade Commission Guidelines and Regulations), and "cesian (or caesian) beryl", is a rare light pink to rose-colored gem-quality variety of beryl. Orange/yellow varieties of morganite can also be found, and color banding is common. It can be routinely heat treated to remove patches of yellow and is occasionally treated by irradiation to improve its color. The pink color of morganite is attributed to Mn2+ ions.
Red beryl
The red variety of beryl ("bixbite") was first described in 1904 for an occurrence, its type locality, at Maynard's Claim (Pismire Knolls), Thomas Range, Juab County, Utah. The dark red color is attributed to Mn3+ ions. The old synonym "bixbite" has been deprecated by the CIBJO because of the possibility of confusion with the mineral bixbyite (both named after mineralogist Maynard Bixby). Red "bixbite" beryl was formerly marketed as "red" or "scarlet emerald", but such terms involving "emerald" terminology are now prohibited in the US.
Red beryl is very rare and has only been reported from a handful of North American locations: the Wah Wah Mountains, Beaver County, Utah; Juab County, Utah; and Paramount Canyon and Round Mountain, Sierra County, New Mexico, although the New Mexico locality does not often produce gem-grade stones. The bulk of gem-grade red beryl comes from the Ruby-Violet Claim in the Wah Wah Mountains of midwestern Utah, discovered in 1958 by Lamar Hodges, of Fillmore, Utah, while he was prospecting for uranium. Red beryl has been known to be confused with pezzottaite, a caesium analog of beryl, found in Madagascar and, more recently, Afghanistan; cut gems of the two varieties can be distinguished by their difference in refractive index, and the rough crystals easily by their differing crystal systems (pezzottaite trigonal, red beryl hexagonal). Synthetic red beryl is also produced. Like emerald and unlike most other varieties of beryl, red beryl is usually highly included.
While gem beryls are ordinarily found in pegmatites and certain metamorphic stones, red beryl occurs in topaz-bearing rhyolites. It is formed by crystallizing under low pressure and high temperature from a pneumatolytic phase along fractures or within near-surface miarolitic cavities of the rhyolite. Associated minerals include bixbyite, quartz, orthoclase, topaz, spessartine, pseudobrookite and hematite.
| Physical sciences | Silicate minerals | Earth science |
4918 | https://en.wikipedia.org/wiki/Bipolar%20I%20disorder | Bipolar I disorder | Bipolar I disorder (BD-I; pronounced "type one bipolar disorder") is a type of bipolar spectrum disorder characterized by the occurrence of at least one manic episode, with or without mixed or psychotic features. Most people also, at other times, have one or more depressive episodes. Typically, these manic episodes can last at least 7 days for most of each day to the extent that the individual may need medical attention, while the depressive episodes last at least 2 weeks.
It is a type of bipolar disorder and conforms to the classic concept of manic-depressive illness, which can include psychosis during mood episodes.
Diagnosis
The essential feature of bipolar I disorder is a clinical course characterized by the occurrence of one or more manic episodes or mixed episodes. Often, individuals have had one or more major depressive episodes. One episode of mania is sufficient to make the diagnosis of bipolar disorder; the person may or may not have a history of major depressive disorder. Episodes of substance-induced mood disorder due to the direct effects of a medication, or other somatic treatments for depression, substance use disorder, or toxin exposure, or of mood disorder due to a general medical condition need to be excluded before a diagnosis of bipolar I disorder can be made. Bipolar I disorder requires confirmation of only 1 full manic episode for diagnosis, but may be associated with hypomanic and depressive episodes as well. Diagnosis for bipolar II disorder does not include a full manic episode; instead, it requires the occurrence of both a hypomanic episode and a major depressive episode. Serious aggression has been reported to occur in one out of every ten major, first-episode, BD-I patients with psychotic features, the prevalence in this group being particularly high in association with a recent suicide attempt, alcohol use disorder, learning disability, or manic polarity in the first episode.
Bipolar I disorder often coexists with other disorders including PTSD, substance use disorders, and a variety of mood disorders. Studies suggest that psychiatric comorbidities correlate with further impairment of day-to-day life. Up to 40% of people with bipolar disorder also present with PTSD, with higher rates occurring in women and individuals with bipolar I disorder. A diagnosis of bipolar I disorder is only given if bipolar episodes are not better accounted for by schizoaffective disorder or superimposed on schizophrenia, schizophreniform disorder, delusional disorder, or a psychotic disorder not otherwise specified.
Medical assessment
Regular medical assessments are performed to rule out secondary causes of mania and depression. These tests include complete blood count, glucose, serum chemistry/electrolyte panel, thyroid function test, liver function test, renal function test, urinalysis, vitamin B12 and folate levels, HIV screening, syphilis screening, and pregnancy test; when clinically indicated, an electrocardiogram (ECG), an electroencephalogram (EEG), a computed tomography (CT) scan, and/or magnetic resonance imaging (MRI) may be ordered. Drug screening includes recreational drugs, particularly synthetic cannabinoids, and exposure to toxins.
Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV-TR)
Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5)
In May 2013, the American Psychiatric Association released the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Several revisions were proposed to the diagnostic criteria of Bipolar I Disorder and its subtypes. For Bipolar I Disorder 296.40 (most recent episode hypomanic) and 296.4x (most recent episode manic), the proposed revision includes the following specifiers: with psychotic features, with mixed features, with catatonic features, with rapid cycling, with anxiety (mild to severe), with suicide risk severity, with seasonal pattern, and with postpartum onset. Bipolar I Disorder 296.5x (most recent episode depressed) will include all of the above specifiers plus the following: with melancholic features and with atypical features. The categories for specifiers will be removed in DSM-5, and criterion A will add "or there are at least 3 symptoms of major depression, of which one of the symptoms is depressed mood or anhedonia". For Bipolar I Disorder 296.7 (most recent episode unspecified), the listed specifiers will be removed.
The criteria for manic and hypomanic episodes in criteria A & B will be edited. Criterion A will include "and present most of the day, nearly every day", and criterion B will include "and represent a noticeable change from usual behavior". These criteria as defined in the DSM-IV-TR have created confusion for clinicians and need to be more clearly defined.
There have also been proposed revisions to criterion B of the diagnostic criteria for a Hypomanic Episode, which is used to diagnose Bipolar I Disorder 296.40, Most Recent Episode Hypomanic. Criterion B lists "inflated self-esteem, flight of ideas, distractibility, and decreased need for sleep" as symptoms of a Hypomanic Episode. This has been confusing in the field of child psychiatry because these symptoms closely overlap with symptoms of attention deficit hyperactivity disorder (ADHD).
ICD-10
F31 Bipolar Affective Disorder
F31.6 Bipolar Affective Disorder, Current Episode Mixed
F30 Manic Episode
F30.0 Hypomania
F30.1 Mania Without Psychotic Symptoms
F30.2 Mania With Psychotic Symptoms
F32 Depressive Episode
F32.0 Mild Depressive Episode
F32.1 Moderate Depressive Episode
F32.2 Severe Depressive Episode Without Psychotic Symptoms
F32.3 Severe Depressive Episode With Psychotic Symptoms
Treatment
Medication
Mood stabilizers are often used as part of the treatment process.
Lithium is the mainstay in the management of bipolar disorder but it has a narrow therapeutic range and typically requires monitoring
Anticonvulsants, such as valproate, carbamazepine, or lamotrigine
Atypical antipsychotics, such as quetiapine, risperidone, olanzapine, or aripiprazole
Electroconvulsive therapy, a psychiatric treatment in which seizures are electrically induced in anesthetized patients for therapeutic effect
Antidepressant-induced mania occurs in 20–40% of people with bipolar disorder. Mood stabilizers, especially lithium, may protect against this effect, but some research contradicts this.
A frequent problem in these individuals is non-adherence to pharmacological treatment; long-acting injectable antipsychotics may contribute to solving this issue in some patients.
A review of validated treatment guidelines for bipolar disorder by international bodies was published in 2020.
Prognosis
Bipolar I usually has a poor prognosis, which is associated with substance abuse, psychotic features, depressive symptoms, and inter-episode depression. A manic episode can be so severe that it requires hospitalization. An estimated 63% of all BP-I related mania results in hospitalization. The natural course of BP-I, if left untreated, leads to episodes becoming more frequent or severe over time. But with proper treatment, individuals with BP-I can lead a healthy lifestyle.
Education
Psychosocial interventions can be used for managing acute depressive episodes and for maintenance treatment to aid in relapse prevention. This includes psychoeducation, cognitive behavioural therapy (CBT), family-focused therapy (FFT), interpersonal and social rhythm therapy (IPSRT), and peer support.
Psychoeducation provides information on the condition, the importance of regular sleep patterns, routines and eating habits, and the importance of compliance with medication as prescribed. Behavior modification through counseling can have a positive influence in helping to reduce the effects of risky behavior during the manic phase. Additionally, the lifetime prevalence for bipolar I disorder is estimated to be 1%.
| Biology and health sciences | Mental disorders | Health |
4925 | https://en.wikipedia.org/wiki/Blue%20whale | Blue whale | The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of and weighing up to , it is the largest animal known ever to have existed. The blue whale's long and slender body can be of various shades of greyish-blue on its upper surface and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, and B. m. indica in the Northern Indian Ocean. There is a population in the waters off Chile that may constitute a fifth subspecies.
In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age/sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother–calf bonds. Blue whales vocalize, with a fundamental frequency ranging from 8 to 25 Hz; their vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators.
The blue whale was abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. The species continues to face numerous man-made threats such as ship strikes, pollution, ocean noise, and climate change. Scientists have found evidence of these impacts through morphological and epidemiological analyses, accompanied by chemical profiling of fecal and tissue samples, which continues to document the impact of man-made threats.
Taxonomy
Nomenclature
The genus name, Balaenoptera, means winged whale, while the species name, musculus, could mean "muscle" or a diminutive form of "mouse", possibly a pun by Carl Linnaeus when he named the species in Systema Naturae. One of the first published descriptions of a blue whale comes from Robert Sibbald's Phalainologia Nova, after Sibbald found a stranded whale in the estuary of the Firth of Forth, Scotland, in 1692. The name "blue whale" was derived from the Norwegian blåhval, coined by Svend Foyn shortly after he had perfected the harpoon gun. The Norwegian scientist G. O. Sars adopted it as the common name in 1874.
Blue whales were referred to as "Sibbald's rorqual", after Robert Sibbald, who first described the species. Whalers sometimes referred to them as "sulphur bottom" whales, as the bellies of some individuals are tinged with yellow. This tinge is due to a coating of huge numbers of diatoms. (Herman Melville briefly refers to "sulphur bottom" whales in his novel Moby-Dick.)
Evolution
Blue whales are rorquals in the family Balaenopteridae. A 2018 analysis estimates that the family Balaenopteridae diverged from other families between 10.48 and 4.98 million years ago, during the late Miocene. The earliest discovered anatomically modern blue whale is a partial skull fossil from southern Italy identified as B. cf. musculus, dating to the Early Pleistocene, roughly 1.5–1.25 million years ago. The Australian pygmy blue whale diverged during the Last Glacial Maximum. This more recent divergence has resulted in the subspecies having relatively low genetic diversity, and New Zealand blue whales have even lower genetic diversity.
Whole genome sequencing suggests that blue whales are most closely related to sei whales with gray whales as a sister group. This study also found significant gene flow between minke whales and the ancestors of the blue and sei whale. Blue whales also displayed high genetic diversity.
Hybridization
Blue whales are known to interbreed with fin whales. The earliest description of a possible hybrid between a blue whale and a fin whale concerned an anomalous female whale with the features of both the blue and the fin whales, taken in the North Pacific. A whale captured off northwestern Spain in 1984 was found to have been the product of a blue whale mother and a fin whale father.
Two live blue-fin whale hybrids have since been documented in the Gulf of St. Lawrence (Canada), and in the Azores (Portugal). DNA tests done in Iceland on a blue whale killed in July 2018 by the Icelandic whaling company Hvalur hf, found that the whale was the offspring of a male fin whale and female blue whale; however, the results are pending independent testing and verification of the samples. Because the International Whaling Commission classified blue whales as a "Protection Stock", trading their meat is illegal, and the kill is an infraction that must be reported. Blue-fin hybrids have been detected from genetic analysis of whale meat samples taken from Japanese markets. Blue-fin whale hybrids are capable of being fertile. Molecular tests on a pregnant female whale caught off Iceland in 1986 found that it had a blue whale mother and a fin whale father, while its fetus was sired by a blue whale.
In 2024, a genome analysis of North Atlantic blue whales found evidence that approximately 3.5% of the blue whales' genome was derived from hybridization with fin whales. Gene flow was found to be unidirectional from fin whales to blue whales. Comparison with Antarctic blue whales showed that this hybridization began after the separation of the northern and southern populations. Despite their smaller size, fin whales have similar cruising and sprinting speeds to blue whales, which would allow fin males to complete courtship chases with blue females.
There is a reference to a humpback–blue whale hybrid in the South Pacific, attributed to marine biologist Michael Poole.
Subspecies and stocks
At least four subspecies of blue whale are traditionally recognized, some of which are divided into population stocks or "management units". They have a worldwide distribution, but are mostly absent from the Arctic Ocean and the Mediterranean, Okhotsk, and Bering Sea.
Northern subspecies (B. m. musculus)
North Atlantic population – This population is mainly documented from New England along eastern Canada to Greenland, particularly in the Gulf of St. Lawrence, during summer though some individuals may remain there all year. They also aggregate near Iceland and have increased their presence in the Norwegian Sea. They are reported to migrate south to the West Indies, the Azores and northwest Africa.
Eastern North Pacific population – Whales in this region mostly feed off California's coast from summer to fall and then Oregon, Washington State, the Alaska Gyre and Aleutian Islands later in the fall. During winter and spring, blue whales migrate south to the waters of Mexico, mostly the Gulf of California, and the Costa Rica Dome, where they both feed and breed.
Central/Western Pacific population – This stock is documented around the Kamchatka Peninsula during the summer; some individuals may remain there year-round. They have been recorded wintering in Hawaiian waters, though some can be found in the Gulf of Alaska during fall and early winter.
Northern Indian Ocean subspecies (B. m. indica) – This subspecies can be found year-round in the northwestern Indian Ocean, though some individuals have been recorded travelling to the Crozet Islands between summer and fall.
Pygmy blue whale (B. m. brevicauda)
Madagascar population – This population migrates between the Seychelles and Amirante Islands in the north and the Crozet Islands and Prince Edward Islands in the south, where they feed, passing through the Mozambique Channel.
Australia/Indonesia population – Whales in this region appear to winter off Indonesia and migrate to their summer feeding grounds off the coast of Western Australia, with major concentrations at Perth Canyon and an area stretching from the Great Australian Bight and Bass Strait.
Eastern Australia/New Zealand population – This stock may reside in the Tasman Sea and the Lau Basin in winter and feed mostly in the South Taranaki Bight and off the coast of eastern North Island. Blue whales have been detected around New Zealand throughout the year.
Antarctic subspecies (B. m. intermedia) – This subspecies includes all populations found around the Antarctic. They have been recorded to travel as far north as eastern tropical Pacific, the central Indian Ocean, and the waters of southwestern Australia and northern New Zealand.
Blue whales off the Chilean coast might be a separate subspecies based on their geographic separation, genetics, and unique song types. Chilean blue whales might overlap in the Eastern Tropical Pacific with Antarctica blue whales and Eastern North Pacific blue whales. Chilean blue whales are genetically differentiated from Antarctica blue whales such that interbreeding is unlikely. However, the genetic distinction is less between them and the Eastern North Pacific blue whale, hence there might be gene flow between the Southern and Northern Hemispheres. A 2019 study by Luis Pastene, Jorge Acevedo and Trevor Branch provided new morphometric data from a survey of 60 Chilean blue whales, hoping to address the debate about the possible distinction of this population from others in the Southern Hemisphere. Data from this study, based on whales collected in the 1965/1966 whaling season, shows that both the maximum and mean body length of Chilean blue whales lies between these values in pygmy and Antarctic blue whales. Data also indicates a potential difference in snout-eye measurements between the three, and a significant difference in fluke-anus length between the Chilean population and pygmy blue whales. This further confirms Chilean blue whales as a separate population, and implies that they do not fall under the same subspecies as the pygmy blue whale (B. m. brevicauda).
A 2024 genomic study of the global blue whale population found support for the subspecific status of Antarctic and Indo-western Pacific blue whales but not eastern Pacific blue whales. The study found "...divergence between the eastern North and eastern South Pacific, and among the eastern Indian Ocean, the western South Pacific and the northern Indian Ocean." and "no divergence within the Antarctic".
Description
The blue whale is a slender-bodied cetacean with a broad U-shaped head; thin, elongated flippers; a small sickle-shaped dorsal fin located close to the tail, and a large tail stock at the root of the wide and thin flukes. The upper jaw is lined with 70–395 black baleen plates. The throat region has 60–88 grooves which allow the skin to expand during feeding. It has two blowholes that can project a spout high into the air. The skin has a mottled grayish-blue coloration, appearing blue underwater. The mottling patterns near the dorsal fin vary between individuals. The underbelly has lighter pigmentation and can appear yellowish due to diatoms in the water, which historically earned them the nickname "sulphur bottom". The male blue whale has the largest penis in the animal kingdom.
Size
The blue whale is the largest animal known ever to have existed. Some studies have estimated that certain shastasaurid ichthyosaurs and the ancient whale Perucetus could have rivalled the blue whale in size, with Perucetus also being heavier than the blue whale with a mean weight of . These estimates are based on fragmentary remains, and the proposed size for the latter was disputed in 2024. Other studies estimate that on land, large sauropods like Bruhathkayosaurus (mean weight: 110–170 tons) and Maraapunisaurus (mean weight: 80–120 tons) would have easily rivalled the blue whale, with the former even exceeding the blue whale based on its most liberal estimations (240 tons), although these estimates are based on even more fragmentary specimens that had disintegrated by the time those estimates were made.
The International Whaling Commission (IWC) whaling database reports 88 individuals longer than , including one of . The Discovery Committee reported lengths up to . The longest scientifically measured individual blue whale was from rostrum tip to tail notch. Female blue whales are larger than males. Hydrodynamic models suggest a blue whale could not exceed because of metabolic and energy constraints.
The average length of sexually mature female blue whales is for Eastern North Pacific blue whales, for central and western North Pacific blue whales, for North Atlantic blue whales, for Antarctic blue whales, for Chilean blue whales, and for pygmy blue whales.
In the Northern Hemisphere, males weigh an average and females . Eastern North Pacific blue whale males average and females . Antarctic males average and females . Pygmy blue whale males average to . The weight of the heart of a stranded North Atlantic blue whale was , the largest known in any animal. The record-holder blue whale was recorded at , with estimates of up to .
In 2024, Motani and Pyenson calculated the body mass of blue whales at different lengths, compiling records of their sizes from the previous academic literature and using regression and volumetric analyses. A long individual was estimated to weigh approximately , while a long individual was estimated to weigh approximately . Considering that the largest blue whale was indeed long, they estimated that a blue whale of such length would have weighed approximately .
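Length-to-mass regressions of the kind referred to here are commonly expressed as a power law, M = a·L^b. The sketch below only illustrates the shape of such a calculation; the coefficients are hypothetical placeholders, not the values actually fitted by Motani and Pyenson:

```python
# Hedged sketch of a length-to-mass scaling calculation of the kind used in
# regression studies of blue whale size.  The coefficients below are
# HYPOTHETICAL placeholders, not values from Motani and Pyenson (2024).

A_COEFF = 0.0064  # hypothetical scaling coefficient (tonnes per m^B_EXP)
B_EXP = 3.0       # hypothetical exponent; ~3 for roughly isometric growth

def estimated_mass_tonnes(length_m):
    """Estimate body mass from total length using a power law M = a * L^b."""
    return A_COEFF * length_m ** B_EXP

for length in (25, 28, 30):
    print(length, "m ->", round(estimated_mass_tonnes(length)), "t (illustrative)")
```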
During the harvest of a female blue whale, Messrs. Irvin and Johnson collected a fetus that is now 70% preserved and used for educational purposes. The fetus was collected in 1922, so some shrinkage may have occurred, making visualization of some features fairly difficult. Owing to this collection, however, researchers now know that the external length of a blue whale fetus at this stage is approximately 133 mm. Developmentally, the fetus lies at the point where the embryonic and fetal phases converge, making it the specimen with the youngest gestational age on record.
Life span
Blue whales live around 80–90 years or more. Scientists look at a blue whale's earwax, or ear plug, to estimate its age. Each year, a light and a dark layer of wax are laid down, corresponding to fasting during migration and to feeding periods; each pair of layers is thus an indicator of age. The oldest blue whale found was determined, using this method, to be 110 years old. The maximum age of a pygmy blue whale determined this way is 73 years. In addition, female blue whales develop scars, or corpora albicantia, on their ovaries every time they ovulate. In a female pygmy blue whale, one corpus albicans is formed on average every 2.6 years.
Behaviour and ecology
The blue whale is usually solitary, but can be found in pairs. When productivity is high enough, blue whales can be seen in gatherings of more than 50 individuals. Populations may go on long migrations, traveling to their summer feeding grounds towards the poles and then heading to their winter breeding grounds in more equatorial waters. The animals appear to use memory to locate the best feeding areas. There is evidence of alternative strategies, such as year-round residency, and partial (where only some individuals migrate) or age/sex-based migration. Some whales have been recorded feeding in breeding grounds. The traveling speed for blue whales ranges . Their massive size limits their ability to breach.
The greatest dive depth reported from tagged blue whales was . Their theoretical aerobic dive limit was estimated at 31.2 minutes; however, the longest dive measured was 15.2 minutes. The deepest confirmed dive from a pygmy blue whale was . A blue whale's heart rate can drop to 2 beats per minute (bpm) at deep depths, but upon surfacing can rise to 37 bpm, which is close to its peak heart rate.
Diet and feeding
The blue whale's diet consists almost exclusively of krill. Blue whales capture krill through lunge feeding; they swim towards them at high speeds as they open their mouths up to 80°. They may engulf of water at one time. They squeeze the water out through their baleen plates with pressure from the throat pouch and tongue, and swallow the remaining krill. Blue whales have been recorded making 180° rolls during lunge-feeding, possibly allowing them to search the prey field and find the densest patches.
While pursuing krill patches, blue whales maximize their calorie intake by increasing the number of lunges while selecting the thickest patches. This provides them enough energy for everyday activities while storing additional energy necessary for migration and reproduction. Due to their size, blue whales have larger energetic demands than most animals resulting in their need for this specific feeding habit. Blue whales have to engulf densities greater than 100 krill/m3 to maintain the cost of lunge feeding. They can consume from one mouthful of krill, which can provide up to 240 times more energy than used in a single lunge. It is estimated that an average-sized blue whale must consume of krill a day.
In the Southern Ocean, blue whales feed on Antarctic krill (Euphausia superba). In South Australia, pygmy blue whales (B. m. brevicauda) feed on Nyctiphanes australis. In California, they feed mostly on Thysanoessa spinifera, but also, less commonly, on North Pacific krill (Euphausia pacifica). Research on the Eastern North Pacific population shows that when diving to feed on krill, the whales reach an average depth of 201 meters, with dives lasting 9.8 minutes on average.
While most blue whales feed almost exclusively on krill, the Northern Indian Ocean subspecies (B. m. indica) instead feeds predominantly on sergestid shrimp. To do so, they dive deeper and for longer periods of time than blue whales in other regions of the world, with dives of 10.7 minutes on average, and a hypothesized dive depth of about 300 meters. Fecal analysis also found the presence of fish, krill, amphipods, cephalopods, and scyphozoan jellyfish in their diet.
Blue whales appear to avoid directly competing with other baleen whales. Different whale species select different feeding spaces and times as well as different prey species. In the Southern Ocean, baleen whales appear to feed on Antarctic krill of different sizes, which may lessen competition between them.
Blue whale feeding habits may differ due to situational disturbances, such as environmental shifts or human interference, which can cause a change in diet through a stress response. Because of these changing situations, one study measured cortisol levels in blue whales and compared them with the levels of stressed individuals, giving a closer look at the reasons behind their dietary and behavioral changes.
Reproduction and birth
The age of sexual maturity for blue whales is thought to be 5–15 years. In the Northern Hemisphere, the length at which they reach maturity is for females and for males. In the Southern Hemisphere, the length of maturity is and for females and males respectively. Male pygmy blue whales average at sexual maturity. Female pygmy blue whales are in length and roughly 10 years old at the age of sexual maturity. Little is known about mating behavior, or breeding and birthing areas. Blue whales appear to be polygynous, with males competing for females. A male blue whale typically trails a female and will fight off potential rivals. The species mates from fall to winter.
Pregnant females eat roughly four percent of their body weight daily, amounting to 60% of their overall body weight throughout summer foraging periods. Gestation may last 10–12 months with calves being long and weighing at birth. Estimates suggest that because calves require milk per kg of mass gain, blue whales likely produce of milk per day (ranging from of milk per day). The first video of a calf thought to be nursing was filmed in New Zealand in 2016. Calves may be weaned when they reach 6–8 months old at a length of . They gain roughly during the weaning period. Interbirth periods last two to three years; they average 2.6 years in pygmy blue whales.
Vocalizations
Blue whales produce some of the loudest and lowest frequency vocalizations in the animal kingdom, and their inner ears appear well adapted for detecting low-frequency sounds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz. Blue whale songs vary between populations.
Vocalizations produced by the Eastern North Pacific population have been well studied. This population produces pulsed calls ("A") and tonal calls ("B"), upswept tones that precede type B calls ("C"), and separate downswept tones ("D"). A and B calls are often produced in repeated co-occurring sequences and sung only by males, suggesting a reproductive function. D calls may have multiple functions: they are produced by both sexes during social interactions while feeding, and by males when competing for mates.
Blue whale calls recorded off Sri Lanka have a three-unit phrase. The first unit is a 19.8 to 43.5 Hz pulsive call, and is normally 17.9 ± 5.2 seconds long. The second unit is a 55.9 to 72.4 Hz FM upsweep that is 13.8 ± 1.1 seconds long. The final unit is 28.5 ± 1.6 seconds long with a tone of 108 to 104.7 Hz. A blue whale call recorded off Madagascar, a two-unit phrase, consists of 5–7 pulses with a center frequency of 35.1 ± 0.7 Hz lasting 4.4 ± 0.5 seconds proceeding a 35 ± 0 Hz tone that is 10.9 ± 1.1 seconds long. In the Southern Ocean, blue whales produce 18-second vocals which start with a 9-second-long, 27 Hz tone, and then a 1-second downsweep to 19 Hz, followed by a downsweep further to 18 Hz. Other vocalizations include 1–4 second long, frequency-modulated calls with a frequency of 80 and 38 Hz.
There is evidence that some blue whale songs have temporally declined in tonal frequency. The vocalization of blue whales in the Eastern North Pacific decreased in tonal frequency by 31% from the early 1960s to the early 21st century. The frequency of pygmy blue whales in the Antarctic has decreased by a few tenths of a hertz every year starting in 2002. It is possible that as blue whale populations recover from whaling, there is increasing sexual selection pressure (i.e., a lower frequency indicates a larger body size).
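A 31% decline spread over roughly four decades implies a small fractional change per year. The back-of-the-envelope conversion below assumes a 40-year interval for "the early 1960s to the early 21st century", which is an approximation rather than a figure given in the studies:

```python
# Rough annualised rate implied by the reported 31% decline in tonal
# frequency of Eastern North Pacific blue whale calls.  The 40-year span
# is an assumption based on "early 1960s to the early 21st century".

total_decline = 0.31
years = 40
annual_factor = (1 - total_decline) ** (1 / years)
print(f"~{(1 - annual_factor) * 100:.2f}% lower in frequency each year")
# prints roughly 0.92% per year
```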
Predators and parasites
The only known natural predator to blue whales is the orca, although the rate of fatal attacks by orcas is unknown. Photograph-identification studies of blue whales have estimated that a high proportion of the individuals in the Gulf of California have rake-like scars, indicative of encounters with orcas. Off southeastern Australia, 3.7% of blue whales photographed had rake marks and 42.1% of photographed pygmy blue whales off Western Australia had rake marks. Documented predation by orcas has been rare. A blue whale mother and calf were first observed being chased at high speeds by orcas off southeastern Australia. The first documented attack occurred in 1977 off southwestern Baja California, Mexico, but the injured whale escaped after five hours. Four more blue whales were documented as being chased by a group of orcas between 1982 and 2003. The first documented predation event by orcas occurred in September 2003, when a group of orcas in the Eastern Tropical Pacific was encountered feeding on a recently killed blue whale calf. In March 2014, a commercial whale watch boat operator recorded an incident involving a group of orcas harassing a blue whale in Monterey Bay. The blue whale defended itself by slapping its tail. A similar incident was recorded by a drone in Monterey Bay in May 2017. The first direct observations of orca predation occurred off the south coast of Western Australia, two in 2019 and one more in 2021. The first victim was estimated to be .
In Antarctic waters, blue whales accumulate diatoms of the species Cocconeis ceticola and the genus Navicula, which are normally removed when the whales enter warmer waters. Other external parasites include barnacles such as Coronula diadema, Coronula reginae, and Cryptolepas rhachianecti, which latch onto the skin deeply enough to leave behind a pit if removed. Whale lice species make their home in cracks of the skin and are relatively harmless. The copepod species Pennella balaenopterae digs into the blubber and attaches itself there to feed. Intestinal parasites include the trematode genera Ogmogaster and Lecithodesmus; the tapeworm genera Priapocephalus, Phyllobotrium, Tetrabothrius, Diphyllobotrium, and Diplogonoporus; and the thorny-headed worm genus Bolbosoma. In the North Atlantic, blue whales also host the protozoans Entamoeba, Giardia and Balantidium.
Conservation
The global blue whale population is estimated to be 5,000–15,000 mature individuals and 10,000–25,000 total as of 2018. By comparison, there were at least 140,000 mature whales in 1926. There are an estimated total of 1,000–3,000 whales in the North Atlantic, 3,000–5,000 in the North Pacific, and 5,000–8,000 in the Antarctic. There are possibly 1,000–3,000 whales in the eastern South Pacific while the pygmy blue whale may number 2,000–5,000 individuals. Blue whales have been protected in areas of the Southern Hemisphere since 1939. In 1955, they were given complete protection in the North Atlantic under the International Convention for the Regulation of Whaling; this protection was extended to the Antarctic in 1965 and the North Pacific in 1966. The protected status of North Atlantic blue whales was not recognized by Iceland until 1960. In the United States, the species is protected under the Endangered Species Act.
Blue whales are formally classified as endangered under both the U.S. Endangered Species Act and the IUCN Red List. They are also listed on Appendix I under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the Convention on the Conservation of Migratory Species of Wild Animals. For some populations, such as pygmy blue whales, there is not enough information on current abundance trends, while others, such as Antarctic blue whales, are critically endangered.
Threats
Blue whales were initially difficult to hunt because of their size and speed. This began to change in the mid-19th century with the development of harpoons that could be shot as projectiles. Whaling of blue whales peaked between 1930 and 1931, with 30,000 animals taken. Harvesting of the species was particularly high in the Antarctic, with 350,000–360,000 whales taken in the first half of the 20th century. In addition, 11,000 North Atlantic whales (mostly around Iceland) and 9,500 North Pacific whales were killed during the same period. The International Whaling Commission banned all hunting of blue whales in 1966 and gave them worldwide protection. However, the Soviet Union continued to illegally hunt blue whales and other species up until the 1970s.
Ship strikes are a significant mortality factor for blue whales, especially off the U.S. West Coast. A total of 17 blue whales were killed or suspected to have been killed by ships between 1998 and 2019 off the U.S. West Coast. Five deaths in 2007 off California were considered an unusual mortality event, as defined under the Marine Mammal Protection Act. Lethal ship strikes are also a problem in Sri Lankan waters, where their habitat intersects with one of the world's most active shipping routes. Here, strikes caused the deaths of eleven blue whales in 2010 and 2012, and at least two in 2014. Ship-strike mortality claimed the lives of two blue whales off southern Chile in the 2010s. Possible measures for reducing future ship strikes include better predictive models of whale distribution, changes in shipping lanes, vessel speed reductions, and seasonal and dynamic management of shipping lanes. Few cases of blue whale entanglement in commercial fishing gear have been documented. The first U.S. report occurred off California in 2015 and reportedly involved some type of deep-water trap/pot fishery. Three more entanglement cases were reported in 2016. In Sri Lanka, a blue whale was documented with a net wrapped through its mouth, along the sides of its body, and wound around its tail.
Increasing man-made underwater noise impacts blue whales. They may be exposed to noise from commercial shipping and seismic surveys as a part of oil and gas exploration. Blue whales in the Southern California Bight decreased calling in the presence of mid-frequency active (MFA) sonar. Exposure to simulated MFA sonar was found to interrupt blue whale deep-dive feeding but no changes in behavior were observed in individuals feeding at shallower depths. The responses also depended on the animal's behavioral state, its (horizontal) distance from the sound source and the availability of prey.
The potential impacts of pollutants on blue whales are unknown. However, because blue whales feed low on the food chain, there is less chance of bioaccumulation of organic chemical contaminants. Analysis of the earwax of a male blue whale killed by a collision with a ship off the coast of California showed contaminants like pesticides, flame retardants, and mercury. Reconstructed persistent organic pollutant (POP) profiles suggested that a substantial maternal transfer occurred during gestation and/or lactation. Male blue whales in the Gulf of St. Lawrence, Canada, were found to have higher concentrations of PCBs, dichlorodiphenyltrichloroethane (DDT) metabolites, and several other organochlorine compounds relative to females, reflecting maternal transfer of these persistent contaminants from females into young.
| Biology and health sciences | Cetaceans | null |
4944 | https://en.wikipedia.org/wiki/Naive%20set%20theory | Naive set theory | Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics.
Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics.
Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments.
Method
A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. Such a theory treats sets as Platonic absolute objects. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself.
The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik.
Naive set theory may refer to several very distinct notions. It may refer to
Informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos.
Early or later versions of Georg Cantor's theory and other informal systems.
Decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind.
Paradoxes
The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets.
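The contradiction can be written out in a few lines; the following is a minimal sketch in standard notation, supplied here rather than taken from the article's original formulas.

```latex
% Unrestricted comprehension would allow forming a set from the property x \notin x:
R = \{\, x \mid x \notin x \,\}.
% Asking whether R belongs to itself then yields a contradiction either way:
R \in R \iff R \notin R.
```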
Cantor's theory
Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption—that any property may be used to form a set—by taking the property to be "is a cardinal number". Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand Russell actually addressed when he presented his paradox, not necessarily a theory that Cantor (who, as mentioned, was aware of several paradoxes) presumably had in mind.
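For comparison, a sketch of how Cantor's paradox follows from the same unrestricted assumption, using the usual set-of-all-sets formulation (the notation is supplied here, not quoted from the article):

```latex
% If every property defines a set, then the set V of all sets exists.
% Every subset of V is a set, so
\mathcal{P}(V) \subseteq V \;\Longrightarrow\; |\mathcal{P}(V)| \le |V|,
% while Cantor's theorem states that a power set is strictly larger:
|\mathcal{P}(V)| > |V|,
% giving a contradiction. The same style of argument works with
% "is a cardinal number" as the defining property.
```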
Axiomatic theories
Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when.
Consistency
A naive set theory is not necessarily inconsistent, if it correctly specifies the sets allowed to be considered. This can be done by the means of definitions, which are implicit axioms. It is possible to state all the axioms explicitly, as in the case of Halmos' Naive Set Theory, which is actually an informal presentation of the usual axiomatic Zermelo–Fraenkel set theory. It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system.
Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows from Gödel's incompleteness theorems that a sufficiently complicated first order logic system (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do exclude some paradoxes, like Russell's paradox. Based on Gödel's theorem, it is just not known – and never can be – if there are no paradoxes at all in these theories or in any first-order set theory.
The term naive set theory is still today also used in some literature to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory.
Utility
The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. | Mathematics | Discrete mathematics | null |
4947 | https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity | Bézout's identity | In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem:
Here the greatest common divisor of and is taken to be . The integers and are called Bézout coefficients for ; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers one of the two pairs such that and ; equality occurs only if one of and is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as , with Bézout coefficients −9 and 2.
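A pair of Bézout coefficients such as the one above can be found with the extended Euclidean algorithm. The following Python sketch is illustrative (the function name and return convention are choices made here, not taken from the article) and reproduces the 15 and 69 example:

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b) for non-negative a, b."""
    # Loop invariants: old_r == a*old_x + b*old_y and r == a*x + b*y.
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

# The example from the text: gcd(15, 69) = 3 = 15*(-9) + 69*2.
g, x, y = extended_gcd(15, 69)
print(g, x, y)                      # 3 -9 2
assert 15 * x + 69 * y == g == 3
```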
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
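As an illustration of how such results follow, here is the standard short derivation of Euclid's lemma from the identity (a sketch, not a quotation from the article):

```latex
% Euclid's lemma: if p is prime, p \mid ab and p \nmid a, then p \mid b.
% Since p \nmid a, we have \gcd(p, a) = 1, so Bézout's identity gives integers x, y with
p x + a y = 1.
% Multiplying both sides by b:
p b x + a b y = b,
% and p divides both terms on the left (the second because p \mid ab), hence p \mid b.
```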
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
Structure of solutions
If and are not both zero and one pair of Bézout coefficients has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
where is an arbitrary integer, is the greatest common divisor of and , and the fractions simplify to integers.
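The displayed formula did not survive extraction of this text; assuming the standard statement, with (x, y) a known pair of Bézout coefficients and d the greatest common divisor of a and b, the family of all pairs is:

```latex
\left( x + k\,\frac{b}{d},\; y - k\,\frac{a}{d} \right), \qquad k \in \mathbb{Z},
% where d = \gcd(a, b), so that b/d and a/d are integers, as noted above.
```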
If and are both nonzero and none of them divides the other, then exactly two of the pairs of Bézout coefficients satisfy
If and are both positive, one has and for one of these pairs, and and for the other. If is a divisor of (including the case ), then one pair of Bézout coefficients is .
This relies on a property of Euclidean division: given two non-zero integers and , if does not divide , there is exactly one pair such that and , and another one such that and .
The two pairs of small Bézout's coefficients are obtained from the given one by choosing for in the above formula either of the two integers next to .
The extended Euclidean algorithm always produces one of these two minimal pairs.
Example
Let and , then . Then the following Bézout's identities hold, with the Bézout coefficients written in red for the minimal pairs and in blue for the other ones.
If is the original pair of Bézout coefficients, then yields the minimal pairs via , respectively ; that is, , and .
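The numbers in this example were lost in extraction, so the following Python sketch uses hypothetical values a = 12 and b = 42 purely to illustrate the minimal pairs; the filter used (|x| < b/d and |y| < a/d) matches the description in the "Structure of solutions" section above.

```python
from math import gcd

# Hypothetical values chosen for illustration; the article's own numbers were elided.
a, b = 12, 42
d = gcd(a, b)                                   # 6

# One pair of Bézout coefficients, found by brute force for clarity.
x0, y0 = next((x, y) for x in range(-b, b + 1) for y in range(-a, a + 1)
              if a * x + b * y == d)

# Every pair has the form (x0 + k*(b//d), y0 - k*(a//d)) for an integer k.
candidates = [(x0 + k * (b // d), y0 - k * (a // d)) for k in range(-10, 11)]

# Exactly two pairs satisfy |x| < b/d and |y| < a/d: the minimal pairs.
minimal = [(x, y) for x, y in candidates if abs(x) < b // d and abs(y) < a // d]
print(d, minimal)                               # 6 [(-3, 1), (4, -1)]
```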
Existence proof
Given any nonzero integers and , let . The set is nonempty since it contains either or (with and ). Since is a nonempty set of positive integers, it has a minimum element , by the well-ordering principle. To prove that is the greatest common divisor of and , it must be proven that is a common divisor of and , and that for any other common divisor , one has .
The Euclidean division of by may be written as
The remainder is in , because
Thus is of the form , and hence . However, , and is the smallest positive integer in : the remainder can therefore not be in , making necessarily 0. This implies that is a divisor of . Similarly is also a divisor of , and therefore is a common divisor of and .
Now, let be any common divisor of and ; that is, there exist and such that and . One has thus
That is, is a divisor of . Since , this implies .
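The computations elided in this proof can be written out as follows; this is a sketch in the standard notation (S for the set of positive integers of the form au + bv, d for its minimum, and d = ax + by for the chosen pair), supplied here rather than quoted from the article.

```latex
% Euclidean division of a by d:
a = q d + r, \qquad 0 \le r < d.
% Substituting d = a x + b y:
r = a - q d = a(1 - q x) + b(-q y),
% so r is itself of the form a u + b v. If r were positive it would lie in S,
% contradicting the minimality of d; hence r = 0 and d \mid a. The same
% argument with b in place of a gives d \mid b.
```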
Generalizations
For three or more integers
Bézout's identity can be extended to more than two integers: if
then there are integers such that
has the following properties:
is the smallest positive integer of this form
every number of this form is a multiple of
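The symbols in this statement and list were stripped during extraction; assuming the standard formulation, the generalization reads:

```latex
% If \gcd(a_1, \dots, a_n) = d, then there exist integers x_1, \dots, x_n with
d = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n,
% and moreover:
%   (1) d is the smallest positive integer of this form;
%   (2) every integer of this form is a multiple of d.
```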
For polynomials
Bézout's identity does not always hold for polynomials. For example, when working in the ring of polynomials with integer coefficients, the greatest common divisor of and is x, but there do not exist integer-coefficient polynomials and satisfying .
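The specific polynomials in this example did not survive extraction; one standard counterexample consistent with the sentence above, offered here only as an illustration, is:

```latex
% In \mathbb{Z}[x], \gcd(2x, x^2) = x, yet there are no integer-coefficient
% polynomials p, q with
2x\,p(x) + x^{2}\,q(x) = x,
% since dividing by x gives 2\,p(x) + x\,q(x) = 1, and evaluating at x = 0
% forces 2\,p(0) = 1, which is impossible over the integers.
```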
However, Bézout's identity works for univariate polynomials over a field in exactly the same way as for integers. In particular, the Bézout coefficients and the greatest common divisor may be computed with the extended Euclidean algorithm.
As the common roots of two polynomials are the roots of their greatest common divisor, Bézout's identity and the fundamental theorem of algebra imply the following result:
The generalization of this result to any number of polynomials and indeterminates is Hilbert's Nullstellensatz.
For principal ideal domains
As noted in the introduction, Bézout's identity works not only in the ring of integers, but also in any other principal ideal domain (PID).
That is, if is a PID, and and are elements of , and is a greatest common divisor of and ,
then there are elements and in such that . The reason is that the ideal is principal and equal to .
An integral domain in which Bézout's identity holds is called a Bézout domain.
History and attribution
The French mathematician Étienne Bézout (1730–1783) proved this identity for polynomials. The statement for integers can be found already in the work of an earlier French mathematician, Claude Gaspard Bachet de Méziriac (1581–1638).
Andrew Granville traced the association of Bézout's name with the identity to Bourbaki, arguing that it is a misattribution since the identity is implicit in Euclid's Elements.
| Mathematics | Diophantine equations | null |
4952 | https://en.wikipedia.org/wiki/Rockwell%20B-1%20Lancer | Rockwell B-1 Lancer | The Rockwell B-1 Lancer is a supersonic variable-sweep wing, heavy bomber used by the United States Air Force. It has been nicknamed the "Bone" (from "B-One"). , it is one of the United States Air Force's three strategic bombers, along with the B-2 Spirit and the B-52 Stratofortress. Its 75,000-pound (34,000 kg) payload is the heaviest of any U.S. bomber.
The B-1 was first envisioned in the 1960s as a bomber that would combine the Mach 2 speed of the B-58 Hustler with the range and payload of the B-52, ultimately replacing both. After a long series of studies, North American Rockwell (subsequently renamed Rockwell International, B-1 division later acquired by Boeing) won the design contest for what emerged as the B-1A. Prototypes of this version could fly Mach 2.2 at high altitude and long distances at Mach 0.85 at very low altitudes. The program was canceled in 1977 due to its high cost, the introduction of the AGM-86 cruise missile that flew the same basic speed and distance, and early work on the B-2 stealth bomber.
The program was restarted in 1981, largely as an interim measure due to delays in the B-2 stealth bomber program. The B-1A design was altered, reducing top speed to Mach 1.25 at high altitude, increasing low-altitude speed to Mach 0.96, extensively improving electronic components, and upgrading the airframe to carry more fuel and weapons. Dubbed the B-1B, deliveries of the new variant began in 1985; the plane formally entered service with Strategic Air Command (SAC) as a nuclear bomber the following year. By 1988, all 100 aircraft had been delivered.
With the disestablishment of SAC and its reassignment to the Air Combat Command in 1992, the B-1B's nuclear capabilities were disabled and it was outfitted for conventional bombing. It first served in combat during Operation Desert Fox in 1998 and again during the NATO action in Kosovo the following year. The B-1B has supported U.S. and NATO military forces in Afghanistan and Iraq. As of 2021 the Air Force has 45 B-1Bs. The Northrop Grumman B-21 Raider is to begin replacing the B-1B after 2025; all B-1s are planned to be retired by 2036.
Development
Background
In 1955, the USAF issued requirements for a new bomber combining the payload and range of the Boeing B-52 Stratofortress with the Mach 2 maximum speed of the Convair B-58 Hustler. In December 1957, the USAF selected North American Aviation's B-70 Valkyrie for this role, a six-engine bomber that could cruise at Mach 3 at high altitude (). Soviet Union interceptor aircraft, the only effective anti-bomber weapon in the 1950s, were already unable to intercept the high-flying Lockheed U-2; the Valkyrie would fly at similar altitudes, but much higher speeds, and was expected to fly right by the fighters.
By the late 1950s, however, anti-aircraft surface-to-air missiles (SAMs) could threaten high-altitude aircraft, as demonstrated by the 1960 downing of Gary Powers' U-2. The USAF Strategic Air Command (SAC) was aware of these developments and had begun moving its bombers to low-level penetration even before the U-2 incident. This tactic greatly reduces radar detection distances through the use of terrain masking; using features of the terrain like hills and valleys, the line-of-sight from the radar to the bomber can be broken, rendering the radar (and human observers) incapable of seeing it. Additionally, radars of the era were subject to "clutter" from stray returns from the ground and other objects, which meant a minimum angle existed above the horizon where they could detect a target. Bombers flying at low altitudes could remain under these angles simply by keeping their distance from the radar sites. This combination of effects made SAMs of the era ineffective against low-flying aircraft. The same effects also meant that low-flying aircraft were difficult to detect by higher-flying interceptors, since their radar systems could not readily pick out aircraft against the clutter from ground reflections (lack of look-down/shoot-down capability).
The switch from high-altitude to low-altitude flight profiles severely affected the B-70, the design of which was tuned for high-altitude performance. Higher aerodynamic drag at low level limited the B-70 to subsonic speed while dramatically decreasing its range. The result would be an aircraft with somewhat higher subsonic speed than the B-52, but less range. Because of this, and a growing shift to the intercontinental ballistic missile (ICBM) force, the B-70 bomber program was cancelled in 1961 by President John F. Kennedy, and the two XB-70 prototypes were used in a supersonic research program.
Although never intended for the low-level role, the B-52's flexibility allowed it to outlast its intended successor as the nature of the air war environment changed. The B-52's huge fuel load allowed it to operate at lower altitudes for longer times, and the large airframe allowed the addition of improved radar jamming and deception suites to deal with radars. During the Vietnam War, the concept that all future wars would be nuclear was turned on its head, and the "big belly" modifications increased the B-52's total bomb load to , turning it into a powerful tactical aircraft which could be used against ground troops along with strategic targets from high altitudes. The much smaller bomb bay of the B-70 would have made it much less useful in this role.
Design studies and delays
Although effective, the B-52 was not ideal for the low-level role. This led to a number of aircraft designs known as penetrators, which were tuned specifically for long-range low-altitude flight. The first of these designs to see operation was the supersonic F-111 fighter-bomber, which used variable-sweep wings for tactical missions. A number of studies on a strategic-range counterpart followed.
The first post-B-70 strategic penetrator study was known as the Subsonic Low-Altitude Bomber (SLAB), which was completed in 1961. This produced a design that looked more like an airliner than a bomber, with a large swept wing, T-tail, and large high-bypass engines. This was followed by the similar Extended Range Strike Aircraft (ERSA), which added a variable-sweep wing, then en vogue in the aviation industry. ERSA envisioned a relatively small aircraft with a payload and a range of including flown at low altitudes. In August 1963, the similar Low-Altitude Manned Penetrator design was completed, which called for an aircraft with a bomb load and somewhat shorter range of .
These all culminated in the October 1963 Advanced Manned Precision Strike System (AMPSS), which led to industry studies at Boeing, General Dynamics, and North American (later North American Rockwell). In mid-1964, the USAF had revised its requirements and retitled the project as Advanced Manned Strategic Aircraft (AMSA), which differed from AMPSS primarily in that it also demanded a high-speed high-altitude capability, similar to that of the existing Mach 2-class F-111. Given the lengthy series of design studies, North American Rockwell engineers joked that the new name actually stood for "America's Most Studied Aircraft".
The arguments that led to the cancellation of the B-70 program had led some to question the need for a new strategic bomber of any sort. The USAF was adamant about retaining bombers as part of the nuclear triad concept that included bombers, ICBMs, and submarine-launched ballistic missiles (SLBMs) in a combined package that complicated any potential defense. They argued that the bomber was needed to attack hardened military targets and to provide a safe counterforce option because the bombers could be quickly launched into safe loitering areas where they could not be attacked. However, the introduction of the SLBM made moot the mobility and survivability argument, and a newer generation of ICBMs, such as the Minuteman III, had the accuracy and speed needed to attack point targets. During this time, ICBMs were seen as a less costly option based on their lower unit cost, but development costs were much higher. Secretary of Defense Robert McNamara preferred ICBMs over bombers for the Air Force portion of the deterrent force and felt a new expensive bomber was not needed. McNamara limited the AMSA program to studies and component development beginning in 1964.
Program studies continued; IBM and Autonetics were awarded AMSA advanced avionics study contracts in 1968. McNamara remained opposed to the program in favor of upgrading the existing B-52 fleet and adding nearly 300 FB-111s for shorter range roles then being filled by the B-58. He again vetoed funding for AMSA aircraft development in 1968.
B-1A program
President Richard Nixon reestablished the AMSA program after taking office, keeping with his administration's flexible response strategy that required a broad range of options short of general nuclear war. Nixon's Secretary of Defense, Melvin Laird, reviewed the programs and decided to lower the numbers of FB-111s, since they lacked the desired range, and recommended that the AMSA design studies be accelerated. In April 1969, the program officially became the B-1A. This was the first entry in the new bomber designation series, created in 1962. The Air Force issued a request for proposals in November 1969.
Proposals were submitted by Boeing, General Dynamics and North American Rockwell in January 1970. In June 1970, North American Rockwell was awarded the development contract. The original program called for two test airframes, five flyable aircraft, and 40 engines. This was cut in 1971 to one ground and three flight test aircraft. The company changed its name to Rockwell International and named its aircraft division North American Aircraft Operations in 1973. A fourth prototype, built to production standards, was ordered in the fiscal year 1976 budget. Plans called for 240 B-1As to be built, with initial operational capability set for 1979.
Rockwell's design had features common to the F-111 and XB-70. It used a crew escape capsule that ejected as a unit to improve crew survivability if the crew had to abandon the aircraft at high speed. Additionally, the design featured large variable-sweep wings in order to provide both more lift during takeoff and landing, and lower drag during a high-speed dash phase. With the wings set to their widest position, the aircraft had much better airfield performance than the B-52, allowing it to operate from a wider variety of bases. Penetration of the Soviet Union's defenses would take place at supersonic speed, crossing them as quickly as possible before entering the more sparsely defended interior of the country where speeds could be reduced again. The large size and fuel capacity of the design would allow the "dash" portion of the flight to be relatively long.
In order to achieve the required Mach 2 performance at high altitudes, the exhaust nozzles and air intake ramps were variable. Initially, it had been expected that a Mach 1.2 performance could be achieved at low altitude, which required that titanium be used in critical areas in the fuselage and wing structure. The low altitude performance requirement was later lowered to Mach 0.85, reducing the amount of titanium and therefore cost. A pair of small vanes mounted near the nose are part of an active vibration damping system that smooths out the otherwise bumpy low-altitude ride. The first three B-1As featured the escape capsule that ejected the cockpit with all four crew members inside. The fourth B-1A was equipped with a conventional ejection seat for each crew member.
The B-1A mockup review occurred in late October 1971; this resulted in 297 requests for alteration to the design due to failures to meet specifications and desired improvements for ease of maintenance and operation. The first B-1A prototype (Air Force serial no. 74–0158) flew on 23 December 1974. As the program continued the per-unit cost continued to rise in part because of high inflation during that period. In 1970, the estimated unit cost was $40 million, and by 1975, this figure had climbed to $70 million.
New problems and cancellation
In 1976, Soviet pilot Viktor Belenko defected to Japan with his MiG-25 "Foxbat". During debriefing he described a new "super-Foxbat" (almost certainly referring to the MiG-31) that had look-down/shoot-down radar in order to attack cruise missiles. This would also make any low-level penetration aircraft "visible" and easy to attack. Given that the B-1's armament suite was similar to the B-52, and it then appeared no more likely to survive Soviet airspace than the B-52, the program was increasingly questioned. In particular, Senator William Proxmire continually derided the B-1 in public, arguing it was an outlandishly expensive dinosaur. During the 1976 federal election campaign, Jimmy Carter made it one of the Democratic Party's platforms, saying "The B-1 bomber is an example of a proposed system which should not be funded and would be wasteful of taxpayers' dollars."
When Carter took office in 1977 he ordered a review of the entire program. By this point the projected cost of the program had risen to over $100 million per aircraft, although this was lifetime cost over 20 years. He was informed of the relatively new work on stealth aircraft that had started in 1975, and he decided that this was a better approach than the B-1. Pentagon officials also stated that the AGM-86 Air-Launched Cruise Missile (ALCM) launched from the existing B-52 fleet would give the USAF equal capability of penetrating Soviet airspace. With a range of , the ALCM could be launched well outside the range of any Soviet defenses and penetrate at low altitude like a bomber (with a much lower radar cross-section (RCS) due to smaller size), and in much greater numbers at a lower cost. A small number of B-52s could launch hundreds of ALCMs, saturating the defense. A program to improve the B-52 and develop and deploy the ALCM would cost at least 20% less than the planned 244 B-1As.
On 30 June 1977, Carter announced that the B-1A would be canceled in favor of ICBMs, SLBMs, and a fleet of modernized B-52s armed with ALCMs. Carter called it "one of the most difficult decisions that I've made since I've been in office." No mention of the stealth work was made public with the program being top secret, but it is now known that in early 1978 he authorized the Advanced Technology Bomber (ATB) project, which eventually led to the B-2 Spirit.
Domestically, the reaction to the cancellation was split along partisan lines. The Department of Defense was surprised by the announcement; it expected that the number of B-1s ordered would be reduced to around 150. Congressman Robert Dornan (R-CA) claimed, "They're breaking out the vodka and caviar in Moscow." However, it appears the Soviets were more concerned by large numbers of ALCMs representing a much greater threat than a smaller number of B-1s. Soviet news agency TASS commented that "the implementation of these militaristic plans has seriously complicated efforts for the limitation of the strategic arms race." Western military leaders were generally happy with the decision. NATO commander Alexander Haig described the ALCM as an "attractive alternative" to the B-1. French General Georges Buis stated "The B-1 is a formidable weapon, but not terribly useful. For the price of one bomber, you can have 200 cruise missiles."
Flight tests of the four B-1A prototypes for the B-1A program continued through April 1981. The program included 70 flights totaling 378 hours. A top speed of Mach 2.22 was reached by the second B-1A. Engine testing also continued during this time with the YF101 engines totaling almost 7,600 hours.
Shifting priorities
It was during this period that the Soviets started to assert themselves in several new theaters of action, in particular through Cuban proxies during the Angolan Civil War starting in 1975 and the Soviet invasion of Afghanistan in 1979. U.S. strategy to this point had been focused on containing Communism and preparation for war in Europe. The new Soviet actions revealed that the military lacked capability outside these narrow confines.
The U.S. Department of Defense responded by accelerating its Rapid Deployment Forces concept but suffered from major problems with airlift and sealift capability. In order to slow an enemy invasion of other countries, air power was critical; however the key Iran-Afghanistan border was outside the range of the United States Navy's carrier-based attack aircraft, leaving this role to the U.S. Air Force.
During the 1980 presidential campaign, Ronald Reagan campaigned heavily on the platform that Carter was weak on defense, citing the cancellation of the B-1 program as an example, a theme he continued using into the 1980s. During this time Carter's defense secretary, Harold Brown, announced the stealth bomber project, apparently implying that this was the reason for the B-1 cancellation.
B-1B program
On taking office, Reagan was faced with the same decision as Carter before: whether to continue with the B-1 for the short term, or to wait for the development of the ATB, a much more advanced aircraft. Studies suggested that the existing B-52 fleet with ALCM would remain a credible threat until 1985. It was predicted that 75% of the B-52 force would survive to attack its targets. After 1985, the introduction of the SA-10 missile, the MiG-31 interceptor and the first effective Soviet Airborne Early Warning and Control (AWACS) systems would make the B-52 increasingly vulnerable. During 1981, funds were allocated to a new study for a bomber for the 1990s time-frame which led to developing the Long-Range Combat Aircraft (LRCA) project. The LRCA evaluated the B-1, F-111, and ATB as possible solutions; an emphasis was placed on multi-role capabilities, as opposed to purely strategic operations.
In 1981, it was believed the B-1 could be in operation before the ATB, covering the transitional period between the B-52's increasing vulnerability and the ATB's introduction. Reagan decided the best solution was to procure both the B-1 and ATB, and on 2 October 1981 he announced that 100 B-1s were to be ordered to fill the LRCA role.
In January 1982, the U.S. Air Force awarded two contracts to Rockwell worth a combined $2.2 billion for the development and production of 100 new B-1 bombers. Numerous changes were made to the design to make it better suited to the now expected missions, resulting in the B-1B. These changes included a reduction in maximum speed, which allowed the variable-aspect intake ramps to be replaced by simpler fixed geometry intake ramps. This reduced the B-1B's radar cross-section which was seen as a good trade off for the speed decrease. High subsonic speeds at low altitude became a focus area for the revised design, and low-level speeds were increased from about Mach 0.85 to 0.92. The B-1B has a maximum speed of Mach 1.25 at higher altitudes.
The B-1B's maximum takeoff weight was increased to from the B-1A's . The weight increase was to allow for takeoff with a full internal fuel load and for external weapons to be carried. Rockwell engineers were able to reinforce critical areas and lighten non-critical areas of the airframe, so the increase in empty weight was minimal. To deal with the introduction of the MiG-31 equipped with the new Zaslon radar system, and other aircraft with look-down capability, the B-1B's electronic warfare suite was significantly upgraded.
Opposition to the plan was widespread within Congress. Critics pointed out that many of the original problems remained in both areas of performance and expense. In particular it seemed the B-52 fitted with electronics similar to the B-1B would be equally able to avoid interception, as the speed advantage of the B-1 was now minimal. It also appeared that the "interim" time frame served by the B-1B would be less than a decade, being rendered obsolete shortly after the introduction of a much more capable ATB design. The primary argument in favor of the B-1 was its large conventional weapon payload, and that its takeoff performance allowed it to operate with a credible bomb load from a much wider variety of airfields. Production subcontracts were spread across many congressional districts, making the aircraft more popular on Capitol Hill.
B-1A No. 1 was disassembled and used for radar testing at the Rome Air Development Center in the former Griffiss Air Force Base, New York. B-1As No. 2 and No. 4 were then modified to include B-1B systems. The first B-1B was completed and began flight testing in March 1983. The first production B-1B was rolled out on 4 September 1984 and first flew on 18 October 1984. The 100th and final B-1B was delivered on 2 May 1988; before the last B-1B was delivered, the USAF had determined that the aircraft was vulnerable to Soviet air defenses.
In 1996, Rockwell International sold most of its space and defense operations to Boeing, which continues as the primary contractor for the B-1 as of 2024.
Design
Overview
The B-1 has a blended wing body configuration, with variable-sweep wing, four turbofan engines, triangular ride-control fins and cruciform tail. The wings can sweep from 15 degrees to 67.5 degrees (full forward to full sweep). Forward-swept wing settings are used for takeoff, landings and high-altitude economical cruise. Aft-swept wing settings are used in high subsonic and supersonic flight. The B-1's variable-sweep wings and thrust-to-weight ratio provide it with improved takeoff performance, allowing it to use shorter runways than previous bombers. The length of the aircraft presented a flexing problem due to air turbulence at low altitude. To alleviate this, Rockwell included small triangular fin control surfaces or vanes near the nose on the B-1. The B-1's Structural Mode Control System moves the vanes, and lower rudder, to counteract the effects of turbulence and smooth out the ride.
Unlike the B-1A, the B-1B cannot reach Mach 2+ speeds; its maximum speed is Mach 1.25 (about 950 mph or 1,530 km/h at altitude), but its low-level speed increased to Mach 0.92 (700 mph, 1,130 km/h). The speed of the current version of the aircraft is limited by the need to avoid damage to its structure and air intakes. To help lower its radar cross-section, the B-1B uses serpentine air intake ducts (see S-duct) and fixed intake ramps, which limit its speed compared to the B-1A. Vanes in the intake ducts serve to deflect and shield radar returns from the highly reflective engine compressor blades.
The B-1A's engine was modified slightly to produce the GE F101-102 for the B-1B, with an emphasis on durability, and increased efficiency. The core from this engine was subsequently used in several other engines, including the GE F110 used in the F-14 Tomcat, F-15K/SG variants and later versions of the General Dynamics F-16 Fighting Falcon. It is also the basis for the non-afterburning GE F118 used in the B-2 Spirit and the U-2S. The F101 engine core is also used in the CFM56 civil engine.
The nose-gear door is the location for ground-crew control of the auxiliary power unit (APU) which can be used during a scramble for quick-starting the APU.
Avionics
The B-1's main computer is the IBM AP-101, which was also used on the Space Shuttle orbiter and the B-52 bomber. The computer is programmed with the JOVIAL programming language. The Lancer's offensive avionics include the Westinghouse (now Northrop Grumman) AN/APQ-164 forward-looking offensive passive electronically scanned array radar set with electronic beam steering (and a fixed antenna pointed downward for reduced radar observability), synthetic aperture radar, ground moving target indication (GMTI), and terrain-following radar modes, Doppler navigation, radar altimeter, and an inertial navigation suite. The B-1B Block D upgrade added a Global Positioning System (GPS) receiver beginning in 1995.
The B-1's defensive electronics include the Eaton AN/ALQ-161A radar warning and defensive jamming equipment, which has three sets of antennas; one at the front base of each wing and the third rear-facing in the tail radome. Also in the tail radome is the AN/ALQ-153 missile approach warning system (pulse-Doppler radar). The ALQ-161 is linked to a total of eight AN/ALE-49 flare dispensers located on top behind the canopy, which are handled by the AN/ASQ-184 avionics management system. Each AN/ALE-49 dispenser has a capacity of 12 MJU-23A/B flares. The MJU-23A/B flare is one of the world's largest infrared countermeasure flares at a weight of over . The B-1 has also been equipped to carry the ALE-50 towed decoy system.
Also aiding the B-1's survivability is its relatively low RCS. Although not technically a stealth aircraft, thanks to the aircraft's structure, serpentine intake paths, and use of radar-absorbent material, its RCS is about 1/50th that of the similarly sized B-52. This is approximately 26 sq ft (2.4 m²), comparable to that of a small fighter aircraft.
The B-1 holds 61 FAI world records for speed, payload, distance, and time-to-climb in different aircraft weight classes. In November 1993, three B-1Bs set a long-distance record for the aircraft, which demonstrated its ability to conduct extended mission lengths to strike anywhere in the world and return to base without any stops. The National Aeronautic Association recognized the B-1B for completing one of the 10 most memorable record flights for 1994.
Upgrades
The B-1 has been upgraded since production, beginning with the "Conventional Mission Upgrade Program" (CMUP), which added a new MIL-STD-1760 smart-weapons interface to enable the use of precision-guided conventional weapons. CMUP was delivered through a series of upgrades:
Block A was the standard B-1B with the capability to deliver non-precision gravity bombs.
Block B brought an improved Synthetic Aperture Radar, and upgrades to the Defensive Countermeasures System and was fielded in 1995.
Block C provided an "enhanced capability" for delivery of up to 30 cluster bomb units (CBUs) per sortie with modifications made to 50 bomb racks.
Block D added a "Near Precision Capability" via improved weapons and targeting systems, and added advanced secure communications capabilities. The first part of the electronic countermeasures upgrade added Joint Direct Attack Munition (JDAM), ALE-50 towed decoy system, and anti-jam radios.
Block E upgraded the avionics computers and incorporated the Wind Corrected Munitions Dispenser (WCMD), the AGM-154 Joint Standoff Weapon (JSOW) and the AGM-158 JASSM (Joint Air to Surface Standoff Munition), substantially improving the bomber's capability. Upgrades were completed in September 2006.
Block F was the Defensive Systems Upgrade Program (DSUP) to improve the aircraft's electronic countermeasures and jamming capabilities, but it was canceled in December 2002 due to cost overruns and delays.
In 2007, the Sniper XR targeting pod was integrated on the B-1 fleet. The pod is mounted on an external hardpoint at the aircraft's chin near the forward bomb bay. Following accelerated testing, the Sniper pod was fielded in summer 2008. Future precision munitions include the Small Diameter Bomb.
The USAF commenced the Integrated Battle Station (IBS) modification in 2012 as a combination of three separate upgrades when it realised the benefits of completing them concurrently; the Fully Integrated Data Link (FIDL), Vertical Situational Display Unit (VSDU) and Central Integrated Test System (CITS). FIDL enables electronic data sharing, eliminating the need to enter information between systems by hand. VSDU replaces existing flight instruments with multifunction color displays, a second display aids with threat evasion and targeting, and acts as a back-up display. CITS saw a new diagnostic system installed that allows crew to monitor over 9,000 parameters on the aircraft. Other additions are to replace the two spinning mass gyroscopic inertial navigation system with ring laser gyroscopic systems and a GPS antenna, replacement of the APQ-164 radar with the Scalable Agile Beam Radar – Global Strike (SABR-GS) active electronically scanned array, and a new attitude indicator. The IBS upgrades were completed in 2020.
In August 2019, the Air Force unveiled a modification to the B-1B to allow it to carry more weapons internally and externally. Using the moveable forward bulkhead, space in the intermediate bay was increased from 180 to 269 in (457 to 683 cm). Expanding the internal bay to make use of the Common Strategic Rotary Launcher (CSRL), as well as utilizing six of the eight external hardpoints that had been previously out of use to keep in line with the New START Treaty, would increase the B-1B's weapon load from 24 to 40. The configuration also enables it to carry heavier weapons in the 5,000 lb (2,300 kg) range, such as hypersonic missiles; the AGM-183 ARRW is planned for integration onto the bomber. In the future the HAWC could be used by the bomber which, combining both internal and external weapon carriage, could conceivably bring the total number of hypersonic weapons to 31.
Operational history
Strategic Air Command
The second B-1B, "The Star of Abilene", was the first B-1B delivered to SAC in June 1985. Initial operational capability was reached on 1 October 1986 and the B-1B was placed on nuclear alert status. The B-1 received the official name "Lancer" on 15 March 1990. However, the bomber has been commonly called the "Bone", a nickname that appears to stem from an early newspaper article on the aircraft wherein its name was phonetically spelled out as "B-ONE" with the hyphen inadvertently omitted.
In late 1990, engine fires in two Lancers led to a grounding of the fleet. The cause was traced back to problems in the first-stage fan, and the aircraft were placed on "limited alert"; in other words, they were grounded unless a nuclear war broke out. Following inspections and repairs they were returned to duty beginning on 6 February 1991. By 1991, the B-1 had a fledgling conventional capability, forty of them able to drop the Mk-82 General Purpose (GP) bomb, although mostly from low altitude. Despite being cleared for this role, the problems with the engines prevented their use in Operation Desert Storm during the Gulf War. B-1s were primarily reserved for strategic nuclear strike missions at this time, providing the role of airborne nuclear deterrent against the Soviet Union. The B-52 was more suited to the role of conventional warfare and it was used by coalition forces instead.
Originally designed strictly for nuclear war, the B-1's development as an effective conventional bomber was delayed. The collapse of the Soviet Union had brought the B-1's nuclear role into question, leading to President George H. W. Bush ordering a $3 billion conventional refit.
On 26 April 1991, ten B-1Bs narrowly avoided being hit by the 1991 Andover tornado while located at McConnell AFB, which took a direct hit. Two of the bombers were equipped with nuclear warheads.
After the inactivation of SAC and the establishment of the Air Combat Command (ACC) in 1992, the B-1 developed a greater conventional weapons capability. Part of this development was the start-up of the U.S. Air Force Weapons School B-1 Division. In 1994, two additional B-1 bomb wings were also created in the Air National Guard, with former fighter wings in the Kansas Air National Guard and the Georgia Air National Guard converting to the aircraft. By the mid-1990s, the B-1 could employ GP weapons as well as various CBUs. By the end of the 1990s, with the advent of the "Block D" upgrade, the B-1 boasted a full array of guided and unguided munitions.
The B-1B no longer carries nuclear weapons; its nuclear capability was disabled by 1995 with the removal of nuclear arming and fuzing hardware. Under provisions of the New START treaty with Russia, further conversions were performed. These included modification of aircraft hardpoints to prevent nuclear weapon pylons from being attached, removal of weapons bay wiring bundles for arming nuclear weapons, and destruction of nuclear weapon pylons. The conversion process was completed in 2011, and Russian officials inspect the aircraft every year to verify compliance.
Air Combat Command
The B-1 was first used in combat in support of operations in Iraq during Operation Desert Fox in December 1998, employing unguided GP weapons. B-1s have been subsequently used in Operation Allied Force (Kosovo) and, most notably, in Operation Enduring Freedom in Afghanistan and the 2003 invasion of Iraq. The B-1 has deployed an array of conventional weapons in war zones, most notably the GBU-31, JDAM. In the first six months of Operation Enduring Freedom, eight B-1s dropped almost 40 percent of aerial ordnance, including some 3,900 JDAMs. JDAM munitions were heavily used by the B-1 over Iraq, notably on 7 April 2003 in an unsuccessful attempt to kill Saddam Hussein and his two sons. During Operation Enduring Freedom, the B-1 was able to raise its mission capable rate to 79%.
Of the 100 B-1Bs built, 93 remained in 2000 after losses in accidents. In June 2001, the Pentagon sought to place one-third of its then fleet into storage; this proposal resulted in several U.S. Air National Guard officers and members of Congress lobbying against the proposal, including the drafting of an amendment to prevent such cuts. The 2001 proposal was intended to allow money to be diverted to further upgrades to the remaining B-1Bs, such as computer modernization. In 2003, accompanied by the removal of B-1Bs from the two bomb wings in the Air National Guard, the USAF decided to retire 33 aircraft to concentrate its budget on maintaining availability of remaining B-1Bs. In 2004, a new appropriation bill called for some retired aircraft to return to service, and the USAF returned seven mothballed bombers to service to increase the fleet to 67 aircraft.
On 14 July 2007, the Associated Press reported on the growing USAF presence in Iraq, including reintroduction of B-1Bs as a close-at-hand platform to support Coalition ground forces. Beginning in 2008, B-1s were used in Iraq and Afghanistan in an "armed overwatch" role, loitering for surveillance purposes while ready to deliver guided bombs in support of ground troops as required.
The B-1B underwent a series of flight tests using a 50/50 mix of synthetic and petroleum fuel; on 19 March 2008, a B-1B from Dyess Air Force Base, Texas, became the first USAF aircraft to fly at supersonic speed using a synthetic fuel during a flight over Texas and New Mexico. This was conducted as part of an USAF testing and certification program to reduce reliance on traditional oil sources. On 4 August 2008, a B-1B flew the first Sniper Advanced Targeting Pod equipped combat sortie where the crew successfully targeted enemy ground forces and dropped a GBU-38 guided bomb in Afghanistan.
In March 2011, B-1Bs from Ellsworth Air Force Base attacked undisclosed targets in Libya as part of Operation Odyssey Dawn.
With upgrades to keep the B-1 viable, the USAF may keep it in service until approximately 2038. Despite upgrades, a single flight hour needs 48.4 hours of repair. The fuel, repairs, and other needs for a 12-hour mission cost $720,000 as of 2010. The $63,000 cost per flight hour is, however, less than the $72,000 for the B-52 and the $135,000 of the B-2. In June 2010, senior USAF officials met to consider retiring the entire fleet to meet budget cuts. The Pentagon plans to begin replacing the aircraft with the B-21 Raider after 2025. In the meantime, its "capabilities are particularly well-suited to the vast distances and unique challenges of the Pacific region, and we'll continue to invest in, and rely on, the B-1 in support of the focus on the Pacific" as part of President Obama's "Pivot to East Asia".
In August 2012, the 9th Expeditionary Bomb Squadron returned from a six-month tour in Afghanistan. Its 9 B-1Bs flew 770 sorties, the most of any B-1B squadron on a single deployment. The squadron spent 9,500 hours airborne, keeping one of its bombers in the air at all times. They accounted for a quarter of all combat aircraft sorties over the country during that time and fulfilled an average of two to three air support requests per day. On 4 September 2013, a B-1B participated in a maritime evaluation exercise, deploying munitions such as laser-guided 500 lb GBU-54 bombs, 500 lb and 2,000 lb JDAM, and Long Range Anti-Ship Missiles (LRASM). The aim was to detect and engage several small craft using existing weapons and tactics developed from conventional warfare against ground targets; the B-1 is seen as a useful asset for maritime duties such as patrolling shipping lanes.
Beginning in 2014, the B-1 was used against the Islamic State (IS) in the Syrian Civil War. From August 2014 to January 2015, the B-1 accounted for eight percent of USAF sorties during Operation Inherent Resolve. The 9th Bomb Squadron was deployed to Qatar in July 2014 to support missions in Afghanistan, but when the air campaign against IS began on 8 August, the aircraft were employed in Iraq. During the Battle of Kobane in Syria, the squadron's B-1s dropped 660 bombs over 5 months in support of Kurdish forces defending the city. This amounted to one-third of all bombs used during OIR during the period, and they killed some 1,000 ISIL fighters. The 9th Bomb Squadron's B-1s went "Winchester"–dropping all weapons on board–31 times during their deployment. They dropped over 2,000 JDAMs during the six-month rotation. B-1s from the 28th Bomb Wing flew 490 sorties where they dropped 3,800 munitions on 3,700 targets during a six-month deployment. In February 2016, the B-1s were sent back to the U.S. for cockpit upgrades.
Air Force Global Strike Command
As part of a USAF reorganization announced in April 2015, all B-1s were reassigned from Air Combat Command to Global Strike Command (GSC) in October 2015.
On 8 July 2017, the USAF flew two B-1s near the North Korean border in a show of force amid increasing tensions, particularly in response to North Korea's 4 July test of an ICBM capable of reaching Alaska.
On 14 April 2018, B-1s launched 19 JASSM missiles as part of the 2018 bombing of Damascus and Homs in Syria. In August 2019, six B-1Bs met full mission capability; 15 were undergoing depot maintenance and 39 under repair and inspection.
In February 2021, the USAF announced it will retire 17 B-1s, leaving 45 aircraft in service. Four of these will be stored in a condition that will allow their return to service if required.
In March 2021, B-1s deployed to Norway's Ørland Main Air Station for the first time. During the deployment, they conducted bombing training with Norwegian and Swedish ground force Joint terminal attack controllers. One B-1 also conducted a warm-pit refuel at Bodø Main Air Station, marking the first landing inside Norway's Arctic Circle, and integrated with four Swedish Air Force JAS 39 Gripen fighters.
On 2 February 2024, the U.S. deployed two B-1Bs to strike 85 terrorist targets in seven locations in Iraq and Syria as part of a multi-tiered response to the killing of three U.S. troops in a drone attack in Jordan.
Variants
B-1A The B-1A was the original B-1 design with variable engine intakes and Mach 2.2 top speed. Four prototypes were built; no production units were manufactured.
B-1B The B-1B is a revised B-1 design with reduced radar signature and a top speed of Mach 1.25. It is optimized for low-level penetration. A total of 100 B-1Bs were produced.
B-1R The B-1R was a 2004 proposed upgrade of existing B-1B aircraft. The B-1R (R for "regional") would be fitted with advanced radars, air-to-air missiles, and new Pratt & Whitney F119 engines (from the Lockheed Martin F-22 Raptor). This variant would have a top speed of Mach 2.2, but with 20% shorter range. Existing external hardpoints would be modified to allow multiple conventional weapons to be carried, increasing overall loadout. For air-to-air defense, an active electronically scanned array (AESA) radar would be added and some existing hardpoints modified to carry air-to-air missiles.
Operators
The USAF had 45 B-1Bs in service as of July 2024.
Aircraft on display
B-1A
B-1B
Accidents and incidents
The Aviation Safety Network lists 15 accidents from 1984 to 2024 in which 11 B-1s were lost and a total of 12 crew members were killed. An April 2022 maintenance fire damaged another. Among the incidents:
On 29 August 1984, B-1A (serial number 74-0159) crashed because of a loss of control during a test flight over the Mojave Desert. The center of gravity was well aft of the limit due to a fuel transfer error. Two crew members survived and a Rockwell test pilot was killed.
On 28 September 1987, B-1B (84-0052) from the 96th Bomb Wing, 338th Combat Crew Training Squadron, Dyess AFB, crashed near La Junta, Colorado, while flying on a low-level training route. This was the only B-1B crash with six crew members aboard. The two crew members in jump seats and one of the four crew members in ejection seats perished. An impact, thought to be a bird strike on a wing's leading edge, severed fuel and hydraulic lines on one side of the aircraft, while the other side's engines functioned long enough to allow the crew to eject. The B-1B fleet was later modified to protect these supply lines.
In October 1990, while flying a training route in eastern Colorado, B-1B (86-0128) from the 384th Bomb Wing, 28th Bomb Squadron, McConnell AFB, experienced an explosion as the engines reached full power without afterburners. A fire was spotted on the aircraft's left side. The No. 1 engine was shut down and its fire extinguisher was activated. The accident investigation determined that the engine had suffered catastrophic failure, engine blades had cut through the engine mounts, and the engine had detached from the aircraft.
In December 1990, B-1B (83-0071) from the 96th Bomb Wing, 337th Bomb Squadron, Dyess AFB, Texas, experienced a jolt that caused the No. 3 engine to shut down and activate its fire extinguisher. This event, coupled with the October 1990 engine incident, led to a 50-plus-day grounding of B-1Bs that were not on nuclear alert status. The fault was traced to problems in the first-stage fan, and all B-1Bs were equipped with modified engines.
On 30 November 1992, B-1B (86-0106) crashed 300 feet below a 6,500-foot ridgeline during a night sortie 36 miles south-southwest of Van Horn, Texas. All four crew died in the crash.
On 19 September 1997, B-1B (85-0078) crashed near Alzada, Montana, during a daytime training flight. The aircraft struck the ground due to an excessive sink rate when the crew was performing a defensive maneuver. All four crew were killed.
On 12 December 2001, while supporting Operation Enduring Freedom, B-1B (86-0114) flying from the British Air Base, Diego Garcia, crashed into the Indian Ocean about 30 miles north of the island. All four crew members ejected safely and were rescued in good condition after two hours in a warm, calm sea. The pilot, Capt. William Steele, later told reporters the aircraft had not been hit by hostile fire but had suffered "multiple malfunctions" that made it impossible to handle.
In September 2005, a B-1B (85-0066) from Ellsworth AFB landed at Andersen AFB and caught fire while exiting the runway. The cause was determined to be a failure of the right main landing gear brakes, with leaking hydraulic fluid coming into contact with the overheated brakes in a "hot brakes" situation. The resulting fire damaged the right wing, nacelle, aircraft structure, and right main landing gear assembly. The aircraft was repaired at Andersen over a three-year period with many parts obtained from retired B-1Bs in storage at Davis–Monthan AFB, Arizona. It returned to flight in 2008.
On 19 August 2013, B-1B (85-0091) crashed during a routine training mission in Broadus, Montana, 170 miles southeast of Billings, Montana, because of a fuel leak and explosion that damaged a wing during a wing sweep. All four crew ejected safely.
On 20 April 2022, a fire broke out on B-1B (85-0089) of the 7th Bomb Wing, assigned to Dyess Air Force Base, Texas, that was undergoing maintenance on the main ramp. The fire damaged the No. 1 engine, the left nacelle, and the wing of the aircraft. Falling debris injured an airman. The cost of damage was estimated at $14.944 million.
On 4 January 2024, B-1B (85-0085) assigned to Ellsworth Air Force Base in South Dakota crashed while attempting to land at the installation during a training mission. At the time of the crash, visibility was poor and temperatures were below freezing. The four aircrew on board ejected safely. The total loss for the crash was reported as $456,248,485 (2024).
Specifications (B-1B)
Weapons loads
Notable appearances in media
| Technology | Specific aircraft | null |
5047 | https://en.wikipedia.org/wiki/Biotite | Biotite | Biotite is a common group of phyllosilicate minerals within the mica group, with the approximate chemical formula K(Mg,Fe)3AlSi3O10(F,OH)2. It is primarily a solid-solution series between the iron-endmember annite and the magnesium-endmember phlogopite; more aluminous end-members include siderophyllite and eastonite. Biotite was regarded as a mineral species by the International Mineralogical Association until 1998, when its status was changed to a mineral group. The term biotite is still used to describe unanalysed dark micas in the field. Biotite was named by J.F.L. Hausmann in 1847 in honor of the French physicist Jean-Baptiste Biot, who performed early research into the many optical properties of mica.
Members of the biotite group are sheet silicates. Iron, magnesium, aluminium, silicon, oxygen, and hydrogen form sheets that are weakly bound together by potassium ions. The term "iron mica" is sometimes used for iron-rich biotite, but the term also refers to a flaky micaceous form of haematite, and the field term lepidomelane for unanalysed iron-rich biotite avoids this ambiguity. Biotite is also sometimes called "black mica" as opposed to "white mica" (muscovite) – both form in the same rocks, and in some instances side by side.
Properties
Like other mica minerals, biotite has a highly perfect basal cleavage, and consists of flexible sheets, or lamellae, which easily flake off. It has a monoclinic crystal system, with tabular to prismatic crystals with an obvious pinacoid termination. It has four prism faces and two pinacoid faces to form a pseudohexagonal crystal. Although not easily seen because of the cleavage and sheets, fracture is uneven. It appears greenish to brown or black, and even yellow when weathered. It can be transparent to opaque, has a vitreous to pearly luster, and a grey-white streak. When biotite crystals are found in large chunks, they are called "books" because they resemble books with pages of many sheets. The color of biotite is usually black and the mineral has a hardness of 2.5–3 on the Mohs scale of mineral hardness.
Biotite dissolves in both acid and alkaline aqueous solutions, with the highest dissolution rates at low pH. However, biotite dissolution is highly anisotropic with crystal edge surfaces (h k0) reacting 45 to 132 times faster than basal surfaces (001).
Optical properties
In thin section, biotite exhibits moderate relief and a pale to deep greenish brown or brown color, with moderate to strong pleochroism. Biotite has a high birefringence which can be partially masked by its deep intrinsic color. Under cross-polarized light, biotite exhibits extinction approximately parallel to cleavage lines, and can have characteristic bird's eye maple extinction, a mottled appearance caused by the distortion of the mineral's flexible lamellae during grinding of the thin section. Basal sections of biotite in thin section are typically approximately hexagonal in shape and usually appear isotropic under cross-polarized light.
Structure
Like other micas, biotite has a crystal structure described as TOT-c, meaning that it is composed of parallel TOT layers weakly bonded to each other by cations (c). The TOT layers in turn consist of two tetrahedral sheets (T) strongly bonded to the two faces of a single octahedral sheet (O). It is the relatively weak ionic bonding between TOT layers that gives biotite its perfect basal cleavage.
The tetrahedral sheets consist of silica tetrahedra, which are silicon ions surrounded by four oxygen ions. In biotite, one in four silicon ions is replaced by an aluminium ion. The tetrahedra each share three of their four oxygen ions with neighboring tetrahedra to produce a hexagonal sheet. The remaining oxygen ion (the apical oxygen ion) is available to bond with the octahedral sheet.
The octahedral sheet in biotite is a trioctahedral sheet having the structure of a sheet of the mineral brucite, with magnesium or ferrous iron being the usual cations. Apical oxygens take the place of some of the hydroxyl ions that would be present in a brucite sheet, bonding the tetrahedral sheets tightly to the octahedral sheet.
Tetrahedral sheets have a strong negative charge, since their bulk composition is (AlSi3O10)^5−. The trioctahedral sheet has a positive charge, since its bulk composition is (M3(OH)2)^4+ (M represents a divalent ion such as ferrous iron or magnesium). The combined TOT layer has a residual negative charge, since its bulk composition is (M3(AlSi3O10)(OH)2)^−. The remaining negative charge of the TOT layer is neutralized by the interlayer potassium ions.
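As a quick arithmetic check on these charges (a minimal sketch, assuming the idealized sheet compositions quoted above and taking M as a divalent cation such as Mg2+), the formal charges can be totalled directly:

```latex
\[
\begin{aligned}
\text{tetrahedral sheet } \mathrm{AlSi_3O_{10}}:&\quad (+3) + 3(+4) + 10(-2) = -5,\\
\text{octahedral sheet } \mathrm{M_3(OH)_2}:&\quad 3(+2) + 2(-1) = +4,\\
\text{TOT layer}:&\quad (-5) + (+4) = -1,\\
\text{with interlayer } \mathrm{K^+}:&\quad (-1) + (+1) = 0.
\end{aligned}
\]
```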
Because the hexagons in the T and O sheets are slightly different in size, the sheets are slightly distorted when they bond into a TOT layer. This breaks the hexagonal symmetry and reduces it to monoclinic symmetry. However, the original hexagonal symmetry is discernible in the pseudohexagonal character of biotite crystals.
Occurrence
Members of the biotite group are found in a wide variety of igneous and metamorphic rocks. For instance, biotite occurs in the lava of Mount Vesuvius and in the Monzoni intrusive complex of the western Dolomites. Biotite in granite tends to be poorer in magnesium than the biotite found in its volcanic equivalent, rhyolite. Biotite is an essential phenocryst in some varieties of lamprophyre. Biotite is occasionally found in large cleavable crystals, especially in pegmatite veins, as in New England, Virginia and North Carolina USA. Other notable occurrences include Bancroft and Sudbury, Ontario Canada. It is an essential constituent of many metamorphic schists, and it forms in suitable compositions over a wide range of pressure and temperature. It has been estimated that biotite comprises up to 7% of the exposed continental crust.
An igneous rock composed almost entirely of dark mica (biotite or phlogopite) is known as a glimmerite or biotitite.
Biotite may be found in association with its common alteration product chlorite.
The largest documented single crystals of biotite were sheets found in Iveland, Norway.
Uses
Biotite is used extensively to constrain ages of rocks, by either potassium-argon dating or argon–argon dating. Because argon escapes readily from the biotite crystal structure at high temperatures, these methods may provide only minimum ages for many rocks. Biotite is also useful in assessing temperature histories of metamorphic rocks, because the partitioning of iron and magnesium between biotite and garnet is sensitive to temperature.
| Physical sciences | Silicate minerals | Earth science |
5131 | https://en.wikipedia.org/wiki/Chordate | Chordate | A chordate is a deuterostomal bilaterian animal belonging to the phylum Chordata. All chordates possess, at some point during their larval or adult stages, five distinctive physical characteristics (synapomorphies) that distinguish them from other taxa. These five synapomorphies are a notochord, a hollow dorsal nerve cord, an endostyle or thyroid, pharyngeal slits, and a post-anal tail.
In addition to the morphological characteristics used to define chordates, analysis of genome sequences has identified two conserved signature indels (CSIs) in their proteins: cyclophilin-like protein and inner mitochondrial membrane protease ATP23, which are exclusively shared by all vertebrates, tunicates and cephalochordates. These CSIs provide molecular means to reliably distinguish chordates from all other animals.
Chordates are divided into three subphyla: Vertebrata (fish, amphibians, reptiles, birds and mammals), whose notochords are replaced by a cartilaginous/bony axial endoskeleton (spine) and are cladistically and phylogenetically a subgroup of the clade Craniata (i.e. chordates with a skull); Tunicata or Urochordata (sea squirts, salps, and larvaceans), which only retain the synapomorphies during their larval stage; and Cephalochordata (lancelets), which resemble jawless fish but have no gills or a distinct head. The vertebrates and tunicates compose the clade Olfactores, which is sister to Cephalochordata (see diagram under Phylogeny). Extinct taxa such as the conodonts are chordates, but their internal placement is less certain. Hemichordata (which includes the acorn worms) was previously considered a fourth chordate subphylum, but it is now treated as a separate phylum, thought to be closer to the echinoderms; together the hemichordates and echinoderms form the clade Ambulacraria, the sister phylum of the chordates. Chordata, Ambulacraria, and possibly Xenacoelomorpha are believed to form the superphylum Deuterostomia, although this has recently been called into doubt.
Chordata is the third-largest phylum of the animal kingdom (behind only the protostomal phyla Arthropoda and Mollusca) and is also one of the most ancient taxa. Chordate fossils have been found from as early as the Cambrian explosion over 539 million years ago. Of the more than 81,000 living species of chordates, about half are ray-finned fishes (class Actinopterygii) and the vast majority of the rest are tetrapods, a terrestrial clade of lobe-finned fishes (Sarcopterygii) that evolved air-breathing using lungs.
Etymology
The name "chordate" comes from the first of these synapomorphies, the notochord, which plays a significant role in chordate body plan structuring and movements. Chordates are also bilaterally symmetric, have a coelom, possess a closed circulatory system, and exhibit metameric segmentation. Although the name Chordata is attributed to William Bateson (1885), it was already in prevalent use by 1880. Ernst Haeckel described a taxon comprising tunicates, cephalochordates, and vertebrates in 1866. Though he used the German vernacular form, it is allowed under the ICZN code because of its subsequent latinization.
Anatomy
Chordates form a phylum of animals that are defined by having at some stage in their lives all of the following anatomical features:
A notochord, a stiff but elastic rod of glycoprotein wrapped in two collagen helices, which extends along the central axis of the body. Among members of the subphylum Vertebrata (vertebrates), the notochord gets replaced by hyaline cartilage or osseous tissue of the spine, and notochord remnants develop into the intervertebral discs, which allow adjacent spinal vertebrae to bend and twist relative to each other. In wholly aquatic species, this helps the animal swim efficiently by flexing its tail side-to-side.
A hollow dorsal nerve cord, also known as the neural tube, which develops into the spinal cord, the main communications trunk of the nervous system. In vertebrates, the rostral end of the neural tube enlarges into several vesicles during embryonic development, which give rise to the brain.
Pharyngeal slits. The pharynx is the part of the throat immediately behind the mouth. In fish, the slits are modified to form gills, but in some other chordates they are part of a filter-feeding system that extracts food particles from ingested water. In tetrapods, they are only present during embryonic stages of the development.
A post-anal tail. A muscular tail that extends backwards beyond the location of the anus. In some chordates such as hominids, this is only present in the embryonic stage.
An endostyle. This is a groove in the ventral wall of the pharynx. In filter-feeding species it produces mucus to gather food particles, which helps in transporting food to the esophagus. It also stores iodine, and may be a precursor of the vertebrate thyroid gland.
There are soft constraints that separate chordates from other biological lineages, but are not part of the formal definition:
All chordates are deuterostomes. This means that, during embryonic development, the anus forms before the mouth does.
All chordates are based on a bilateral body plan.
All chordates are coelomates, and have a fluid-filled body cavity (coelom) with a complete serosal lining derived from mesoderm called mesothelium (see Brusca and Brusca).
Classification
The following schema is from the 2015 edition of Vertebrate Palaeontology. The invertebrate chordate classes are from Fishes of the World. While it is structured so as to reflect evolutionary relationships (similar to a cladogram), it also retains the traditional ranks used in Linnaean taxonomy.
Phylum Chordata
Subphylum Cephalochordata (Acraniata) – (lancelets; 30 species)
Class Leptocardii (lancelets)
Clade Olfactores
Subphylum Tunicata (Urochordata) – (tunicates; 3,000 species)
Class Ascidiacea (sea squirts)
Class Thaliacea (salps, doliolids and pyrosomes)
Class Appendicularia (larvaceans)
Subphylum Vertebrata (Craniata) (vertebrates – animals with backbones; 66,100+ species)
Superclass 'Agnatha' paraphyletic (jawless vertebrates; 100+ species)
Class Cyclostomata
Infraclass Myxinoidea or Myxini (hagfish; 65 species)
Infraclass Petromyzontida or Hyperoartia (lampreys)
Class †Conodonta
Class †Myllokunmingiida
Class †Pteraspidomorphi
Class †Thelodonti
Class †Anaspida
Class †Cephalaspidomorphi
Infraphylum Gnathostomata (jawed vertebrates)
Class †Placodermi (Paleozoic armoured forms; paraphyletic in relation to all other gnathostomes)
Class Chondrichthyes (cartilaginous fish; 900+ species)
Class †Acanthodii (Paleozoic "spiny sharks"; paraphyletic in relation to Chondrichthyes)
Class Osteichthyes (bony fish; 30,000+ species)
Subclass Actinopterygii (ray-finned fish; about 30,000 species)
Subclass Sarcopterygii (lobe-finned fish: 8 species; paraphyletic when tetrapods are excluded)
Superclass Tetrapoda (four-limbed vertebrates; 35,100+ species) (The classification below follows Benton 2004, and uses a synthesis of rank-based Linnaean taxonomy and also reflects evolutionary relationships. Benton included the superclass Tetrapoda in the subclass Sarcopterygii in order to reflect the direct descent of tetrapods from lobe-finned fish, despite the former being assigned a higher taxonomic rank.)
Class Amphibia (amphibians; 8,100+ species)
Class Sauropsida (reptiles (including birds); 21,300+ species – 10,000+ species of birds and 11,300+ species of reptiles)
Class Synapsida (mammals; 5,700+ species)
Subphyla
Cephalochordata: Lancelets
Cephalochordates, one of the three subdivisions of chordates, are small, "vaguely fish-shaped" animals that lack brains, clearly defined heads and specialized sense organs. These burrowing filter-feeders compose the earliest-branching chordate subphylum.
Tunicata (Urochordata)
The tunicates have three distinct adult shapes. Each is a member of one of three monophyletic clades. All tunicate larvae have the standard chordate features, including long, tadpole-like tails. Their larvae also have rudimentary brains, light sensors and tilt sensors.
The smallest of the three groups of tunicates is the Appendicularia. They retain tadpole-like shapes and active swimming all their lives, and were for a long time regarded as larvae of the other two groups.
The other two groups, the sea squirts and the salps, metamorphose into adult forms which lose the notochord, nerve cord, and post-anal tail. Both are soft-bodied filter feeders with multiple gill slits. They feed on plankton which they collect in their mucus.
Sea squirts are sessile and consist mainly of water pumps and filter-feeding apparatus. Most attach firmly to the sea floor, where they remain in one place for life, feeding on plankton.
The salps float in mid-water, feeding on plankton, and have a two-generation cycle in which one generation is solitary and the next forms chain-like colonies.
The etymology of the term Urochordata (Balfour 1881) is from the ancient Greek οὐρά (oura, "tail") + Latin chorda ("cord"), because the notochord is only found in the tail. The term Tunicata (Lamarck 1816) is recognised as having precedence and is now more commonly used.
Craniata (Vertebrata)
Craniates all have distinct skulls. They include the hagfish, which have no vertebrae. Michael J. Benton commented that "craniates are characterized by their heads, just as chordates, or possibly all deuterostomes, are by their tails".
Most craniates are vertebrates, in which the notochord is replaced by the vertebral column. It consists of a series of bony or cartilaginous cylindrical vertebrae, generally with neural arches that protect the spinal cord, and with projections that link the vertebrae. Hagfishes have incomplete braincases and no vertebrae, and are therefore not regarded as vertebrates, but they are members of the craniates, the group within which vertebrates are thought to have evolved. However the cladistic exclusion of hagfish from the vertebrates is controversial, as they may instead be degenerate vertebrates who have secondarily lost their vertebral columns.
Before molecular phylogenetics, the position of lampreys was ambiguous. They have complete braincases and rudimentary vertebrae, and therefore may be regarded as vertebrates and true fish. However, molecular phylogenetics, which uses DNA to classify organisms, has produced both results that group them with vertebrates and others that group them with hagfish. If lampreys are more closely related to the hagfish than the other vertebrates, this would suggest that they form a clade, which has been named the Cyclostomata.
Phylogeny
Overview
There is still much ongoing differential (DNA sequence based) comparison research that is trying to separate out the simplest forms of chordates. As some lineages of the 90% of species that lack a backbone or notochord might have lost these structures over time, this complicates the classification of chordates. Some chordate lineages may only be found by DNA analysis, when there is no physical trace of any chordate-like structures.
Attempts to work out the evolutionary relationships of the chordates have produced several hypotheses. The current consensus is that chordates are monophyletic, meaning that the Chordata include all and only the descendants of a single common ancestor, which is itself a chordate, and that the vertebrates' nearest relatives are tunicates. Recent identification of two conserved signature indels (CSIs) in the proteins cyclophilin-like protein and mitochondrial inner membrane protease ATP23, which are exclusively shared by all vertebrates, tunicates and cephalochordates also provide strong evidence of the monophyly of Chordata.
All of the earliest chordate fossils have been found in the Early Cambrian Chengjiang fauna, and include two species that are regarded as fish, which implies that they are vertebrates. Because the fossil record of early chordates is poor, only molecular phylogenetics offers a reasonable prospect of dating their emergence. However, the use of molecular phylogenetics for dating evolutionary transitions is controversial. It has proven difficult to produce a detailed classification within the living chordates. Attempts to produce evolutionary "family trees" show that many of the traditional classes are paraphyletic.
Diagram of the evolutionary relationships of chordates
While this has been well known since the 19th century, an insistence on only monophyletic taxa has resulted in vertebrate classification being in a state of flux.
The majority of animals more complex than jellyfish and other cnidarians are split into two groups, the protostomes and deuterostomes, the latter of which contains chordates. It seems very likely that Kimberella was a member of the protostomes. If so, the protostome and deuterostome lineages must have split some time before Kimberella appeared, and hence well before the start of the Cambrian. Three enigmatic species that are possible very early tunicates, and therefore deuterostomes, were also found from the Ediacaran period – Ausia fenestrata from the Nama Group of Namibia, the sac-like Yarnemia ascidiformis, and one from a second new Ausia-like genus from the Onega Peninsula of northern Russia, Burykhia hunti. Results of a new study have shown a possible affinity of these Ediacaran organisms to the ascidians. Ausia and Burykhia lived in shallow coastal waters slightly more than 555 to 548 million years ago, and are believed to be the oldest evidence of the chordate lineage of metazoans. The Russian Precambrian fossil Yarnemia is identified as a tunicate only tentatively, because its fossils are nowhere near as well-preserved as those of Ausia and Burykhia, so this identification has been questioned.
Fossils of one major deuterostome group, the echinoderms (whose modern members include starfish, sea urchins and crinoids), are quite common from the start of the Cambrian. The Mid Cambrian fossil Rhabdotubus johanssoni has been interpreted as a pterobranch hemichordate. Opinions differ about whether the Chengjiang fauna fossil Yunnanozoon, from the earlier Cambrian, was a hemichordate or chordate. Another fossil, Haikouella lanceolata, also from the Chengjiang fauna, is interpreted as a chordate and possibly a craniate, as it shows signs of a heart, arteries, gill filaments, a tail, a nerve cord with a brain at the front end, and possibly eyes, although it also had short tentacles round its mouth. Haikouichthys and Myllokunmingia, also from the Chengjiang fauna, are regarded as fish. Pikaia, discovered much earlier (1911) but from the Mid Cambrian Burgess Shale (505 Ma), is also regarded as a primitive chordate. On the other hand, fossils of early chordates are very rare, since invertebrate chordates have no bones or teeth, and only one has been reported for the rest of the Cambrian. The best known and earliest unequivocally identified tunicate is Shankouclava shankouense from the Lower Cambrian Maotianshan Shale at Shankou village, Anning, near Kunming (South China).
The evolutionary relationships between the chordate groups and between chordates as a whole and their closest deuterostome relatives have been debated since 1890. Studies based on anatomical, embryological, and paleontological data have produced different "family trees". Some of these closely linked chordates and hemichordates, but that idea is now rejected. Combining such analyses with data from a small set of ribosomal RNA genes eliminated some older ideas, but opened up the possibility that tunicates (urochordates) are "basal deuterostomes", surviving members of the group from which echinoderms, hemichordates and chordates evolved. Some researchers believe that, within the chordates, craniates are most closely related to cephalochordates, but there are also reasons for regarding tunicates (urochordates) as craniates' closest relatives.
Since early chordates have left a poor fossil record, attempts have been made to calculate the key dates in their evolution by molecular phylogenetics techniques, by analyzing biochemical differences, mainly in RNA. One such study produced estimated dates for the origin of deuterostomes and of the earliest chordates. However, molecular estimates of dates often disagree with each other and with the fossil record, and their assumption that the molecular clock runs at a known constant rate has been challenged.
Traditionally, Cephalochordata and Craniata were grouped into the proposed clade "Euchordata", which would have been the sister group to Tunicata/Urochordata. More recently, Cephalochordata has been thought of as a sister group to the "Olfactores", which includes the craniates and tunicates. The matter is not yet settled.
A specific relationship between vertebrates and tunicates is also strongly supported by two CSIs found in the proteins predicted exosome complex RRP44 and serine palmitoyltransferase, that are exclusively shared by species from these two subphyla but not cephalochordates, indicating vertebrates are more closely related to tunicates than cephalochordates.
Cladogram
Below is a phylogenetic tree of the phylum. Lines of the cladogram show probable evolutionary relationships between both extinct taxa, which are denoted with a dagger (†), and extant taxa.
Closest nonchordate relatives
The closest relatives of the chordates are believed to be the hemichordates and Echinodermata, which together form the Ambulacraria.
The Chordata and Ambulacraria together form the superphylum Deuterostomia.
Hemichordates
Hemichordates ("half chordates") have some features similar to those of chordates: branchial openings that open into the pharynx and look rather like gill slits; stomochords, similar in composition to notochords, but running in a circle round the "collar", which is ahead of the mouth; and a dorsal nerve cord—but also a smaller ventral nerve cord.
There are two living groups of hemichordates. The solitary enteropneusts, commonly known as "acorn worms", have long proboscises and worm-like bodies with up to 200 branchial slits, and burrow through seafloor sediments. Pterobranchs are colonial animals, very small individually, whose dwellings are interconnected. Each filter feeds by means of a pair of branched tentacles, and has a short, shield-shaped proboscis. The extinct graptolites, colonial animals whose fossils look like tiny hacksaw blades, lived in tubes similar to those of pterobranchs.
Echinoderms
Echinoderms differ from chordates and their other relatives in three conspicuous ways: they possess bilateral symmetry only as larvae – in adulthood they have radial symmetry, meaning that their body pattern is shaped like a wheel; they have tube feet; and their bodies are supported by dermal skeletons made of calcite, a material not used by chordates. Their hard, calcified shells keep their bodies well protected from the environment, and these skeletons enclose their bodies, but are also covered by thin skins. The feet are powered by another unique feature of echinoderms, a water vascular system of canals that also functions as a "lung" and is surrounded by muscles that act as pumps. Crinoids are typically sessile and look rather like flowers (hence the common name "sea lilies"), and use their feather-like arms to filter food particles out of the water; most live anchored to rocks, but a few species can move very slowly. Other echinoderms are mobile and take a variety of body shapes, for example starfish and brittle stars, sea urchins and sea cucumbers.
| Biology and health sciences | General classification | null |
5170 | https://en.wikipedia.org/wiki/Combinatorics | Combinatorics | Combinatorics is an area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
Definition
The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with:
the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
the existence of such structures that satisfy certain given criteria,
the construction of these structures, perhaps in many ways, and
optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.
History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 = 63 possibilities. Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. Earlier, in the Ostomachion, Archimedes (3rd century BCE) may have considered the number of configurations of a tiling puzzle, while combinatorial interests possibly were present in lost works by Apollonius.
In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra () provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra () established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.
The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem.
In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field.
Approaches and subfields of combinatorics
Enumerative combinatorics
Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.
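As a small, self-contained illustration of the sort of counts involved (a sketch of the standard formulas only; the choice of n = 5 and k = 3 is arbitrary):

```python
from math import comb, perm, factorial

n, k = 5, 3  # illustrative sizes

# Ordered selections of k items from n distinct items (k-permutations)
print(perm(n, k))          # 60  = n! / (n - k)!

# Unordered selections of k items from n distinct items (combinations)
print(comb(n, k))          # 10  = n! / (k! (n - k)!)

# Unordered selections of k items with repetition allowed (multisets)
print(comb(n + k - 1, k))  # 35  = C(n + k - 1, k)

# Arrangements of all n distinct items in a row
print(factorial(n))        # 120 = n!
```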
Analytic combinatorics
Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Partition theory
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
Graph theory
Graphs are fundamental objects in combinatorics. Considerations of graph theory range from enumeration (e.g., the number of graphs on n vertices with k edges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph G and two numbers x and y, does the Tutte polynomial TG(x,y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems.
Design theory
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as in Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system, which play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
Finite geometry
Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry).
Order theory
Order theory is the study of partially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras.
Matroid theory
Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics.
Extremal combinatorics
Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of set systems; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory.
The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is a complete bipartite graph Kn,n. Often it is too hard even to find the extremal answer f(n) exactly and one can only give an asymptotic estimate.
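For reference, the two classical results behind these examples can be stated as follows (standard statements of Mantel's theorem and Sperner's theorem, quoted here only to make the bounds explicit):

```latex
% Mantel's theorem: a triangle-free graph G on N vertices has at most
% floor(N^2/4) edges; for N = 2n the bound n^2 is attained by K_{n,n}.
\[
e(G) \;\le\; \left\lfloor \tfrac{N^{2}}{4} \right\rfloor .
\]

% Sperner's theorem: a family F of subsets of an n-element set in which
% no member contains another has at most \binom{n}{\lfloor n/2 \rfloor} members.
\[
|\mathcal{F}| \;\le\; \binom{n}{\lfloor n/2 \rfloor} .
\]
```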
Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle.
Probabilistic combinatorics
In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.
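To make the triangle example concrete, in the Erdős–Rényi random graph G(n, p), where each possible edge appears independently with probability p, linearity of expectation gives the average number of triangles directly (a standard computation, included here as a worked example):

```latex
\[
\mathbb{E}\bigl[\#\text{triangles in } G(n,p)\bigr] \;=\; \binom{n}{3}\,p^{3},
\]
```

since each of the C(n, 3) triples of vertices spans a triangle exactly when all three of its edges are present, which happens with probability p^3.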
Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics.
Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group and representation theory, lattice theory and commutative algebra are common.
Combinatorics on words
Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics. While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field.
Geometric combinatorics
Geometric combinatorics is related to convex and discrete geometry. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is a historical name for discrete geometry.
It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics.
Topological combinatorics
Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology which is an older name for algebraic topology.
Arithmetic combinatorics
Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems.
Infinitary combinatorics
Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals.
Gian-Carlo Rota used the name continuous combinatorics to describe geometric probability, since there are many analogies between counting and measure.
Related fields
Combinatorial optimization
Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory.
Coding theory
Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory.
Discrete and computational geometry
Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.
Combinatorics and dynamical systems
Combinatorial aspects of dynamical systems is another emerging field. Here dynamical systems can be defined on combinatorial objects. See for example
graph dynamical system.
Combinatorics and physics
There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.
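One concrete instance of the Potts–Tutte connection is the standard specialization expressing the chromatic polynomial of a graph G in terms of its Tutte polynomial (quoted in its usual textbook form; the zero-temperature antiferromagnetic k-state Potts partition function then counts the proper k-colourings, P(G, k)):

```latex
\[
P(G, k) \;=\; (-1)^{\,r(G)}\, k^{\,c(G)}\, T_{G}(1 - k,\; 0),
\qquad r(G) = |V(G)| - c(G),
\]
```

where c(G) denotes the number of connected components of G.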
| Mathematics | Discrete mathematics | null |
5176 | https://en.wikipedia.org/wiki/Calculus | Calculus | Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable.
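In its most familiar form, the fundamental theorem of calculus mentioned above can be written as follows (a standard statement for a continuous function f on an interval [a, b], included here only for concreteness):

```latex
\[
\frac{d}{dx}\int_{a}^{x} f(t)\,dt \;=\; f(x),
\qquad\text{and}\qquad
\int_{a}^{b} F'(x)\,dx \;=\; F(b) - F(a).
\]
```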
Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus is widely used in science, engineering, biology, and even has applications in social science and other branches of math.
Etymology
In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis.
In Latin, the word calculus means "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin.
In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
History
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
Ancient precursors
Egypt
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (), but the formulae are simple instructions, with no indication as to how they were obtained.
Greece
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus () developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
During the Hellenistic period, this method was further developed by Archimedes (BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus. In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines.
China
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval
Middle East
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (AD) derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
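The formula for the sum of fourth powers is, in modern notation (a standard identity, stated here only to make the result explicit):

```latex
\[
\sum_{k=1}^{n} k^{4}
  \;=\; \frac{n(n+1)(2n+1)\bigl(3n^{2}+3n-1\bigr)}{30}.
\]
```

Together with the formula for sums of squares, this is what allows the paraboloid volume to be computed by what is, in effect, an integration of a fourth-power term.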
India
Bhāskara II () was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if x ≈ y then sin(y) − sin(x) ≈ (y − x)cos(y). This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus, but according to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".
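A brief modern gloss on why that relation amounts to differentiating the sine (a reader's aid, not Bhāskara's own argument) uses the sum-to-product identity together with the approximation sin(h) ≈ h for small h:

```latex
\[
\sin y - \sin x
  \;=\; 2\cos\!\Bigl(\tfrac{x+y}{2}\Bigr)\sin\!\Bigl(\tfrac{y-x}{2}\Bigr)
  \;\approx\; (y - x)\cos y
  \qquad\text{as } x \to y,
\]
```

so the difference quotient (sin y − sin x)/(y − x) tends to cos y, which is the statement that the derivative of sine is cosine.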
Modern
Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.
A significant work was a treatise, building on Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but that treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670.
The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century. The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as "the ghosts of departed quantities" in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
Principles
Limits and infinitesimals
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dx and dy were taken to be infinitesimal, and the derivative dy/dx was their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
Differential calculus
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.
In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x². The "derivative" now takes the function f(x), defined by the expression "x²", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as will turn out.
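One way to make this concrete numerically is a minimal Python sketch (the names f and g, the sample points, and the step size h are arbitrary choices for illustration): it approximates the derivative of the squaring function with a small difference quotient and compares the result with the doubling function.

```python
# Minimal numerical sketch: the derivative of the squaring function
# behaves like the doubling function.  The step size h is an arbitrary choice.

def f(x):
    return x * x      # the "squaring function"

def g(x):
    return 2 * x      # the "doubling function"

def approx_derivative(func, x, h=1e-6):
    # central difference quotient: (func(x + h) - func(x - h)) / (2h)
    return (func(x + h) - func(x - h)) / (2 * h)

for x in [2.0, 3.0, 4.0]:
    print(x, approx_derivative(f, x), g(x))
# Each pair of printed values agrees closely, e.g. at 3.0 both are about 6.0.
```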
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f is the squaring function, then f′ is its derivative (the doubling function from above).
If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
m = Δy/Δx = (change in y)/(change in x).
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is
m = (f(a + h) − f(a)) / ((a + h) − a) = (f(a + h) − f(a)) / h.
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:
f′(a) = lim (h → 0) of (f(a + h) − f(a)) / h.
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function. Then
f′(3) = lim (h → 0) of ((3 + h)² − 3²)/h = lim (h → 0) of (9 + 6h + h² − 9)/h = lim (h → 0) of (6 + h) = 6.
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
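The limit process just described can also be sketched numerically. In the following Python fragment (the list of step sizes is an arbitrary choice), the difference quotient of the squaring function at a = 3 visibly tends toward 6 as h shrinks:

```python
# Difference quotient of f(x) = x**2 at a = 3 for shrinking step sizes h.

def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    print(h, difference_quotient(f, 3.0, h))
# Printed values run roughly 7.0, 6.1, 6.01, 6.001, 6.0001 -> tending to 6.
```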
Leibniz notation
A common notation, introduced by Leibniz, for the derivative in the example above is
(dy/dx)|x=3 = 6.
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
d/dx (x²) = 2x.
In this usage, the dx in the denominator is read as "with respect to x". Another example of correct notation could be:
g(t) = t² + 2t + 4
dg/dt = 2t + 2.
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
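For readers who prefer a computational view of the d/dx operator, symbolic differentiation reproduces the two derivatives quoted above; this sketch assumes the third-party SymPy library is installed and is not part of the article's own exposition.

```python
# Symbolic illustration of the d/dx operator using SymPy (pip install sympy).
import sympy as sp

x = sp.Symbol('x')
t = sp.Symbol('t')

print(sp.diff(x**2, x))            # prints: 2*x
print(sp.diff(t**2 + 2*t + 4, t))  # prints: 2*t + 2
```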
Integral calculus
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.
A motivating example is the distance traveled in a given time. If the speed is constant, only multiplication is needed:
Distance = Speed × Time.
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If v(t) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between v(t) and the t-axis, between t = a and t = b.
To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.
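A minimal Python sketch of this Riemann-sum idea, using an assumed speed function v(t) = 3t² on the interval from 0 to 2 (for which the exact distance is 8), might look as follows; the segment counts are arbitrary.

```python
# Left-endpoint Riemann sums approximating the distance traveled for an
# assumed speed function v(t) = 3*t**2 between t = 0 and t = 2 (exact value 8).

def riemann_sum(v, a, b, n):
    dt = (b - a) / n                                   # width of each segment
    return sum(v(a + i * dt) * dt for i in range(n))   # speed * time, summed

v = lambda t: 3 * t ** 2
for n in [10, 100, 1000, 10000]:
    print(n, riemann_sum(v, 0.0, 2.0, n))
# The approximations approach the exact value 8 as n grows.
```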
The symbol of integration is ∫, an elongated S chosen to suggest summation. The definite integral is written as:
∫ from a to b of f(x) dx
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width becomes the infinitesimally small dx.
The indefinite integral, or antiderivative, is written:
∫ f(x) dx.
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:
∫ 2x dx = x² + C.
The unspecified constant present in the indefinite integral or antiderivative is known as the constant of integration.
Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
∫ from a to b of f(x) dx = F(b) − F(a).
Furthermore, for every x in the interval (a, b),
d/dx ∫ from a to x of f(t) dt = f(x).
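As an illustrative numerical check of the theorem, with the assumed example f(x) = 2x and antiderivative F(x) = x² on the interval [1, 4], a short Python sketch can compare a Riemann-sum estimate of the definite integral with F(b) − F(a):

```python
# Comparing a midpoint Riemann sum of f(x) = 2*x over [1, 4]
# with the antiderivative difference F(4) - F(1) = 16 - 1 = 15.

def definite_integral(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))  # midpoint rule

f = lambda x: 2 * x
F = lambda x: x ** 2

print(definite_integral(f, 1.0, 4.0))   # approximately 15.0
print(F(4.0) - F(1.0))                  # exactly 15.0
```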
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.
Applications
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
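As a sketch of one of the root-finding methods named above, the following Python fragment applies Newton's method to the assumed equation x² − 2 = 0 (so the root is the square root of 2); the starting guess and the number of steps are arbitrary choices.

```python
# Newton's method: repeatedly follow the tangent line to its x-intercept.

def newton(f, df, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f = lambda x: x ** 2 - 2
df = lambda x: 2 * x            # derivative of f
print(newton(f, df, 1.0))       # about 1.4142135623..., close to sqrt(2)
```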
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
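A minimal sketch of that last point, assuming constant free-fall acceleration and arbitrary initial values, integrates the acceleration twice with simple Euler steps to recover the path of a dropped object:

```python
# Euler-step integration of a path from a known (assumed constant) acceleration.

a = -9.8             # acceleration in m/s^2 (assumed free fall)
v, x = 0.0, 100.0    # assumed initial velocity (m/s) and height (m)
dt = 0.01            # time step in seconds

t = 0.0
while x > 0.0:
    v += a * dt      # dv = a dt
    x += v * dt      # dx = v dt
    t += dt
print(round(t, 2), "s to reach the ground (analytic answer about 4.52 s)")
```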
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and in studying radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.
| Mathematics | Analysis | null |
5180 | https://en.wikipedia.org/wiki/Chemistry | Chemistry | Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds.
In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).
Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry.
Etymology
The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry.
The modern word alchemy in turn is derived from the Arabic word al-kīmīā (). This may have Egyptian origins since al-kīmīā is derived from the Ancient Greek khēmia, which is in turn derived from the word Kemet, which is the ancient name of Egypt in the Egyptian language. Alternately, al-kīmīā may derive from the Ancient Greek khēmeia, 'cast together'.
Modern principles
The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory.
The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it.
A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws.
Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are:
Matter
In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well, although not all particles have rest mass (the photon, for example, has none). Matter can be a pure chemical substance or a mixture of substances.
Atom
The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus.
The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).
Element
A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13.
The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends.
Compound
A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number.
Molecule
A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.
Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO) can be stable.
The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals.
However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite.
One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal etc.), the structure of polyatomic molecules, which are constituted of more than six atoms (of several elements), can be crucial for their chemical nature.
Substance and mixture
A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys.
Mole and amount of substance
The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076 × 10^23 particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3.
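A back-of-the-envelope Python sketch of these definitions, with assumed sample values (58.44 g of sodium chloride, taking its molar mass as roughly 58.44 g/mol, dissolved in 0.5 dm3 of solution):

```python
# Converting an assumed mass of NaCl to moles, particles, and molar concentration.

AVOGADRO = 6.02214076e23    # particles per mole

mass_g = 58.44              # assumed sample mass in grams
molar_mass = 58.44          # approximate molar mass of NaCl in g/mol
volume_dm3 = 0.5            # assumed solution volume in dm^3 (litres)

moles = mass_g / molar_mass
print(moles, "mol, i.e. about", moles * AVOGADRO, "formula units")
print(moles / volume_dm3, "mol/dm^3 molar concentration")
```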
Phase
In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature.
Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions.
Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions.
The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water).
Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology.
Bonding
Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom.
The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of Van der Waals force. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition.
An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed.
In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell.
Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used.
Energy
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants.
A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings.
Chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT) – that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation.
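As a rough numerical illustration, the following Python sketch evaluates the molar form of this factor, exp(−E/(RT)); the use of the gas constant R, the assumed activation energy of 50 kJ/mol, and the two temperatures are all choices made for this sketch only.

```python
# Boltzmann population factor exp(-E / (R*T)) for an assumed activation energy.
import math

R = 8.314          # gas constant in J/(mol*K)
E = 50_000.0       # assumed activation energy in J/mol

for T in (298.0, 348.0):
    print(T, "K ->", math.exp(-E / (R * T)))
# The factor, and with it the reaction rate, grows sharply with temperature.
```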
The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound.
A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is non-positive, ΔG ≤ 0; if it is equal to zero, the chemical reaction is said to be at equilibrium.
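A minimal sketch of this feasibility criterion, using the standard thermodynamic relation ΔG = ΔH − TΔS (which the passage above does not state explicitly) together with assumed values for the enthalpy and entropy changes:

```python
# Feasibility check with the standard relation dG = dH - T*dS (assumed values).

dH = -40_000.0     # assumed enthalpy change in J/mol
dS = -50.0         # assumed entropy change in J/(mol*K)

for T in (300.0, 900.0):
    dG = dH - T * dS
    verdict = "feasible" if dG < 0 else "not feasible"
    print(T, "K: dG =", dG, "J/mol ->", verdict)
# At 300 K, dG = -25000 J/mol (feasible); at 900 K, dG = +5000 J/mol (not).
```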
There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions.
The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid, as is the case with water (H2O), a liquid at room temperature because its molecules are bound by hydrogen bonds. Hydrogen sulfide (H2S), by contrast, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions.
The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy.
The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra.
The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances.
Reaction
When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware.
Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions.
A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons.
The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules often come in handy while proposing a mechanism for a chemical reaction.
According to the IUPAC Gold Book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events').
Ions and salts
An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO43−).
Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature.
Acidity and basicity
A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion.
A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept.
Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values.
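A one-line Python sketch of the negative-logarithmic definition of pH, with an assumed hydronium ion concentration of 0.001 mol/dm3:

```python
# pH = -log10 of the hydronium ion concentration (assumed value below).
import math

h3o = 1.0e-3                # assumed hydronium concentration in mol/dm^3
print(-math.log10(h3o))     # about 3.0, i.e. an acidic solution (pH < 7)
```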
Redox
Redox (-) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers.
A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number.
Equilibrium
Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase.
A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time.
Chemical laws
Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are:
Avogadro's law
Beer–Lambert law
Boyle's law (1662, relating pressure and volume)
Charles's law (1787, relating volume and temperature)
Fick's laws of diffusion
Gay-Lussac's law (1809, relating pressure and temperature)
Le Chatelier's principle
Henry's law
Hess's law
Law of conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Law of conservation of mass: mass continues to be conserved in isolated systems, even in modern physics. However, special relativity shows that due to mass–energy equivalence, whenever non-material "energy" (heat, light, kinetic energy) is removed from a non-isolated system, some mass will be lost with it. High energy losses result in loss of weighable amounts of mass, an important topic in nuclear chemistry.
Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
Law of multiple proportions
Raoult's law
History
The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.
Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661).
While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.
Definition
The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection.
The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances—a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes.
Background
Early civilizations, such as the Egyptians, Babylonians, and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory.
A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments.
An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be".
In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations.
The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna disputed the theories of alchemy, particularly the theory of the transmutation of metals.
Improvements in the refining of ores and their extraction to smelt metals were a widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing highly developed and complex processes of mining metal ores and metal extraction, was the pinnacle of metallurgy during that time. His approach removed all mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline.
Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford, Robert Boyle, Robert Hooke and John Mayow began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment.
In the following decades, many important discoveries were made, such as the finding that 'air' is composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air', in 1754; Henry Cavendish discovered hydrogen and elucidated its properties; and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than anyone else to establish the new science on a proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature that is used to this day.
English scientist John Dalton proposed the modern theory of atoms; that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights.
The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current.
The British chemist William Prout first proposed ordering all the elements by their atomic weight, since all atoms seemed to have a weight that was an exact multiple of the atomic weight of hydrogen. J. A. R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table.
Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th-century advances were an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s).
At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood, thanks to a series of remarkable discoveries that succeeded in probing the internal structure of atoms. In 1897, J. J. Thomson of the University of Cambridge discovered the electron, and soon afterwards the French scientist Henri Becquerel and the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments, Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity, and successfully transmuted the first element by bombarding nitrogen with alpha particles.
His work on atomic structure was improved upon by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging field of nuclear chemistry and discover nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis.
The year 2011 was declared by the United Nations the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry and of the United Nations Educational, Scientific and Cultural Organization, and it involved chemical societies, academics, and institutions worldwide, relying on individual initiatives to organize local and regional activities.
Practice
In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems.
Subdisciplines
Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry.
Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics.
Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry.
Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a relatively new area of study in graduate programs, and it integrates elements from all classical areas of chemistry, such as organic chemistry, inorganic chemistry, and crystallography, with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases.
Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system.
Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering which explores the topic of using nuclear power sources for generating energy.
Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound.
Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap.
Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics.
Other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others.
Interdisciplinary
Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others.
Industry
The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%.
Professional societies
American Chemical Society
American Society for Neurochemistry
Chemical Institute of Canada
Chemical Society of Peru
International Union of Pure and Applied Chemistry
Royal Australian Chemical Institute
Royal Netherlands Chemical Society
Royal Society of Chemistry
Society of Chemical Industry
World Association of Theoretical and Computational Chemists
| Physical sciences | Science and medicine | null |
5184 | https://en.wikipedia.org/wiki/Cytoplasm | Cytoplasm | The cytoplasm describes all the material within a eukaryotic or prokaryotic cell, enclosed by the cell membrane, including the organelles and excluding the nucleus in eukaryotic cells. The material inside the nucleus of a eukaryotic cell and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol (a gel-like substance), the cell's internal sub-structures, and various cytoplasmic inclusions. In eukaryotes, the cytoplasm also includes membrane-bound organelles other than the nucleus. The cytoplasm is about 80% water and is usually colorless.
The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the exclusion of the cell organelles and particles is groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles.
Many cellular activities take place within the cytoplasm, such as many metabolic pathways, including glycolysis, photosynthesis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex, or ectoplasm.
Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes.
In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming.
History
The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but it later came to mean the cell substance and organelles outside the nucleus.
There has been certain disagreement on the definition of cytoplasm, as some authors prefer to exclude from it some organelles, especially the vacuoles and sometimes the plastids.
Physical nature
It remains uncertain how the various components of the cytoplasm interact to allow movement of organelles while maintaining the cell's structure. The flow of cytoplasmic components plays an important role in many cellular functions which are dependent on the permeability of the cytoplasm. An example of such function is cell signalling, a process which is dependent on the manner in which signaling molecules are allowed to diffuse across the cell. While small signaling molecules like calcium ions are able to diffuse with ease, larger molecules and subcellular structures often require aid in moving through the cytoplasm. The irregular dynamics of such particles have given rise to various theories on the nature of the cytoplasm.
As a sol-gel
There has long been evidence that the cytoplasm behaves like a sol-gel. It is thought that the component molecules and structures of the cytoplasm behave at times like a disordered colloidal solution (sol) and at other times like an integrated network, forming a solid mass (gel). This theory thus proposes that the cytoplasm exists in distinct fluid and solid phases depending on the level of interaction between cytoplasmic components, which may explain the differential dynamics of different particles observed moving through the cytoplasm. One paper suggested that at length scales smaller than 100 nm the cytoplasm acts like a liquid, while at larger length scales it acts like a gel.
As a glass
It has been proposed that the cytoplasm behaves like a glass-forming liquid approaching the glass transition. In this theory, the greater the concentration of cytoplasmic components, the less the cytoplasm behaves like a liquid and the more it behaves as a solid glass, freezing larger cytoplasmic components in place (it is thought that the cell's metabolic activity can fluidize the cytoplasm to allow the movement of such larger components). A cell's ability to vitrify in the absence of metabolic activity, as in dormant periods, may be beneficial as a defense strategy. A solid glass cytoplasm would freeze subcellular structures in place, preventing damage, while allowing the transmission of small proteins and metabolites, helping to kickstart growth upon the cell's revival from dormancy.
Other perspectives
Research has examined the motion of cytoplasmic particles independent of the nature of the cytoplasm. In such an alternative approach, the aggregate random forces within the cell caused by motor proteins explain the non-Brownian motion of cytoplasmic constituents.
Constituents
The three major elements of the cytoplasm are the cytosol, organelles and inclusions.
Cytosol
The cytosol is the portion of the cytoplasm not contained within membrane-bound organelles. Cytosol makes up about 70% of the cell volume and is a complex mixture of cytoskeleton filaments, dissolved molecules, and water. The cytosol's filaments include the protein filaments such as actin filaments and microtubules that make up the cytoskeleton, as well as soluble proteins and small structures such as ribosomes, proteasomes, and the mysterious vault complexes. The inner, granular and more fluid portion of the cytoplasm is referred to as endoplasm.
Due to this network of fibres and high concentrations of dissolved macromolecules, such as proteins, an effect called macromolecular crowding occurs and the cytosol does not act as an ideal solution. This crowding effect alters how the components of the cytosol interact with each other.
Organelles
Organelles (literally "little organs") are usually membrane-bound structures inside the cell that have specific functions. Some major organelles that are suspended in the cytosol are the mitochondria, the endoplasmic reticulum, the Golgi apparatus, vacuoles, lysosomes, and in plant cells, chloroplasts.
Cytoplasmic inclusions
The inclusions are small particles of insoluble substances suspended in the cytosol. A huge range of inclusions exist in different cell types, and range from crystals of calcium oxalate or silicon dioxide in plants, to granules of energy-storage materials such as starch, glycogen, or polyhydroxybutyrate. A particularly widespread example are lipid droplets, which are spherical droplets composed of lipids and proteins that are used in both prokaryotes and eukaryotes as a way of storing lipids such as fatty acids and sterols. Lipid droplets make up much of the volume of adipocytes, which are specialized lipid-storage cells, but they are also found in a range of other cell types.
Controversy and research
The cytoplasm, mitochondria, and most organelles are contributions to the cell from the maternal gamete. Contrary to older views that treated the cytoplasm as passive, new research has shown it to help control the movement and flow of nutrients into and out of the cell through viscoplastic behavior, characterized by the reciprocal rate of bond breakage within the cytoplasmic network.
The material properties of the cytoplasm remain under investigation. A method of determining the mechanical behaviour of living mammalian cell cytoplasm with the aid of optical tweezers has been described.
| Biology and health sciences | Organelles and other cell parts | null |
5213 | https://en.wikipedia.org/wiki/Computing | Computing | Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and the development of both hardware and software. Computing has scientific, engineering, mathematical, technological, and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering.
The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers.
History
The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate), with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, thought to have been invented in Babylon between 2700 and 2300 BC. Abaci of a more modern design are still used as calculation tools today.
The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations.
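Shannon's observation was that networks of switches behave like Boolean operators: contacts in series act as AND, contacts in parallel as OR, and a normally closed contact as NOT. The following sketch illustrates that correspondence in ordinary code; the function names and the example circuit are invented for illustration.

```python
# Sketch: relays in series behave like Boolean AND, relays in parallel like OR,
# and a normally-closed contact behaves like NOT. Names and values are illustrative.

def series(a: bool, b: bool) -> bool:      # two switches in series
    return a and b

def parallel(a: bool, b: bool) -> bool:    # two switches in parallel
    return a or b

def normally_closed(a: bool) -> bool:      # inverting contact
    return not a

# A lamp wired as (A AND B) OR (NOT C) lights for these switch settings:
print(parallel(series(True, True), normally_closed(True)))   # True
print(parallel(series(True, False), normally_closed(True)))  # False
```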
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, the Transistor Computer. However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications.
In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. The MOSFET made it possible to build high-density integrated circuits, leading to what is known as the computer revolution or microcomputer revolution.
Computer
A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions is converted to machine instructions according to the CPU type.
The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.
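As a loose analogy for the translation of human-readable source into lower-level instructions, Python's standard dis module can display the bytecode its own virtual machine executes for a small function. This shows VM bytecode rather than native CPU machine code, and the function below is an invented example.

```python
# Illustrative only: disassemble a small function into the bytecode executed by the
# CPython virtual machine (an analogy for, not an example of, native machine instructions).
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints LOAD_FAST / BINARY_ADD (or BINARY_OP, depending on version) / RETURN_VALUE
```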
Computer hardware
Computer hardware includes the physical parts of a computer, including the central processing unit, memory, and input/output. Computational logic and computer architecture are key topics in the field of computer hardware.
Computer software
Computer software, or just software, is a collection of computer programs and related data, which provides instructions to a computer. Software refers to one or more computer programs and data held in the storage of the computer. It is a set of programs, procedures, algorithms, as well as its documentation concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible.
Software is also sometimes used in a more narrow sense, meaning application software only.
System software
System software, or systems software, is computer software designed to operate and control computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software. System software and middleware manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user, unlike application software.
Application software
Application software, also known as an application or an app, is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. The system software manages the hardware and serves the application, which in turn serves the user.
Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps, such as Microsoft Office, are developed in multiple versions for several different platforms; others have narrower requirements and are generally referred to by the platform they run on, for example, a geography application for Windows or an Android application for education or Linux gaming. Applications that run on only one platform and increase the desirability of that platform due to their popularity are known as killer applications.
Computer network
A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow the sharing of resources and information. When at least one process in one device is able to send or receive data to or from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. One well-known communications protocol is Ethernet, a hardware and link layer standard that is ubiquitous in local area networks. Another common protocol is the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, host-to-host data transfer, and application-specific data transmission formats.
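A minimal sketch of host-to-host data transfer over the Internet Protocol Suite follows, using Python's standard socket library to send a few bytes over a local TCP connection; the port number and payload are arbitrary illustrative choices.

```python
# Minimal TCP loopback example: one socket listens, another connects and sends bytes.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 50007))   # loopback address, arbitrary port
srv.listen(1)

def accept_one():
    conn, _ = srv.accept()
    with conn:
        print("received:", conn.recv(1024).decode())

t = threading.Thread(target=accept_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))
    cli.sendall(b"hello over TCP/IP")

t.join()
srv.close()
```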
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
Internet
The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users. This includes millions of private, public, academic, business, and government networks, ranging in scope from local to global. These networks are linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email.
Computer programming
Computer programming is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. Programming is used to invoke some desired behavior (customization) from the machine.
Writing high-quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. The highest-quality software is thus often developed by a team of domain experts, each a specialist in some area of development. However, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. It is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application.
Computer programmer
A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with Web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming.
Computer industry
The computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components, and providing information technology services, including system administration and maintenance.
The software industry includes businesses engaged in development, maintenance, and publication of software. The industry also includes software services, such as training, documentation, and consulting.
Sub-disciplines of computing
Computer engineering
Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, rather than just software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering includes not only the design of hardware within its own domain, but also the interactions between hardware and the context in which it operates.
Software engineering
Software engineering is the application of a systematic, disciplined, and quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. It is the act of using insights to conceive, model and scale a solution to a problem. The first reference to the term is the 1968 NATO Software Engineering Conference, where it was intended to provoke thought regarding the perceived software crisis at the time. Software development, a widely used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of software engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard, published as ISO/IEC TR 19759:2015.
Computer science
Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems.
Its subfields can be divided into practical techniques for its implementation and application in computer systems, and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming investigates the use of programming languages and complex systems. The field of human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans.
Cybersecurity
The field of cybersecurity pertains to the protection of computer systems and networks. This includes information and data privacy, preventing disruption of IT services and prevention of theft of and damage to hardware, software, and data.
Data science
Data science is a field that uses scientific and computing tools to extract information and insights from data, driven by the increasing volume and availability of data. Data mining, big data, statistics, machine learning and deep learning are all interwoven with data science.
Information systems
Information systems (IS) is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data. The ACM's Computing Careers describes IS as:
The study of IS bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society while IS emphasizes functionality over design.
Information technology
Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data, often in the context of a business or other enterprise. The term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce, and computer services.
Research and emerging technologies
DNA-based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. Potential infrastructure for future technologies includes DNA origami on photolithography and quantum antennae for transferring information between ion traps. By 2011, researchers had entangled 14 qubits. Fast digital circuits, including those based on Josephson junctions and rapid single flux quantum technology, are becoming more nearly realizable with the discovery of nanoscale superconductors.
Fiber-optic and photonic (optical) devices, which already have been used to transport data over long distances, are starting to be used by data centers, along with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects. IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted CMOS-integrated nanophotonics (CINP). One benefit of optical interconnects is that motherboards, which formerly required a certain kind of system on a chip (SoC), can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs.
Another field of research is spintronics. Spintronics can provide computing power and storage, without heat buildup. Some research is being done on hybrid chips, which combine photonics and spintronics. There is also research ongoing on combining plasmonics, photonics, and electronics.
Cloud computing
Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for interaction between the owner of these resources and the end user. It is typically offered as a service, making it an example of Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on the functionality offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling. It allows individual users or small businesses to benefit from economies of scale.
One area of interest in this field is its potential to support energy efficiency. Allowing thousands of instances of computation to occur on one single machine instead of thousands of individual machines could help save energy. It could also ease the transition to renewable energy sources, since it would suffice to power one server farm with renewable energy, rather than millions of homes and offices.
However, this centralized computing model poses several challenges, especially in security and privacy. Current legislation does not sufficiently protect users from companies mishandling their data on company servers. This suggests potential for further legislative regulations on cloud computing and tech companies.
Quantum computing
Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both states of one and zero simultaneously; the value of a qubit is therefore not simply 1 or 0, but is resolved only when it is measured. This trait of qubits is known as superposition, and, together with quantum entanglement, it is a core idea of quantum computing that allows quantum computers to do large-scale computations. Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations.
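As a small numerical sketch (not an actual quantum computation), a single qubit can be represented by two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1; the amplitudes below are chosen for illustration.

```python
# Illustrative single-qubit state |psi> = alpha|0> + beta|1>.
# Squared magnitudes of the amplitudes give measurement probabilities.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition
state = np.array([alpha, beta], dtype=complex)

probabilities = np.abs(state) ** 2
print(probabilities)          # [0.5 0.5] -> 50% chance of measuring 0 or 1
print(probabilities.sum())    # 1.0 (normalization)
```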
| Technology | Basics_3 | null |
5218 | https://en.wikipedia.org/wiki/Central%20processing%20unit | Central processing unit | A central processing unit (CPU), also called a central processor, main processor, or just processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs).
The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution (of instructions) by directing the coordinated operations of the ALU, registers, and other components. Modern CPUs devote a lot of semiconductor area to caches and instruction-level parallelism to increase performance and to CPU modes to support operating systems and virtualization.
Most modern CPUs are implemented on integrated circuit (IC) microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are called multi-core processors. The individual physical CPUs, called processor cores, can also be multithreaded to support CPU-level multithreading.
An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC).
History
Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". The "central processing unit" term has been in use since as early as 1955. Since the term "CPU" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer had been already present in the design of John Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed a paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC was not the first stored-program computer; the Manchester Baby, which was a small-scale experimental stored-program computer, ran its first program on 21 June 1948 and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949.
Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers, and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys.
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard-architecture processors.
Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers—such as the slower but earlier Harvard Mark I—failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
Transistor CPUs
The design complexity of CPUs increased as various technologies facilitated the building of smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements, like vacuum tubes and relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.
In 1964, IBM introduced its IBM System/360 computer architecture that was used in a series of computers capable of running the same programs with different speeds and performances. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram (often called "microcode"), which still sees widespread use in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets—the PDP-8.
Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to the increased reliability and dramatically increased speed of the switching elements, which were almost exclusively transistors by this time, CPU clock rates in the tens of megahertz were easily obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data (SIMD) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc and Fujitsu Ltd.
Small-scale integration CPUs
During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip". At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based on these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs.
IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs, but was eventually implemented with LSI components once these became practical.
Large-scale integration CPUs
Lee Boysel published influential articles, including a 1967 "manifesto", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI). The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor (MOS) semiconductor manufacturing process (either PMOS logic, NMOS logic, or CMOS logic). However, some companies continued to build processors out of bipolar transistor–transistor logic (TTL) chips because bipolar junction transistors were faster than MOS chips up until the 1970s (a few companies such as Datapoint continued to build processors out of TTL chips until the early 1980s). In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power. Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s.
As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits.
Microprocessors
Since microprocessors were first introduced they have almost completely overtaken all other central processing unit implementation methods. The first commercially available microprocessor, made in 1971, was the Intel 4004, and the first widely used microprocessor, made in 1974, was the Intel 8080. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs (denoted cores) can be combined in a single processing chip.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore's law, which had proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity until 2016.
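Moore's law is often paraphrased as transistor counts doubling roughly every two years. A back-of-the-envelope projection under that assumption looks like the following sketch; the starting count, time span, and doubling period are illustrative parameters, not historical data.

```python
# Rough Moore's-law projection: transistor count doubles about every two years.
# Starting value and time span are illustrative, not measured data.
def projected_transistors(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    return initial_count * 2 ** (years / doubling_period)

print(projected_transistors(2_300, 10))   # an early-1970s-scale count after ten years of doubling
```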
While the complexity, size, construction and general form of CPUs have changed enormously since 1950, the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the use of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle.
After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called "jumps" and facilitate program behavior like loops, conditional program execution (through the use of a conditional jump), and existence of functions. In some processors, some other instructions change the state of bits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a "compare" instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow.
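A toy simulation of this fetch-decode-execute loop, using an invented three-instruction set (ADD, JMP, HALT) and a program counter, is sketched below; real CPUs operate on binary encodings rather than named tuples.

```python
# Toy fetch-decode-execute loop for an invented instruction set.
# Each instruction is a (mnemonic, operand) pair purely for readability.
program = [
    ("ADD", 5),    # acc += 5
    ("ADD", 3),    # acc += 3
    ("JMP", 4),    # jump over the next instruction
    ("ADD", 100),  # skipped
    ("HALT", None),
]

acc, pc = 0, 0
while True:
    opcode, operand = program[pc]      # fetch
    pc += 1                            # point at the next instruction in sequence
    if opcode == "ADD":                # decode + execute
        acc += operand
    elif opcode == "JMP":
        pc = operand                   # jumps overwrite the program counter
    elif opcode == "HALT":
        break

print(acc)  # 8
```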
Fetch
Fetch involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The instruction's location (address) in program memory is determined by the program counter (PC; called the "instruction pointer" in Intel x86 microprocessors), which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
Decode
The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU.
The way in which the instruction is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of bits (that is, a "field") within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value (called an immediate value), or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode.
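For instance, under a made-up 16-bit encoding with a 4-bit opcode field and two 6-bit register fields, the fields can be extracted with shifts and masks, as in the sketch below; the layout is invented for illustration and not taken from any real instruction set.

```python
# Hypothetical 16-bit instruction layout: [opcode:4][reg_a:6][reg_b:6].
instruction = 0b0011_000010_000101   # opcode=3, reg_a=2, reg_b=5

opcode = (instruction >> 12) & 0xF   # top 4 bits
reg_a  = (instruction >> 6) & 0x3F   # next 6 bits
reg_b  = instruction & 0x3F          # lowest 6 bits

print(opcode, reg_a, reg_b)          # 3 2 5
```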
In some CPU designs, the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions.
Execute
After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher capacity main memory.
For example, if an instruction that performs addition is to be executed, registers containing operands (numbers to be summed) are activated, as are the parts of the arithmetic logic unit (ALU) that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled (and disabled) to move the output (the sum of the operation) to storage (e.g., a register or memory). If the resulting sum is too large (i.e., it is larger than the ALU's output word size), an arithmetic overflow flag will be set, influencing the next operation.
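A small sketch of that overflow behavior for an imagined 8-bit ALU (wrap-around arithmetic plus an overflow flag for signed addition) follows; the word size and inputs are chosen only for illustration.

```python
# Signed 8-bit addition with wrap-around and an overflow flag, as a toy ALU might do it.
def add_8bit_signed(a: int, b: int):
    result = (a + b) & 0xFF          # keep only 8 bits (wrap-around)
    if result >= 0x80:
        result -= 0x100              # reinterpret as two's complement
    overflow = (a + b) != result     # true sum did not fit in 8 signed bits
    return result, overflow

print(add_8bit_signed(100, 27))   # (127, False) - fits
print(add_8bit_signed(100, 28))   # (-128, True) - signed overflow sets the flag
```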
Structure and implementation
Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an instruction set. Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of bits, known as the machine language opcode. While processing an instruction, the CPU decodes the opcode (via a binary decoder) into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation (for example, the numbers to be summed in the case of an addition operation). Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes.
The actual mathematical operation for each instruction is performed by a combinational logic circuit within the CPU's processor known as the arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Besides the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's floating-point unit (FPU).
Control unit
The control unit (CU) is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit and input and output devices how to respond to the instructions that have been sent to the processor.
It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
Arithmetic logic unit
The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated on (called operands), status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from internal CPU registers, external memory, or constants generated by the ALU itself.
When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose.
Modern CPUs typically contain more than one ALU to improve performance.
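As a rough software analogy (not the circuit of any real ALU), the operation code supplied by the control unit can be pictured as selecting one of several combinational functions. The operation names below are invented for this sketch.

    #include <stdint.h>

    /* Hypothetical operation codes sent by the control unit. */
    enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };

    /* A software model of a combinational ALU: two operand words and an
       operation code in, a result word and a zero status bit out. */
    static uint32_t alu(enum alu_op op, uint32_t a, uint32_t b, int *zero) {
        uint32_t r;
        switch (op) {
            case ALU_ADD: r = a + b; break;
            case ALU_SUB: r = a - b; break;
            case ALU_AND: r = a & b; break;
            case ALU_OR:  r = a | b; break;
            default:      r = 0;     break;
        }
        *zero = (r == 0);   /* status information kept in a flags register */
        return r;
    }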
Address generation unit
The address generation unit (AGU), sometimes also called the address computation unit (ACU), is an execution unit inside the CPU that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements.
While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle.
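A typical address-generation calculation is locating an array element. The sketch below shows the base-plus-scaled-index arithmetic an AGU can perform in dedicated circuitry; the variable names are illustrative, and the multiplication by a power-of-two element size reduces to a bit shift.

    #include <stdint.h>

    /* Address of element i in an array of 4-byte elements starting at 'base'.
       An AGU computes base + index*scale; for a power-of-two element size
       the scaling is a simple left shift. */
    static uint64_t element_address(uint64_t base, uint64_t index) {
        return base + (index << 2);   /* index * 4 bytes */
    }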
Capabilities of an AGU depend on a particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, which brings further performance improvements due to the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel.
Memory management unit (MMU)
Many microprocessors (in smartphones and in desktop, laptop, and server computers) have a memory management unit, which translates logical addresses into physical RAM addresses and provides memory protection and paging abilities that are useful for virtual memory. Simpler processors, especially microcontrollers, usually do not include an MMU.
Cache
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, L3, L4, etc.).
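To illustrate how a cache locates data, the sketch below splits an address into offset, index, and tag fields for a hypothetical direct-mapped cache with 64-byte lines and 1024 sets; real caches differ in associativity, sizing, and replacement policy.

    #include <stdint.h>

    #define LINE_BYTES 64      /* bytes per cache line -> 6 offset bits  */
    #define NUM_SETS   1024    /* sets in the cache    -> 10 index bits  */

    /* Decompose an address the way a direct-mapped cache would:
       the index selects a set, and the stored tag is compared on lookup
       to decide whether the access is a hit or a miss. */
    static void split_address(uint64_t addr, uint64_t *offset,
                              uint64_t *index, uint64_t *tag) {
        *offset = addr % LINE_BYTES;
        *index  = (addr / LINE_BYTES) % NUM_SETS;
        *tag    = addr / (LINE_BYTES * NUM_SETS);
    }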
All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Each core of a multi-core processor typically has a dedicated L2 cache that is usually not shared with the other cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally implemented in dynamic random-access memory (DRAM), rather than in static random-access memory (SRAM), on a separate die or chip. Historically this was also the case for L1 caches, but bigger chips have allowed integration of the L1 cache and generally of all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and to be optimized differently.
Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB) that is part of the memory management unit (MMU) that most CPUs have.
Caches are generally sized in powers of two: 4, 8, 16, etc. KiB or MiB (for the larger non-L1 caches), although the IBM z13 has a 96 KiB L1 instruction cache.
Clock rate
Most CPUs are synchronous circuits, which means they employ a clock signal to pace their sequential operations. The clock signal is produced by an external oscillator circuit that generates a consistent number of pulses each second in the form of a periodic square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second.
To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate (move) through the CPU. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below).
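The constraint is simple arithmetic: the clock period must be at least as long as the slowest signal path. A back-of-the-envelope sketch with an invented delay figure:

    #include <stdio.h>

    int main(void) {
        double worst_case_delay_ns = 0.4;             /* slowest path, hypothetical value   */
        double min_period_ns = worst_case_delay_ns;   /* the clock period cannot be shorter */
        double max_freq_ghz  = 1.0 / min_period_ns;   /* 1 / 0.4 ns = 2.5 GHz               */
        printf("maximum clock ~ %.1f GHz\n", max_freq_ghz);
        return 0;
    }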
However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions.
One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable recent CPU design that uses extensive clock gating is the IBM PowerPC-based Xenon used in the Xbox 360; this reduces the power requirements of the Xbox 360.
Clockless CPUs
Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM compliant AMULET and the MIPS R3000 compatible MiniMIPS.
Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.
Voltage regulator module
Many modern CPUs have a die-integrated power-managing module which regulates the on-demand voltage supply to the CPU circuitry, allowing the chip to keep a balance between performance and power consumption.
Integer range
Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal (base 10) numeral system values, and others have employed more unusual representations such as ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.
Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits (significant digits of a binary encoded integer) that the CPU can process in one operation, which is commonly called word size, bit width, data path width, integer precision, or integer size. A CPU's integer size determines the range of integer values on which it can directly operate. For example, an 8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 (2⁸) discrete integer values.
Integer range can also affect the number of memory locations the CPU can directly address (an address is an integer value representing a specific memory location). For example, if a binary CPU uses 32 bits to represent a memory address then it can directly address 2³² memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms (such as bank switching) that allow additional memory to be addressed.
CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power (and therefore generate more heat). As a result, smaller 4- or 8-bit microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes (such as 16, 32, 64, even 128-bit) are available. When higher performance is required, however, the benefits of a larger word size (larger data ranges and address spaces) may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost. For example, even though the IBM System/360 instruction set architecture was a 32-bit instruction set, the System/360 Model 30 and Model 40 had 8-bit data paths in the arithmetic logic unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands. Similarly, even though the Motorola 68000 series instruction set was a 32-bit instruction set, the Motorola 68000 and Motorola 68010 had 16-bit data paths in the arithmetic logic unit, so that a 32-bit add required two cycles.
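The narrow-data-path case can be pictured as a multi-precision addition: a 32-bit sum performed 8 bits at a time, with a carry propagated between steps. The C sketch below is only an illustration of that idea, not the microcode of any particular machine.

    #include <stdint.h>

    /* Add two 32-bit values using only an 8-bit-wide data path:
       four passes, each adding one byte plus the carry from the previous pass. */
    static uint32_t add32_via_8bit(uint32_t a, uint32_t b) {
        uint32_t result = 0;
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {                     /* one "cycle" per byte        */
            unsigned pa  = (a >> (8 * i)) & 0xFF;
            unsigned pb  = (b >> (8 * i)) & 0xFF;
            unsigned sum = pa + pb + carry;
            result |= (uint32_t)(sum & 0xFF) << (8 * i);
            carry = sum >> 8;                             /* carry into the next byte    */
        }
        return result;
    }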
To gain some of the advantages afforded by both lower and higher bit lengths, many instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM System/360 instruction set was primarily 32 bit, but supported 64-bit floating-point values to facilitate greater accuracy and range in floating-point numbers. The System/360 Model 65 had an 8-bit adder for decimal and fixed-point binary arithmetic and a 60-bit adder for floating-point arithmetic. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose use where a reasonable balance of integer and floating-point capability is required.
Parallelism
The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time, that is, less than one instruction per clock cycle (IPC < 1).
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is nearly always subscalar (less than one instruction per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques:
instruction-level parallelism (ILP), which seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the use of on-die execution resources);
task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously.
Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application.
Instruction-level parallelism
One of the simplest methods for increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is a technique known as instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be executed at a time by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired.
Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a portion of the pipeline if necessary. A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
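The cost of such stalls can be expressed as extra cycles per instruction: a perfectly filled pipeline retires one instruction per cycle, and each stall cycle pushes effective IPC below one. A small illustrative calculation with invented numbers:

    #include <stdio.h>

    int main(void) {
        double base_cpi   = 1.0;    /* ideal pipelined CPU: one cycle per instruction */
        double stall_rate = 0.20;   /* hypothetical: 20% of instructions stall once   */
        double cpi = base_cpi + stall_rate * 1.0;
        printf("effective IPC = %.2f\n", 1.0 / cpi);   /* 1 / 1.2 ~ 0.83 */
        return 0;
    }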
Improvements in instruction pipelining led to further decreases in the idle time of CPU components. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units, such as load–store units, arithmetic–logic units, floating-point units and address generation units. In a superscalar pipeline, instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so, they are dispatched to execution units, resulting in their simultaneous execution. In general, the number of instructions that a superscalar CPU will complete in a cycle is dependent on the number of instructions it is able to dispatch simultaneously to execution units.
Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and requires significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, register renaming, out-of-order execution and transactional memory crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. Also, in the case of a single instruction stream operating on multiple data streams, where a large amount of data of the same type has to be processed, modern processors can disable parts of the pipeline so that when a single instruction is executed many times, the CPU skips the fetch and decode phases and thus greatly increases performance on certain occasions, especially in highly monotonous program engines such as video creation software and photo processing.
When a fraction of the CPU is superscalar, the part that is not suffers a performance penalty due to scheduling stalls. The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock cycle each, but its FPU could not. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture, P6, added superscalar abilities to its floating-point features.
Simple pipelining and superscalar design increase a CPU's ILP by allowing it to execute instructions at rates surpassing one instruction per clock cycle. Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or instruction set architecture (ISA). The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the CPU's work in boosting ILP and thereby reducing design complexity.
Task-level parallelism
Another strategy of achieving performance is to execute multiple threads or processes in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as multiple instruction stream, multiple data stream (MIMD).
One technology used for this purpose is multiprocessing (MP). The initial type of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single chip, the technology is known as chip-level multiprocessing (CMP) and the single chip as a multi-core processor.
It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). The approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP, and thus supervisor software like operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as temporal multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU quickly context switches to another thread which is ready to run, the switch often taking only one CPU clock cycle, as in the UltraSPARC T1. Another type of MT is simultaneous multithreading, where instructions from multiple threads are executed in parallel within one CPU clock cycle.
For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques.
CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or process.
This reversal of emphasis is evidenced by the proliferation of dual-core and multi-core processor designs and, notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PlayStation 3's 7-core Cell microprocessor.
Data parallelism
A less common but increasingly important paradigm of processors (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as single instruction stream, multiple data stream (SIMD) and single instruction stream, single data stream (SISD), respectively. The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks include multimedia applications (images, video and sound), as well as many types of scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data.
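The contrast between scalar and SIMD execution shows up in something as simple as adding two arrays: a scalar loop handles one element per instruction, while SSE intrinsics on x86 process four single-precision values per instruction. This sketch assumes the array length is a multiple of four and is meant only as an illustration of the idea.

    #include <immintrin.h>   /* SSE intrinsics (x86) */

    /* Scalar version: one addition per loop iteration. */
    void add_scalar(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* SIMD version: four additions per instruction using 128-bit registers. */
    void add_simd(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }
    }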
Most early vector processors, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose processors has become significant. Shortly after inclusion of floating-point units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose processors. Some of these early SIMD specifications – like HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX – were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating-point numbers. Progressively, developers refined and remade these early designs into some of the common modern SIMD specifications, which are usually associated with one instruction set architecture (ISA). Some notable modern examples include Intel's Streaming SIMD Extensions (SSE) and the PowerPC-related AltiVec (also known as VMX).
Hardware performance counter
Many modern architectures (including embedded ones) include hardware performance counters (HPC), which enable low-level (instruction-level) collection of metrics for benchmarking, debugging, or analysis of running software. HPCs may also be used to discover and analyze unusual or suspicious activity of the software, such as return-oriented programming (ROP) or sigreturn-oriented programming (SROP) exploits. This is usually done by software-security teams to assess and find malicious binary programs.
Many major vendors (such as IBM, Intel, AMD, and Arm) provide software interfaces (usually written in C/C++) that can be used to collect data from the CPU's registers in order to get metrics. Operating system vendors also provide software like perf (Linux) to record, benchmark, or trace CPU events running kernels and applications.
Hardware counters provide a low-overhead method for collecting comprehensive performance metrics related to a CPU's core elements (functional units, caches, main memory, etc.) – a significant advantage over software profilers. Additionally, they generally eliminate the need to modify the underlying source code of a program. Because hardware designs differ between architectures, the specific types and interpretations of hardware counters will also change.
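On Linux, one such interface is the perf_event_open system call. The sketch below counts retired instructions around a small workload; error handling is omitted, and the chosen event and workload are only examples.

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;   /* retired instructions        */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile long x = 0;                        /* the measured workload       */
        for (long i = 0; i < 1000000; i++) x += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        long long count = 0;
        read(fd, &count, sizeof(count));            /* read the counter value      */
        printf("instructions retired: %lld\n", count);
        close(fd);
        return 0;
    }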
Privileged modes
Most modern CPUs have privileged modes to support operating systems and virtualization.
Cloud computing can use virtualization to provide virtual central processing units (vCPUs) for separate users.
A host is the virtual equivalent of a physical machine, on which a virtual system is operating. When there are several physical machines operating in tandem and managed as a whole, the grouped computing and memory resources form a cluster. In some systems, it is possible to dynamically add machines to and remove them from a cluster. Resources available at the host and cluster level can be partitioned into resource pools with fine granularity.
Performance
The performance or speed of a processor depends on, among many other factors, the clock rate (generally given in multiples of hertz) and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform.
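Written out, instructions per second is simply the product of clock rate and instructions per clock. A back-of-the-envelope example with invented figures:

    #include <stdio.h>

    int main(void) {
        double clock_hz = 3.0e9;   /* 3 GHz clock, hypothetical                 */
        double ipc      = 2.0;     /* average instructions retired per cycle    */
        printf("%.1e instructions per second\n", clock_hz * ipc);  /* 6.0e9     */
        return 0;
    }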
Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, various standardized tests, often called "benchmarks", such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications.
Processing performance of computers is increased by using multi-core processors, which essentially is plugging two or more individual processors (called cores in this sense) into one integrated circuit. Ideally, a dual core processor would be nearly twice as powerful as a single core processor. In practice, the performance gain is far smaller, only about 50%, due to imperfect software algorithms and implementation. Increasing the number of cores in a processor (i.e. dual-core, quad-core, etc.) increases the workload that can be handled. This means that the processor can now handle numerous asynchronous events, interrupts, etc. which can take a toll on the CPU when overwhelmed. These cores can be thought of as different floors in a processing plant, with each floor handling a different task. Sometimes, these cores will handle the same tasks as cores adjacent to them if a single core is not enough to handle the information. Multi-core CPUs enhance a computer's ability to run several tasks simultaneously by providing additional processing power. However, the increase in speed is not directly proportional to the number of cores added. This is because the cores need to interact through specific channels, and this inter-core communication consumes a portion of the available processing speed.
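One common way to quantify this diminishing return (not stated in the text above, but widely used) is Amdahl's law: if only a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1 - p) + p / n). A small illustrative calculation:

    #include <stdio.h>

    /* Amdahl's law: speedup from running the parallel fraction p on n cores. */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        /* With 75% of the work parallelizable, 2 cores give only about 1.6x. */
        printf("2 cores: %.2fx\n", amdahl(0.75, 2));
        printf("8 cores: %.2fx\n", amdahl(0.75, 8));   /* about 2.9x */
        return 0;
    }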
Due to specific capabilities of modern CPUs, such as simultaneous multithreading and uncore, which involve sharing of actual CPU resources while aiming at increased utilization, monitoring performance levels and hardware use gradually became a more complex task. As a response, some CPUs implement additional hardware logic that monitors actual use of various parts of a CPU and provides various counters accessible to software; an example is Intel's Performance Counter Monitor technology.
Overclocking
Overclocking is the process of increasing the clock speed of a CPU (and other components) to increase its performance. Overclocking can raise the CPU's temperature and cause it to overheat, so most users do not overclock and leave the clock speed unchanged. Some versions of components (such as Intel's U version of its CPUs or Nvidia's OG GPUs) do not allow overclocking.
| Technology | Computer hardware | null |
5221 | https://en.wikipedia.org/wiki/Carnivora | Carnivora | Carnivora is an order of placental mammals specialized primarily in eating flesh, whose members are formally referred to as carnivorans. The order Carnivora is the sixth largest order of mammals, comprising at least 279 species on every major landmass and in a variety of habitats, ranging from the cold polar regions of Earth to the hyper-arid region of the Sahara Desert and the open seas. Carnivorans exhibit a wide array of body plans, varying greatly in size and shape.
Carnivora is divided into two suborders: the Feliformia, containing the true felids and several other cat-like families; and the Caniformia, containing the true canids and many dog-like families. The feliforms include the Felidae, Viverridae, hyena, and mongoose families, the majority of which live only in the Old World; cats are the only exception, occurring in both the Old World and the New World, having entered the Americas via the Bering Land Bridge. The caniforms include the Canidae, Procyonidae, bears, mustelids, skunks and pinnipeds, which occur worldwide with immense diversity in their morphology, diet, and behavior.
Etymology
The word Carnivora is derived from Latin carō (stem carn-) 'flesh' and vorāre 'to devour'.
Phylogeny
The oldest known mammals of the carnivoran line (Carnivoramorpha) appeared in North America 6 million years after the Cretaceous–Paleogene extinction event. These early ancestors of carnivorans would have resembled small weasel- or genet-like mammals, occupying a nocturnal niche on the forest floor or in the trees, as other groups of mammals like the mesonychians and later the creodonts were occupying the megafaunal faunivorous niche. However, following the extinction of mesonychians and the oxyaenid creodonts at the end of the Eocene, carnivorans quickly moved into this niche, with forms like the nimravids being the dominant large-bodied ambush predators during the Oligocene alongside the hyaenodont creodonts (which similarly produced larger, more open-country forms at the start of the Oligocene). By the Miocene epoch, most if not all of the major lineages and families of carnivorans had diversified and become the most dominant group of large terrestrial predators in Eurasia and North America, with various lineages being successful in megafaunal faunivorous niches at different intervals during the Miocene and later epochs.
Systematics
Evolution
The order Carnivora belongs to a group of mammals known as Laurasiatheria, which also includes other groups such as bats and ungulates. Within this group the carnivorans are placed in the clade Ferae. Ferae includes the closest extant relative of carnivorans, the pangolins, as well as several extinct groups of mostly Paleogene carnivorous placentals such as the creodonts, the arctocyonians, and mesonychians. The creodonts were originally thought of as the sister taxon to the carnivorans, perhaps even ancestral to them, based on the presence of the carnassial teeth, but the nature of the carnassial teeth differs between the two groups. In carnivorans, the carnassials are positioned near the front of the molar row, while in the creodonts they are positioned near the back of the molar row, which suggests a separate evolutionary history and an order-level distinction. In addition, recent phylogenetic analysis suggests that creodonts are more closely related to pangolins, while mesonychians might be the sister group to carnivorans and their stem-relatives.
The closest stem-carnivorans are the miacoids. The miacoids include the families Viverravidae and Miacidae, and together the Carnivora and Miacoidea form the stem-clade Carnivoramorpha. The miacoids were small, genet-like carnivoramorphs that occupied a variety of niches, including terrestrial and arboreal habitats. Recent studies have provided supporting evidence that Miacoidea is an evolutionary grade of carnivoramorphs: while the viverravids form a monophyletic basal group, the miacids are paraphyletic with respect to Carnivora (as shown in the phylogeny below).
Carnivoramorpha as a whole first appeared in the Paleocene of North America about 60 million years ago. Crown carnivorans first appeared around 42 million years ago in the Middle Eocene. Their molecular phylogeny shows the extant Carnivora are a monophyletic group, the crown group of the Carnivoramorpha. From there carnivorans have split into two clades based on the composition of the bony structures that surround the middle ear of the skull, the cat-like feliforms and the dog-like caniforms. In feliforms, the auditory bullae are double-chambered, composed of two bones joined by a septum. Caniforms have single-chambered or partially divided auditory bullae, composed of a single bone. Initially, the early representatives of carnivorans were small as the creodonts (specifically, the oxyaenids) and mesonychians dominated the apex predator niches during the Eocene, but in the Oligocene, carnivorans became a dominant group of apex predators with the nimravids, and by the Miocene most of the extant carnivoran families have diversified and become the primary terrestrial predators in the Northern Hemisphere.
Classification of the extant carnivorans
In 1758, the Swedish botanist Carl Linnaeus placed all carnivorans known at the time into the group Ferae (not to be confused with the modern concept of Ferae which also includes pangolins) in the tenth edition of his book Systema Naturae. He recognized six genera: Canis (canids and hyaenids), Phoca (pinnipeds), Felis (felids), Viverra (viverrids, herpestids, and mephitids), Mustela (non-badger mustelids), Ursus (ursids, large species of mustelids, and procyonids). It was not until 1821 that the English writer and traveler Thomas Edward Bowdich gave the group its modern and accepted name.
Initially, the modern concept of Carnivora was divided into two suborders: the terrestrial Fissipedia and the marine Pinnipedia. Below is the classification of how the extant families were related to each other after American paleontologist George Gaylord Simpson in 1945:
Order Carnivora Bowdich, 1821
Suborder Fissipedia Blumenbach, 1791
Superfamily Canoidea G. Fischer de Waldheim, 1817
Family Canidae G. Fischer de Waldheim, 1817 – dogs
Family Ursidae G. Fischer de Waldheim, 1817 – bears
Family Procyonidae Bonaparte, 1850 – raccoons and pandas
Family Mustelidae G. Fischer de Waldheim, 1817 – skunks, badgers, otters, and weasels
Superfamily Feloidea G. Fischer de Waldheim, 1817
Family Viverridae J. E. Gray, 1821 – civets and mongooses
Family Hyaenidae J. E. Gray, 1821 – hyenas
Family Felidae G. Fischer de Waldheim, 1817 – cats
Suborder Pinnipedia Illiger, 1811
Family Otariidae J. E. Gray, 1825 – eared seals
Family Odobenidae J. A. Allen, 1880 – walrus
Family Phocidae J. E. Gray, 1821 – earless seals
Since then, however, the methods that mammalogists use to assess the phylogenetic relationships among the carnivoran families have been improved through more complicated and intensive incorporation of genetics, morphology and the fossil record. Research into Carnivora phylogeny since 1945 has found Fissipedia to be paraphyletic with respect to Pinnipedia, with pinnipeds being either more closely related to bears or to weasels. The small carnivoran families Viverridae, Procyonidae, and Mustelidae have been found to be polyphyletic:
Mongooses and a handful of Malagasy endemic species are found to be in a clade with hyenas, with the Malagasy species being in their own family Eupleridae.
The African palm civet is a basal cat-like carnivoran.
The linsang is more closely related to cats.
Pandas are not procyonids nor are they a natural grouping. The giant panda is a true bear while the red panda is a distinct family.
Skunks and stink badgers are placed in their own family, and are the sister group to a clade containing Ailuridae, Procyonidae and Mustelidae sensu stricto.
Below is a table chart of the extant carnivoran families and number of extant species recognized by various authors of the first (2009) and fourth (2014) volumes of the Handbook of the Mammals of the World:
Anatomy
Skull
The canine teeth are usually large, conical, thick and stress resistant. All of the terrestrial species of carnivorans have three incisors on each side of each jaw (the exception is the sea otter (Enhydra lutris) which only has two lower incisor teeth). The third molar has been lost. The carnassial pair is made up of the fourth upper premolar and the first lower molar teeth. Like most mammals, the dentition is heterodont, though in some species, such as the aardwolf (Proteles cristata), the teeth have been greatly reduced and the cheek teeth are specialised for eating insects. In pinnipeds, the teeth are homodont as they have evolved to grasp or catch fish, and the cheek teeth are often lost. In bears and raccoons, the carnassial pair is secondarily reduced. The skulls are heavily built with a strong zygomatic arch. Often a sagittal crest is present, sometimes more evident in sexually dimorphic species such as sea lions and fur seals, though it has also been greatly reduced in some small carnivorans. The braincase is enlarged with the frontoparietal bone at the front. In most species, the eyes are at the front of the face. In caniforms, the rostrum is usually long with many teeth, while in feliforms it is shorter with fewer teeth. The carnassial teeth of feliforms are generally more sectional than those of caniforms.
The turbinates are large and complex in comparison to other mammals, providing a large surface area for olfactory receptors.
Postcranial region
Aside from an accumulation of characteristics in the dental and cranial features, not much of their overall anatomy unites carnivorans as a group. All species of carnivorans are quadrupedal and most have five digits on the front feet and four digits on the back feet. In terrestrial carnivorans, the feet have soft pads. The feet can either be digitigrade as seen in cats, hyenas and dogs or plantigrade as seen in bears, skunks, raccoons, weasels, civets and mongooses. In pinnipeds, the limbs have been modified into flippers.
Unlike cetaceans and sirenians, which have fully functional tails to help them swim, pinnipeds use their limbs underwater to swim. Earless seals use their back flippers; sea lions and fur seals use their front flippers, and the walrus uses all of its limbs. As a result, pinnipeds have significantly shorter tails than other carnivorans.
Aside from the pinnipeds, dogs, bears, hyenas, and cats all have distinct and recognizable appearances. Dogs are usually cursorial mammals and are gracile in appearance, often relying on their teeth to hold prey; bears are much larger and rely on their physical strength to forage for food. Compared to dogs and bears, cats have longer and stronger forelimbs armed with retractable claws to hold on to prey. Hyenas are dog-like feliforms that have sloping backs due to their front legs being longer than their hind legs. The raccoon family and red panda are small, bear-like carnivorans with long tails. The other small carnivoran families Nandiniidae, Prionodontidae, Viverridae, Herpestidae, Eupleridae, Mephitidae and Mustelidae have through convergent evolution maintained the small, ancestral appearance of the miacoids, though there is some variation seen such as the robust and stout physicality of badgers and the wolverine (Gulo gulo).
Most carnivoran species have a well-defined breeding season. Male carnivorans usually have bacula, which are absent in hyenas and binturongs.
The length and density of the fur vary depending on the environment that the species inhabits. In warm climate species, the fur is often short in length and lighter. In cold climate species, the fur is either dense or long, often with an oily substance that helps to retain heat. The pelage coloration differs between species, often including black, white, orange, yellow, red, and many shades of grey and brown. Some are striped, spotted, blotched, banded, or otherwise boldly patterned. There seems to be a correlation between habitat and color pattern; for example, spotted or banded species tend to be found in heavily forested environments. Some species like the grey wolf are polymorphic, with different individuals having different coat colors. The arctic fox (Vulpes lagopus) and the stoat (Mustela erminea) have fur that changes from white and dense in the winter to brown and sparse in the summer. In pinnipeds and polar bears, a thick insulating layer of blubber helps maintain their body temperature.
Sexual dimorphism
Relationship with humans
Carnivorans are arguably the group of mammals of most interest to humans. The dog is noteworthy for not only being the first species of carnivoran to be domesticated, but also the first species of any taxon. In the last 10,000 to 12,000 years, humans have selectively bred dogs for a variety of different tasks and today there are well over 400 breeds. The cat is another domesticated carnivoran and it is today considered one of the most successful species on the planet, due to their close proximity to humans and the popularity of cats as pets. Many other species are popular, and they are often charismatic megafauna. Many civilizations have incorporated a species of carnivoran into their culture: a prominent example is the lion, viewed as a symbol of power and royalty in many societies. Yet many species such as wolves and the big cats have been broadly hunted, resulting in extirpation in some areas. Habitat loss and human encroachment as well as climate change have been the primary cause of many species going into decline. Four species of carnivorans have gone extinct since the 1600s: Falkland Island wolf (Dusicyon australis) in 1876; the sea mink (Neogale macrodon) in 1894; the Japanese sea lion (Zalophus japonicus) in 1951 and the Caribbean monk seal (Neomonachus tropicalis) in 1952. Some species such as the red fox (Vulpes vulpes) and stoat (Mustela erminea) have been introduced to Australasia and have caused many native species to become endangered or even extinct.
| Biology and health sciences | Carnivora | null |
5232 | https://en.wikipedia.org/wiki/Chondrichthyes | Chondrichthyes | Chondrichthyes is a class of jawed fish that contains the cartilaginous fish or chondrichthyans, which all have skeletons primarily composed of cartilage. They can be contrasted with the Osteichthyes or bony fish, which have skeletons primarily composed of bone tissue. Chondrichthyes are aquatic vertebrates with paired fins, paired nares, placoid scales, conus arteriosus in the heart, and a lack of opercula and swim bladders. Within the infraphylum Gnathostomata, cartilaginous fishes are distinct from all other jawed vertebrates.
The class is divided into two subclasses: Elasmobranchii (sharks, rays, skates and sawfish) and Holocephali (chimaeras, sometimes called ghost sharks, which are sometimes separated into their own class). Extant chondrichthyans range in size from the small finless sleeper ray to the whale shark, the largest living fish.
Anatomy
Skeleton
The skeleton is cartilaginous. The notochord is gradually replaced by a vertebral column during development, except in Holocephali, where the notochord stays intact. In some deepwater sharks, the column is reduced.
As they do not have bone marrow, red blood cells are produced in the spleen and the epigonal organ (special tissue around the gonads, which is also thought to play a role in the immune system). They are also produced in the Leydig's organ, which is only found in certain cartilaginous fishes. The subclass Holocephali, which is a very specialized group, lacks both the Leydig's and epigonal organs.
Appendages
Apart from electric rays, which have a thick and flabby body, with soft, loose skin, chondrichthyans have tough skin covered with dermal teeth (again, Holocephali is an exception, as the teeth are lost in adults, only kept on the clasping organ seen on the caudal ventral surface of the male), also called placoid scales (or dermal denticles), making it feel like sandpaper. In most species, all dermal denticles are oriented in one direction, making the skin feel very smooth if rubbed in one direction and very rough if rubbed in the other.
Originally, the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins are connected to the head and are very flexible.
One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion.
Body covering
Chondrichthyans have tooth-like scales called dermal denticles or placoid scales. Denticles usually provide protection, and in most cases, streamlining. Mucous glands exist in some species, as well.
It is assumed that their oral teeth evolved from dermal denticles that migrated into the mouth, but it could be the other way around, as the teleost bony fish Denticeps clupeoides has most of its head covered by dermal teeth (as does, probably, Atherion elymus, another bony fish). This is most likely a secondary evolved characteristic, which means there is not necessarily a connection between the teeth and the original dermal scales.
The old placoderms did not have teeth at all, but had sharp bony plates in their mouth. Thus, it is unknown whether the dermal or oral teeth evolved first. It has even been suggested that the original bony plates of all vertebrates are now gone and that the present scales are just modified teeth, even if both the teeth and body armor had a common origin a long time ago. However, there is currently no evidence of this.
Respiratory system
All chondrichthyans breathe through five to seven pairs of gills, depending on the species. In general, pelagic species must keep swimming to keep oxygenated water moving through their gills, whilst demersal species can actively pump water in through their spiracles and out through their gills. However, this is only a general rule and many species differ.
A spiracle is a small hole found behind each eye. These can be tiny and circular, such as found on the nurse shark (Ginglymostoma cirratum), to extended and slit-like, such as found on the wobbegongs (Orectolobidae). Many larger, pelagic species, such as the mackerel sharks (Lamnidae) and the thresher sharks (Alopiidae), no longer possess them.
Nervous system
In chondrichthyans, the nervous system is composed of a small brain, 8–10 pairs of cranial nerves, and a spinal cord with spinal nerves. They have several sensory organs which provide information to be processed. Ampullae of Lorenzini are a network of small jelly filled pores called electroreceptors which help the fish sense electric fields in water. This aids in finding prey, navigation, and sensing temperature. The lateral line system has modified epithelial cells located externally which sense motion, vibration, and pressure in the water around them. Most species have large well-developed eyes. Also, they have very powerful nostrils and olfactory organs. Their inner ears consist of 3 large semicircular canals which aid in balance and orientation. Their sound detecting apparatus has limited range and is typically more powerful at lower frequencies. Some species have electric organs which can be used for defense and predation. They have relatively simple brains with the forebrain not greatly enlarged. The structure and formation of myelin in their nervous systems are nearly identical to that of tetrapods, which has led evolutionary biologists to believe that Chondrichthyes were a cornerstone group in the evolutionary timeline of myelin development.
Immune system
Like all other jawed vertebrates, members of Chondrichthyes have an adaptive immune system.
Reproduction
Fertilization is internal. Development is usually live birth (ovoviviparous species) but can be through eggs (oviparous). Some rare species are viviparous. There is no parental care after birth; however, some chondrichthyans do guard their eggs.
Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks/rays when fished. Capture-induced parturition is often mistaken for natural birth by recreational fishers and is rarely considered in commercial fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date).
Classification
The class Chondrichthyes has two subclasses: the subclass Elasmobranchii (sharks, rays, skates, and sawfish) and the subclass Holocephali (chimaeras).
Evolution
Cartilaginous fish are considered to have evolved from acanthodians. The discovery of Entelognathus and several examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish. In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyan and acanthodian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain.
The earliest unequivocal fossils of acanthodian-grade cartilaginous fishes are Qianodus and Fanjingshania from the early Silurian (Aeronian) of Guizhou, China around 439 million years ago, which are also the oldest unambiguous remains of any jawed vertebrates. Shenacanthus vermiformis, which lived 436 million years ago, had thoracic armour plates resembling those of placoderms.
By the start of the Early Devonian, 419 million years ago, jawed fishes had divided into three distinct groups: the now extinct placoderms (a paraphyletic assemblage of ancient armoured fishes), the bony fishes, and the clade that includes spiny sharks and early cartilaginous fish. The modern bony fishes, class Osteichthyes, appeared in the late Silurian or early Devonian, about 416 million years ago. The first abundant genus of shark, Cladoselache, appeared in the oceans during the Devonian Period. The first cartilaginous fishes evolved from Doliodus-like spiny shark ancestors.
Taxonomy
Subphylum Vertebrata
└─Infraphylum Gnathostomata
├─Placodermi — extinct (armored gnathostomes)
└Eugnathostomata (true jawed vertebrates)
├─Acanthodii (stem cartilaginous fish)
└─Chondrichthyes (true cartilaginous fish)
├─Holocephali (chimaeras + several extinct clades)
└Elasmobranchii (shark and rays)
├─Selachii (true sharks)
└─Batoidea (rays and relatives)
Note: Lines show evolutionary relationships.
| Biology and health sciences | Fishes | null |
5236 | https://en.wikipedia.org/wiki/Coast | Coast | A coast, also called the coastline, shoreline, or seashore, is the land next to the sea or the line that forms the boundary between the land and the ocean or a lake. Coasts are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. Earth contains roughly of coastline.
Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas, they harbor salt marshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic animals. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds.
In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of .
According to an atlas prepared by the United Nations, about 44% of the human population lives within of the sea . Due to its importance in society and its high population concentrations, the coast is important for major parts of the global food and economic system, and they provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism.
Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near-future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.
However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, as well as related issues like coastal erosion, saltwater intrusion, and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems.
The interactive effects of climate change, habitat destruction, overfishing, and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in population collapse of fisheries stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
Since coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States.) Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coast, while other coasts are more sheltered coast in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).
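One common way to illustrate the coastline paradox numerically is the Richardson relation, in which the measured length keeps growing as the measuring ruler gets shorter. The sketch below is a minimal illustration only; the constant and the fractal dimension used are arbitrary demonstration values, not measurements of any real coast.

```python
# Minimal illustration of the coastline paradox via the Richardson relation
# L(G) = M * G**(1 - D): the apparent length L increases without bound as the
# ruler length G shrinks, whenever the fractal dimension D exceeds 1.
# M and D are arbitrary demonstration values, not real measurements.

def measured_length(ruler_km: float, m: float = 3000.0, d: float = 1.25) -> float:
    """Apparent coastline length in km when stepped off with a ruler of ruler_km."""
    return m * ruler_km ** (1.0 - d)

for ruler in (200.0, 100.0, 50.0, 10.0, 1.0):
    print(f"ruler {ruler:6.1f} km -> measured length {measured_length(ruler):9.1f} km")
```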
Size
The Earth has approximately of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. About 2.86% of exclusive economic zones were part of marine protected areas.
The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.) whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).
While there is general agreement in the scientific community regarding the definition of coast, in the political sphere, the delineation of the extents of a coast differ according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.
Challenges of precisely measuring the coastline
Formation
Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.
Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than ; mesotidal coasts with a tidal range of ; and microtidal coasts with a tidal range of less than . The distinction between macrotidal and mesotidal coasts is more important. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than in macrotidal coasts.
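Because the numeric cut-offs are not reproduced in the text above, the following sketch assumes the commonly used 2 m and 4 m thresholds purely for illustration; it simply maps a tidal range onto the three categories described.

```python
# Toy classifier for the tidal-range scheme described above. The 2 m and 4 m
# cut-offs are assumed for illustration only; they are not taken from the text.

def classify_tidal_coast(tidal_range_m: float) -> str:
    if tidal_range_m > 4.0:        # assumed macrotidal threshold
        return "macrotidal"
    if tidal_range_m >= 2.0:       # assumed mesotidal band
        return "mesotidal"
    return "microtidal"            # below the assumed 2 m cut-off

print(classify_tidal_coast(6.5))   # macrotidal
print(classify_tidal_coast(3.0))   # mesotidal
print(classify_tidal_coast(1.2))   # microtidal
```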
Waves erode coastline as they break on shore releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.
Sediment deposited by rivers is the dominant influence on the amount of sediment located in the case of coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for coastlines of tropical islands.
Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias).
Importance for humans and ecosystems
Human settlements
More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals.
Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard.
Tourism
Coasts, especially those with beaches and warm water, attract tourists often leading to the development of seaside resort communities. In many island nations such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing.
Growth management and coastal management can be a challenge for coastal local authorities, who often struggle to provide the infrastructure required by new residents, and poor construction practices often leave these communities and their infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices include beach nourishment or, when the coastal infrastructure is no longer financially sustainable, managed retreat to move communities away from the coast.
Ecosystem services
Types
Emergent coastline
According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change, or local uplift. Emergent coastlines are identifiable by the coastal landforms, which are above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned" landforms, such as rias (drowned valleys) and fjords.
Concordant coastline
According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines feature distinctive landforms because the rocks are eroded by the ocean waves. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings.
High and low energy coasts
Parts of a coastline can be categorised as high energy coast or low energy coast. The distinguishing characteristics of a high energy coast are that the average wave energy is relatively high so that erosion of small grained material tends to exceed deposition, and consequently landforms like cliffs, headlands and wave-cut terraces develop. Low energy coasts are generally sheltered from waves, or in regions where the average wind wave and swell conditions are relatively mild. Low energy coasts typically change slowly, and tend to be depositional environments.
High energy coasts are exposed to the direct impact of waves and storms, and are generally erosional environments. High energy storm events can make large changes to a coastline, and can move significant amounts of sediment over a short period, sometimes changing a shoreline configuration.
Destructive and constructive waves
Swash is the shoreward flow after the break, backwash is the water flow back down the beach. The relative strength of flow in the swash and backwash determines what size grains are deposited or eroded. This is dependent on how the wave breaks and the slope of the shore.
Depending on the form of the breaking wave, its energy can carry granular material up the beach and deposit it, or erode it by carrying more material down the slope than up it. Steep waves that are close together and break with the surf plunging down onto the shore slope expend much of their energy lifting the sediment. The weak swash does not carry it far up the slope, and the strong backwash carries it further down the slope, where it either settles in deeper water or is carried along the shore by a longshore current induced by an angled approach of the wave-front to the shore. These waves which erode the beach are called destructive waves.
Low waves that are further apart and break by spilling expend more of their energy in the swash, which carries particles up the beach, leaving less energy for the backwash to transport them downslope, with a net constructive influence on the beach.
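The balance described in the two paragraphs above can be reduced to a comparison of how much sediment the swash carries up the beach versus how much the backwash carries back down. The sketch below is only a schematic of that reasoning; the transport figures are hypothetical.

```python
# Schematic of the swash/backwash balance: if backwash moves more sediment
# down the slope than swash carries up, the net effect is erosional
# ("destructive"); otherwise it is depositional ("constructive").
# The transport values are hypothetical, per-wave illustrative inputs.

def wave_effect(swash_transport: float, backwash_transport: float) -> str:
    net = swash_transport - backwash_transport
    return "constructive" if net > 0 else "destructive"

# Steep, closely spaced plunging waves: weak swash, strong backwash.
print(wave_effect(swash_transport=0.2, backwash_transport=0.5))   # destructive
# Low, widely spaced spilling waves: strong swash, weak backwash.
print(wave_effect(swash_transport=0.6, backwash_transport=0.3))   # constructive
```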
Rivieras
Riviera is an Italian word for "shoreline", ultimately derived from Latin ("riverbank"). It came to be applied as a proper name to the coast of the Ligurian Sea, in the form riviera ligure, then shortened to riviera. Historically, the Ligurian Riviera extended from Capo Corvo (Punta Bianca) south of Genoa, north and west into what is now French territory past Monaco and sometimes as far as Marseille. Today, this coast is divided into the Italian Riviera and the French Riviera, although the French use the term "Riviera" to refer to the Italian Riviera and call the French portion the "Côte d'Azur".
As a result of the fame of the Ligurian rivieras, the term came into English to refer to any shoreline, especially one that is sunny, topographically diverse and popular with tourists. Such places using the term include the Australian Riviera in Queensland and the Turkish Riviera along the Aegean Sea.
Other coastal categories
A cliffed coast or abrasion coast is one where marine action has produced steep declivities known as cliffs.
A flat coast is one where the land gradually descends into the sea.
A graded shoreline is one where wind and water action has produced a flat and straight coastline.
A primary coast is one which is mainly undergoing early-stage development by major long-term processes such as tectonism and climate change. A secondary coast is one where the primary processes have mostly stabilised, and more localised processes have become prominent.
An erosional coast is on average undergoing erosion, while a depositional coast is accumulating material.
An active coast is on the edge of a tectonic plate, while a passive coast is usually on a substantial continental shelf or away from a plate edge.
Landforms
The following articles describe some coastal landforms:
Barrier island
Bay
Cove
Headland
Peninsula
Cliff erosion
Much of the sediment deposited along a coast is the result of erosion of a surrounding cliff, or bluff. Sea cliffs retreat landward because of the constant undercutting of slopes by waves. If the slope/cliff being undercut is made of unconsolidated sediment it will erode at a much faster rate than a cliff made of bedrock.
A natural arch is formed when a headland is eroded through by waves.
Sea caves are made when certain rock beds are more susceptible to erosion than the surrounding rock beds because of different areas of weakness. These areas are eroded at a faster pace creating a hole or crevice that, through time, by means of wave action and erosion, becomes a cave.
A stack is formed when a headland is eroded away by wave and wind action or an arch collapses leaving an offshore remnant.
A stump is a shortened sea stack that has been eroded away or fallen because of instability.
Wave-cut notches are caused by the undercutting of overhanging slopes which leads to increased stress on cliff material and a greater probability that the slope material will fall. The fallen debris accumulates at the bottom of the cliff and is eventually removed by waves.
A wave-cut platform forms after erosion and retreat of a sea cliff has been occurring for a long time. Gently sloping wave-cut platforms develop early on in the first stages of cliff retreat. Later, the length of the platform decreases because the waves lose their energy as they break further offshore.
Coastal features formed by sediment
Beach
Beach cusps
Cuspate foreland
Dune system
Mudflat
Raised beach
Ria
Shoal
Spit
Strand plain
Surge channel
Tombolo
Coastal features formed by another feature
Estuary
Lagoon
Salt marsh
Mangrove forests
Kelp forests
Coral reefs
Oyster reefs
Other features on the coast
Concordant coastline
Discordant coastline
Fjord
Island
Island arc
Machair
Coastal waters
"Coastal waters" (or "coastal seas") is a rather general term used differently in different contexts, ranging geographically from the waters within a few kilometers of the coast, through to the entire continental shelf which may stretch for more than a hundred kilometers from land. Thus the term coastal waters is used in a slightly different way in discussions of legal and economic boundaries (see territorial waters and international waters) or when considering the geography of coastal landforms or the ecological systems operating through the continental shelf (marine coastal ecosystems). The research on coastal waters often divides into these separate areas too.
The dynamic fluid nature of the ocean means that all components of the whole ocean system are ultimately connected, although certain regional classifications are useful and relevant. The waters of the continental shelves represent such a region. The term "coastal waters" has been used in a wide variety of different ways in different contexts. In European Union environmental management it extends from the coast to just a few nautical miles while in the United States the US EPA considers this region to extend much further offshore.
"Coastal waters" has specific meanings in the context of commercial coastal shipping, and somewhat different meanings in the context of naval littoral warfare. Oceanographers and marine biologists have yet other takes. Coastal waters have a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf.
Similarly, the term littoral zone has no single definition. It is the part of a sea, lake, or river that is close to the shore. In coastal environments, the littoral zone extends from the high water mark, which is rarely inundated, to shoreline areas that are permanently submerged.
Coastal waters can be threatened by coastal eutrophication and harmful algal blooms.
In geology
The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past.
Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles.
Some of the best-studied shoreline deposits in the world are found along the former western shore of the Western Interior Seaway, a shallow sea that flooded central North America during the late Cretaceous Period (about 100 to 66 million years ago). These are beautifully exposed along the Book Cliffs of Utah and Colorado.
Geologic processes
The following articles describe the various geologic processes that affect a coastal zone:
Attrition
Currents
Denudation
Deposition
Erosion
Flooding
Longshore drift
Marine sediments
Saltation
Sea level change
eustatic
isostatic
Sedimentation
Coastal sediment supply
sediment transport
solution
subaerial processes
suspension
Tides
Water waves
diffraction
refraction
wave breaking
wave shoaling
Weathering
Wildlife
Animals
Larger animals that live in coastal areas include puffins, sea turtles and rockhopper penguins, among many others. Sea snails and various kinds of barnacles live on rocky coasts and scavenge on food deposited by the sea. Some coastal animals are used to humans in developed areas, such as dolphins and seagulls who eat food thrown for them by tourists. Since the coastal areas are all part of the littoral zone, there is a profusion of marine life found just off-coast, including sessile animals such as corals, sponges, starfish, mussels, seaweeds, fishes, and sea anemones.
There are many kinds of seabirds on various coasts. These include pelicans and cormorants, who join up with terns and oystercatchers to forage for fish and shellfish. There are sea lions on the coast of Wales and other countries.
Coastal fish
Plants
Many coastal areas are famous for their kelp beds. Kelp is a fast-growing seaweed that can grow up to half a meter a day in ideal conditions. Mangroves, seagrasses, macroalgal beds, and salt marshes are important coastal vegetation types in tropical and temperate environments. Restinga is another type of coastal vegetation.
Threats
Coasts also face many human-induced environmental impacts and coastal development hazards. The most important ones are:
Pollution which can be in the form of water pollution, nutrient pollution (leading to coastal eutrophication and harmful algal blooms), oil spills or marine debris that is contaminating coasts with plastic and other trash.
Sea level rise, and associated issues like coastal erosion and saltwater intrusion.
Pollution
The pollution of coastlines is connected to marine pollution which can occur from a number of sources: Marine debris (garbage and industrial debris); the transportation of petroleum in tankers, increasing the probability of large oil spills; small oil spills created by large and small vessels, which flush bilge water into the ocean.
Marine pollution
Marine debris
Microplastics
Sea level rise due to climate change
Global goals
International attention to address the threats of coasts has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
| Physical sciences | Oceanic and coastal landforms | null |
5237 | https://en.wikipedia.org/wiki/Catatonia | Catatonia | Catatonia is a complex syndrome, most commonly seen in people with underlying mood disorders, such as major depressive disorder, or psychotic disorders, such as schizophrenia. People with catatonia have abnormal movement and behaviors, which vary from person to person and fluctuate in intensity within a single episode. People with catatonia appear withdrawn, meaning that they do not interact with the outside world and have difficulty processing information. They may be nearly motionless for days on end or perform repetitive purposeless movements. Two people may exhibit very different sets of behaviors and both still be diagnosed with catatonia. Treatment with benzodiazepines or ECT are most effective and lead to remission of symptoms in most cases.
There are different subtypes of catatonia, which represent groups of symptoms that commonly occur together. These include stuporous/akinetic catatonia, excited catatonia, malignant catatonia, and periodic catatonia.
Catatonia has historically been related to schizophrenia (catatonic schizophrenia), but is most often seen in mood disorders. It is now known that catatonic symptoms are nonspecific and may be observed in other mental, neurological, and medical conditions.
Classification
Modern classifications
The ICD-11 is the most common manual used globally to define and diagnose illness, including mental illness. It diagnoses catatonia in someone who has three or more different symptoms associated with catatonia at one time. These symptoms are stupor, catalepsy, waxy flexibility, mutism, negativism, posturing, mannerisms, stereotypies, psychomotor agitation, grimacing, echolalia, and echopraxia. It divides catatonia into three groups based on the underlying cause: catatonia associated with another mental disorder, catatonia induced by psychoactive substances, and secondary catatonia.
The DSM-5 is the most common manual used by mental health professionals in the United States to define and diagnose different mental illnesses. The DSM-5 defines catatonia as “a syndrome characterized by lack of movement and communication, along with three or more of the following 12 behaviors; stupor, catalepsy, waxy flexibility, mutism, negativism, posturing, mannerism, stereotypy, agitation, grimacing, echolalia, or echopraxia.” As a syndrome, catatonia can only occur in people with an existing illness. The DSM-5 divides catatonia into three diagnoses. The most common of the three diagnoses is Catatonia Associated with Another Mental Disorder. Around 20% of cases are caused by an underlying medical condition and are known as Catatonic Disorder Due to Another Medical Condition. When the underlying condition is unknown, it is considered Unspecified Catatonia.
Signs and symptoms
As discussed previously, the ICD-11 and DSM-5 both require 3 or more of the symptoms defined in the table below in order to diagnose catatonia. However, each person can have a different set of symptoms that may worsen, improve, and change in appearance throughout a single episode. Symptoms may develop over hours or days to weeks.
Because most patients with catatonia have an underlying psychiatric illness, the majority will present with worsening depression, mania, or psychosis followed by catatonia symptoms. Even when patients are unable to interact, it should not be assumed that they are unaware of their surroundings, as some patients can recall their catatonic state and their actions in detail.
Subtypes
There are several subtypes of catatonia which are used currently: stuporous catatonia, excited catatonia, malignant catatonia, and periodic catatonia. Subtypes are defined by the group of symptoms and associated features that a person is experiencing or displaying. Notably, while catatonia can be divided into various subtypes, the appearance of catatonia is often dynamic and the same individual may have different subtypes at different times.
Stuporous catatonia: This form of catatonia is characterized by immobility, mutism, and a lack of response to the world around them. They may appear frozen in one position for long periods of time unable to eat, drink, or speak.
Excited catatonia: This form of catatonia is characterized by odd mannerisms and gestures, purposeless or inappropriate actions, excessive motor activity, restlessness, stereotypy, impulsivity, agitation, and combativeness. Speech and actions may be repetitive or mimic another person's. People in this state are extremely hyperactive and may have delusions and hallucinations.
Malignant catatonia: This form of catatonia is life-threatening. It is characterized by fever, dramatic and rapid changes in blood pressure, increased heart rate and respiratory rate, and excessive sweating. Laboratory tests may be abnormal.
Periodic catatonia: This form of catatonia is characterized by recurrent episodes of catatonia. Individuals will experience multiple episodes over time, without signs of catatonia in between episodes. Historically, the Wernicke-Kleist-Leonhard School considered periodic catatonia a distinct form of "non-system schizophrenia" characterized by recurrent acute phases with hyperkinetic and akinetic features and often psychotic symptoms, and the build-up of a residual state in between these acute phases, which is characterized by low-level catatonic features and aboulia of varying severity.
Causes
Catatonia can only exist if a person has another underlying illness, and can be associated with a wide range of illnesses including psychiatric disorders, medical conditions, and substance use.
Psychiatric conditions
Mood disorders such as bipolar disorder and depression are the most common conditions underlying catatonia. Other psychiatric conditions that can cause catatonia include schizophrenia and other primary psychotic disorders, autism spectrum disorders, ADHD, and post-traumatic stress disorder. In autism, people tend to present with catatonia during periods of regression.
Psychodynamic theorists have interpreted catatonia as a defense against the potentially destructive consequences of responsibility, and the passivity of the disorder provides relief.
Medical conditions
Catatonia is also seen in many medical disorders, including encephalitis, meningitis, autoimmune disorders, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia).
Neurological disorders
Catatonia can occur due to a number of neurological conditions. For instance, certain types of encephalitis can cause catatonia. Anti-NMDA receptor encephalitis is a form of autoimmune encephalitis which is known to cause catatonia in some people. Additionally, encephalitis has been reported to cause catatonia in people who have encephalitis due to HIV and herpes simplex virus (HSV). The research is limited, but some evidence suggests that people can develop catatonia after traumatic brain injury without a primary psychiatric disorder. Similarly, there are several case reports suggesting that people have experienced catatonia after a stroke, with some people having catatonia-associated symptoms that were unexplainable by their stroke itself, and which improved after treatment with benzodiazepines. Parkinson's disease can cause catatonia for some people by impairing their ability to produce and secrete dopamine, a neurotransmitter which is thought to contribute to motor dysfunction in people with catatonia.
Metabolic and endocrine disorders
Abnormal thyroid function can cause catatonia when the thyroid overproduces or underproduces thyroid hormones. This is thought to occur due to the impact of thyroid hormones on metabolism, including in the cells of the nervous system. Abnormal electrolyte levels have also been shown to cause catatonia in rare cases. Most notably, low levels of sodium in the blood can cause catatonia in some people.
Autoimmune disorders
As discussed previously, anti-NMDA receptor encephalitis is a form of autoimmune encephalitis which can cause catatonia. Additionally, autoimmune diseases that are not exclusively neurological can cause neurological and psychiatric symptoms including catatonia. For instance, systemic lupus erythematosus can cause catatonia and is thought to do so by causing inflammation in the blood vessels of the brain or possibly by the body's own antibodies damaging neurons.
Infectious diseases
Certain types of infections are known to cause catatonia either through directly impairing brain function or by making a person more likely to contract diseases that impair brain function. HIV and AIDS can cause catatonia, most likely by predisposing one to infections in the brain, including different types of viral encephalitis. Borrelia burgdorferi causes Lyme disease, which has been shown to cause catatonia by infecting the brain and causing encephalitis.
Pharmacological causes
Use of NMDA receptor antagonists including ketamine and phencyclidine (PCP) can lead to catatonia-like states. Information about these effects has improved scientific understanding of the role of glutamate in catatonia. High dose and chronic use of stimulants like cocaine and amphetamines can lead to cases of catatonia, typically associated with psychosis. This is thought to be due to changes in the function of circuits of the brain associated with dopamine release.
Pathogenesis
The mechanisms in the brain that cause catatonia are poorly understood. Currently, there are two main categories of explanations for what may be happening in the brain to cause catatonia. The first is that there is disruption of normal neurotransmitter production or release in certain areas of the brain preventing normal function of those areas of the brain, leading to behavioral and motor symptoms associated with catatonia. The second claims that disruption of communication between different areas of the brain causes catatonia.
Neurotransmitters
The neurotransmitters that are most strongly associated with catatonia are GABA, dopamine, and glutamate. GABA is the main inhibitory neurotransmitter of the brain, meaning that it slows down the activity of the systems of the brain it acts on. In catatonia, people have low levels of GABA, which causes the areas of the brain it acts on, especially those responsible for inhibition, to be overly activated. This is thought to cause the behavioral symptoms associated with catatonia, including withdrawal. Dopamine can increase or decrease the activity of the area of the brain it acts on depending on where in the brain it is. Dopamine is lower than normal in people with catatonia, which is thought to cause many of the motor symptoms, because dopamine is the main neurotransmitter which activates the parts of the brain responsible for movement. Glutamate is an excitatory neurotransmitter, meaning that it increases the activity of the areas of the brain it acts on. Notably, glutamate tells the neuron it acts on to fire by binding to the NMDA receptor. People with anti-NMDA receptor encephalitis can develop catatonia because their own antibodies attack the NMDA receptor, which reduces the brain's ability to activate different areas using glutamate.
Neurological pathways
Several pathways in the brain have been studied which seem to contribute to catatonia when they are not functioning properly. However, these studies were unable to determine if the abnormalities they observed were the cause of catatonia or if the catatonia caused the abnormalities. Furthermore, it has also been hypothesized that pathways that connect the basal ganglia with the cortex and thalamus are involved in the development of catatonia.
Diagnosis
There is not yet a definitive consensus regarding diagnostic criteria of catatonia. In the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) and the eleventh edition of the World Health Organization's International Classification of Diseases (ICD-11, 2022), the classification is more homogeneous than in earlier editions. Prominent researchers in the field have other suggestions for diagnostic criteria. Still, diagnosing catatonia can be challenging. Evidence suggests that there is as high as a 15-day average delay to diagnosis for people with catatonia.
DSM-5 classification
The DSM-5 does not classify catatonia as an independent disorder, but rather it classifies it as catatonia associated with another mental disorder, due to another medical condition, or as unspecified catatonia.
Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, medical condition, or unspecified:
stupor: no psycho-motor activity; not actively relating to the environment
catalepsy: passive induction of a posture held against gravity
waxy flexibility: allowing positioning by an examiner and maintaining that position
mutism: no, or very little, verbal response (exclude if known aphasia)
negativism: opposition or no response to instructions or external stimuli
posturing: spontaneous and active maintenance of a posture against gravity
mannerisms that are odd, circumstantial caricatures of normal actions
stereotypy: repetitive, abnormally frequent, non-goal-directed movements
agitation, not influenced by external stimuli
grimacing: keeping a fixed facial expression
echolalia: mimicking another's speech
echopraxia: mimicking another's movements.
Other disorders (additional code 293.89 [F06.1] to indicate the presence of the co-morbid catatonia):
Catatonia associated with autism spectrum disorder
Catatonia associated with schizophrenia spectrum and other psychotic disorders
Catatonia associated with brief psychotic disorder
Catatonia associated with schizophreniform disorder
Catatonia associated with schizoaffective disorder
Catatonia associated with a substance-induced psychotic disorder
Catatonia associated with bipolar and related disorders
Catatonia associated with major depressive disorder
Catatonic disorder due to another medical condition
If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered.
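As a rough illustration of the "three or more of the twelve signs" rule listed above, the following sketch simply counts how many signs have been observed; it is a counting exercise only, not a clinical or diagnostic tool.

```python
# Rough illustration of the DSM-5 "three or more of twelve signs" count
# described above. This is a counting sketch only, not a diagnostic tool.

DSM5_SIGNS = {
    "stupor", "catalepsy", "waxy flexibility", "mutism", "negativism",
    "posturing", "mannerisms", "stereotypy", "agitation", "grimacing",
    "echolalia", "echopraxia",
}

def meets_symptom_count(observed: set) -> bool:
    """True if at least three of the twelve listed signs are present."""
    return len(observed & DSM5_SIGNS) >= 3

print(meets_symptom_count({"stupor", "mutism", "posturing"}))   # True
print(meets_symptom_count({"mutism", "grimacing"}))             # False
```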
ICD-11 classification
In the ICD-11, catatonia is defined as a syndrome of primarily psychomotor disturbances that is characterized by the simultaneous occurrence of several symptoms such as stupor, catalepsy, waxy flexibility, mutism, negativism, posturing, mannerisms, stereotypies, psychomotor agitation, grimacing, echolalia, and echopraxia. Catatonia may occur in the context of specific mental disorders, including mood disorders, schizophrenia or other primary psychotic disorders, and neurodevelopmental disorders, and may be induced by psychoactive substances, including medications. Catatonia may also be caused by a medical condition not classified under mental, behavioral, or neurodevelopmental disorders.
Assessment/physical
Catatonia is often overlooked and under-diagnosed. Patients with catatonia most commonly have an underlying psychiatric disorder. For this reason, physicians may overlook signs of catatonia due to the severity of the psychosis the patient is presenting with. Furthermore, the patient may not be presenting with the common signs of catatonia such as mutism and posturing. Additionally, the motor abnormalities seen in catatonia are also present in psychiatric disorders. For example, a patient with mania will show increased motor activity and may not be considered for a diagnosis of excited catatonia, even if symptoms are developing that are not associated with mania. One way in which physicians can differentiate between the two is to observe the motor abnormality. Patients with mania present with increased goal-directed activity. On the other hand, the increased activity in catatonia is not goal-directed and often repetitive.
Catatonia is a clinical diagnosis and there is no specific laboratory test to diagnose it. However, certain testing can help determine what is causing the catatonia. An EEG will likely show diffuse slowing. If seizure activity is driving the syndrome, then an EEG would also be helpful in detecting this. CT or MRI will not show catatonia; however, they might reveal abnormalities that might be leading to the syndrome. Metabolic screens, inflammatory markers, or autoantibodies may reveal reversible medical causes of catatonia.
Vital signs should be frequently monitored as catatonia can progress to malignant catatonia which is life-threatening. Malignant catatonia is characterized by fever, hypertension, tachycardia, and tachypnea.
Rating scale
Various rating scales for catatonia have been developed; however, their utility for clinical care has not been well established. The most commonly used scale is the Bush-Francis Catatonia Rating Scale (BFCRS). The scale is composed of 23 items, with the first 14 items being used as the screening tool. If 2 of the 14 are positive, this prompts further evaluation and completion of the remaining 9 items.
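The screening logic just described can be summarised in a short sketch: two or more positive items among the first 14 prompt completion of the full scale. The example item values below are hypothetical placeholders, since the scale's content is not reproduced here.

```python
# Sketch of the Bush-Francis screening rule described above: the first 14 items
# serve as a screen, and 2 or more positive items prompt completion of all 23.
# The example item values are hypothetical placeholders.

def needs_full_bfcrs(screening_items: list) -> bool:
    """screening_items: presence (True/False) of each of the first 14 items."""
    assert len(screening_items) == 14, "the BFCRS screen uses the first 14 items"
    return sum(screening_items) >= 2

example_screen = [True, False, True] + [False] * 11   # two positive items
print(needs_full_bfcrs(example_screen))               # True -> complete all 23 items
```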
A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. While useful in the past, barbiturates are no longer commonly used in psychiatry; the remaining options are therefore benzodiazepines or ECT.
Laboratory findings
Certain lab findings are common with malignant catatonia that are uncommon in other forms of catatonia. These lab findings include: leukocytosis, elevated creatine kinase, low serum iron. The signs and symptoms of malignant catatonia overlap significantly with neuroleptic malignant syndrome (NMS). Therefore, the results of laboratory tests need to be considered in the context of clinical history, review of medications, and physical exam findings.
Differential diagnosis
The differential diagnosis of catatonia is extensive as signs and symptoms of catatonia may overlap significantly with those of other conditions. Therefore, a careful and detailed history, medication review, and physical exam are key to diagnosing catatonia and differentiating it from other conditions. Furthermore, some of these conditions can themselves lead to catatonia. The differential diagnosis is as follows:
Neuroleptic malignant syndrome (NMS) and catatonia are both life-threatening conditions that share many of the same characteristics including fever, autonomic instability, rigidity, and delirium. Lab values of low serum iron, elevated creatine kinase, and white blood cell count are also shared by the two disorders, further complicating the diagnosis. There are features of malignant catatonia (posturing, impulsivity, etc.) that are absent from NMS and the lab results are not as consistent in malignant catatonia as they are in NMS. Some experts consider NMS to be a drug-induced condition associated with antipsychotics, particularly first generation antipsychotics, but it has not been established as a subtype of catatonia. Therefore, discontinuing antipsychotics and starting benzodiazepines is a treatment for this condition, and similarly it is helpful in catatonia as well.
Anti-NMDA receptor encephalitis is an autoimmune disorder characterized by neuropsychiatric features and the presence of IgG antibodies. The presentation of anti-NMDA encephalitis has been categorized into 5 phases: prodromal phase, psychotic phase, unresponsive phase, hyperkinetic phase, and recovery phase. The psychotic phase progresses into the unresponsive phase characterized by mutism, decreased motor activity, and catatonia.
Both serotonin syndrome and malignant catatonia may present with signs and symptoms of delirium, autonomic instability, hyperthermia, and rigidity, again similar to the presentation of NMS. However, patients with serotonin syndrome have a history of ingestion of serotonergic drugs (e.g. SSRIs). These patients will also present with hyperreflexia, myoclonus, nausea, vomiting, and diarrhea.
Malignant hyperthermia and malignant catatonia share features of autonomic instability, hyperthermia, and rigidity. However, malignant hyperthermia is a hereditary disorder of skeletal muscle that makes these patients susceptible to exposure to halogenated anesthetics and/or depolarizing muscle relaxants like succinylcholine. Malignant hyperthermia most commonly occurs in the intraoperative or postoperative periods. Other signs and symptoms of malignant hyperthermia include metabolic and respiratory acidosis, hyperkalemia, and cardiac arrhythmias.
Akinetic mutism is a neurological disorder characterized by a decrease in goal-directed behavior and motivation; however, the patient has an intact level of consciousness. Patients may present with apathy, and may seem indifferent to pain, hunger, or thirst. Akinetic mutism has been associated with structural damage in a variety of brain areas. Akinetic mutism and catatonia may both manifest with immobility, mutism, and waxy flexibility. Differentiating both disorders is the fact that akinetic mutism does not present with echolalia, echopraxia, or posturing. Furthermore, it is not responsive to benzodiazepines as is the case for catatonia.
Selective mutism has an anxious etiology but has also been associated with personality disorders. Patients with this disorder fail to speak with some individuals but will speak with others. Likewise, they may refuse to speak in certain situations; for example, a child who refuses to speak at school but is conversational at home. This disorder is distinguished from catatonia by the absence of any other signs/symptoms.
Nonconvulsive status epilepticus is seizure activity with no accompanying tonic-clonic movements. It can present with stupor, similar to catatonia, and they both respond to benzodiazepines. Nonconvulsive status epilepticus is diagnosed by the presence of seizure activity seen on electroencephalogram (EEG). Catatonia, on the other hand, is associated with normal EEG or diffuse slowing.
Delirium is characterized by fluctuating disturbed perception and consciousness in the ill individual. It has hypoactive and hyperactive or mixed forms. People with hyperactive delirium present similarly to those with excited catatonia and have symptoms of restlessness, agitation, and aggression. Those with hypoactive delirium present similarly to those with stuporous catatonia, appearing withdrawn and quiet. However, catatonia also includes other distinguishing features including posturing and rigidity as well as a positive response to benzodiazepines.
Patients with locked-in syndrome present with immobility and mutism; however, unlike patients with catatonia who are unmotivated to communicate, patients with locked-in syndrome try to communicate with eye movements and blinking. Furthermore, locked-in syndrome is caused by damage to the brainstem.
Stiff-person syndrome and catatonia are similar in that they may both present with rigidity, autonomic instability, and a positive response to benzodiazepines. However, stiff-person syndrome may be associated with anti-glutamic acid decarboxylase (anti-GAD) antibodies and other catatonic signs such as mutism and posturing are not part of the syndrome.
Untreated late-stage Parkinson's disease may present similarly to stuporous catatonia with symptoms of immobility, rigidity, and difficulty speaking. Further complicating the diagnosis is the fact that many patients with Parkinson's disease will have major depressive disorder, which may be the underlying cause of catatonia. Parkinson's disease can be distinguished from catatonia by a positive response to levodopa. Catatonia, on the other hand, will show a positive response to benzodiazepines.
Extrapyramidal side effects of antipsychotic medication, especially dystonia and akathisia, can be difficult to distinguish from catatonic symptoms, or may confound them in the psychiatric setting. Extrapyramidal motor disorders usually do not involve social symptoms like negativism, while individuals with catatonic excitement typically do not have the physically painful compulsion to move that is seen in akathisia.
Certain stimming behaviors and stress responses in individuals with autism spectrum disorders can present similarly to catatonia. In autism spectrum disorders, chronic catatonia is distinguished by a lasting deterioration of adaptive skills from the background of pre-existing autistic symptomatology that cannot be easily explained. Acute catatonia is usually clearly distinguishable from autistic symptoms.
The diagnostic entities of obsessional slowness and psychogenic parkinsonism show overlapping features with catatonia, such as motor slowness, gegenhalten (oppositional paratonia), mannerisms, and reduced or absent speech. However, psychogenic parkinsonism involves tremor which is unusual in catatonia. Obsessional slowness is a controversial diagnosis, with presentations ranging from severe but common manifestations of obsessive compulsive disorder to catatonia.
Down Syndrome Disintegrative Disorder (or Down Syndrome Regression Disorder, DSDD / DSRD) is a chronic condition characterized by loss of previously acquired adaptive, cognitive and social functioning occurring in persons with Down Syndrome, usually during adolescence or early adulthood. The clinical picture is variable, but often includes catatonic signs, which is why it was called "catatonic psychosis" in initial reports in 1946. DSDD seems to phenotypically overlap with obsessional slowness (see above) and catatonia-like regression occurring in ASD.
Treatment
Treating catatonia effectively requires treating the catatonia itself, treating the underlying condition, and helping the person with their basic needs, like eating, drinking, and staying clean and safe, while they are withdrawn and incapable of caring for themselves.
Catatonia-specific treatments
The specifics of treating catatonia itself can vary from region to region, hospital to hospital, and individual to individual, but treatment typically involves the use of benzodiazepines. In fact, in some cases it is unclear whether a person has catatonia or another condition which may present similarly. In these cases a "benzodiazepine challenge" is often done. During a "benzodiazepine challenge" a healthcare provider will give a moderate dose of a benzodiazepine to the patient and monitor them. If a person has catatonia they will often have improvements in their symptoms within 15 to 30 minutes. If the person does not improve within 30 minutes they are given a second dose and the process is repeated once more. If the person responds to either of the doses then they can be given benzodiazepines at a consistent dose and timing until their catatonia resolves. Depending on the person, the dose may need to be reduced slowly over time in order to prevent recurrence of symptoms.
ECT is also commonly used to treat catatonia in people who do not improve with medication alone or whose symptoms recur whenever the dose of medication is reduced. ECT is usually administered with multiple sessions per week over two to four weeks. ECT has a success rate of 80% to 100%. ECT is effective for all subtypes of catatonia; however, people who have catatonia with an underlying neurological condition show less improvement with ECT treatment.
Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system, reduced incidence of psychosis and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it produces its therapeutic effects by producing glutamate antagonism via modulation of AMPA receptors.
Non-specific aspects of treatment
Treating the underlying condition
There are many medications that are known to cause catatonia in some people including steroids, stimulants, anticonvulsants, neuroleptics or dopamine blockers. If a person has catatonia and is on these medications, they should be considered as a potential cause if another cause is not apparent and discontinued if possible.
Antipsychotics are sometimes used in those with a co-existing psychosis, however they should be used with care as they may worsen catatonia and have a risk of neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic. There is evidence that clozapine works better than other antipsychotics to treat catatonia.
Supportive care
Supportive care is required in those with catatonia. This includes monitoring vital signs and fluid status and, in those with chronic symptoms, maintaining nutrition and hydration, giving medications to prevent blood clots, and taking measures to prevent the development of pressure ulcers.
Prognosis
Twenty-five percent of psychiatric patients with catatonia will have more than one episode throughout their lives. Treatment response for patients with catatonia is 50–70%, with treatment failure being associated with a poor prognosis. Many of these patients will require long-term and continuous mental health care. The prognosis for people with catatonia due to schizophrenia is much worse compared to other causes. In cases of catatonia that develop into malignant catatonia, the mortality rate is as high as 20%.
Complications
Patients may experience several complications from being in a catatonic state. The nature of these complications will depend on the type of catatonia being experienced by the patient. For example, patients presenting with withdrawn catatonia may have refusal to eat which will in turn lead to malnutrition and dehydration. Furthermore, if immobility is a symptom the patient is presenting with, then they may develop pressure ulcers, muscle contractions, and are at risk of developing deep vein thrombosis (DVT) and pulmonary embolus (PE). Patients with excited catatonia may be aggressive and violent, and physical trauma may result from this. Catatonia may progress to the malignant type which will present with autonomic instability and may be life-threatening. Other complications also include the development of pneumonia and neuroleptic malignant syndrome.
Epidemiology
Catatonia has been historically studied in psychiatric patients. Catatonia is under-recognized because the features are often mistaken for other disorders including delirium or the negative symptoms of schizophrenia. The prevalence has been reported to be as high as 10% in those with acute psychiatric illnesses, and 9–30% in the setting of inpatient psychiatric care. The incidence of catatonia is 10.6 episodes per 100 000 person-years, which essentially means that in a group of 100,000 people, the group as a whole would experience 10 to 11 episodes of catatonia per year. Catatonia can occur at any age, but is most commonly seen in adolescence or young adulthood or in older adults with existing medical conditions. It occurs in males and females in approximately equal numbers. Around 20% of all catatonia cases can be attributed to a general medical condition.
History
Ancient history
There have been reports of stupor-like and catatonia-like states in people throughout the history of psychiatry. In ancient Greece, the first physician to document stupor-like or catatonia-like states was Hippocrates, in his Aphorisms. He never defined the syndrome, but seemingly observed these states in people he was treating for melancholia. In ancient China, the first descriptions of people in such states appear in the Huangdi Neijing (The Yellow Emperor's Inner Canon), the book which forms the basis of Traditional Chinese Medicine. It is thought to have been compiled by many people over the course of centuries during the Warring States Period (475-221 BCE) and the early Han Dynasty (206 BCE-220 CE).
Modern history
The term “catatonia” was first used by German psychiatrist Karl Ludwig Kahlbaum in 1874, in his book Die Katatonie oder das Spannungsirresein, which translates to "Catatonia or Tension Insanity". He viewed catatonia as its own illness, which would get worse over time in stages of mania, depression, and psychosis leading to dementia. This work heavily influenced another German psychiatrist, Emil Kraepelin, who was the first to classify catatonia as a syndrome. Kraepelin associated catatonia with a psychotic disorder called dementia praecox, which is no longer used as a diagnosis, but heavily informed the development of the concept of schizophrenia.
Kraepelin's work influenced two other notable psychiatrists, Karl Leonhard and Max Fink, and their colleagues to expand the concept of catatonia as a syndrome which could occur in the setting of many mental illnesses, not just psychotic disorders. They also laid the groundwork to describe different subtypes of catatonia still used today, including Stuporous Catatonia, Excited Catatonia, Malignant Catatonia, and Periodic Catatonia. Additionally, Leonhard and his colleagues categorized catatonia as either systematic or unsystematic, based on whether or not symptoms happened according to consistent and predictable patterns. These ways of thinking shaped the way that psychologists and psychiatrists thought of catatonia well into the 20th century. In fact, catatonia was a subtype of schizophrenia as recently as the DSM-III, and was not revised to be able to be applied to mood disorders until 1994 with the release of the DSM-IV.
In the latter half of the 20th century, clinicians observed that catatonia occurred in various psychiatric and medical conditions, not exclusively in schizophrenia. Max Fink and colleagues advocated for recognizing catatonia as an independent syndrome, highlighting its frequent association with mood disorders and responsiveness to treatments like benzodiazepines and ECT.
Society and culture
Popular conceptions and origins
Catatonia, historically misunderstood, has been subject to shifting perceptions in society. As discussed previously, since the 19th century it was often linked exclusively to schizophrenia, perpetuating misconceptions. These historical misunderstandings have shaped the public opinions on catatonia. This has contributed to a lack of understanding about catatonia and its broader association with other mental disorders and medical conditions.
Popular culture and media have played a significant role in shaping societal perceptions of catatonia. In many cases, media portrayals reduce it to a stereotypical "frozen state," similar to a coma, failing to capture the complexity of symptoms like stupor, agitation, and mutism. Such oversimplifications contribute to public misperceptions and get in the way of people receiving the care they need.
| Biology and health sciences | Mental disorders | Health |
5244 | https://en.wikipedia.org/wiki/Cipher | Cipher | In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
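The codebook idea can be made concrete with a toy lookup table. Apart from the "UQJHSE" pairing quoted above, the entries below are invented for illustration only.

```python
# Toy codebook: whole phrases map to fixed code groups, in the sense described
# above. Only the "UQJHSE" pairing comes from the text; the rest is invented.

CODEBOOK = {
    "Proceed to the following coordinates": "UQJHSE",
    "Await further instructions": "XKLMPA",   # invented entry for illustration
}
DECODE_BOOK = {code: phrase for phrase, code in CODEBOOK.items()}

plain = "Proceed to the following coordinates"
codetext = CODEBOOK[plain]
print(codetext)                # UQJHSE
print(DECODE_BOOK[codetext])   # Proceed to the following coordinates
```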
The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
Most modern ciphers can be categorized in several ways:
By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality.
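As an illustration of the public/private key property just described, the following Python sketch uses the third-party cryptography package (an assumed choice; the article does not reference any particular library): anyone holding the public key can encrypt, but only the holder of the private key can decrypt.

# Hedged sketch of an asymmetric cipher using the third-party "cryptography"
# package; the message text is invented for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()            # may be published freely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet at noon", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)        # only the key owner can decrypt
assert plaintext == b"meet at noon"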
Etymology
Originating from the Arabic word for zero صفر (ṣifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.
The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
Versus codes
In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams.
Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively.
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
Types
There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
Historical
The Caesar cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter of the alphabet three places, wrapping letters from the end of the alphabet back to the beginning, to write to Marcus Tullius Cicero in approximately 50 BC.
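The following Python sketch shows a shift cipher of this kind; the three-place shift matches the description above, while the sample message is invented for illustration.

# Minimal sketch of a Caesar-style shift cipher (shift of three, wrapping
# past 'Z' back to 'A'); the plaintext below is purely illustrative.
def caesar(text: str, shift: int = 3) -> str:
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)            # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("ATTACK AT DAWN"))       # -> DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))   # decryption reverses the shift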
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.
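A monoalphabetic substitution such as the "GOOD DOG" example above can be sketched as a simple lookup table; only the three substitutions quoted in the text are filled in here.

# Sketch of the simple substitution quoted above: "L" stands in for "O",
# "P" for "G", and "X" for "D"; unmapped characters pass through unchanged.
TABLE = {"G": "P", "O": "L", "D": "X"}

def substitute(text: str) -> str:
    return "".join(TABLE.get(ch, ch) for ch in text)

print(substitute("GOOD DOG"))   # -> PLLX XLP
# A transposition, by contrast, leaves the letters unchanged and only
# rearranges their order, e.g. "GOOD DOG" -> "DGOGDOO".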
In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War.
Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad, but these have other disadvantages.
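A polyalphabetic cipher of the Vigenère type can be sketched as follows; the keyword "LEMON" is a common textbook choice and is not taken from the article.

# Sketch of Vigenère-style polyalphabetic substitution: each letter is
# shifted by an amount taken from the repeating keyword.
def vigenere(plaintext: str, key: str) -> str:
    out, i = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(key[i % len(key)].upper()) - ord('A')
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            i += 1                    # advance the key only on letters
        else:
            out.append(ch)
    return "".join(out)

print(vigenere("ATTACK AT DAWN", "LEMON"))   # -> LXFOPV EF RNHR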
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
Modern
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By type of key used ciphers are divided into:
symmetric key algorithms (private-key cryptography), where the same key is used for encryption and decryption, and
asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (Advanced Encryption Standard) was beneficial because it aimed to overcome the flaws in the design of DES (Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.[12]
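For a concrete, hedged illustration of the symmetric case, the sketch below uses Fernet from the third-party cryptography package (which applies AES internally); sender and receiver must hold the same secret key, exactly as described above. The message is invented for illustration.

# Sketch of symmetric encryption with a single shared key, using the
# third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # shared secret, agreed in advance
ciphertext = Fernet(key).encrypt(b"meet at the usual place")
plaintext = Fernet(key).decrypt(ciphertext)    # the same key decrypts
assert plaintext == b"meet at the usual place"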
Ciphers can be distinguished into two types by the type of input data:
block ciphers, which encrypt block of data of fixed size, and
stream ciphers, which encrypt continuous streams of data.
Key size and vulnerability
In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count:
Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of exhaustive search for a key (i.e., "brute force" attack) substantially.
Key size, i.e., the size of key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search to the point where it becomes impractical to crack encryption directly.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, thus decide the key length accordingly.
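A back-of-the-envelope sketch of the relationship between key size and exhaustive search follows; the attacker's guess rate is an arbitrary assumption chosen only for illustration.

# Rough sketch: each extra key bit doubles the key space, so brute-force
# cost grows exponentially. The guess rate below is a made-up assumption.
GUESSES_PER_SECOND = 1e12                       # hypothetical attacker

for bits in (40, 56, 128, 256):
    keyspace = 2 ** bits
    years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{bits:>3}-bit key: about {years:.2e} years to try every key")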
Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad.
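A one-time pad can be sketched as a bitwise XOR of the message with an equally long random key; the message below is invented, and in practice the pad must be truly random, kept secret, and never reused.

# Minimal one-time pad sketch: key as long as the plaintext, used once.
import os

message = b"meet at dawn"
pad = os.urandom(len(message))                           # random key, same length
ciphertext = bytes(m ^ k for m, k in zip(message, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
assert recovered == message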
| Technology | Computer security | null |
5267 | https://en.wikipedia.org/wiki/Constellation | Constellation | A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.
The first constellations were likely defined in prehistory. People used them to relate stories of their beliefs, experiences, creation, and mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.
Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the teapot within the constellation Sagittarius, or the big dipper in the constellation of Ursa Major.
Terminology
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. In the 20th century, the International Astronomical Union (IAU) recognized 88 constellations.
A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations north or south (respectively) of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic (or zodiac), ranging between 23.5° north and 23.5° south.
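The circumpolar condition can be sketched with a short calculation: an object never sets when its declination exceeds 90° minus the observer's (northern) latitude, and correspondingly in the south. The values below are illustrative.

# Sketch of the circumpolar test for a given observing latitude.
def is_circumpolar(declination_deg: float, latitude_deg: float) -> bool:
    if latitude_deg >= 0:                              # northern observer
        return declination_deg > 90 - latitude_deg
    return declination_deg < -(90 + latitude_deg)      # southern observer

# Polaris (declination roughly +89°) seen from latitude 52° N never sets:
print(is_circumpolar(89.0, 52.0))    # True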
Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring common proper motions of individual stars by accurate astrometry and their radial velocities by astronomical spectroscopy.
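The scale of this slow drift can be sketched with simple arithmetic; the proper-motion value used here is purely illustrative rather than that of any particular star.

# Rough sketch: total apparent displacement of a star over a long interval,
# given its proper motion. The numbers are illustrative assumptions.
pm_mas_per_year = 500          # hypothetical proper motion, milliarcseconds/year
years = 100_000

shift_arcsec = pm_mas_per_year / 1000 * years
shift_degrees = shift_arcsec / 3600
print(f"about {shift_degrees:.1f} degrees of apparent motion")   # ~13.9 degrees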
The 88 constellations recognized by the IAU, as well as those recognized by cultures throughout history, are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient Near Eastern and Mediterranean mythologies. Some of these stories seem to relate to the appearance of the constellations; for example, Orion was slain by Scorpius, and their constellations appear at opposite times of year.
Observation
Constellation positions change throughout the year because night on Earth falls on gradually different portions of its orbit around the Sun. As Earth rotates toward the east, the celestial sphere appears to rotate west, with stars circling counterclockwise around the northern pole star and clockwise around the southern pole star.
Because of Earth's 23.5° axial tilt, the zodiac is distributed equally across hemispheres (along the ecliptic), approximating a great circle. Zodiacal constellations of the northern sky are Pisces, Aries, Taurus, Gemini, Cancer, and Leo. In the southern sky are Virgo, Libra, Scorpius, Sagittarius, Capricornus, and Aquarius. The zodiac appears directly overhead from latitudes of 23.5° north to 23.5° south, depending on the time of year. In summer, the ecliptic appears higher up in the daytime and lower at night, while in winter the reverse is true, for both hemispheres.
The galactic plane of the Milky Way is inclined about 60° to the ecliptic, crossing it between Taurus and Gemini (north) and between Scorpius and Sagittarius (south, near which the Galactic Center can be found). The galaxy appears to pass through Aquila (near the celestial equator) and the northern constellations Cygnus, Cassiopeia, Perseus, Auriga, and Orion (near Betelgeuse), as well as Monoceros (near the celestial equator), and the southern constellations Puppis, Vela, Carina, Crux, Centaurus, Triangulum Australe, and Ara.
Northern hemisphere
Polaris, being the North Star, is the approximate center of the northern celestial hemisphere. It is part of Ursa Minor, constituting the end of the Little Dipper's handle.
From latitudes of around 35° north, in January, Ursa Major (containing the Big Dipper) appears to the northeast, while Cassiopeia is in the northwest. To the west are Pisces (above the horizon) and Aries. To the southwest, Cetus is near the horizon. High up and to the south are Orion and Taurus. To the southeast above the horizon is Canis Major. Appearing above and to the east of Orion is Gemini; also in the east (and progressively closer to the horizon) are Cancer and Leo. In addition to Taurus, Perseus and Auriga appear overhead.
From the same latitude, in July, Cassiopeia (low in the sky) and Cepheus appear to the northeast. Ursa Major is now in the northwest. Boötes is high up in the west. Virgo is to the west, with Libra southwest and Scorpius south. Sagittarius and Capricorn are southeast. Cygnus (containing the Northern Cross) is to the east. Hercules is high in the sky along with Corona Borealis.
Southern hemisphere
January constellations include Pictor and Reticulum (near Hydrus and Mensa, respectively).
In July, Ara (adjacent to Triangulum Australe) and Scorpius can be seen.
Constellations near the pole star include Chamaeleon, Apus and Triangulum Australe (near Centaurus), Pavo, Hydrus, and Mensa.
Sigma Octantis is the closest star approximating a southern pole star, but is faint in the night sky. Thus, the pole can be triangulated using the constellation Crux as well as the stars Alpha and Beta Centauri (about 30° counterclockwise from Crux) of the constellation Centaurus (arching over Crux).
History of the early constellations
Lascaux Caves, southern France
It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
Mesopotamia
Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Many of the Mesopotamian constellations later reappeared among the classical Greek constellations.
Ancient Near East
The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.
The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.
Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish "the bier" actually corresponds to Ursa Major. The term Mazzaroth, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations.
Classical antiquity
There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
In the Ptolemaic Kingdom, native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.
Ancient China
Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.
Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered more reliable.
A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it is done accurately based on observations, and it shows the 1054 supernova in Taurus.
Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and Johann Adam Schall von Bell, the German Jesuit, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Early modern astronomy
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.
Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially accepted boundaries along designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina.
The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
Origin of the southern constellations
The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonian, Egyptian, Greek, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756.
Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
88 modern constellations
A list of 88 constellations was produced for the IAU in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a small refracting telescope.
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the IAU formally accepted the 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo, or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
Symbols
The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
Dark cloud constellations
The Great Rift, a series of dark patches in the Milky Way, is most visible in the southern sky. Some cultures have discerned shapes in these patches. Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars.
List of dark cloud constellations
Great Rift (astronomy)
Cygnus Rift
Serpens–Aquila Rift
Dark Horse (astronomy)
Rho Ophiuchi cloud complex
Emu in the sky
| Physical sciences | Constellations | null |
5272 | https://en.wikipedia.org/wiki/Printer%20%28computing%29 | Printer (computing) | In computing, a printer is a peripheral machine which makes a durable representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.
History
The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.
The first patented printing mechanism for applying a marking medium to a recording medium, or more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was developed in 1962 by C. R. Winston of Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.
The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.
The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.
The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in the Apple LaserWriter the following year, set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace.
The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today.
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being fused deposition modeling.
Types
Personal printer
Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. They are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high. However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked printer
Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm.
An ID Card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks. This is either a direct to card printer (the more feasible option) or a retransfer printer.
Virtual printer
A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.
Barcode printer
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.
3D printer
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
ID card printer
A card printer is an electronic desktop printer with single card feeders which print and personalize plastic cards. In this respect they differ from, for example, label printers which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards. This is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate a card printer from the more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thickness and dimensions.
The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:
Thermal transfer: Mainly used to personalize pre-printed plastic cards in monochrome. The color is "transferred" from the (monochrome) color ribbon.
Dye sublimation: This process uses four panels of color according to the CMYK color ribbon. The card to be printed passes under the print head several times, each time with the corresponding ribbon panel. Each color in turn is diffused (sublimated) directly onto the card. Thus it is possible to produce a high depth of color (up to 16 million shades) on the card. Afterwards a transparent overlay (O), also known as a topcoat (T), is placed over the card to protect it from mechanical wear and tear and to render the printed image UV resistant.
Reverse image technology: The standard for high-security card applications that use contact and contactless smart chip cards. The technology prints images onto the underside of a special film that fuses to the surface of a card through heat and pressure. Since this process transfers dyes and resins directly onto a smooth, flexible film, the print-head never comes in contact with the card surface itself. As such, card surface interruptions such as smart chips, ridges caused by internal RFID antennae, and debris do not affect print quality. Even printing over the edge is possible.
Thermal rewrite print process: In contrast to the majority of other card printers, in the thermal rewrite process the card is not personalized through the use of a color ribbon, but by activating a thermal-sensitive foil within the card itself. These cards can be repeatedly personalized, erased and rewritten. The most frequent use of these is in chip-based student identity cards, whose validity changes every semester.
Common printing problems
Many printing problems are caused by physical defects in the card material itself, such as deformation or warping of the card that is fed into the machine in the first place. Printing irregularities can also result from chip or antenna embedding that alters the thickness of the plastic and interferes with the printer's effectiveness. Other issues are often caused by operator errors, such as users attempting to feed non-compatible cards into the card printer, while other printing defects may result from environmental abnormalities such as dirt or contaminants on the card or in the printer. Reverse transfer printers are less vulnerable to common printing problems than direct-to-card printers, since with these printers the card does not come into direct contact with the printhead.
Variations
Broadly speaking there are three main types of card printers, differing mainly by the method used to print onto the card. They are:
Near to Edge: This term designates the cheapest type of printing by card printers. These printers print up to 5 mm from the edge of the card stock.
Direct to Card: Also known as "Edge to Edge Printing". The print-head comes into direct contact with the card. This printing type is the most popular nowadays, mostly due to cost. The majority of identification card printers today are of this type.
Reverse Transfer: Also known as "High Definition Printing" or "Over the Edge Printing". The print-head prints onto a transfer film backwards (hence "reverse"), and then the printed film is rolled onto the card with intense heat (hence "transfer"). The term "over the edge" refers to the fact that when the printer prints onto the film it has a "bleed", and when the film is rolled onto the card the bleed extends completely over the edge of the card, leaving no border.
Different ID Card Printers use different encoding techniques to facilitate disparate business environments and to support security initiatives. Known encoding techniques are:
Contact Smart Card: Contact smart cards require direct contact with a conductive plate to register admission or transfer of information. The transmission of commands, data, and card status takes place over these physical contact points.
Contactless Smart Card: Contactless smart cards contain an integrated circuit that can store and process data while communicating with the terminal via radio frequency. Unlike contact smart cards, contactless cards feature a rewritable microchip that can be written to through radio waves.
HID Proximity: HID's proximity technology allows fast, accurate reading while offering card or key tag read ranges from 4 to 24 inches (10 to 61 cm), depending on the type of proximity reader being used. Since these cards and key tags do not require physical contact with the reader, they are virtually maintenance- and wear-free.
ISO Magnetic Stripe: A magnetic stripe card is a type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card. The magnetic stripe, sometimes called a swipe card or magstripe, is read by physical contact, swiping past a magnetic reading head.
Software
There are basically two categories of card printer software: desktop-based and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for its needs. Typically, large organizations with high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs its IDs made instantly; an example is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features it needs, then the web-based option is perhaps a more affordable solution. The web-based solution suits small businesses that do not anticipate rapid growth, or organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network.
Other options
Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact-free RFID chip cards (smart cards). Thus card printers enable plastic cards to be encoded both visually and logically. Plastic cards can also be laminated after printing, which gives a considerable increase in durability and a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which reduces printing time and the margin of error. In such printers, one side of the ID card is printed, the card is then flipped in the flip station, and the other side is printed.
Applications
Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. for personalized customer and members' cards, for sports ticketing and in local public transport systems for the production of season tickets, for the production of school and college identity cards as well as for the production of national ID cards.
Technology
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
Modern print technology
The following printing technologies are routinely found in modern printers:
Laser printers and other toner-based printers
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.
Liquid inkjet printers
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers
Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo-crystal-operated print-head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink is also called phase-change or hot-melt ink and was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and then US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001.
Dye-sublimation printers
A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers
Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink").
Obsolete and special-purpose printing technologies
The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use.
Impact printers
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used.
Typewriter-derived printers
Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
Teletypewriter-derived printers
The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
Dot-matrix printers
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).
Dot-matrix printers can be broadly divided into two major classes:
Ballistic wire printers
Stored energy printers
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century.
Line printers
Line printers print an entire line of text at a time. Four principal designs exist.
Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers are also found in adding machines and other numeric printers (POS), the dimensions are compact as only a dozen characters need to be supported.
Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular and comes in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer.
Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443.
A fourth design, used mainly on very early printers such as the IBM 402, features independent type bars, one for each printable position. Each bar contains the character set to be printed. The bars move vertically to position the character to be printed in front of the print hammer.
In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form, and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so chain and bar printers were considered to produce higher-quality print.
Comb printers, also called line matrix printers, represent the fifth major design. These printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the example, in just eight cycles. The paper then advances, and the next pixel row is printed. Because far less motion is involved than in a conventional dot matrix printer, these printers are very fast compared to dot matrix printers and are competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013.
Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.
Liquid ink electrostatic printers
Liquid ink electrostatic printers use a chemically coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating, the scale distortion is less than ±0.1%. (Laser printers, by comparison, typically have an accuracy of about ±1%.)
Worldwide, most survey offices used this type of printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in large formats and also offered 6-color printing. They were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.
Plotters
Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.
Other printers
A number of other sorts of printers are important for historical reasons, or for special purpose uses.
Digital minilab (photographic paper)
Electrolytic printers
Spark printer
Barcode printer – uses one of multiple technologies, including thermal printing, inkjet printing, and laser printing, to print barcodes
Label printer
Billboard / sign paint spray printers
Laser etching (product packaging) industrial printers
Microsphere (special paper)
Attributes
Connectivity
Printers can be connected to computers in many ways: directly by a dedicated data cable such as USB, through a short-range radio link such as Bluetooth, over a local area network using cables (such as Ethernet) or radio (such as Wi-Fi), or on a standalone basis without a computer, using a memory card or other portable data storage device.
Printer control languages
Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity of mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster.
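For illustration, here is a minimal sketch of how a host might compose a small job as raw control bytes. ESC @ (initialize), ESC E (select bold) and ESC F (cancel bold) are standard ESC/P commands; the device path mentioned in the comment is only an assumption and varies between systems.

```python
# A sketch of composing a raw ESC/P job as bytes.
ESC = b"\x1b"

job = (
    ESC + b"@"                      # ESC @: reset the printer to its default state
    + ESC + b"E"                    # ESC E: switch to bold (emphasized) printing
    + b"INVOICE 0042\r\n"
    + ESC + b"F"                    # ESC F: back to normal weight
    + b"Thank you for your purchase.\r\n"
    + b"\x0c"                       # form feed: eject the page
)

# Inspect the byte stream; in practice it would be written to the printer
# device or spool queue (e.g. a path like /dev/usb/lp0, which is an assumption).
print(job.hex(" "))
```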
Printing speed
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America.
Printing mode
The data received by a printer may be:
A string of characters
A bitmapped image
A vector image
A computer program written in a page description language, such as PCL or PostScript
Some printers can process all four types of data; others cannot.
Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots.
Pen plotters typically process vector images. Inkjet based plotters can adequately reproduce all four.
Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today.
Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
Monochrome, color and photo printers
A monochrome printer can produce only monochrome images, made up of shades of a single color. Most printers can produce only two tones: black (ink) and white (no ink). With halftoning techniques, however, such a printer can produce acceptable grey-scale images as well.
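As a minimal sketch of one common halftoning approach, the following shows ordered (Bayer) dithering: each grey pixel is compared against a position-dependent threshold so that mid-greys come out as patterns of black and white dots. The 4×4 threshold matrix and plain-Python raster handling are illustrative choices, not any particular printer driver's implementation.

```python
# Ordered (Bayer) dithering: approximate grey levels using only black/white dots.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def halftone(gray, width, height):
    """gray is a row-major list of 0-255 luminance values; returns 0/1 dots."""
    dots = []
    for y in range(height):
        for x in range(width):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * 255 / 16
            dots.append(1 if gray[y * width + x] > threshold else 0)
    return dots

# A 4x4 patch of mid-grey (value 128) comes out as a checker-like dot pattern.
print(halftone([128] * 16, 4, 4))
```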
A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.
Page yield
The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge—before the cartridge needs to be refilled or replaced.
The actual number of pages yielded by a specific cartridge depends on a number of factors.
For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.
Economics
In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).
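As a rough illustration of the arithmetic, here is a short sketch comparing cost per page under made-up prices and yields; all figures are assumptions, not market data.

```python
# Illustrative cost-per-page (CPP) comparison; every number below is invented.
def cost_per_page(cartridge_price, page_yield, printer_price=0.0, pages_over_life=1):
    # Consumable cost per page, optionally amortizing the printer purchase price.
    return cartridge_price / page_yield + printer_price / pages_over_life

inkjet = cost_per_page(cartridge_price=25.0, page_yield=300,
                       printer_price=60.0, pages_over_life=10_000)
laser = cost_per_page(cartridge_price=80.0, page_yield=2_500,
                      printer_price=250.0, pages_over_life=10_000)

print(f"inkjet: {inkjet:.3f} per page, laser: {laser:.3f} per page")
```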
Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.
Printer steganography
Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
Manufacturers and market share
As of 2020–2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic.
| Technology | Media and communication | null |
5295 | https://en.wikipedia.org/wiki/Character%20encoding | Character encoding | Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using computers. The numerical values that make up a character encoding are known as code points and collectively comprise a code space, a code page, or character map.
Early character encodings that originated with optical or electrical telegraphy and in early computers could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. Over time, character encodings capable of representing more characters were created, such as ASCII, the ISO/IEC 8859 encodings, various computer vendor encodings, and Unicode encodings such as UTF-8 and UTF-16.
The most popular character encoding on the World Wide Web is UTF-8, which is used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.
History
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).
Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has replaced most earlier character encodings, but the path of code development to the present is fairly well known.
The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues.
Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later, alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine.
When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM used several binary-coded decimal (BCD) six-bit character encoding schemes, starting as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. These BCD encodings extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping them easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. IBM's BCD encodings were the precursors of their Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters.
In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler seven-bit code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Eight-bit extended ASCII encodings, such as various vendor extensions and the ISO/IEC 8859 series, supported all ASCII characters as well as additional non-ASCII characters.
In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count.
The compromise solution that was eventually found was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points beyond the range of a single code unit, such as 256 and above for eight-bit units, the solution was to implement variable-length encodings in which an escape sequence would signal that subsequent bits should be parsed as a higher code point.
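As an illustration of such a variable-length encoding, here is a sketch of UTF-8's mapping of a single code point onto one to four 8-bit code units. It follows the standard UTF-8 bit layout and is checked against Python's built-in codec; it is a teaching sketch, not a full encoder (it performs no validation of surrogates or out-of-range values).

```python
# Sketch of a variable-length encoding: UTF-8's code point -> 1..4 byte mapping.
def utf8_encode(code_point: int) -> bytes:
    if code_point < 0x80:                          # 7 bits -> 1 byte
        return bytes([code_point])
    if code_point < 0x800:                         # 11 bits -> 2 bytes
        return bytes([0xC0 | (code_point >> 6),
                      0x80 | (code_point & 0x3F)])
    if code_point < 0x10000:                       # 16 bits -> 3 bytes
        return bytes([0xE0 | (code_point >> 12),
                      0x80 | ((code_point >> 6) & 0x3F),
                      0x80 | (code_point & 0x3F)])
    return bytes([0xF0 | (code_point >> 18),       # 21 bits -> 4 bytes
                  0x80 | ((code_point >> 12) & 0x3F),
                  0x80 | ((code_point >> 6) & 0x3F),
                  0x80 | (code_point & 0x3F)])

for cp in (0x41, 0xE9, 0x20AC, 0x10400):           # A, é, €, 𐐀
    assert utf8_encode(cp) == chr(cp).encode("utf-8")
    print(f"U+{cp:04X} -> {utf8_encode(cp).hex(' ')}")
```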
Terminology
Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.
A character is a minimal unit of text that has semantic value.
A character set is a collection of elements used to represent text. For example, the Latin alphabet and Greek alphabet are both character sets.
A coded character set is a character set mapped to a set of unique numbers. For historical reasons, this is also often referred to as a code page.
A character repertoire is the set of characters that can be represented by a particular coded character set. The repertoire may be closed, meaning that no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series); or it may be open, allowing additions (as is the case with Unicode and to a limited extent Windows code pages).
A code point is a value or position of a character in a coded character set.
A code space is the range of numerical values spanned by a coded character set.
A code unit is the minimum bit combination that can represent a character in a character encoding (in computer science terms, it is the word size of the character encoding). For example, common code units include 7-bit, 8-bit, 16-bit, and 32-bit. In some encodings, some characters are encoded using multiple code units; such an encoding is referred to as a variable-width encoding.
Code pages
"Code page" is a historical name for a coded character set.
Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437).
Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general.
The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".
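A small illustration of why code pages matter: the same byte value decodes to different characters under different code pages, shown here with Python's standard codecs.

```python
# One byte, two meanings: 0xE9 under IBM/DOS code page 437 vs. Windows-1252.
raw = bytes([0xE9])
print(raw.decode("cp437"))         # 'Θ' in code page 437
print(raw.decode("windows-1252"))  # 'é' in Windows-1252
```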
Code units
The code unit size is equivalent to the bit measurement for the particular encoding:
A code unit in ASCII consists of 7 bits;
A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits;
A code unit in UTF-16 consists of 16 bits;
A code unit in UTF-32 consists of 32 bits.
Code points
A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:
UTF-8: code points map to a sequence of one, two, three or four code units.
UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs".
UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.
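A short sketch, using Python's standard codecs, counts how many code units a few code points occupy in each of these encodings; the only assumption is that dividing the encoded byte length by the code unit size (in bytes) gives the code unit count.

```python
# Code units per code point: encoded length divided by code unit size in bytes.
ENCODINGS = {"utf-8": 1, "utf-16-le": 2, "utf-32-le": 4, "gb18030": 1}

for ch in ("A", "\u00e9", "\u20ac", "\U00010400"):   # A, é, €, 𐐀
    counts = {name: len(ch.encode(name)) // unit_size
              for name, unit_size in ENCODINGS.items()}
    print(f"U+{ord(ch):04X}: {counts}")
```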
Characters
Exactly what constitutes a character varies between character encodings.
For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems.
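A brief sketch of the two approaches using Unicode normalization: the precomposed character and the base-plus-combining sequence compare unequal as code point sequences, but each can be converted into the other.

```python
import unicodedata

# 'é' as one precomposed character vs. 'e' followed by a combining acute accent.
precomposed = "\u00E9"   # LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"   # 'e' + COMBINING ACUTE ACCENT

print(precomposed == decomposed)                                # False: different code points
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True: canonical composition
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True: canonical decomposition
```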
Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.
Unicode encoding model
Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process:
An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.
A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points.
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.
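As a sketch of one such CEF, the following shows how UTF-16 maps a code point above U+FFFF onto two 16-bit code units (a surrogate pair); the bit arithmetic follows the published UTF-16 scheme.

```python
# UTF-16 as a character encoding form: one code point -> one or two 16-bit units.
def utf16_code_units(code_point: int) -> list[int]:
    if code_point < 0x10000:
        return [code_point]                    # fits in a single 16-bit unit
    offset = code_point - 0x10000              # 20 bits remain
    high = 0xD800 + (offset >> 10)             # high (lead) surrogate
    low = 0xDC00 + (offset & 0x3FF)            # low (trail) surrogate
    return [high, low]

print([hex(u) for u in utf16_code_units(0x10400)])   # ['0xd801', '0xdc00']
```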
A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU).
Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.
Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.
The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.
Unicode code points
In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided in 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly-used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
Example
Consider a string of the letters "ab̲c𐐀"—that is, a string containing a Unicode combining character (U+0332 COMBINING LOW LINE) as well as a supplementary character (U+10400 DESERET CAPITAL LETTER LONG I). This string has several Unicode representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements:
Four composed characters:
a, b̲, c, 𐐀
Five graphemes:
a, b, ◌̲ (combining low line), c, 𐐀
Five Unicode code points:
U+0061, U+0062, U+0332, U+0063, U+10400
Five UTF-32 code units (32-bit integer values):
0x00000061, 0x00000062, 0x00000332, 0x00000063, 0x00010400
Six UTF-16 code units (16-bit integers)
0x0061, 0x0062, 0x0332, 0x0063, 0xD801, 0xDC00
Nine UTF-8 code units (8-bit values, or bytes)
0x61, 0x62, 0xCC, 0xB2, 0x63, 0xF0, 0x90, 0x90, 0x80
Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.
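The breakdown above can be reproduced directly; here is a small sketch using Python's built-in codecs (the little-endian codec names are chosen only to avoid byte order marks).

```python
# Reproducing the counts above for the string a, b, U+0332, c, U+10400.
s = "ab\u0332c\U00010400"

print(len(s))                              # 5 code points
print(len(s.encode("utf-32-le")) // 4)     # 5 UTF-32 code units
print(len(s.encode("utf-16-le")) // 2)     # 6 UTF-16 code units
print(s.encode("utf-8").hex(" "))          # 9 UTF-8 code units (bytes)
```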
Transcoding
As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below.
Cross-platform:
Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu.
iconv – a program and standardized API to convert encodings
luit – a program that converts encoding of input and output to programs running interactively
International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C.
Windows:
Encoding.Convert – .NET API
MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI
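As a minimal sketch of the transcoding step these tools perform, the following re-encodes a Latin-1 text file as UTF-8 using Python's standard codecs; the file names are assumptions.

```python
# Re-encode a Latin-1 (ISO-8859-1) text file as UTF-8; file names are assumed.
# On the command line, iconv does the same job:
#   iconv -f ISO-8859-1 -t UTF-8 legacy.txt > converted.txt
with open("legacy.txt", encoding="iso-8859-1") as src:
    text = src.read()

with open("converted.txt", "w", encoding="utf-8") as dst:
    dst.write(text)
```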
Common character encodings
The most used character encoding on the web is UTF-8, used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.
ISO 646
ASCII
EBCDIC
ISO 8859:
ISO 8859-1 Western Europe
ISO 8859-2 Western and Central Europe
ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto)
ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp)
ISO 8859-5 Cyrillic alphabet
ISO 8859-6 Arabic
ISO 8859-7 Greek
ISO 8859-8 Hebrew
ISO 8859-9 Western Europe with amended Turkish character set
ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set
ISO 8859-11 Thai
ISO 8859-13 Baltic languages plus Polish
ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh)
ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1
ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic)
CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872
MS-Windows character sets:
Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian)
Windows-1251 for Cyrillic alphabets
Windows-1252 for Western languages
Windows-1253 for Greek
Windows-1254 for Turkish
Windows-1255 for Hebrew
Windows-1256 for Arabic
Windows-1257 for Baltic languages
Windows-1258 for Vietnamese
Mac OS Roman
KOI8-R, KOI8-U, KOI7
MIK
ISCII
TSCII
VISCII
JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms.
Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS)
EUC-JP
ISO-2022-JP
JIS X 0213 is an extended version of JIS X 0208.
Shift_JIS-2004
EUC-JIS-2004
ISO-2022-JP-2004
Chinese Guobiao
GB 2312
GBK (Microsoft Code page 936)
GB 18030
Taiwan Big5 (a more famous variant is Microsoft Code page 950)
Hong Kong HKSCS
Korean
KS X 1001 is a Korean double-byte character encoding standard
EUC-KR
ISO-2022-KR
Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane')
UTF-8
UTF-16
UTF-32
ANSEL or ISO/IEC 6937
| Technology | Programming | null |
5299 | https://en.wikipedia.org/wiki/Carbon | Carbon | Carbon is a chemical element; it has symbol C and atomic number 6. It is nonmetallic and tetravalent—meaning that its atoms are able to form up to four covalent bonds due to its valence shell exhibiting 4 electrons. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, ¹²C and ¹³C being stable, while ¹⁴C is a radionuclide, decaying with a half-life of 5,700 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth, enables this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
Characteristics
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at about 10.8 MPa and 4,600 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2.
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
C + H2O → CO + H2.
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes.
Allotropes
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as temperature and pressure approach zero, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond at low temperatures, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of flat hexagonal cells only, some of the cells of which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond.
In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds.
In the vapor phase, some of the carbon is in the form of highly reactive diatomic carbon, dicarbon (C2). When excited, this gas glows green.
Occurrence
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is about 5.97 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas contain about 105 gigatonnes of carbon, but studies estimate that "unconventional" deposits such as shale gas represent about another 540 gigatonnes of carbon.
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon between 500, 2500, or 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As a result, a minute fraction of the carbon in atmospheric carbon dioxide is carbon-14.
Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids can be used in hypothetical space-based carbon mining, which may be possible in the future, but is currently technologically impossible.
Isotopes
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C.
Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β emission. Because of its relatively short half-life of about 5,700 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
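As a rough sketch of the decay arithmetic behind radiocarbon dating, the following estimates elapsed time from the surviving fraction of carbon-14, using the half-life quoted above; real radiocarbon dating also calibrates against known-age records, so this is only the core calculation.

```python
import math

# Estimate age from the remaining fraction of 14C using exponential decay.
HALF_LIFE_YEARS = 5_700  # half-life as given earlier in the article

def age_from_fraction(remaining_fraction: float) -> float:
    # N(t) = N0 * 2 ** (-t / half_life)  =>  t = -half_life * log2(N / N0)
    return -HALF_LIFE_YEARS * math.log2(remaining_fraction)

print(round(age_from_fraction(0.5)))    # one half-life: 5700 years
print(round(age_from_fraction(0.25)))   # two half-lives: 11400 years
print(round(age_from_fraction(0.01)))   # about 1% left: roughly 38,000 years
```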
There are 15 known isotopes of carbon, and the shortest-lived of these is ⁸C, which decays through proton emission and has a half-life of about 3.5 × 10⁻²¹ s. The exotic ¹⁹C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation in stars
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernova, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Carbon cycle
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
Compounds
Organic compounds
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen, it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965–1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Inorganic compounds
Commonly carbon-containing compounds which are associated with minerals or which do not contain bonds to other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN−) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitrile cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP−), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C₂²⁻) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]2- unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]2+ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η5-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to a η5-C5Me5− fragment through all five of the carbons of the ring.
It is important to note that in the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
History and etymology
The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon; he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous.
Production
Graphite
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to science, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
Diamond
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken in order to prevent larger diamonds from being destroyed in this process and subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Applications
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a moulding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall, and metal.
Diamonds
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity. There is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Precautions
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to Drosophila.
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN−), and carbon monoxide; and such essentials to life as glucose and protein.
| Physical sciences | Chemistry | null |
5300 | https://en.wikipedia.org/wiki/Computer%20data%20storage | Computer data storage | Computer data storage or digital data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
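To make the byte arithmetic concrete, the following Python sketch (using a short sample string rather than Shakespeare's actual text, which is assumed here only for illustration) shows how characters map to 8-bit bytes:

```python
# Minimal sketch: a string becomes a sequence of 8-bit bytes.
text = "To be, or not to be"          # hypothetical sample text
encoded = text.encode("ascii")        # one byte per character under ASCII
print(len(encoded), "bytes =", len(encoded) * 8, "bits")
print(encoded[:5])                    # b'To be' - the raw byte values
```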
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, to "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value of 0 or 1), or to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
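As a rough illustration of error detection (not of any particular storage device's firmware), the sketch below uses Python's standard zlib.crc32 to detect a simulated bit flip; real devices pair such detection codes with error-correcting codes or retries:

```python
import zlib

data = bytearray(b"stored payload")
checksum = zlib.crc32(data)            # checksum kept alongside the data

data[3] ^= 0x01                        # simulate a single flipped bit in storage
if zlib.crc32(data) != checksum:
    print("corruption detected; retry the read or repair with ECC")
```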
Data compression methods allow in many cases (such as a database) to represent a string of bits by a shorter bit string ("compress") and reconstruct the original string ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
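A minimal sketch of the compression trade-off, using Python's standard zlib module on deliberately repetitive sample data; real workloads compress far less predictably:

```python
import zlib

original = b"ABABABABAB" * 1000        # highly repetitive sample data
compressed = zlib.compress(original)   # costs CPU time, saves storage
restored = zlib.decompress(compressed)

assert restored == original            # lossless round trip
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(original):.1f}% of original size)")
```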
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
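As one possible illustration (assuming the third-party Python 'cryptography' package; any authenticated encryption scheme would serve the same purpose), data can be encrypted before it is written and recovered only by a holder of the key:

```python
# Requires the third-party 'cryptography' package (an assumption, not part of
# the standard library). The card number below is a hypothetical test value.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # kept separately from the stored data
cipher = Fernet(key)

stored = cipher.encrypt(b"4111 1111 1111 1111")   # ciphertext written to disk
print(cipher.decrypt(stored))          # recoverable only with the key
```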
Hierarchy of storage
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually fast but temporary semiconductor read-write memory, typically DRAM (dynamic RAM) or other such devices. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it's not needed by running software. Spare memory can be utilized as RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
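The effect of the cache hierarchy can be hinted at with a small, non-authoritative Python sketch that sums a matrix in row-major versus column-major order; the gap is modest in Python because of its boxed objects, and much larger in languages with contiguous in-memory arrays:

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]   # nested lists standing in for a 2-D array

def sum_rows():
    # visits elements in the order they sit in memory (cache-friendly)
    return sum(matrix[i][j] for i in range(N) for j in range(N))

def sum_cols():
    # jumps between rows on every access (cache-unfriendly)
    return sum(matrix[i][j] for j in range(N) for i in range(N))

for fn in (sum_rows, sum_cols):
    start = time.perf_counter()
    fn()
    print(fn.__name__, f"{time.perf_counter() - start:.3f} s")
```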
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
The primary storage, including ROM, EEPROM, NOR flash, and RAM, are usually byte-addressable.
Secondary storage
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte is typically measured in milliseconds (thousandths of a second) for HDDs and in tens to hundreds of microseconds for SSDs, while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.
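A rough sketch of the benefit of large block transfers, written in Python against a temporary scratch file; operating-system caching and the underlying device will strongly affect the actual numbers:

```python
import os, time, tempfile

# Write a 64 MiB scratch file, then compare one pass of 1 MiB block reads
# against many small 512-byte reads of the same data.
path = os.path.join(tempfile.gettempdir(), "io_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))

for block_size in (1024 * 1024, 512):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    print(f"{block_size:>8}-byte reads: {time.perf_counter() - start:.2f} s")

os.remove(path)
```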
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
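The paging idea can be sketched with a toy page table in Python; the sizes, the dictionaries, and the "swap" area here are purely illustrative, not how any real operating system implements virtual memory:

```python
# Toy page-table lookup: a virtual address splits into a page number and an
# offset; a page not present in RAM triggers a simulated "page fault" that
# brings it back from the swap area.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3}              # virtual page -> physical frame
swap = {2: 9}                          # pages currently held on secondary storage

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:                     # page fault
        page_table[page] = swap.pop(page)          # load the page back into RAM
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(2 * PAGE_SIZE + 0x10)))        # faults, then resolves
```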
The secondary storage, including HDD, ODD and SSD, are usually block-addressable.
Tertiary storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
Online storage is immediately available for I/O.
Nearline storage is not immediately available, but can be made online quickly without human intervention.
Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful in cases of disaster: if, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Volatility
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Mutability
Read/write storage or mutable storage – Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
Slow write, fast read storage – Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD.
Write once storage – Write once read many (WORM) storage allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R.
Read only storage – Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM.
Accessibility
Random access – Any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited for primary and secondary storage. Most semiconductor memories, flash memories and hard disk drives provide random access, though both semiconductor and flash memories have minimal latency when compared to hard disk drives, as no mechanical parts need to be moved.
Sequential access – The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage.
Addressability
Location-addressable – Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient, but burdensome for humans.
File addressable – Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable – Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache.
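A minimal, illustrative content-addressable store can be sketched in Python by keying blobs on a hash of their own contents (real systems such as CPU caches use dedicated hardware instead; the names here are hypothetical):

```python
import hashlib

# Each blob is filed under the digest of its own contents,
# so identical data is stored only once and addressed by what it contains.
store = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

key = put(b"hello world")
print(key[:12], store[key])
print(put(b"hello world") == key)      # same content -> same address -> True
```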
Capacity
Raw capacity – The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Memory storage density – The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
Performance
Latency – The time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory) and, in the case of sequential-access storage, minimum, maximum and average latency.
Throughput – The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also, accessing media sequentially, as opposed to randomly, typically yields maximum throughput.
Granularity – The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.
Reliability – The probability of spontaneous bit value change under various conditions, or the overall failure rate.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
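Where hdparm or sar are unavailable, a crude cross-platform estimate can be scripted; this Python sketch measures write and read rates on a temporary file and is strongly affected by operating-system caching:

```python
import os, time, tempfile

# Rough write/read throughput of whatever device backs the temp directory.
size = 256 * 1024 * 1024
path = os.path.join(tempfile.gettempdir(), "throughput_demo.bin")
payload = os.urandom(size)

start = time.perf_counter()
with open(path, "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())               # force the data out to the device
print(f"write: {size / (time.perf_counter() - start) / 1e6:.0f} MB/s")

start = time.perf_counter()
with open(path, "rb") as f:
    f.read()                           # read rate may reflect the cache, not the disk
print(f"read:  {size / (time.perf_counter() - start) / 1e6:.0f} MB/s")

os.remove(path)
```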
Energy use
Storage devices that reduce fan usage or automatically shut down during inactivity, as well as low-power hard drives, can reduce energy consumption by up to 90 percent.
2.5-inch hard disk drives often consume less power than larger ones. Low capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.
Security
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Vulnerability and reliability
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Error detection
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
Storage media
The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so that the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:
Magnetic disk;
Floppy disk, used for off-line storage;
Hard disk drive, used for secondary storage.
Magnetic tape, used for tertiary and off-line storage;
Carousel memory (magnetic rolls).
In early computers, magnetic storage was also used as:
Primary storage in a form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory;
Tertiary (e.g. NCR CRAM) or off line storage in the form of magnetic cards;
Magnetic tape was then often used for secondary storage.
Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts.
Optical
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use:
CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage;
Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.
Paper
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Other storage media or substrates
Vacuum-tube memory A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable, and the Selectron tube was expensive.
Electro-acoustic memory Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage.
Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic, onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information.
Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory stores information in polymer that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2).
Magnetic photoconductors store magnetic information, which can be modified by low-light illumination.
DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017 scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA.
Related technologies
Redundancy
While a group of bits malfunction may be resolved by error detection and correction mechanisms (see above), storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring (replication) – A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously with some overhead and possibly some delays. The upside is the possible concurrent reading of the same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational and is being utilized to generate a new copy on another device (usually available operational in a pool of stand-by devices for this purpose).
Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of devices to fail and be replaced with the content restored (Device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage, when compared with n=2, at the cost of more processing during both regular operation (with often reduced performance) and defective device replacement.
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
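The parity idea behind single-failure RAID levels can be sketched in a few lines of Python: the parity block is the bitwise XOR of the data blocks, so any one lost block equals the XOR of the survivors (real arrays add striping, metadata, and rebuild logic; the block contents here are arbitrary examples):

```python
from functools import reduce

# Parity sketch: parity = XOR of all data blocks, so one missing block
# can be rebuilt from the remaining blocks plus the parity block.
blocks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor, blocks)

lost = blocks[1]                                   # pretend one device failed
rebuilt = reduce(xor, [blocks[0], blocks[2], parity])
print(rebuilt == lost)                             # True
```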
Network connectivity
A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to the primary storage, which is shared between multiple processors to a lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any network. This is still the most popular approach. The retronym was coined recently, together with NAS and SAN.
Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at block-addressing (raw) level, leaving it to attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.
Robotic storage
Large quantities of individual magnetic tapes, and optical or magneto-optical discs, may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, by analogy. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
| Technology | Data storage and memory | null |
5306 | https://en.wikipedia.org/wiki/Chemical%20equilibrium | Chemical equilibrium | In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
Historical introduction
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
α A + β B ⇌ σ S + τ T
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:

forward reaction rate = k_{+} \{A\}^{\alpha}\{B\}^{\beta}
backward reaction rate = k_{-} \{S\}^{\sigma}\{T\}^{\tau}

where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:

k_{+} \{A\}^{\alpha}\{B\}^{\beta} = k_{-} \{S\}^{\sigma}\{T\}^{\tau}

and the ratio of the rate constants is also a constant, now known as an equilibrium constant.

K_{c} = \frac{k_{+}}{k_{-}} = \frac{\{S\}^{\sigma}\{T\}^{\tau}}{\{A\}^{\alpha}\{B\}^{\beta}}

By convention, the products form the numerator.
However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,

CH3CO2H + H2O ⇌ CH3CO2− + H3O+,
a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:

K = {CH3CO2−}{H3O+} / {CH3CO2H}

If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
J. W. Gibbs suggested in 1873 that equilibrium is attained when the "available energy" (now known as Gibbs free energy or Gibbs energy) of the system is at its minimum value, assuming the reaction is carried out at a constant temperature and pressure. What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
\Delta_{r}G^{\ominus} = -RT \ln K_{eq}

where R is the universal gas constant and T the temperature.
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
K_{c} = \frac{[S]^{\sigma}[T]^{\tau}}{[A]^{\alpha}[B]^{\beta}}

where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and encountered in high-school chemistry courses.
Thermodynamics
At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
(∂G/∂ξ)T,p = 0: equilibrium
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products. For a reaction α A + β B ⇌ σ S + τ T this means
α μA + β μB = σ μS + τ μT
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent:
μA = μA° + RT ln{A}
(where μA° is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
dG = V dp − S dT + Σi μi dNi.
Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient (νi) and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as
(∂G/∂ξ)T,p = Σi νiμi = ΔrG
which is the Gibbs free energy change for the reaction. This results in:
ΔrG = σμS + τμT − αμA − βμB.
By substituting the chemical potentials:
ΔrG = (σμS° + τμT° − αμA° − βμB°) + RT(σ ln{S} + τ ln{T} − α ln{A} − β ln{B}),
the relationship becomes:
ΔrG = Σi νiμi° + RT ln({S}σ{T}τ / ({A}α{B}β))
where Σi νiμi° = ΔrG°, which is the standard Gibbs energy change for the reaction that can be calculated using thermodynamical tables.
The reaction quotient is defined as:
Qr = {S}σ{T}τ / ({A}α{B}β)
Therefore,
(∂G/∂ξ)T,p = ΔrG = ΔrG° + RT ln Qr
At equilibrium:
(∂G/∂ξ)T,p = ΔrG = 0
leading to:
ΔrG° + RT ln Keq = 0
and
ΔrG° = −RT ln Keq.
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
Addition of reactants or products
For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.
If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq and then (∂G/∂ξ)T,p ≠ 0.
If the activity of a reagent i increases, the reaction quotient decreases. Then Qr < Keq and (∂G/∂ξ)T,p < 0: the reaction will shift to the right (i.e. in the forward direction, and thus more products will form).
If the activity of a product j increases, then Qr > Keq and (∂G/∂ξ)T,p > 0: the reaction will shift to the left (i.e. in the reverse direction, and thus fewer products will form).
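A minimal numerical sketch of this sign argument, assuming ideal behaviour and using invented activities and an invented Keq for a hypothetical reaction A + B ⇌ C:

```python
import math

R, T = 8.314, 298.15

def reaction_quotient(activities, stoich):
    """Product of activity**nu over all species (nu negative for reactants)."""
    q = 1.0
    for species, nu in stoich.items():
        q *= activities[species] ** nu
    return q

def shift_direction(activities, stoich, k_eq):
    q = reaction_quotient(activities, stoich)
    delta_g = R * T * math.log(q / k_eq)   # reaction Gibbs energy at this composition
    if delta_g < 0:
        return "forward (to the right)"
    if delta_g > 0:
        return "reverse (to the left)"
    return "at equilibrium"

# Hypothetical A + B <-> C with K = 10 and an arbitrary composition
stoich = {"A": -1, "B": -1, "C": 1}
activities = {"A": 0.5, "B": 0.2, "C": 0.05}
print(shift_direction(activities, stoich, k_eq=10.0))  # Q = 0.5 < K, so forward
```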
Note that activities and equilibrium constants are dimensionless numbers.
Treatment of activity
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc, and an activity coefficient quotient, Γ:
K = ([S]σ[T]τ / ([A]α[B]β)) × (γSσγTτ / (γAαγBβ)) = Kc Γ.
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation, Specific ion interaction theory or Pitzer equations, may be used. However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
Concentration quotients
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by
I = 1/2 Σi ci zi2
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
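As a rough illustration, the sketch below computes the ionic strength of a 0.10 M NaNO3 background and estimates single-ion activity coefficients with the Davies approximation (A ≈ 0.51 in water at 25 °C). The electrolyte and its concentration are just example inputs, not values from the article.

```python
import math

def ionic_strength(species):
    """species: list of (concentration mol/L, charge). Returns I = 1/2 * sum(c*z**2)."""
    return 0.5 * sum(c * z**2 for c, z in species)

def davies_log_gamma(z, ionic_str, A=0.51):
    """Davies approximation for log10 of an ion's activity coefficient (water, 25 deg C)."""
    sqrt_i = math.sqrt(ionic_str)
    return -A * z**2 * (sqrt_i / (1 + sqrt_i) - 0.3 * ionic_str)

# 0.10 M NaNO3 as an "inert" background electrolyte (illustrative value)
I = ionic_strength([(0.10, +1), (0.10, -1)])
print(I)                                # 0.10
print(10 ** davies_log_gamma(+1, I))    # singly charged ion, gamma roughly 0.78
print(10 ** davies_log_gamma(+2, I))    # a doubly charged ion deviates much more
```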
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.
Metastable mixtures
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
2 SO2 + O2 ⇌ 2 SO3
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
CO2 + 2 H2O ⇌ HCO3− + H3O+
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
Pure substances
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
K = {CH3CO2−}{H3O+} / ({CH3CO2H}{H2O})
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
K = {CH3CO2−}{H3O+} / {CH3CO2H}.
A particular case is the self-ionization of water
2 H2O ⇌ H3O+ + OH−
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
Kw = [H3O+][OH−]
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:
2 CO ⇌ CO2 + C
for which the equation (without solid carbon) is written as:
Kc = [CO2] / [CO]2
Multiple equilibria
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated:
H2A ⇌ HA− + H+,  K1 = {HA−}{H+} / {H2A}
HA− ⇌ A2− + H+,  K2 = {A2−}{H+} / {HA−}
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants:
H2A ⇌ A2− + 2 H+,  βD = {A2−}{H+}2 / {H2A} = K1K2
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants:
A2− + H+ ⇌ HA−,  β1 = {HA−} / ({A2−}{H+})
A2− + 2 H+ ⇌ H2A,  β2 = {H2A} / ({A2−}{H+}2)
β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; log β1 = pK2 and log β2 = pK1 + pK2.
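These relations are easy to check numerically; the pK values in the sketch below are hypothetical, chosen only to illustrate the algebra.

```python
import math

# Hypothetical stepwise dissociation constants for a dibasic acid (pK1 = 4, pK2 = 9)
K1, K2 = 10**-4.0, 10**-9.0

beta_D = K1 * K2          # overall dissociation constant, H2A <-> A(2-) + 2 H+
beta_1 = 1 / K2           # association constant, A(2-) + H+ <-> HA(-)
beta_2 = 1 / beta_D       # association constant, A(2-) + 2 H+ <-> H2A

print(math.log10(beta_1))  # 9.0  = pK2
print(math.log10(beta_2))  # 13.0 = pK1 + pK2
```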
For multiple equilibrium systems, also see: theory of Response reactions.
Effect of temperature
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
d ln K / dT = ΔHm° / (RT2)
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
d ln K / d(1/T) = −ΔHm° / R
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
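Assuming ΔH° is roughly constant over the temperature range, the integrated form ln(K2/K1) = −(ΔH°/R)(1/T2 − 1/T1) can be applied directly; the K and ΔH° values in this sketch are invented for illustration.

```python
import math

R = 8.314  # J/(mol*K)

def k_at_temperature(k1, t1, t2, delta_h_standard):
    """Integrated van 't Hoff equation, assuming delta_H (J/mol) is constant on [t1, t2]."""
    return k1 * math.exp(-(delta_h_standard / R) * (1.0 / t2 - 1.0 / t1))

# Illustrative exothermic reaction: K = 50 at 298 K, delta_H = -60 kJ/mol
print(k_at_temperature(50.0, 298.15, 350.0, -60e3))  # K drops sharply on heating
```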
Effect of electric and magnetic fields
The effect of electric field on equilibrium has been studied by Manfred Eigen among others.
Types of equilibrium
Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging to the same phase, whereas heterogeneous equilibrium comes into play for reactants and products in different phases.
In the gas phase: rocket engines
Industrial syntheses such as that of ammonia in the Haber–Bosch process take place through a succession of equilibrium steps, including adsorption processes
Atmospheric chemistry
Seawater and other natural waters: chemical oceanography
Distribution between two phases
log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug
Liquid–liquid extraction, Ion exchange, Chromatography
Solubility product
Uptake and release of oxygen by hemoglobin in blood
Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis
Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium
Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide
In certain oscillating reactions, the approach to equilibrium is not asymptotic but takes the form of a damped oscillation.
The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations.
When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle.
In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
Composition of a mixture
When the only equilibrium is the formation of a 1:1 adduct, there are many ways in which the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
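A minimal sketch of the ICE-table calculation for a monoprotic weak acid, solving the resulting quadratic exactly rather than assuming x is negligible compared with C; the Ka used is the commonly quoted textbook value for acetic acid and is given here only as an example.

```python
import math

def weak_acid_ph(c_total, ka):
    """pH of a solution of a monoprotic weak acid, from the ICE-table quadratic.
    Neglects water self-ionization, which is fine unless the solution is very dilute."""
    # Ka = x^2 / (C - x)  ->  x^2 + Ka*x - Ka*C = 0, take the positive root
    x = (-ka + math.sqrt(ka**2 + 4 * ka * c_total)) / 2
    return -math.log10(x)

# 0.10 M acetic acid with the commonly quoted Ka of about 1.75e-5
print(weak_acid_ph(0.10, 1.75e-5))   # roughly pH 2.9
```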
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions.
Minimize the Gibbs energy of the system.
Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass.
Mass-balance equations
In general, the calculations are rather complicated. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
TA = [A] + [HA] + [H2A]
TH = [H] + [HA] + 2[H2A] − [OH]
with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]2 and [OH] = Kw[H]−1
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants.
General expressions applicable to all systems with two reagents, A and B, would be
TA = [A] + Σi pi βi [A]pi[B]qi
TB = [B] + Σi qi βi [A]pi[B]qi
where the sum runs over all complexes ApBq with cumulative formation constants βi.
It is easy to see how this can be extended to three or more reagents.
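A sketch of how the two mass-balance equations can be solved numerically for the free concentrations, here with SciPy's fsolve (assumed to be available) and invented values for β1, β2 and the total concentrations; solving in log space keeps the free concentrations positive. From the free concentrations, the fraction of each species follows directly from the expressions for the "complexes" above.

```python
from scipy.optimize import fsolve

# Hypothetical dibasic acid: cumulative association constants and Kw (assumed values)
beta1, beta2 = 1e9, 1e13      # [HA] = beta1*[A]*[H], [H2A] = beta2*[A]*[H]**2
kw = 1e-14
TA, TH = 1e-3, 1.5e-3         # analytical (total) concentrations of A and H

def mass_balance(log_free):
    """Residuals of the two mass-balance equations, with [A] and [H] in log10 units."""
    a, h = 10.0 ** log_free
    ha, h2a, oh = beta1 * a * h, beta2 * a * h * h, kw / h
    return (a + ha + h2a - TA,
            h + ha + 2 * h2a - oh - TH)

# Rough starting guesses; they may need adjusting for other constants
log_a, log_h = fsolve(mass_balance, x0=(-8.0, -4.0))
print("p[H]     =", -log_h)
print("free [A] =", 10.0 ** log_a)
```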
Polybasic acids
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq): it shows the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
Solution and precipitation
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are soluble aluminium hydroxide complexes such as Al(OH)2+, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)4−, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
Minimization of Gibbs energy
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:
G = Σj μj Nj
where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:
μj = μj° + RT ln Aj
where μj° is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:
Σj aij Nj = bi0
where aij is the number of atoms of element i in molecule j and bi0 is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:
L = Σj Nj μj + Σi λi (bi0 − Σj aij Nj)
where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λi to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by
∂L/∂Nj = 0, that is, μj = Σi λi aij
∂L/∂λi = 0, that is, Σj aij Nj = bi0
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
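A toy sketch of constrained Gibbs-energy minimization for the gas-phase equilibrium N2O4 ⇌ 2 NO2 at 1 bar and 298 K, using SciPy's SLSQP solver rather than an explicit Lagrange-multiplier solve; the standard Gibbs energies of formation are approximate textbook values and the setup (ideal-gas mixture, a single nitrogen-balance constraint) is deliberately simplified.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15
# Approximate standard Gibbs energies of formation at 298 K (J/mol), textbook values
mu0 = np.array([97.9e3, 51.3e3])          # [N2O4, NO2]
a = np.array([[2, 1]])                    # nitrogen atoms per molecule
b = np.array([2.0])                       # total N atoms (start from 1 mol N2O4)

def gibbs(n):
    n = np.clip(n, 1e-12, None)           # keep the logarithm defined
    n_tot = n.sum()
    # ideal-gas mixture at a total pressure of 1 bar (standard pressure)
    return float(np.sum(n * (mu0 + R * T * np.log(n / n_tot))))

constraints = [{"type": "eq", "fun": lambda n: a @ n - b}]
result = minimize(gibbs, x0=np.array([0.5, 1.0]), method="SLSQP",
                  bounds=[(1e-12, None)] * 2, constraints=constraints)

n_n2o4, n_no2 = result.x
print(result.x)                               # equilibrium amounts of N2O4 and NO2
print(n_no2**2 / (n_n2o4 * result.x.sum()))   # Kp estimate, roughly 0.15 with these inputs
```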
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:
Σj νj Rj = 0
where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:
Σj aij νj = 0
Multiplying the first equilibrium condition by νj, summing over j and using the above equation yields:
0 = Σj νj μj
As above, defining ΔG = Σj νj μj and ΔG° = Σj νj μj°, this can be written ΔG = ΔG° + RT ln(Πj Aj^νj), so that
ΔG° = −RT ln Kc
where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials.
| Physical sciences | Chemical reactions | null |
5308 | https://en.wikipedia.org/wiki/Combination | Combination | In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k) or nCk, is equal to the binomial coefficient
C(n, k) = n(n − 1)⋯(n − k + 1) / (k(k − 1)⋯1),
which can be written using factorials as n!/(k!(n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) k! or C(n, k) = P(n, k)/k!. The set of all k-combinations of a set S is often denoted by a binomial-coefficient-style symbol with S in place of n, read "S choose k".
A combination is a selection of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
Number of k-combinations
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a variation such as nCk, or by Cnk with n as a subscript and k as a superscript (the last form is standard in French, Romanian, Russian, and Chinese texts). The same number however occurs in many other mathematical contexts, where it is written as the binomial coefficient (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define C(n, k) for all natural numbers k at once by the relation
(1 + X)n = Σk≥0 C(n, k) Xk,
from which it is clear that
C(n, 0) = C(n, n) = 1,
and further
C(n, k) = 0
for k > n.
To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables Xs labeled by the elements s of S, and expand the product over all elements of S:
Πs∈S (1 + Xs);
it has 2n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables Xs. Now setting all of the Xs equal to the unlabeled variable X, so that the product becomes (1 + X)n, the term for each k-combination from S becomes Xk, so that the coefficient of that power in the result equals the number of such k-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X)n, one can use (in addition to the basic cases already given) the recursion relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
for 0 < k < n, which follows from (1 + X)n = (1 + X)n−1(1 + X); this leads to the construction of Pascal's triangle.
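A small sketch of this recursion, building Pascal's triangle row by row:

```python
def pascal_triangle(n_max):
    """Rows 0..n_max of Pascal's triangle, built from C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    rows = [[1]]
    for n in range(1, n_max + 1):
        prev = rows[-1]
        row = [1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1]
        rows.append(row)
    return rows

for row in pascal_triangle(5):
    print(row)
# [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1], [1, 5, 10, 10, 5, 1]
```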
For determining an individual binomial coefficient, it is more practical to use the formula
C(n, k) = n(n − 1)(n − 2)⋯(n − k + 1) / k!.
The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.
When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
C(n, k) = C(n, n − k),
for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
C(n, k) = n! / (k!(n − k)!),
where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula.
The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
C(n, k) = C(n, k − 1) × (n − k + 1)/k   for k > 0,
C(n, k) = C(n − 1, k) × n/(n − k)   for k < n,
C(n, k) = C(n − 1, k − 1) × n/k   for n, k > 0.
Together with the basic cases C(n, 0) = 1 = C(n, n), these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.
Example of counting combinations
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
C(52, 5) = (52 × 51 × 50 × 49 × 48) / (5 × 4 × 3 × 2 × 1) = 311,875,200 / 120 = 2,598,960.
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
C(52, 5) = 52! / (5! 47!) = (52 × 51 × 50 × 49 × 48 × 47!) / (5! × 47!) = (52 × 51 × 50 × 49 × 48) / 120 = 2,598,960.
Another alternative computation, equivalent to the first, is based on writing
C(52, 5) = (52/1) × (51/2) × (50/3) × (49/4) × (48/5),
which gives
C(52, 5) = 2,598,960.
When evaluated in the following order, 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.
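The same interleaved multiply-then-divide idea can be written as a short routine; here the factors are taken in ascending rather than descending order, which keeps each intermediate result a binomial coefficient in the same way, so every division is exact.

```python
def binomial(n, k):
    """C(n, k) using only integer arithmetic.
    Dividing after each multiplication keeps every intermediate value a binomial
    coefficient, so the divisions are always exact and the numbers stay small."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)            # use the symmetry C(n, k) = C(n, n - k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

print(binomial(52, 5))   # 2598960
```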
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
C(52, 5) = 52! / (5! 47!) = 2,598,960,
which requires evaluating the 68-digit number 52! before dividing.
Enumerating k-combinations
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
There are many ways to enumerate k combinations. One way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination. Then, repeatedly move to the next allowed k-combination by incrementing the smallest index number for which this would not create two equal index numbers, at the same time resetting all smaller index numbers to their initial values.
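A sketch of the index-incrementing enumeration just described, for the zero-based set {0, ..., n − 1}:

```python
def k_combinations(n, k):
    """Enumerate all k-combinations of {0, ..., n-1} as sorted index tuples,
    following the incrementing rule described in the text."""
    if k > n:
        return
    indices = list(range(k))                 # first combination: 0, 1, ..., k-1
    while True:
        yield tuple(indices)
        # find the smallest index that can be incremented without a collision
        for i in range(k):
            limit = indices[i + 1] if i + 1 < k else n
            if indices[i] + 1 < limit:
                indices[i] += 1
                indices[:i] = range(i)       # reset all smaller indices
                break
        else:
            return                           # no index can move: enumeration is done

print(list(k_combinations(4, 2)))
# [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
```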
Number of combinations with repetition
A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let xi denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation:
x1 + x2 + ⋯ + xn = k.
If S has n elements, the number of such k-multisubsets is denoted by
((n k)), read "n multichoose k",
a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients:
((n k)) = C(n + k − 1, k).
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by x1 stars, a separator (a bar), then x2 more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n − 1 (since a separation into n parts needs n − 1 separators). Thus, a string of k + n − 1 (or n + k − 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of the k + n − 1 positions to place stars and filling the remaining positions with bars. For example, a solution of the equation x1 + x2 + x3 + x4 = 10 (n = 4 and k = 10) can be represented by a string of 10 stars separated into four groups by 3 bars.
The number of such strings is the number of ways to place 10 stars in 13 positions, which is the number of 10-multisubsets of a set with 4 elements.
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for n, k > 0, the number of k-multisubsets of an n-set equals the number of (n − 1)-multisubsets of a (k + 1)-set, since both equal C(n + k − 1, k) = C(n + k − 1, n − 1).
This identity follows from interchanging the stars and bars in the above representation.
Example of counting multisubsets
For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as
((4 3)) = C(4 + 3 − 1, 3) = C(6, 3) = 20.
This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions of the equation x1 + x2 + x3 + x4 = 3, and the last column gives the stars and bars representation of the solutions.
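A short check of the multichoose formula, together with an explicit listing produced by Python's standard library; the donut names are invented placeholders.

```python
from itertools import combinations_with_replacement
from math import comb

def multichoose(n, k):
    """Number of k-multisubsets of an n-element set: C(n + k - 1, k)."""
    return comb(n + k - 1, k)

print(multichoose(4, 3))   # 20 ways to choose 3 donuts from 4 types

# Listing the selections explicitly agrees with the count
donut_types = ["glazed", "chocolate", "jelly", "plain"]
selections = list(combinations_with_replacement(donut_types, 3))
print(len(selections))     # 20
print(selections[0])       # ('glazed', 'glazed', 'glazed')
```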
Number of k-combinations for all k
The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2n. In terms of combinations, Σ0≤k≤n C(n, k) = 2n, which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2n − 1, where each digit position is an item from the set of n.
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:
Representing these subsets (in the same order) as base 2 numerals:
0 – 000
1 – 001
2 – 010
3 – 011
4 – 100
5 – 101
6 – 110
7 – 111
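A sketch of this base-2 enumeration, using the bits of a counter to decide which items belong to each subset:

```python
def all_subsets(items):
    """Enumerate every subset of items by counting from 0 to 2**n - 1 in base 2;
    bit position i of the counter says whether items[i] is included."""
    n = len(items)
    for mask in range(2 ** n):
        yield [items[i] for i in range(n) if mask & (1 << i)]

for subset in all_subsets([1, 2, 3]):
    print(subset)
# [], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]
```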
Probability: sampling a random combination
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of (k − number already chosen) / (n − number already visited) (see Reservoir sampling). Another is to pick a random non-negative integer less than C(n, k) and convert it into a combination using the combinatorial number system.
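A sketch of the one-pass selection method described above (sometimes called selection sampling): each element is kept with probability (number still needed) / (number still unseen), which yields a uniformly random k-combination.

```python
import random

def random_combination(population, k):
    """Draw a uniformly random k-combination of population in a single pass."""
    selected = []
    needed, remaining = k, len(population)
    for item in population:
        # keep this item with probability (still needed) / (still unseen)
        if random.random() < needed / remaining:
            selected.append(item)
            needed -= 1
        remaining -= 1
        if needed == 0:
            break
    return selected

print(random_combination(list(range(10)), 3))   # e.g. [1, 4, 9]
```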
Number of ways to put objects into bins
A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put objects into bins is given by the multinomial coefficient
C(n; k1, k2, ..., km) = n! / (k1! k2! ⋯ km!),
where n is the number of items, m is the number of bins, and ki is the number of items that go into bin i.
One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers 1, ..., k1 into the first bin in order, the objects with numbers k1 + 1, ..., k1 + k2 into the second bin in order, and so on. There are n! distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bin's contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of k1! k2! ⋯ km! distinct numberings, and the number of equivalence classes is n! / (k1! k2! ⋯ km!).
The binomial coefficient is the special case where k items go into the chosen bin and the remaining n − k items go into the unchosen bin:
C(n, k) = C(n; k, n − k) = n! / (k!(n − k)!).
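A small sketch of the multinomial coefficient, with the binomial coefficient recovered as the two-bin special case:

```python
from math import factorial

def multinomial(*bin_sizes):
    """Number of ways to put n = sum(bin_sizes) distinct objects into bins
    of the given sizes: n! / (k1! * k2! * ... * km!)."""
    n = sum(bin_sizes)
    result = factorial(n)
    for k in bin_sizes:
        result //= factorial(k)
    return result

print(multinomial(2, 3, 2))    # 210 ways to split 7 objects into bins of sizes 2, 3 and 2
print(multinomial(5, 47))      # 2598960: the binomial coefficient C(52, 5) as a special case
```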
| Mathematics | Discrete mathematics | null |
5309 | https://en.wikipedia.org/wiki/Software | Software | Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications.
The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.
Software can generally be categorized into two main types:
operating systems, which manage hardware resources and provide services for applications
application software, which performs specific tasks for users
The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.
The process of developing software involves several stages. The stages include software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.
History
The first use of the word software is credited to mathematician John Wilder Tukey in 1958.
The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers. Initially, hardware resources were more expensive than human resources. As programs became complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and expressed the underlying algorithms in the code. Early languages include Fortran, Lisp, and COBOL.
Types
There are two main types of software:
Operating systems are "the layer of software that manages a computer's resources for its users and their applications". There are three main purposes that an operating system fulfills:
Allocating resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory.
Providing an interface that abstracts the details of accessing hardware details (like physical memory) to make things easier for programmers.
Offering common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten.
Application software runs on top of the operating system and uses the computer's resources to perform a task. There are many different types of application software because the range of tasks that can be performed with modern computers is so large. Applications account for most software and require the environment provided by an operating system, and often other applications, in order to function.
Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the software to customers, often in exchange for a subscription fee. By 2023, SaaS products—which are usually delivered via a web application—had become the primary method that companies deliver applications.
Software development and maintenance
Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.
Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time. New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.
Completing a software project involves various forms of expertise, not just in software programmers but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.
Quality and security
Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can be very serious including death. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.
The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation. Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware.
Encoding and execution
Programming languages
Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse. Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.
Compilation, interpretation, and execution
The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which makes them 10 to 100 times slower than compiled programming languages.
Legal issues
Liability
Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.
Licenses
Source code is protected by copyright law that vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are often treated as a trade secret and concealed by such methods as non-disclosure agreements. Software copyright has been recognized since the mid-1970s and is vested in the company that makes the software, not the employees or contractors who wrote it. The use of most software is governed by an agreement (software license) between the copyright holder and the user. Proprietary software is usually sold under a restrictive license that limits copying and reuse (often enforced with tools such as digital rights management (DRM)). Open-source licenses, in contrast, allow free use and redistribution of software with few conditions. Most open-source licenses used for software require that modifications be released under the same license, which can create complications when open-source software is reused in proprietary projects.
Patents
Patents give an inventor an exclusive, time-limited license for a novel product or process. Ideas about what software could accomplish are not protected by law and concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Supreme Court decided that business processes could be patented. Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.
Impact
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". It has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software.
| Technology | Computer software | null |
5311 | https://en.wikipedia.org/wiki/Computer%20programming | Computer programming | Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
History
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843 when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage himself had written a program for the Analytical Engine in 1837.
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine language
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
Compiler languages
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
Source code entry
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming
Quality requirements
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:
Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness, and completeness of a program's user interface.
Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
Using automated tests and fitness functions can help to maintain some of the aforementioned attributes.
Readability of source code
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing, and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:
Different indent styles (whitespace)
Comments
Decomposition
Naming conventions for objects (such as variables, classes, functions, procedures, etc.)
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.
Algorithmic complexity
The academic field and the engineering practice of computer programming are concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using Big O notation, which expresses resource use—such as execution time or memory consumption—in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
Methodologies
The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic programming languages.
Measuring language usage
It is very difficult to determine what are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).
Debugging
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler can make it crash when parsing some large source file, a simplification of the test case that results in only few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for bugs to appear. Scripting and breakpointing are also part of this process.
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
Programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
The details look different in different languages, but a few basic instructions appear in just about every language:
Input: Gather data from the keyboard, a file, or some other device.
Output: Display data on the screen or send data to a file or other device.
Arithmetic: Perform basic arithmetical operations like addition and multiplication.
Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
Repetition: Perform some action repeatedly, usually with some variation.
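For illustration, a minimal Python sketch (not taken from Downey's book, and using hypothetical pay rates) touches each of these basic instructions:

    name = input("Name? ")                      # input: gather data from the keyboard
    hours = float(input("Hours worked? "))      # input, converted to a number
    pay = hours * 12.50                         # arithmetic: multiplication
    if hours > 40:                              # conditional execution
        pay = pay + (hours - 40) * 6.25         # arithmetic: a hypothetical overtime bonus
    for week in range(1, 4):                    # repetition
        print(name, "week", week, "pay", pay)   # output: display data on the screen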
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
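For example, on a typical Unix-like system a Python program can call the cos routine compiled into the C math library through the ctypes module, relying only on the library's run-time calling convention (a sketch; the library name is resolved at run time):

    import ctypes
    import ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library("m"))   # locate and load the C math library
    libm.cos.restype = ctypes.c_double                  # declare the C return type
    libm.cos.argtypes = [ctypes.c_double]               # declare the C argument types
    print(libm.cos(0.0))                                # 1.0, computed by C code called from Python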
Learning to program
Learning to program has a long history related to professional standards and practices, academic initiatives and curricula, and commercial books and materials for students, self-taught learners, hobbyists, and others who desire to create or customize software for personal use. Since the 1960s, learning to program has taken on the characteristics of a popular movement, with the rise of academic disciplines, inspirational leaders, collective identities, and strategies to grow the movement and institutionalize change. Through these social ideals and educational agendas, learning to code has become important not just for scientists and engineers, but for millions of citizens who have come to believe that creating software is beneficial to society and its members.
Context
In 1957, there were approximately 15,000 computer programmers employed in the U.S., a figure that accounts for 80% of the world's active developers. In 2014, there were approximately 18.5 million programmers in the world, of whom 11 million can be considered professionals and 7.5 million students or hobbyists. Before the rise of the commercial Internet in the mid-1990s, most programmers learned about software construction through books, magazines, user groups, and informal instruction methods, with academic coursework and corporate training playing important roles for professional workers.
The first book containing specific instructions about how to program a computer may have been Maurice Wilkes, David Wheeler, and Stanley Gill's Preparation of Programs for an Electronic Digital Computer (1951). The book offered a selection of common subroutines for handling basic operations on the EDSAC, one of the world’s first stored-program computers.
When high-level languages arrived, they were introduced by numerous books and materials that explained language keywords, managing program flow, working with data, and other concepts. These languages included FLOW-MATIC, COBOL, FORTRAN, ALGOL, Pascal, BASIC, and C. An example of an early programming primer from these years is Marshal H. Wrubel's A Primer of Programming for Digital Computers (1959), which included step-by-step instructions for filling out coding sheets, creating punched cards, and using the keywords in IBM’s early FORTRAN system. Daniel McCracken's A Guide to FORTRAN Programming (1961) presented FORTRAN to a larger audience, including students and office workers.
In 1961, Alan Perlis suggested that all university freshmen at Carnegie Technical Institute take a course in computer programming. His advice was published in the popular technical journal Computers and Automation, which became a regular source of information for professional programmers.
Programmers soon had a range of learning texts at their disposal. Programmer’s references listed keywords and functions related to a language, often in alphabetical order, as well as technical information about compilers and related systems. An early example was IBM’s Programmers’ Reference Manual: the FORTRAN Automatic Coding System for the IBM 704 EDPM (1956).
Over time, the genre of programmer’s guides emerged, which presented the features of a language in tutorial or step by step format. Many early primers started with a program known as “Hello, World”, which presented the shortest program a developer could create in a given system. Programmer’s guides then went on to discuss core topics like declaring variables, data types, formulas, flow control, user-defined functions, manipulating data, and other topics.
Early and influential programmer’s guides included John G. Kemeny and Thomas E. Kurtz’s BASIC Programming (1967), Kathleen Jensen and Niklaus Wirth’s The Pascal User Manual and Report (1971), and Brian Kernighan and Dennis Ritchie’s The C Programming Language (1978). Similar books for popular audiences (but with a much lighter tone) included Bob Albrecht’s My Computer Loves Me When I Speak BASIC (1972), Al Kelley and Ira Pohl’s A Book on C (1984), and Dan Gookin's C for Dummies (1994).
Beyond language-specific primers, there were numerous books and academic journals that introduced professional programming practices. Many were designed for university courses in computer science, software engineering, or related disciplines. Donald Knuth’s The Art of Computer Programming (1968 and later), presented hundreds of computational algorithms and their analysis. The Elements of Programming Style (1974), by Brian W. Kernighan and P. J. Plauger, concerned itself with programming style, the idea that programs should be written not only to satisfy the compiler but human readers. Jon Bentley’s Programming Pearls (1986) offered practical advice about the art and craft of programming in professional and academic contexts. Texts specifically designed for students included Doug Cooper and Michael Clancy's Oh Pascal! (1982), Alfred Aho’s Data Structures and Algorithms (1983), and Daniel Watt's Learning with Logo (1983).
Technical publishers
As personal computers became mass-market products, thousands of trade books and magazines sought to teach professional, hobbyist, and casual users to write computer programs. A sample of these learning resources includes BASIC Computer Games, Microcomputer Edition (1978), by David Ahl; Programming the Z80 (1979), by Rodnay Zaks; Programmer’s CP/M Handbook (1983), by Andy Johnson-Laird; C Primer Plus (1984), by Mitchell Waite and The Waite Group; The Peter Norton Programmer’s Guide to the IBM PC (1985), by Peter Norton; Advanced MS-DOS (1986), by Ray Duncan; Learn BASIC Now (1989), by Michael Halvorson and David Rygymr; Programming Windows (1992 and later), by Charles Petzold; Code Complete: A Practical Handbook for Software Construction (1993), by Steve McConnell; and Tricks of the Game-Programming Gurus (1994), by André LaMothe.
The PC software industry spurred the creation of numerous book publishers that offered programming primers and tutorials, as well as books for advanced software developers. These publishers included Addison-Wesley, IDG, Macmillan Inc., McGraw-Hill, Microsoft Press, O’Reilly Media, Prentice Hall, Sybex, Ventana Press, Waite Group Press, Wiley (publisher), Wrox Press, and Ziff-Davis.
Computer magazines and journals also provided learning content for professional and hobbyist programmers. A partial list of these resources includes Amiga World, Byte (magazine), Communications of the ACM, Computer (magazine), Compute!, Computer Language (magazine), Computers and Electronics, Dr. Dobb’s Journal, IEEE Software, Macworld, PC Magazine, PC/Computing, and UnixWorld.
Digital learning and online resources
Between 2000 and 2010, computer book and magazine publishers declined significantly as providers of programming instruction, as programmers moved to Internet resources to expand their access to information. This shift brought forward new digital products and mechanisms to learn programming skills. During the transition, digital books from publishers transferred information that had traditionally been delivered in print to new and expanding audiences.
Important Internet resources for learning to code included blogs, wikis, videos, online databases, subscription sites, and custom websites focused on coding skills. New commercial resources included YouTube videos, Lynda.com tutorials (later LinkedIn Learning), Khan Academy, Codecademy, GitHub, and numerous coding bootcamps.
Most software development systems and game engines included rich online help resources, including integrated development environments (IDEs), context-sensitive help, APIs, and other digital resources. Commercial software development kits (SDKs) also provided a collection of software development tools and documentation in one installable package.
Commercial and non-profit organizations published learning websites for developers, created blogs, and established newsfeeds and social media resources about programming. Corporations like Apple, Microsoft, Oracle, Google, and Amazon built corporate websites providing support for programmers, including resources like the Microsoft Developer Network (MSDN). Contemporary movements like Hour of Code (Code.org) show how learning to program has become associated with digital learning strategies, education agendas, and corporate philanthropy.
Programmers
Computer programmers are those who write computer software. Their jobs usually involve:
Prototyping
Coding
Debugging
Documentation
Integration
Maintenance
Requirements analysis
Software architecture
Software testing
Specification
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.
| Technology | Programming | null |
5320 | https://en.wikipedia.org/wiki/Carbon%20nanotube | Carbon nanotube | A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometre range (nanoscale). They are one of the allotropes of carbon. Two broad classes of carbon nanotubes are recognized:
Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometres, about 100,000 times smaller than the width of a human hair. They can be idealised as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder.
Multi-walled carbon nanotubes (MWCNTs) consist of multiple single-wall carbon nanotubes nested in a tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNTs.
Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibres), nanotechnology (including nanomedicine), and other applications of materials science.
The predicted properties for SWCNTs were tantalising, but a path to synthesising them was lacking until 1993, when Iijima and Ichihashi at NEC, and Bethune and others at IBM independently discovered that co-vaporising carbon and transition metals such as iron and cobalt could specifically catalyse SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterise and find applications for SWCNTs.
History
The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.
In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50-nanometre diameter tubes made of carbon in the Journal of Physical Chemistry of Russia. This discovery went largely unnoticed, as the article was published in Russian, and Western scientists' access to the Soviet press was limited during the Cold War. Monthioux and Kuznetsov discussed this earlier work in their Carbon editorial.
In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled-up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs of today are strongly related to the VPGCF developed by Endo; in fact, the method is called the "Endo-process", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference on Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: a circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."
Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods; and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high-temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
In 2020, during an archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists.
Structure of SWCNTs
Basic details
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.
In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of one type consists entirely of closed paths of that type, connected to each other.
The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.
Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 of the same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w = n u + m v on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.
Types
The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube), and the angle α between the directions of u and w,
which may range from 0 degrees (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.
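For illustration, a minimal Python sketch can reduce any index pair to the unique representative with n > 0 and m ≥ 0 by repeatedly applying the 60-degree rotation (n, m) → (−m, n + m) described above:

    def canonical_type(n, m):
        """Return the unique equivalent pair with n > 0 and m >= 0, applying the
        order-6 rotation (n, m) -> (-m, n + m) of the graphene lattice."""
        for _ in range(6):              # the rotation has order 6, so 6 checks always suffice
            if n > 0 and m >= 0:
                return (n, m)
            n, m = -m, n + m
        raise ValueError("the pair (0, 0) does not describe a nanotube")

    # The six pairs listed above all describe the same tube:
    assert all(canonical_type(*p) == (1, 2)
               for p in [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)])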
Chirality and mirror symmetry
A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (both inclusive), is called the "chiral angle" of the nanotube.
Circumference and diameter
From n and m one can also compute the circumference c, which is the length of the vector w. Writing a ≈ 246 pm for the length of u and v (the lattice constant of the graphene sheet), it turns out to be

    c = a √(n² + nm + m²) ≈ 246 √(n² + nm + m²)

in picometres. The diameter of the tube is then d = c/π, that is

    d ≈ 78.3 √(n² + nm + m²)

also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.)

The tilt angle α between u and w and the circumference c are related to the type indices n and m by

    α = arg(2n + m, m√3),    c = a √(n² + nm + m²)

where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas

    m = (2c sin α) / (√3 a),    n = (c cos α) / a − m/2
which must evaluate to integers.
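For illustration, these relations can be evaluated with a few lines of Python (taking a ≈ 246 pm as above):

    import math

    A = 246.0  # graphene lattice constant in picometres (the length of u and v)

    def circumference_pm(n, m):
        return A * math.sqrt(n * n + n * m + m * m)

    def diameter_pm(n, m):
        return circumference_pm(n, m) / math.pi

    def tilt_angle_deg(n, m):
        # arg(x, y) in the clockwise convention used above corresponds to atan2(y, x)
        return math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))

    # Example: a (6,5) tube is about 747 pm (0.75 nm) across, with a tilt angle of about 27 degrees
    print(round(diameter_pm(6, 5)), round(tilt_angle_deg(6, 5), 1))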
Physical limits
Narrowest examples
If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.
The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. Assigning of the carbon nanotube type was done by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.
The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.
Length
The observation of the longest carbon nanotubes grown so far, around 0.5 metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.
Density
The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with co-catalysts cobalt and molybdenum at a lower-than-typical temperature of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g·cm⁻³. The material showed ohmic conductivity (lowest resistance ~22 kΩ).
Variants
There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization (ISO) typically uses "single-walled carbon nanotube (SWCNT)" or "multi-walled carbon nanotube (MWCNT)" in its documents.
Multi-walled
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen.
The telescopic motion ability of inner shells, allowing them to act as low-friction, low-wear nanobearings and nanosprings, may make them a desirable material in nanoelectromechanical systems (NEMS). The retraction force that arises during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
Junctions and crosslinking
Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.
Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.
Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.
Other morphologies
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.
A carbon peapod is a novel hybrid carbon material which traps fullerenes inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation, and it has also been considered as a nanoscale oscillator in theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.
Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo-style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.
Properties
Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.
Mechanical
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp² bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 gigapascals (GPa). (For illustration, this translates into the ability to endure tension of a weight equivalent to about 6,400 kg on a cable with a cross-section of 1 mm².) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, their specific strength of up to 48,000 kN·m/kg is the best of known materials, compared to high-carbon steel's 154 kN·m/kg.
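As a rough check, taking a shell strength of 63 GPa and a density of about 1,310 kg/m³ gives 63 × 10⁹ N/m² ÷ 1,310 kg/m³ ≈ 48 × 10⁶ N·m/kg, i.e. about 48,000 kN·m/kg, consistent with the figure quoted above.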
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
On the other hand, there is evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure the radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. These measurements indicated radial Young's moduli of only a few GPa, in contrast to the very high axial (lengthwise) modulus, which has been measured experimentally at up to 1.8 TPa for nanotubes near 2.4 μm in length, confirming that nanotubes are comparatively soft in the radial direction.
Electrical
Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor.
Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
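For illustration, this rule can be written as a minimal Python function:

    def electronic_character(n, m):
        """Classify a single-walled nanotube by the (n - m) mod 3 rule described above.
        (Curvature effects in very small-diameter tubes can break this rule; see below.)"""
        if n == m:
            return "metallic"                              # armchair tubes
        if (n - m) % 3 == 0:
            return "quasi-metallic (very small band gap)"
        return "semiconducting"

    print(electronic_character(10, 10))  # metallic (armchair)
    print(electronic_character(9, 3))    # quasi-metallic
    print(electronic_character(6, 4))    # semiconducting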
Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10⁹ A/cm², which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome, however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.
Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e²/h is the conductance of a single ballistic quantum channel.
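Numerically, G0 = 2e²/h ≈ 77.5 μS, so this maximum conductance of 2G0 ≈ 155 μS corresponds to a lowest possible two-terminal resistance of roughly 6.5 kΩ for an ideal ballistic single-walled tube.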
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.
Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.
In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published his group's findings on the use of carbon nanotubes to create an electric current. When the structures were immersed in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," and the work was described as a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.
Electromechanical
Semiconducting carbon nanotubes have shown piezoresistive properties when mechanical force is applied. The structural deformation causes a change in the band gap, which affects the conductance. This property has the potential to be used in strain sensors.
Optical
Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties.
The optical properties of carbon nanotubes have been explored for use in applications such as light-emitting diodes (LEDs) and photodetectors, and devices based on a single nanotube have been produced in the lab. Their unique feature is not their efficiency, which is still relatively low, but their narrow selectivity in the wavelength of emission and detection of light and the possibility of fine-tuning it through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
Thermal
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m⁻¹·K⁻¹; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m⁻¹·K⁻¹. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m⁻¹·K⁻¹, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m⁻¹·K⁻¹ so far. Networks composed of nanotubes demonstrate a wide range of thermal conductivities, from the level of thermal insulation, with a thermal conductivity of 0.1 W·m⁻¹·K⁻¹, up to such high values. This depends on the contribution to the thermal resistance of the system from impurities, misalignments, and other factors. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to the scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Antibacterial
Recently, carbon-nanotubes have been shown to have antibacterial properties. They disrupt normal bacterial function by causing physical/mechanical damage, facilitating oxidative stress or lipid extraction, inhibiting bacterial metabolism, and isolating functional sites via wrapping with CNM-containing nanomaterials.
Synthesis
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge and laser ablation are batch processes, CVD can be run as either a batch or a continuous process, and HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantities and offers a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT-fiber factories around the world. One problem with CVD processes is the high variability in the nanotubes' characteristics. Advances in catalysis and continuous growth are making CNTs produced by the HiPCO process more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantities. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst; these catalysts provide nucleation sites for the nanotubes to grow, while cheaper iron-based catalysts such as ferrocene can be used for the CVD process.
Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties.
When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands, each of which then nucleates a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and thus the diameter of the nanotubes grown. The amount of time the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNTs/mm²) while increasing the catalyst diameter.
The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.
Purification
As-synthesized carbon nanotubes typically contain impurities and, most importantly, a mixture of carbon nanotubes of different chiralities. Therefore, multiple methods have been developed to purify them, including polymer-assisted separation, density gradient ultracentrifugation (DGU), chromatography and aqueous two-phase extraction (ATPE). These methods have been reviewed in multiple articles.
Certain polymers selectively disperse or wrap CNTs of a particular chirality, metallic character or diameter. For example, poly(phenylenevinylene)s disperse CNTs of specific diameters (0.75–0.84 nm) and polyfluorenes are highly selective for semiconducting CNTs. The procedure mainly involves two steps: sonicating the mixture of CNTs and polymer in a solvent, then centrifuging it; the supernatant contains the desired CNTs.
Density gradient ultracentrifugation is a method based on the density differences of CNTs, so that different components become layered in centrifuge tubes under centrifugal force. Chromatography-based methods include size exclusion (SEC), ion-exchange (IEX) and gel chromatography. In SEC, CNTs are separated by differences in size using a stationary phase with different pore sizes. In IEX, separation is achieved through differential adsorption and desorption onto chemically functionalized resins packed in an IEX column, so understanding the interaction between the CNT mixture and the resins is important; the first reported IEX separation was of DNA-wrapped SWCNTs. Gel chromatography is based on the partitioning of CNTs between a stationary and a mobile phase; semiconducting CNTs are found to adsorb more strongly to the gel than metallic CNTs. While promising, its current application is limited to the separation of semiconducting (n,m) species.
ATPE uses two water-soluble polymers such as polyethylene glycol (PEG) and dextran. When mixed, two immiscible aqueous phases form spontaneously, and each of the two phases shows a different affinity to CNTs. Partition depends on the solvation energy difference between two similar phases of microscale volumes. By changing the separation system or temperatures, and adding strong oxidants, reductants, or salts, the partition of CNTs species into the two phases can be adjusted.
Despite the progress that has been made to separate and purify CNTs, many challenges remain, such as the growth of chirality-controlled CNTs, so that no further purification is needed, or large-scale purification.
Advantages of monochiral CNTs
Monochiral CNTs have the advantage of containing few or no impurities and of having well-defined, uncongested optical spectra. This allows, for example, the creation of CNT-based biosensors with higher sensitivity and selectivity; monochiral SWCNTs are necessary for multiplexed and ratiometric sensing schemes and offer enhanced sensitivity and biocompatibility.
Functionalization
Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.
Chemical routes such as covalent functionalization have been studied extensively, which involves the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to set the carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.
Functionalization can improve the characteristically weak dispersibility of CNTs in many solvents, such as water, a consequence of their strong intermolecular π–π interactions. This can enhance the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications.
Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluids).
The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes.
In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so-called Fluocar materials) with grafted (halo)fluoroalkyl functionality.
Modeling
Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The appropriate size of the micromechanics model depends strongly on the mechanical property being studied. The concept of a representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model to replicate the actual behavior of the CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While ideal models are computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to account for variability such as waviness, orientation and agglomeration of multiwall or single-wall carbon nanotubes.
Metrology
There are many metrology standards and reference materials available for carbon nanotubes.
For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.
NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.
Safety and health
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increases in the length and diameter of CNTs are correlated with increased toxicity and pathological alterations in the lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may instead be due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects have not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers. NIOSH has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m³ for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNTs caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.
As of October 2016, single-wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 100 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single-wall carbon nanotubes manufactured by OCSiAl, which submitted the application.
Applications
Carbon nanotubes are currently used in multiple industrial and consumer applications. These include battery components; polymer composites, where they improve the mechanical, thermal and electrical properties of the bulk product; and highly absorptive black paint. Many other applications are under development, including field effect transistors for electronics, high-strength fabrics, biosensors for biomedical and agricultural applications, and many others.
Biomedical Applications
Because of their relatively large surface area, CNTs are capable of interacting with a wide variety of therapeutic and diagnostic agents (drugs, genes, vaccines, antibodies, biosensors, etc.). This can be utilized to assist in drug delivery directly into cells. In addition, CNTs have recently been used as reinforcements in implants and scaffolds due to their suitable reaction area, high elastic modulus, and load transfer capability.
CNTs have been shown to increase the effectiveness of bioactive coatings for the attachment, proliferation, and differentiation of osteoblasts, and have been used as a bone substitution material.
CNTs may be used as reinforcing materials for chitosan-containing coatings used on implants and medical scaffolds.
Biosensing
SWCNTs have nanoscale dimensions that match the size of biological species. Due to this size compatibility and their large surface-to-volume ratio, they are sensitive to changes in their chemical environment. Through covalent and non-covalent surface functionalization, SWCNTs can be precisely tailored for selective molecular interactions with a target analyte. The SWCNT represents the transduction unit that converts the interaction into a signal change (optical or electrical). Due to continuous progress in the development of detection strategies, there are numerous examples of the use of SWCNTs as highly sensitive nanosensors (even down to the single molecule level) for a variety of important biomolecules. Examples include the detection of reactive oxygen and nitrogen species, neurotransmitters, other small molecules, lipids, proteins, sugars, DNA/RNA, enzymes as well as bacteria.
The signal change manifests itself in an increase or decrease in the current (electrical) or in a change in the intensity or wavelength of the fluorescence emission (optical). Depending on the type of application, either electrical or optical signal transmission can be advantageous. For sensitive measurement of electronic changes, field-effect transistors (FET) are often used in which the flow of charges within the SWCNTs is measured. The FET structures allow easy on-chip integration and can be parallelized to detect multiple target analytes simultaneously. However, such sensors are more invasive for in vivo applications, as the entire device has to be inserted into the body. Optical detection with semiconducting SWCNTs is based on the radiative recombination of excitons in the near-infrared (NIR) by prior optical (fluorescence) or electrical excitation (electroluminescence). The emission in the NIR enables detection in the biological transparency window, where optical sensor applications benefit from reduced scattering and autofluorescence of biological samples and consequently a high signal-to-noise ratio. Compared to optical sensors in the UV or visible range, the penetration depth in biological tissue is also increased. In addition to the advantage of a contactless readout, SWCNTs have excellent photostability, which enables long-term sensor applications. Furthermore, the nanoscale size of SWCNTs allows dense coating of surfaces, which enables chemical imaging, e.g. of cellular release processes with high spatial and temporal resolution. Detection of several target analytes is possible by the spatial arrangement of different SWCNT sensors in arrays or by hyperspectral detection based on monochiral SWCNT sensors that emit at different emission wavelengths. For fluorescence applications, however, optical filters to distinguish between excitation and emission and a NIR-sensitive detector must be used. Standard silicon detectors can also be used if monochiral SWCNTs (extractable by special purification processes) emitting closer to the visible range (800–900 nm) are used. In order to avoid susceptibility of optical sensors to fluctuating ambient light, internal references such as SWCNTs that are modified to be non-responsive or stable NIR emitters can be used. An alternative is to measure fluorescence lifetimes instead of fluorescence intensities. Overall, SWCNTs therefore have great potential as building blocks for various biosensors.
To render SWCNTs suitable for biosensing, their surface needs to be modified to ensure colloidal stability and provide a handle for biological recognition. Therefore, biosensing and surface modifications (functionalization) are closely related.
Potential future applications include biomedical and environmental applications such as monitoring plant health in agriculture, standoff process control in bioreactors, research/diagnostics of neuronal communication and numerous diseases such as coagulation disorders, diabetes, cancer, microbial and viral infections, testing the efficacy of pharmaceuticals or infection monitoring using smart implants. In industry, SWCNTs are already used as sensors in the detection of gases and odors in the form of an electronic nose or in enzyme screening.
Other current applications
Easton-Bell Sports, Inc. have been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components – including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars.
Amroy Europe Oy manufactures Hybtonite carbon nano-epoxy resins where carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards.
Surrey NanoSystems synthesizes carbon nanotubes to create vantablack ultra-absorptive black paint.
"Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures.
Carbon nanotubes are also used as tips for atomic force microscope probes.
Applications under development
Applications of nanotubes in development in academia and industry include:
Medical devices: Single wall carbon nanotubes used in medical devices provide high flexibility and softness with no skin contamination, properties that are crucial for healthcare applications.
Wearable electronics and 5G/6G communication: Electrodes with single wall carbon nanotubes (SWCNTs) exhibit excellent electrochemical properties and flexibility.
Bitumen and asphalt: The world's first test section of road pavement with single wall carbon nanotubes (SWCNTs) showed a 67% increase in resistance to cracks and ruts, increasing the lifespan of the materials.
Nanocomposites for aviation, automotive, and renewable energy markets: Modifying resin with just 0.02% single wall carbon nanotubes (SWCNTs) increases electrical conductivity by 276% without compromising the mechanical properties of fiber-reinforced polymers, also improving flexural properties and delaying thermal degradation.
Additive manufacturing: single wall carbon nanotubes (SWCNTs) are mixed with a suitable printing medium or used as a filler material in the printing process, creating complex structures with enhanced mechanical and electrical properties.
Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors.
Using carbon nanotubes as a scaffold for diverse microfabrication techniques.
Energy dissipation in self-organized nanostructures under the influence of an electric field.
Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases.
Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012.
The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures. This technology is hoped to greatly reduce the risk of an in-flight failure caused by structural degradation of aircraft.
Zyvex Technologies has also built a 54' maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles.
IMEC is using carbon nanotubes for pellicles in semiconductor lithography.
In tissue engineering, carbon nanotubes have been used as scaffolding for bone growth.
Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or damascus steel.
IBM expected carbon nanotube transistors to be used in integrated circuits by 2020.
SWCNTs have found use in long-lasting, faster-charging lithium-ion batteries; polyamide car parts for e-painting; automotive primers for cost benefits and better aesthetics of topcoats; ESD floors; electrically conductive lining coatings for tanks and pipes; rubber parts with improved heat and oil aging stability; conductive gelcoats for ATEX requirements and tooling conductive gelcoats for increased safety and efficiency; and heating fiber coatings for infrastructure elements.
Potential/Future applications
The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength measured for an individual multi-walled carbon nanotube is 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.
CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Isolated (single- and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm2 without electromigration damage, eliminating the electromigration reliability concerns that plague today's Cu interconnects.
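As a rough, illustrative calculation (assuming a typical 1 nm nanotube diameter, which is not specified in the source), the quoted current density corresponds to a few microamps per tube:

    # Back-of-the-envelope current capacity implied by the quoted current density.
    import math

    J = 1000e6      # A/cm2 (1000 MA/cm2, as quoted above)
    d_cm = 1e-7     # assumed tube diameter of 1 nm, expressed in cm
    area = math.pi * (d_cm / 2) ** 2   # cross-section, treating the tube as a solid cylinder
    print(f"Maximum current per tube ~ {J * area * 1e6:.1f} microamps")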
Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FET). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than that of metallic conductors. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.
Computer science
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
History
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Etymology and scope
Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Fein justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or πληροφορική (pliroforiki, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Philosophy
Epistemology of computer science
Despite the word science in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975 that computer science is an empirical discipline. It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist, and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs that can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.
Paradigms of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
Fields
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
Theoretical computer science
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies.
Theory of computation
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
Information and coding theory
Information theory, closely related to probability and statistics, is concerned with the quantification of information. It was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
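As a minimal, illustrative example (not drawn from the source), the Shannon entropy quantifies the average information content of a source and sets the lower bound for lossless compression:

    # Shannon entropy H = -sum(p * log2(p)), in bits per symbol.
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit per symbol
    print(entropy([0.9, 0.1]))   # biased coin: ~0.47 bits, so its output is more compressible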
Data structures and algorithms
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
Programming language theory and formal methods
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
Applied computer science
Computer graphics and visualization
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Image and sound processing
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information-processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. What is the lower bound on the complexity of fast Fourier transform algorithms? is one of the unsolved problems in theoretical computer science.
Computational science, finance and engineering
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
Human–computer interaction
Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship between emotions, social behavior and brain activity with computers.
Software engineering
Software engineering is the study of designing, implementing, and modifying software to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but also its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt, and software development processes.
Artificial intelligence
Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer systems
Computer architecture and microarchitecture
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959.
Concurrent, parallel and distributed computing
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the parallel random access machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
Computer networks
This branch of computer science aims to manage networks between computers worldwide.
Computer security and cryptography
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
Databases and data mining
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
Discoveries
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:
Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything".
All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything".
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:
move left one location;
move right one location;
read symbol at current location;
print 0 at current location;
print 1 at current location.
Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones:
sequence: first do this, then do that;
selection: IF such-and-such is the case, THEN do this, ELSE do that;
repetition: WHILE such-and-such is the case, DO this.
The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming).
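The following toy sketch (an illustrative construction, not taken from the source) exposes only the five tape operations above and combines them using only sequence, selection and repetition, here to invert a string of bits:

    # A toy tape machine offering only the five basic operations,
    # composed using only sequence, selection (if/else) and repetition (while).
    class Tape:
        def __init__(self, cells):
            self.cells = list(cells) + ['B']   # 'B' marks the blank end of the tape
            self.pos = 0
        def move_left(self):  self.pos -= 1
        def move_right(self): self.pos += 1
        def read(self):       return self.cells[self.pos]
        def print0(self):     self.cells[self.pos] = '0'
        def print1(self):     self.cells[self.pos] = '1'

    tape = Tape("0110")
    while tape.read() != 'B':        # repetition
        if tape.read() == '0':       # selection
            tape.print1()
        else:
            tape.print0()
        tape.move_right()            # sequence
    print(''.join(tape.cells[:-1]))  # prints "1001"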
Programming paradigms
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another.
Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs.
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
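As a small illustration (a hypothetical example, not from the source), the same task of summing squares can be expressed in several of the paradigms listed above:

    # One task, three paradigms (illustrative only).
    from functools import reduce

    # Imperative: statements that mutate program state.
    total = 0
    for n in [1, 2, 3, 4]:
        total += n * n

    # Functional: a single expression, no mutation of state.
    total_fp = reduce(lambda acc, n: acc + n * n, [1, 2, 3, 4], 0)

    # Object-oriented: data and behaviour bundled in an object.
    class SquareSummer:
        def __init__(self, numbers):
            self.numbers = numbers
        def total(self):
            return sum(n * n for n in self.numbers)

    assert total == total_fp == SquareSummer([1, 2, 3, 4]).total() == 30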
Research
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
Colloid
A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi, who called colloids pseudosolutions, and was expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.
Classification
Colloids can be classified as follows:
Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.
Hydrocolloids
Hydrocolloids are certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Becoming effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. Because of these attributes, hydrocolloids are very useful chemicals in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, where they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if, after solubilization, the water is removed, as in the formation of films for breath strips or sausage casings, or indeed wound-dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids, each with differences in structure, function and utility, that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber.
The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.
Components
Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, e.g. polyurethane, to 'stick' to the skin.
Compared with solution
A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. A solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid consists of multiple phases, it has very different properties compared to a fully mixed, continuous solution.
Interaction between particles
The following forces play an important role in the interaction of colloid particles:
Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles.
Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases are factors affecting this interaction.
van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, and is always present (unless the refractive indexes of the dispersed and continuous phases are matched), is short-range, and is attractive.
Steric forces: A repulsive steric force typically occurring due to adsorbed polymers coating a colloid's surface.
Depletion forces: An attractive entropic force arising from an osmotic pressure imbalance when colloids are suspended in a medium of much smaller particles or polymers called depletants.
Sedimentation velocity
The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because the Brownian motion counteracting this movement is weaker for them.
The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational (buoyancy-corrected) force:
m_b g = 6 \pi \eta r v
where
m_b is the Archimedean (buoyant) mass of the colloidal particle,
\eta is the viscosity of the suspension medium,
r is the radius of the colloidal particle,
and v is the sedimentation or creaming velocity.
The buoyant mass of the colloidal particle is found using:
m_b = V \Delta\rho
where
V is the volume of the colloidal particle, calculated using the volume of a sphere V = \tfrac{4}{3}\pi r^3,
and \Delta\rho is the difference in mass density between the colloidal particle and the suspension medium.
By rearranging, the sedimentation or creaming velocity is:
v = \frac{2 r^2 \Delta\rho \, g}{9 \eta}
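A numerical sketch of this formula follows; the particle and fluid properties are assumed values chosen only for illustration:

    # Stokes sedimentation/creaming velocity v = 2 r^2 Δρ g / (9 η).
    g = 9.81            # m/s2, gravitational acceleration
    r = 0.5e-6          # m, assumed particle radius (0.5 μm)
    delta_rho = 50.0    # kg/m3, assumed density difference (particle minus medium)
    eta = 1.0e-3        # Pa·s, viscosity of water at room temperature

    v = 2 * r**2 * delta_rho * g / (9 * eta)
    print(f"Sedimentation velocity ~ {v:.2e} m/s")   # ~2.7e-8 m/s, a few millimetres per day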
There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.
The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.
Preparation
There are two principal ways to prepare colloids:
Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing).
Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold.
Stabilization
The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.
A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.
If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension.
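As an order-of-magnitude illustration (the attraction energy below is a hypothetical value, not from the source), the thermal energy kT at room temperature can be computed and compared directly with an assumed interparticle attraction:

    # Compare a hypothetical net attraction energy with the thermal energy kT.
    k_B = 1.380649e-23   # J/K, Boltzmann constant
    T = 298.0            # K, room temperature
    kT = k_B * T         # ~4.1e-21 J

    U_attr = 2.0e-21     # J, assumed attraction energy between two particles
    print("stable (particles remain suspended)" if U_attr < kT else "aggregation expected")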
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte.
Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from getting close in the range of attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents.
A combination of the two mechanisms is also possible (electrosteric stabilization).
A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.
Destabilization
Destabilization can be accomplished by different methods:
Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can manifest in significant alteration to the zeta potential. When the magnitude of the zeta potential lies below a certain threshold, typically around ±5 mV, rapid coagulation or aggregation tends to occur. A numerical sketch of how added salt shortens the Debye length is given after this list.
Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer.
Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects.
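The sketch below (illustrative only; it assumes a symmetric 1:1 electrolyte in water at about 25 °C) shows how adding salt shortens the Debye screening length and thus weakens the electrostatic barrier:

    # Debye screening length of a 1:1 electrolyte (illustrative calculation).
    import math

    eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
    eps_r = 78.5              # relative permittivity assumed for water at ~25 °C
    k_B = 1.380649e-23        # J/K, Boltzmann constant
    T = 298.0                 # K
    e = 1.602176634e-19       # C, elementary charge
    N_A = 6.02214076e23       # 1/mol, Avogadro constant

    def debye_length_nm(molar_conc):
        I = 1000.0 * molar_conc   # ionic strength of a 1:1 salt, in mol/m3
        kappa_inv = math.sqrt(eps_r * eps0 * k_B * T / (2 * N_A * e**2 * I))
        return kappa_inv * 1e9

    print(f"{debye_length_nm(0.001):.1f} nm at 1 mM salt")    # ~9.6 nm
    print(f"{debye_length_nm(0.1):.2f} nm at 100 mM salt")    # ~0.96 nm: a much thinner double layer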
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
Monitoring stability
The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.
Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.
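An illustrative Stokes–Einstein estimate (the diffusion coefficient below is an assumed value, as might be fitted from dynamic light scattering data) shows how a measured diffusion coefficient maps to an apparent hydrodynamic radius:

    # Stokes–Einstein relation D = kT / (6 π η r), rearranged for the radius.
    import math

    k_B = 1.380649e-23   # J/K, Boltzmann constant
    T = 298.0            # K, room temperature
    eta = 1.0e-3         # Pa·s, viscosity of water
    D = 4.0e-12          # m2/s, assumed diffusion coefficient from a DLS fit

    r = k_B * T / (6 * math.pi * eta * D)
    print(f"Apparent hydrodynamic radius ~ {r * 1e9:.0f} nm")   # ~55 nm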
Accelerating methods for shelf life prediction
The kinetic process of destabilisation can be rather long (up to several months or years for some products). Thus, the formulator often needs to use accelerating methods to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing the temperature to accelerate destabilisation (below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity, but also the interfacial tension in the case of non-ionic surfactants, or more generally the interaction forces inside the system. Storing a dispersion at high temperature enables simulation of real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also accelerates destabilisation processes up to 200 times.
Mechanical acceleration, including vibration, centrifugation and agitation, is sometimes used. These methods subject the product to different forces that push the particles or droplets against one another, hence helping film drainage. Some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Segregation of different populations of particles has also been highlighted when using centrifugation and vibration.
As a model system for atoms
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
Crystals
A colloidal crystal is a highly ordered array of particles that can form over a very long range (typically on the order of a few millimeters to one centimeter) and that appears analogous to its atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids.
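For a sense of the length scales involved, Bragg's law can be rearranged to estimate the lattice spacing that diffracts a given visible wavelength. The sketch below assumes first-order diffraction at normal incidence and an effective refractive index typical of a silica–water composite; both values are assumptions chosen for illustration.

def bragg_spacing_nm(wavelength_nm, effective_refractive_index=1.40, order=1, sin_theta=1.0):
    """Interplanar spacing (nm) that Bragg-diffracts light of the given vacuum wavelength."""
    return order * wavelength_nm / (2.0 * effective_refractive_index * sin_theta)

# Example: green light (~550 nm) is diffracted by planes roughly 200 nm apart,
# consistent with the submicrometre sphere arrays found in precious opal.
print(round(bragg_spacing_nm(550.0)))  # ~196 nm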
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
In biology
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
In the environment
Colloidal particles can also serve as a transport vector for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, e.g. pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected of contributing to the long-range transport of plutonium at the Nevada Nuclear Test Site, and they have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membranes. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.
Intravenous therapy
Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders, called crystalloids, also increase the interstitial volume and intracellular volume. However, there is still controversy over whether this difference translates into an actual difference in efficacy, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids.
| Physical sciences | Chemical mixtures: General | null |
5367 | https://en.wikipedia.org/wiki/Cambrian | Cambrian | The Cambrian is the first geological period of the Paleozoic Era, and the Phanerozoic Eon. The Cambrian lasted 51.95 million years from the end of the preceding Ediacaran period 538.8 Ma (million years ago) to the beginning of the Ordovician Period 486.85 Ma.
Most of the continents lay in the southern hemisphere surrounded by the vast Panthalassa Ocean. The assembly of Gondwana during the Ediacaran and early Cambrian led to the development of new convergent plate boundaries and continental-margin arc magmatism along its margins that helped drive up global temperatures. Laurentia lay across the equator, separated from Gondwana by the opening Iapetus Ocean.
The Cambrian was a time of greenhouse climate conditions, with high levels of atmospheric carbon dioxide and low levels of oxygen in the atmosphere and seas. Upwellings of anoxic deep ocean waters into shallow marine environments led to extinction events, whilst periods of raised oxygenation led to increased biodiversity.
The Cambrian marked a profound change in life on Earth; prior to the Period, the majority of living organisms were small, unicellular and poorly preserved. Complex, multicellular organisms gradually became more common during the Ediacaran, but it was not until the Cambrian that the rapid diversification of lifeforms, known as the Cambrian explosion, produced the first representatives of most modern animal phyla. The Period is also unique in its unusually high proportion of lagerstätte deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells.
By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants.
Etymology and history
The term Cambrian is derived from the Latin version of Cymru, the Welsh name for Wales, where rocks of this age were first studied. It was named by Adam Sedgwick in 1835, who divided it into three groups: the Lower, Middle, and Upper. He defined the boundary between the Cambrian and the overlying Silurian, together with Roderick Murchison, in their joint paper "On the Silurian and Cambrian Systems, Exhibiting the Order in which the Older Sedimentary Strata Succeed each other in England and Wales". This early agreement did not last.
Due to the scarcity of fossils, Sedgwick used rock types to identify Cambrian strata. He was also slow in publishing further work. The clear fossil record of the Silurian, however, allowed Murchison to correlate rocks of a similar age across Europe and Russia, and on these he published extensively. As increasing numbers of fossils were identified in older rocks, he extended the base of the Silurian downwards into Sedgwick's "Upper Cambrian", claiming all fossilised strata for "his" Silurian series. Matters were complicated further when, in 1852, fieldwork carried out by Sedgwick and others revealed an unconformity within the Silurian, with a clear difference in fauna between the strata above and below it. This allowed Sedgwick to claim a large section of the Silurian for "his" Cambrian and gave the Cambrian an identifiable fossil record. The dispute between the two geologists and their supporters, over the boundary between the Cambrian and Silurian, would extend beyond the lifetimes of both Sedgwick and Murchison. It was not resolved until 1879, when Charles Lapworth proposed that the disputed strata belonged to their own system, which he named the Ordovician.
The term Cambrian for the oldest period of the Paleozoic was officially agreed in 1960, at the 21st International Geological Congress. It only includes Sedgwick's "Lower Cambrian series", but its base has been extended into much older rocks.
Geology
Stratigraphy
Systems, series and stages can be defined globally or regionally. For global stratigraphic correlation, the ICS ratifies rock units based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the unit. Currently, the boundaries of the Cambrian System, three series and six stages are defined by global stratotype sections and points.
Ediacaran-Cambrian boundary
The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, has led to calls for a more precisely defined base to the Cambrian Period.
Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian Period, which was to be correlated worldwide by the earliest appearance of Treptichnus pedum. Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the T. pedum ichnofossil assemblage that is now formally used to correlate the base of the Cambrian.
This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. An early date of 570 Ma quickly gained favour, though the methods used to obtain this number are now considered to be unsuitable and inaccurate. A more precise analysis using modern radiometric dating yields a date of 538.8 ± 0.6 Ma. The ash horizon in Oman from which this date was recovered corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 to 542 Ma, are more suitable.
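Such dates are typically obtained from radioactive decay in minerals from volcanic ash beds. As a generic illustration only (the specific method and data behind the Oman date are not given here), a uranium–lead style age calculation looks like this:

import math

LAMBDA_U238 = 1.55125e-10  # decay constant of 238U, 1/year

def u_pb_age_Ma(pb206_u238_ratio):
    """Age in Ma from a radiogenic 206Pb/238U ratio, using t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238 / 1e6

# Example: a ratio of about 0.0872 corresponds to roughly 539 Ma, near the base of the Cambrian.
print(round(u_pb_age_Ma(0.0872), 1))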
*Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by diversification and global distribution of organisms with mineral skeletons and the appearance of the first Archaeocyath bioherms.
Terreneuvian
The Terreneuvian is the lowermost series/epoch of the Cambrian, lasting from 538.8 ± 0.6 Ma to c. 521 Ma. It is divided into two stages: the Fortunian stage, 538.8 ± 0.6 Ma to c. 529 Ma; and the unnamed Stage 2, c. 529 Ma to c. 521 Ma. The name Terreneuvian was ratified by the International Union of Geological Sciences (IUGS) in 2007, replacing the previous "Cambrian Series 1". The GSSP defining its base is at Fortune Head on the Burin Peninsula, eastern Newfoundland, Canada (see Ediacaran - Cambrian boundary above). The Terreneuvian is the only series in the Cambrian to contain no trilobite fossils. Its lower part is characterised by complex, sediment-penetrating Phanerozoic-type trace fossils, and its upper part by small shelly fossils.
Cambrian Series 2
The second series/epoch of the Cambrian is currently unnamed and known as Cambrian Series 2. It lasted from c. 521 Ma to c. 506.5 Ma. Its two stages are also unnamed and known as Cambrian Stage 3, c. 521 Ma to c. 514.5 Ma, and Cambrian Stage 4, c. 514.5 Ma to c. 506.5 Ma. The base of Series 2 does not yet have a GSSP, but it is expected to be defined in strata marking the first appearance of trilobites in Gondwana. There was a rapid diversification of metazoans during this epoch, but their restricted geographic distribution, particularly of the trilobites and archaeocyaths, has made global correlations difficult, hence ongoing efforts to establish a GSSP.
Miaolingian
The Miaolingian is the third series/epoch of the Cambrian, lasting from c. 506.5 Ma to c. 497 Ma, and roughly identical to the middle Cambrian in older literature. It is divided into three stages: the Wuliuan, c. 506.5 Ma to 504.5 Ma; the Drumian, c. 504.5 Ma to c. 500.5 Ma; and the Guzhangian, c. 500.5 Ma to c. 497 Ma. The name replaces Cambrian Series 3 and was ratified by the IUGS in 2018. It is named after the Miaoling Mountains in southeastern Guizhou Province, South China, where the GSSP marking its base is found. This is defined by the first appearance of the oryctocephalid trilobite Oryctocephalus indicus. Secondary markers for the base of the Miaolingian include the appearance of many acritarch forms, a global marine transgression, and the disappearance of the polymerid trilobites Bathynotus or Ovatoryctocara. Unlike the Terreneuvian and Series 2, all the stages of the Miaolingian are defined by GSSPs.
The olenellid, eodiscid, and most redlichiid trilobites went extinct at the boundary between Series 2 and the Miaolingian. This is considered the oldest mass extinction of trilobites.
Furongian
The Furongian, c. 497 Ma to 486.85 ± 1.5 Ma, is the fourth and uppermost series/epoch of the Cambrian. The name was ratified by the IUGS in 2003 and replaces Cambrian Series 4 and the traditional "Upper Cambrian". The GSSP for the base of the Furongian is in the Wuling Mountains, in northwestern Hunan Province, China. It coincides with the first appearance of the agnostoid trilobite Glyptagnostus reticulatus, and is near the beginning of a large positive δ13C isotopic excursion.
The Furongian is divided into three stages: the Paibian, c. 497 Ma to c. 494 Ma, and the Jiangshanian c. 494.2 Ma to c. 491 Ma, which have defined GSSPs; and the unnamed Cambrian Stage 10, c. 491 Ma to 486.85 ± 1.5 Ma.
Cambrian–Ordovician boundary
The GSSP for the Cambrian–Ordovician boundary is at Green Point, western Newfoundland, Canada, and is dated at 486.85 Ma. It is defined by the appearance of the conodont Iapetognathus fluctivagus. Where these conodonts are not found the appearance of planktonic graptolites or the trilobite Jujuyaspis borealis can be used. The boundary also corresponds with the peak of the largest positive variation in the δ13C curve during the boundary time interval and with a global marine transgression.
Impact structures
Major meteorite impact structures include: the early Cambrian (c. 535 Ma) Neugrund crater in the Gulf of Finland, Estonia, a complex meteorite crater about 20 km in diameter, with two inner ridges of about 7 km and 6 km diameter, and an outer ridge of 8 km that formed as the result of an impact of an asteroid 1 km in diameter; the 5 km diameter Gardnos crater (500±10 Ma) in Buskerud, Norway, where post-impact sediments indicate the impact occurred in a shallow marine environment with rock avalanches and debris flows occurring as the crater rim was breached not long after impact; the 24 km diameter Presqu'ile crater (500 Ma or younger) Quebec, Canada; the 19 km diameter Glikson crater (c. 508 Ma) in Western Australia; the 5 km diameter Mizarai crater (500±10 Ma) in Lithuania; and the 3.2 km diameter Newporte structure (c. 500 Ma or slightly younger) in North Dakota, U.S.A.
Paleogeography
Reconstructing the position of the continents during the Cambrian is based on palaeomagnetic, palaeobiogeographic, tectonic, geological and palaeoclimatic data. However, these have different levels of uncertainty and can produce contradictory locations for the major continents. This, together with the ongoing debate around the existence of the Neoproterozoic supercontinent of Pannotia, means that while most models agree the continents lay in the southern hemisphere, with the vast Panthalassa Ocean covering most of northern hemisphere, the exact distribution and timing of the movements of the Cambrian continents varies between models.
Most models show Gondwana stretching from the south polar region to north of the equator. Early in the Cambrian, the south pole corresponded with the western South American sector and as Gondwana rotated anti-clockwise, by the middle of the Cambrian, the south pole lay in the northwest African region.
Laurentia lay across the equator, separated from Gondwana by the Iapetus Ocean. Proponents of Pannotia have Laurentia and Baltica close to the Amazonia region of Gondwana with a narrow Iapetus Ocean that only began to open once Gondwana was fully assembled c. 520 Ma. Those not in favour of the existence of Pannotia show the Iapetus opening during the Late Neoproterozoic, with up to c. 6,500 km (c. 4038 miles) between Laurentia and West Gondwana at the beginning of the Cambrian.
Of the smaller continents, Baltica lay between Laurentia and Gondwana, the Ran Ocean (an arm of the Iapetus) opening between it and Gondwana. Siberia lay close to the western margin of Gondwana and to the north of Baltica. Annamia and South China formed a single continent situated off north central Gondwana. The location of North China is unclear. It may have lain along the northeast Indian sector of Gondwana or already have been a separate continent.
Laurentia
During the Cambrian, Laurentia lay across or close to the equator. It drifted south and rotated c. 20° anticlockwise during the middle Cambrian, before drifting north again in the late Cambrian.
After the Late Neoproterozoic (or mid-Cambrian) rifting of Laurentia from Gondwana and the subsequent opening of the Iapetus Ocean, Laurentia was largely surrounded by passive margins with much of the continent covered by shallow seas.
As Laurentia separated from Gondwana, a sliver of continental terrane rifted from Laurentia with the narrow Taconic seaway opening between them. The remains of this terrane are now found in southern Scotland, Ireland, and Newfoundland. Intra-oceanic subduction either to the southeast of this terrane in the Iapetus, or to its northwest in the Taconic seaway, resulted in the formation of an island arc. This accreted to the terrane in the late Cambrian, triggering southeast-dipping subduction beneath the terrane itself and consequent closure of the marginal seaway. The terrane collided with Laurentia in the Early Ordovician.
Towards the end of the early Cambrian, rifting along Laurentia's southeastern margin led to the separation of Cuyania (now part of Argentina) from the Ouachita embayment with a new ocean established that continued to widen through the Cambrian and Early Ordovician.
Gondwana
Gondwana was a massive continent, three times the size of any of the other Cambrian continents. Its continental land area extended from the south pole to north of the equator. Around it were extensive shallow seas and numerous smaller land areas.
The cratons that formed Gondwana came together during the Neoproterozoic to early Cambrian. A narrow ocean separated Amazonia from Gondwana until c. 530 Ma and the Arequipa-Antofalla block united with the South American sector of Gondwana in the early Cambrian. The Kuunga Orogeny between northern (Congo Craton, Madagascar and India) and southern Gondwana (Kalahari Craton and East Antarctica), which began c. 570 Ma, continued with parts of northern Gondwana over-riding southern Gondwana and was accompanied by metamorphism and the intrusion of granites.
Subduction zones, active since the Neoproterozoic, extended around much of Gondwana's margins, from northwest Africa southwards round South America, South Africa, East Antarctica, and the eastern edge of West Australia. Shorter subduction zones existed north of Arabia and India.
The Famatinian continental arc stretched from central Peru in the north to central Argentina in the south. Subduction beneath this proto-Andean margin began by the late Cambrian.
Along the northern margin of Gondwana, between northern Africa and the Armorican Terranes of southern Europe, the continental arc of the Cadomian Orogeny continued from the Neoproterozoic in response to the oblique subduction of the Iapetus Ocean. This subduction extended west along the Gondwanan margin and by c. 530 Ma may have evolved into a major transform fault system.
At c. 511 Ma the continental flood basalts of the Kalkarindji large igneous province (LIP) began to erupt. These covered an area of more than 2.1 million km2 across the northern, central and Western Australian regions of Gondwana, making it one of the largest, as well as the earliest, LIPs of the Phanerozoic. The timing of the eruptions suggests they played a role in the early to middle Cambrian mass extinction.
Ganderia, East and West Avalonia, Carolinia and Meguma Terranes
The terranes of Ganderia, East and West Avalonia, Carolinia and Meguma lay in polar regions during the early Cambrian, and high-to-mid southern latitudes by the mid to late Cambrian. They are commonly shown as an island arc-transform fault system along the northwestern margin of Gondwana north of northwest Africa and Amazonia, which rifted from Gondwana during the Ordovician. However, some models show these terranes as part of a single independent microcontinent, Greater Avalonia, lying to the west of Baltica and aligned with its eastern (Timanide) margin, with the Iapetus to the north and the Ran Ocean to the south.
Baltica
During the Cambrian, Baltica rotated more than 60° anti-clockwise and began to drift northwards. This rotation was accommodated by major strike-slip movements in the Ran Ocean between it and Gondwana.
Baltica lay at mid-to-high southerly latitudes, separated from Laurentia by the Iapetus and from Gondwana by the Ran Ocean. It was composed of two continents, Fennoscandia and Sarmatia, separated by shallow seas. The sediments deposited in these seas unconformably overlie Precambrian basement rocks. The lack of coarse-grained sediments indicates low-lying topography across the centre of the craton.
Along Baltica's northeastern margin, subduction and arc magmatism associated with the Ediacaran Timanian Orogeny was coming to an end. In this region the early to middle Cambrian was a time of non-deposition, followed by late Cambrian rifting and sedimentation.
Its southeastern margin was also a convergent boundary, with the accretion of island arcs and microcontinents to the craton, although the details are unclear.
Siberia
Siberia began the Cambrian close to western Gondwana and north of Baltica. It drifted northwestwards to close to the equator as the Ægir Ocean opened between it and Baltica. Much of the continent was covered by shallow seas with extensive archaeocyathan reefs. The then northern third of the continent (present day south; Siberia has rotated 180° since the Cambrian) adjacent to its convergent margin was mountainous.
From the Late Neoproterozoic to the Ordovician, a series of island arcs accreted to Siberia's then northeastern margin, accompanied by extensive arc and back-arc volcanism. These now form the Altai-Sayan terranes. Some models show a convergent plate margin extending from Greater Avalonia, through the Timanide margin of Baltica, forming the Kipchak island arc offshore of southeastern Siberia and curving round to become part of the Altai-Sayan convergent margin.
Along the then western margin, Late Neoproterozoic to early Cambrian rifting was followed by the development of a passive margin.
To the then north, Siberia was separated from the Central Mongolian terrane by the narrow and slowly opening Mongol-Okhotsk Ocean. The Central Mongolian terrane's northern margin with the Panthalassa was convergent, whilst its southern margin facing the Mongol-Okhotsk Ocean was passive.
Central Asia
During the Cambrian, the terranes that would form Kazakhstania later in the Paleozoic were a series of island arc and accretionary complexes that lay along an intra-oceanic convergent plate margin to the south of North China.
To the south of these the Tarim microcontinent lay between Gondwana and Siberia. Its northern margin was passive for much of the Paleozoic, with thick sequences of platform carbonates and fluvial to marine sediments resting unconformably on Precambrian basement. Along its southeast margin was the Altyn Cambro–Ordovician accretionary complex, whilst to the southwest a subduction zone was closing the narrow seaway between the North West Kunlun region of Tarim and the South West Kunlun terrane.
North China
North China lay at equatorial to tropical latitudes during the early Cambrian, although its exact position is unknown. Much of the craton was covered by shallow seas, with land in the northwest and southeast.
Northern North China was a passive margin until the onset of subduction and the development of the Bainaimiao arc in the late Cambrian. To its south was a convergent margin with a southwest dipping subduction zone, beyond which lay the North Qinling terrane (now part of the Qinling Orogenic Belt).
South China and Annamia
South China and Annamia formed a single continent. Strike-slip movement between it and Gondwana accommodated its steady drift northwards from offshore the Indian sector of Gondwana to near the western Australian sector. This northward drift is evidenced by the progressive increase in limestones and increasing faunal diversity.
The northern margin of South China, including the South Qinling block, was a passive margin.
Along the southeastern margin, lower Cambrian volcanics indicate the accretion of an island arc along the Song Ma suture zone. Also, early in the Cambrian, the eastern margin of South China changed from passive to active, with the development of oceanic volcanic island arcs that now form part of the Japanese terrane.
Climate
The distribution of climate-indicating sediments, including the wide latitudinal distribution of tropical carbonate platforms, archaeocyathan reefs and bauxites, and arid-zone evaporites and calcrete deposits, shows the Cambrian was a time of greenhouse climate conditions. During the late Cambrian the distribution of trilobite provinces also indicates only a moderate pole-to-equator temperature gradient. There is evidence of glaciation at high latitudes on Avalonia. However, it is unclear whether these sediments are early Cambrian or actually late Neoproterozoic in age.
Calculations of global average temperatures (GAT) vary depending on which techniques are used. Whilst some measurements show GAT over c. models that combine multiple sources give GAT of c. in the Terreneuvian increasing to c. for the rest of the Cambrian. The warm climate was linked to elevated atmospheric carbon dioxide levels. Assembly of Gondwana led to the reorganisation of the tectonic plates with the development of new convergent plate margins and continental-margin arc magmatism that helped drive climatic warming. The eruptions of the Kalkarindji LIP basalts during Stage 4 and into the early Miaolingian, also released large quantities of carbon dioxide, methane and sulphur dioxide into the atmosphere leading to rapid climatic changes and elevated sea surface temperatures.
There is uncertainty around the maximum sea surface temperatures. These are calculated using δ18O values from marine rocks, and there is an ongoing debate about the levels δ18O in Cambrian seawater relative to the rest of the Phanerozoic. Estimates for tropical sea surface temperatures vary from c. , to c. . Modern average tropical sea surface temperatures are .
Atmospheric oxygen levels rose steadily from the Neoproterozoic due to the increase in photosynthesising organisms. Cambrian levels varied between c. 3% and 14% (present-day levels are c. 21%). Low levels of atmospheric oxygen and the warm climate resulted in lower dissolved oxygen concentrations in marine waters and widespread anoxia in deep ocean waters.
There is a complex relationship between oxygen levels, the biogeochemistry of ocean waters, and the evolution of life. Newly evolved burrowing organisms exposed anoxic sediments to the overlying oxygenated seawater. This bioturbation decreased the burial rates of organic carbon and sulphur, which over time reduced atmospheric and oceanic oxygen levels, leading to widespread anoxic conditions. Periods of higher rates of continental weathering led to increased delivery of nutrients to the oceans, boosting the productivity of phytoplankton and stimulating metazoan evolution. However, rapid increases in nutrient supply led to eutrophication, where rapid growth in phytoplankton numbers results in the depletion of oxygen in the surrounding waters.
Pulses of increased oxygen levels are linked to increased biodiversity; raised oxygen levels supported the increasing metabolic demands of organisms, and increased ecological niches by expanding habitable areas of seafloor. Conversely, incursions of oxygen-deficient water, due to changes in sea level, ocean circulation, upwellings from deeper waters and/or biological productivity, produced anoxic conditions that limited habitable areas, reduced ecological niches and resulted in extinction events both regional and global.
Overall, these dynamic, fluctuating environments, with global and regional anoxic incursions resulting in extinction events, and periods of increased oceanic oxygenation stimulating biodiversity, drove evolutionary innovation.
Geochemistry
During the Cambrian, variations in isotope ratios were more frequent and more pronounced than later in the Phanerozoic, with at least 10 carbon isotope (δ13C) excursions (significant variations in global isotope ratios) recognised. These excursions record changes in the biogeochemistry of the oceans and atmosphere, driven by processes such as the global rate of continental arc magmatism, rates of weathering and the nutrient levels entering the marine environment, sea level changes, and biological factors including the impact of burrowing fauna on oxygen levels.
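For reference, the delta notation used in this section expresses an isotope ratio relative to a standard, in parts per thousand (‰); for carbon the standard is Vienna Pee Dee Belemnite (VPDB), and the same form applies to δ34S and δ238U with their respective isotope pairs and standards:

\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000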
Isotope excursions
Base of Cambrian
The basal Cambrian δ13C excursion (BACE), together with low δ238U and raised δ34S values, indicates a period of widespread shallow marine anoxia, which occurs at the same time as the extinction of the Ediacaran acritarchs. It was followed by the rapid appearance and diversification of bilaterian animals.
Cambrian Stages 2 and 3
During the early Cambrian, 87Sr/86Sr rose in response to enhanced continental weathering. This increased the input of nutrients into the oceans and led to higher burial rates of organic matter. Over long timescales, the extra oxygen released by organic carbon burial is balanced by a decrease in the rates of pyrite (FeS2) burial (a process which also releases oxygen), leading to stable levels of oxygen in the atmosphere. However, during the early Cambrian, a series of linked δ13C and δ34S excursions indicate high burial rates of both organic carbon and pyrite in biologically productive yet anoxic ocean floor waters. The oxygen-rich waters produced by these processes spread from the deep ocean into shallow marine environments, extending the habitable regions of the seafloor. These pulses of oxygen are associated with the radiation of the small shelly fossils and the Cambrian arthropod radiation isotope excursion (CARE). The increase in oxygenated waters in the deep ocean ultimately reduced the levels of organic carbon and pyrite burial, leading to a decrease in oxygen production and the re-establishment of anoxic conditions. This cycle was repeated several times during the early Cambrian.
Cambrian Stage 4 to early Miaolingian
The beginning of the eruptions of the Kalkarindji LIP basalts during Stage 4 and the early Miaolingian released large quantities of carbon dioxide, methane and sulphur dioxide into the atmosphere. The changes these wrought are reflected by three large and rapid δ13C excursions. Increased temperatures led to a global sea level rise that flooded continental shelves and interiors with anoxic waters from the deeper ocean and drowned carbonate platforms of archaeocyathan reefs, resulting in the widespread accumulation of black organic-rich shales. Known as the Sinsk anoxic extinction event, this triggered the first major extinction of the Phanerozoic, the 513 – 508 Ma Botoman-Toyonian Extinction (BTE), which included the loss of the archaeocyathids and hyoliths and saw a major drop in biodiversity. The rise in sea levels is also evidenced by a global decrease in 87Sr/86Sr. The flooding of continental areas decreased the rates of continental weathering, reducing the input of 87Sr to the oceans and lowering the 87Sr/86Sr of seawater.
The base of the Miaolingian is marked by the Redlichiid–Olenellid extinction carbon isotope event (ROECE), which coincides with the main phase of Kalkarindji volcanism.
During the Miaolingian, orogenic events along the Australian-Antarctic margin of Gondwana led to an increase in weathering and an influx of nutrients into the ocean, raising the level of productivity and organic carbon burial. These can be seen in the steady increase in 87Sr/86Sr and δ13C.
Early Furongian
Continued erosion of the deeper levels of the Gondwanan mountain belts led to a peak in 87Sr/86Sr and linked positive δ13C and δ34S excursions, known as the Steptoean positive carbon isotope excursion (SPICE). This indicates similar geochemical conditions to Stages 2 and 3 of the early Cambrian existed, with the expansion of seafloor anoxia enhancing the burial rates of organic matter and pyrite. This increase in the extent of anoxic seafloor conditions led to the extinction of the marjumiid and damesellid trilobites, whilst the increase in oxygen levels that followed helped drive the radiation of plankton.
87Sr/86Sr fell sharply near the top of the Jiangshanian Stage, and through Stage 10 as the Gondwanan mountains were eroded down and rates of weathering decreased.
Magnesium/calcium isotope ratios in seawater
The mineralogy of inorganic marine carbonates has varied through the Phanerozoic, controlled by the Mg2+/Ca2+ values of seawater. High Mg2+/Ca2+ values result in calcium carbonate precipitation dominated by aragonite and high-magnesium calcite, known as aragonite seas, and low ratios result in calcite seas where low-magnesium calcite is the primary calcium carbonate precipitate. The shells and skeletons of biomineralising organisms reflect the dominant form of calcium carbonate.
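A minimal sketch of this control, assuming the commonly cited molar Mg2+/Ca2+ threshold of about 2 between calcite- and aragonite-favouring seawater; the threshold and the example ratios are assumptions for illustration, not values from this article:

def dominant_carbonate(mg_ca_molar_ratio, threshold=2.0):
    """Classify seawater as favouring low-Mg calcite or aragonite + high-Mg calcite."""
    if mg_ca_molar_ratio < threshold:
        return "calcite sea (low-magnesium calcite favoured)"
    return "aragonite sea (aragonite and high-magnesium calcite favoured)"

print(dominant_carbonate(1.0))  # a low ratio, as in calcite seas -> calcite favoured
print(dominant_carbonate(5.2))  # approximately the modern ocean -> aragonite favoured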
During the late Ediacaran to early Cambrian increasing oxygen levels led to a decrease in ocean acidity and an increase in the concentration of calcium in sea water. However, there was not a simple transition from aragonite to calcite seas, rather a protracted and variable change through the Cambrian. Aragonite and high-magnesium precipitation continued from the Ediacaran into Cambrian Stage 2. Low-magnesium calcite skeletal hard parts appear in Cambrian Age 2, but inorganic precipitation of aragonite also occurred at this time. Mixed aragonite–calcite seas continued through the middle and late Cambrian, with fully calcite seas not established until the early Ordovician.
These variations and slow decrease in Mg2+/Ca2+ of seawater were due to low oxygen levels, high continental weathering rates and the geochemistry of the Cambrian seas. In conditions of low oxygen and high iron levels, iron substitutes for magnesium in authigenic clay minerals deposited on the ocean floor, slowing the removal rates of magnesium from seawater. The enrichment of ocean waters in silica, prior to the radiation of siliceous organisms, and the limited bioturbation of the anoxic ocean floor increased the rates of deposition, relative to the rest of the Phanerozoic, of these clays. This, together with the high input of magnesium into the oceans via enhanced continental weathering, delayed the reduction in Mg2+/Ca2+ and facilitated continued aragonite precipitation.
The conditions that favoured the deposition of authigenic clays were also ideal for the formation of lagerstätten, with the minerals in the clays replacing the soft body parts of Cambrian organisms.
Flora
The Cambrian flora was little different from the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period.
No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes formed microbial Earth ecosystems, comparable with the modern soil crusts of desert regions, contributing to soil formation. Although molecular clock estimates suggest terrestrial plants may have first emerged during the Middle or Late Cambrian, the consequent large-scale removal of the greenhouse gas CO2 from the atmosphere through sequestration did not begin until the Ordovician.
Oceanic life
The Cambrian explosion was a period of rapid diversification of multicellular life. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains.
The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate which transformed the seabed ecosystems. Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment that now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla, including the Bryozoa, which were once thought to have only appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla.
While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when there was a sharp drop in biodiversity. About 515 Ma, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. Around 500 Ma, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The latter half of the Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the stromatolites, which had been replaced by reef-building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event.
Some Cambrian organisms ventured onto land, producing the trace fossils Protichnites and Climactichnites. Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the Protichnites. Fossils of the track-maker of Climactichnites have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc.
In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry.
Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of lagerstätten. These include Sirius Passet, the Sinsk Algal Lens, the Maotianshan Shales, the Emu Bay Shale, and the Burgess Shale.
Symbol
The United States Federal Geographic Data Committee uses a "barred capital C" character to represent the Cambrian Period.
The Unicode character is Ꞓ (U+A792).
| Physical sciences | Geological periods | null |
5371 | https://en.wikipedia.org/wiki/Concrete | Concrete | Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures to a solid over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This window of time allows concrete not only to be cast in forms, but also to undergo a variety of tooling processes. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, but sometimes with other hydraulic (water-resistant) cements, such as a calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
Etymology
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
History
Ancient times
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400 to 1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
Mayan concrete at the ruins of Uxmal (AD 850–925) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction.
Classical era
The Romans used concrete extensively from 300 BC to AD 476. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (c. ). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time. The use of hot mixing and the presence of lime clasts are thought to give the concrete a self-healing ability, where cracks that form become filled with calcite that prevents the crack from spreading.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853.
The first concrete reinforced bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds, which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
Cement kilns are extremely large, complex, and inherently dusty industrial installations. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels. The five major compounds of calcium silicates and aluminates comprising Portland cement range from 5 to 50% by weight.
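To put the kiln energy figure in perspective, here is a rough back-of-the-envelope sketch; the cement content per cubic metre and the clinker fraction are assumed typical values, not figures from this article:

def kiln_energy_per_m3_concrete_GJ(cement_kg_per_m3=300.0, clinker_fraction=0.95,
                                   kiln_energy_GJ_per_tonne=3.45):
    """Approximate kiln energy embodied in one cubic metre of concrete."""
    clinker_tonnes = cement_kg_per_m3 * clinker_fraction / 1000.0
    return clinker_tonnes * kiln_energy_GJ_per_tonne

# With an assumed ~300 kg of cement per cubic metre and 3.3-3.6 GJ per tonne of clinker,
# roughly 1 GJ of kiln energy is embodied in each cubic metre of concrete.
print(round(kiln_energy_per_m3_concrete_GJ(), 2))  # ~0.98 GJ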
Curing
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. The hydration of cement involves many concurrent reactions. The process involves polymerization, the interlinking of the silicates and aluminate components as well as their bonding to sand and gravel particles to form a solid mass. One illustrative conversion is the hydration of tricalcium silicate:
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO・SiO2・H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO・2 SiO2・4 H2O (gel) + 3 Ca(OH)2 + heat
(approximately as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary)
The hydration (curing) of cement is irreversible.
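The inverse relationship between the water-to-cement ratio and strength described by Abrams' law is often written as S = A / B^(w/c). In the sketch below the constants A and B are illustrative placeholders only; real values are empirical and depend on the cement, aggregates, age, and curing conditions:

def abrams_strength_MPa(water_cement_ratio, a=96.5, b=8.2):
    """Abrams-type strength estimate S = A / B**(w/c), with assumed empirical constants."""
    return a / (b ** water_cement_ratio)

# Lower w/c ratios give stronger (but stiffer, less workable) concrete under this model.
for wc in (0.4, 0.5, 0.6, 0.7):
    print(f"w/c = {wc:.1f} -> ~{abrams_strength_MPa(wc):.0f} MPa")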
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate with a very even size distribution has the biggest gaps whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See below.) The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by about 5% (see the sketch after this list). If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. They react with water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete and block pathways for water and waterborne contaminants. Concrete with a crystalline admixture can be expected to self-seal, as continued exposure to water keeps initiating crystallization and so maintains the waterproofing.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. They are used to increase compressive strength: they improve the workability of the concrete while lowering the water content needed by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical retarders include sugar, sodium gluconate, citric acid, and tartaric acid.
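The rules of thumb quoted in the list above (roughly 5% compressive-strength loss per 1% of entrained air, and a 15–30% water reduction from superplasticizers) lend themselves to quick first-order estimates. The sketch below simply applies those quoted figures; it is an illustration, not a design calculation.

```python
# Rough illustration of two admixture rules of thumb quoted above.
# These are first-order estimates, not a substitute for trial batches.

def strength_with_entrained_air(base_strength_mpa: float, air_percent: float) -> float:
    """Apply the rule of thumb that each 1% of entrained air reduces
    compressive strength by roughly 5%."""
    return base_strength_mpa * (1.0 - 0.05 * air_percent)

def water_with_superplasticizer(base_water_kg: float, reduction_fraction: float = 0.20) -> float:
    """Reduce batch water by a typical superplasticizer water reduction
    (15-30%; 20% assumed here) at constant workability."""
    return base_water_kg * (1.0 - reduction_fraction)

if __name__ == "__main__":
    print(f"{strength_with_entrained_air(35.0, 4.0):.1f} MPa with 4% entrained air (from 35 MPa)")
    print(f"{water_with_superplasticizer(180.0):.0f} kg water after a 20% reduction (from 180 kg)")
```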
Mineral admixtures and blended cements
Inorganic materials with pozzolanic or latent hydraulic properties are added to the concrete mix as very fine-grained powders, either to improve the properties of the concrete (mineral admixtures) or as a partial replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes, the last point being relevant for circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production, it is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and increase Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber offers advantages in mechanical and electrical performance and in self-monitoring behavior because of its high tensile strength and high electrical conductivity.
Carbon products have been added to make concrete electrically conductive, for deicing purposes.
Research from Japan's University of Kitakyushu has shown that a washed and dried mix of recycled used diapers can reduce landfill waste and the amount of sand used in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
Concrete plants come in two main types, ready-mix plants and central mix plants. A ready-mix plant blends all of the solid ingredients, while a central mix plant does the same but also adds the water. A central-mix plant offers more precise control of the concrete quality. Central mix plants must be close to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large hoppers for storage of various ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms. The forms are containers that define the desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
Interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given by volume as cement : sand : aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cure strength.
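As a rough illustration of how a nominal volumetric mix translates into batch quantities, the sketch below converts a 1 : 2 : 4 cement : sand : coarse-aggregate ratio into masses for one cubic metre of concrete. The bulk densities, dry-volume factor, and water-cement ratio used are assumptions chosen only for demonstration; an actual design mix is based on measured material properties and trial batches.

```python
# Illustrative batching of a nominal 1:2:4 (cement : sand : coarse aggregate)
# mix by volume. Bulk densities and the wet-to-dry yield factor below are
# assumed, typical-looking values for demonstration only.

NOMINAL_RATIO = (1, 2, 4)           # cement : sand : coarse aggregate, by volume
BULK_DENSITY = (1440, 1600, 1550)   # kg/m3 (assumed bulk densities)
DRY_VOLUME_FACTOR = 1.54            # assumed dry loose volume per m3 of compacted concrete
WATER_CEMENT_RATIO = 0.5            # by mass (assumed)

def nominal_batch(concrete_volume_m3: float) -> dict:
    """Return approximate batch masses (kg) for the nominal mix above."""
    total_parts = sum(NOMINAL_RATIO)
    dry_volume = concrete_volume_m3 * DRY_VOLUME_FACTOR
    masses = {}
    for name, parts, density in zip(("cement", "sand", "aggregate"), NOMINAL_RATIO, BULK_DENSITY):
        masses[name] = dry_volume * parts / total_parts * density
    masses["water"] = masses["cement"] * WATER_CEMENT_RATIO
    return masses

if __name__ == "__main__":
    for material, kg in nominal_batch(1.0).items():
        print(f"{material:>9}: {kg:7.0f} kg per m3 of concrete")
```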
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete.
Research has shown that mixing the cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water-to-cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.
Sample analysis—workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm). A relatively wet concrete sample may slump as much as eight inches (about 200 mm). Workability can also be measured by the flow table test.
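Slump readings are commonly grouped into broad workability bands. The bands in the sketch below are illustrative assumptions only (consistency classes differ between standards such as EN 206 and local specifications); the point is simply how a measured slump might be interpreted.

```python
# Illustrative interpretation of a slump test reading (mm).
# The class boundaries below are assumed, rounded bands for demonstration;
# consult the governing standard for the actual consistency classes.

def workability_from_slump(slump_mm: float) -> str:
    if slump_mm < 25:
        return "very low workability (dry mix)"
    if slump_mm < 75:
        return "low to medium workability"
    if slump_mm < 150:
        return "medium to high workability"
    return "high workability / flowing mix"

if __name__ == "__main__":
    for s in (20, 50, 100, 180):
        print(f"slump {s:3d} mm -> {workability_from_slump(s)}")
```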
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. Concrete continues to gain strength for up to three years, depending on the cross-sectional dimensions of the elements and the conditions under which the structure is used. The addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
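The gradual strength gain described above is often approximated with an empirical maturity curve of the form f(t) = f28 · t / (a + b·t), where f28 is the 28-day strength. The sketch below uses the constants of the ACI 209 form for moist-cured ordinary Portland cement (a ≈ 4, b ≈ 0.85), treated here as assumptions; it illustrates the shape of the curve rather than predicting any particular mix.

```python
# Illustrative strength-development curve of the empirical form
#   f(t) = f28 * t / (a + b * t)
# with a = 4.0, b = 0.85 (ACI 209 form for moist-cured ordinary Portland
# cement; treat these as assumptions for demonstration).

def strength_at_age(days: float, f28_mpa: float = 30.0, a: float = 4.0, b: float = 0.85) -> float:
    return f28_mpa * days / (a + b * days)

if __name__ == "__main__":
    for t in (3, 7, 14, 28, 90, 365):
        f = strength_at_age(t)
        print(f"day {t:3d}: ~{f:4.1f} MPa ({100 * f / strength_at_age(28):.0f}% of 28-day strength)")
```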
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause spalling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. One of many ways to achieve this is ponding: submerging the setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteuri, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compression strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot bridges and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers, and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages: they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for specialized applications, such as drains.
Plant fibers
Plant fibers and particles can be used in a concrete mix or as a reinforcement. These materials can increase ductility, but the lignocellulosic particles hydrolyze during concrete curing as a result of the alkaline environment and elevated temperatures. This process, which is difficult to measure, can affect the properties of the resulting concrete.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali-silica reaction through pore refinement. They are also generally cost-effective in comparison to other aggregates, suitable for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remain one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light is a form of polymer modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Recycled Aggregate Concrete (RAC)
Recycled aggregate concretes are standard concrete mixes with the addition or substitution of natural aggregates with recycled aggregates sourced from construction and demolition wastes, disused pre-cast concretes or masonry. In most cases, recycled aggregate concrete results in higher water absorption levels by capillary action and permeation, which are the prominent determiners of the strength and durability of the resulting concrete. The increase in water absorption levels is mainly caused by the porous adhered mortar that exists in the recycled aggregates. Accordingly, recycled concrete aggregates that have been washed to reduce the quantity of mortar adhered to aggregates show lower water absorption levels compared to untreated recycled aggregates.
The quality of the recycled aggregate concrete is determined by several factors, including the size, the number of replacement cycles, and the moisture levels of the recycled aggregates. When the recycled concrete aggregates are crushed into coarser fractions, the mixed concrete shows better permeability levels, resulting in an overall increase in strength. In contrast, recycled masonry aggregates provide better qualities when crushed into finer fractions. With each generation of recycled concrete, the resulting compressive strength decreases.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
The strength of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is used, while higher-strength concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects, and the highest grades for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use very high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Very high strengths have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 from the cement manufacturing process are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower, because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.
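Combining the embodied-energy figure quoted above (roughly 1 to 1.5 MJ per kilogram of concrete) with an assumed cement content and an assumed cement emission factor gives a back-of-the-envelope footprint for a pour. Every input in the sketch below is an assumption for illustration; real values depend on the mix design, clinker ratio, and kiln fuel.

```python
# Back-of-the-envelope embodied energy / CO2 estimate for a concrete element.
# All factors below are assumptions for illustration (the article quotes
# roughly 1 to 1.5 MJ/kg embodied energy for concrete; the cement emission
# factor varies widely with clinker content and kiln fuel).

CONCRETE_DENSITY = 2400.0        # kg/m3 (assumed)
CEMENT_CONTENT = 300.0           # kg of cement per m3 of concrete (assumed)
EMBODIED_ENERGY_MJ_PER_KG = 1.2  # mid-range of the 1-1.5 MJ/kg figure
CEMENT_CO2_PER_KG = 0.8          # kg CO2 per kg cement (assumed emission factor)

def embodied_footprint(volume_m3: float):
    """Return (embodied energy in GJ, cement-related CO2 in kg) for a pour."""
    mass = volume_m3 * CONCRETE_DENSITY
    energy_gj = mass * EMBODIED_ENERGY_MJ_PER_KG / 1000.0
    co2_kg = volume_m3 * CEMENT_CONTENT * CEMENT_CO2_PER_KG
    return energy_gj, co2_kg

if __name__ == "__main__":
    energy, co2 = embodied_footprint(10.0)   # e.g. a 10 m3 foundation pour
    print(f"~{energy:.0f} GJ embodied energy, ~{co2:.0f} kg CO2 from cement")
```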
Once in place, concrete offers great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey).
Construction
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but less so in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it sets. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion, which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast
Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with National Standards.
Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete.
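As a highly simplified illustration of the idea, the sketch below computes the concentric prestress force needed to keep the bottom fibre of a rectangular, simply supported beam in compression under a given midspan moment (bottom-fibre stress −P/A + M·c/I ≤ 0, hence P ≥ M·c·A/I). Tendon eccentricity, prestress losses, and self-weight are ignored, and the section and moment used are assumed numbers, so this is a teaching sketch rather than a design method.

```python
# Simplified prestress estimate for a rectangular beam section.
# Bottom-fibre stress under a sagging moment M with a concentric prestress P:
#   sigma_bottom = -P / A + M * c / I   (compression negative)
# The force P that keeps sigma_bottom <= 0 satisfies P >= M * c * A / I.
# Eccentric tendons, losses, and self-weight are ignored (illustration only).

def required_concentric_prestress(b_m: float, h_m: float, moment_knm: float) -> float:
    """Return the minimum concentric prestress force (kN) for a b x h beam."""
    area = b_m * h_m                    # cross-sectional area, m2
    inertia = b_m * h_m ** 3 / 12.0     # second moment of area, m4
    c = h_m / 2.0                       # distance to bottom fibre, m
    moment_nm = moment_knm * 1e3
    return moment_nm * c * area / inertia / 1e3   # kN

if __name__ == "__main__":
    # 300 mm x 600 mm beam carrying a 150 kN.m midspan moment (assumed numbers)
    p = required_concentric_prestress(0.30, 0.60, 150.0)
    print(f"Concentric prestress needed: ~{p:.0f} kN")
```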
A large proportion of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist) or wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~ 4.5 °C), and
The temperature stays below 50 °F (10 °C) for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
A minimum compressive strength must be reached before exposing concrete to extreme cold; CSA A23.1 specifies a compressive strength of 7.0 MPa as safe for exposure to freezing.
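The triggers above can be expressed as a simple check over daily temperature records. The sketch below is a plain reading of the published triggers as quoted here (the ACI three-day average criterion and the CSA 5 °C threshold) and is not a compliance tool.

```python
# Illustrative check of the cold-weather placement triggers quoted above.
# `daily_avg_c` is a list of average daily air temperatures (deg C).
# This is a plain reading of the quoted criteria, not a compliance tool.

def aci_cold_weather(daily_avg_c: list[float]) -> bool:
    """ACI 306 first criterion: average daily air temperature below about
    4.5 degC (40 degF) for more than three successive days."""
    run = 0
    for t in daily_avg_c:
        run = run + 1 if t < 4.5 else 0
        if run > 3:
            return True
    return False

def csa_cold_weather(current_c: float, forecast_min_24h_c: float) -> bool:
    """CSA A23.1 trigger: air temperature at or below 5 degC, or likely to
    fall below 5 degC within 24 hours of placing."""
    return current_c <= 5.0 or forecast_min_24h_c < 5.0

if __name__ == "__main__":
    temps = [6.0, 3.5, 2.0, 1.0, 0.5]   # assumed daily averages
    print("ACI cold weather:", aci_cold_weather(temps))
    print("CSA cold weather:", csa_cold_weather(4.0, -2.0))
```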
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
A tremie is a vertical, or near-vertical, pipe with a hopper at the top, used to pour concrete underwater in a way that avoids washout of cement from the mix due to turbulent water contact with the concrete while it is flowing. This produces a more reliable strength in the product. Toggle bag placement is generally used for small quantities and for repairs: wet concrete is loaded into a reusable canvas bag and squeezed out at the required place by a diver. Care must be taken to avoid washout of the cement and fines.
Bagwork is the manual placement by divers of woven cloth bags containing a dry mix. The bags are pierced with steel rebar pins to tie them together after every two or three layers and to create a path for hydration to induce curing, which typically takes about 6 to 12 hours for initial hardening, with full hardening by the next day. Bagwork concrete will generally reach full strength within 28 days. Each bag must be pierced by at least one, and preferably up to four, pins. Bagwork is a simple and convenient method of underwater concrete placement which does not require pumps, plant, or formwork, and which can minimise environmental effects from dispersing cement in the water. Prefilled bags are available, which are sealed to prevent premature hydration if stored in suitable dry conditions. The bags may be biodegradable.
Grouted aggregate placement is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids are then completely filled from the bottom by displacing the water with pumped grout.
Roads
Concrete roads are more fuel-efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and well-placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Because rainwater no longer needs to be discharged through drains, less electricity is needed for pumping in the water-distribution system, and rainwater is not polluted by mixing with contaminated runoff; instead, it is absorbed directly by the ground.
Tube forest
Cement molded into a forest of tubular structures can be 5.6 times more resistant to cracking/failure than standard concrete. The approach mimics mammalian cortical bone that features elliptical, hollow osteons suspended in an organic matrix, connected by relatively weak "cement lines". Cement lines provide a preferable in-plane crack path. This design fails via a "stepwise toughening mechanism". Cracks are contained within the tube, reducing spreading, by dissipating energy at each tube/step.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect 23 June 2018 for general industry, hydraulic fracturing and maritime. The deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Cement
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Climate change mitigation
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out, although different research strategies exist. Often, the replacement of some clinker with large amounts of slag or fly ash has been investigated based on conventional concrete technology; this could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.
The embodied carbon of a precast concrete facade can be reduced by 50% when fiber-reinforced high-performance concrete is used in place of typical reinforced concrete cladding. Studies have been conducted on the commercialization of low-carbon concretes. The life cycle assessment (LCA) of low-carbon concrete has been investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. The global warming potential (GWP) decreased by 1.1 kg CO2 eq/m3 for GGBS and by 17.3 kg CO2 eq/m3 for FA for each 10% increase in the mineral admixture replacement ratio. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and derived the applicable range of mixing proportions.
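Read literally, the figures quoted from that study imply a roughly linear relation between the replacement ratio and the reduction in global warming potential. The sketch below only extrapolates those two quoted slopes; it is an illustration of the cited numbers, not an independent life cycle assessment.

```python
# Linear extrapolation of the GWP reductions quoted from the cited study:
# GGBS: 1.1 kg CO2-eq/m3 and fly ash: 17.3 kg CO2-eq/m3 per 10% replacement.
# Illustration of the quoted slopes only; not an independent LCA.

GWP_SLOPE_PER_10PCT = {"GGBS": 1.1, "fly_ash": 17.3}   # kg CO2-eq per m3

def gwp_reduction(material: str, replacement_percent: float) -> float:
    """Return the implied GWP reduction (kg CO2-eq/m3) for a replacement ratio."""
    return GWP_SLOPE_PER_10PCT[material] * replacement_percent / 10.0

if __name__ == "__main__":
    for material in ("GGBS", "fly_ash"):
        for pct in (10, 30, 50):
            print(f"{material:7s} at {pct:2d}% replacement: "
                  f"~{gwp_reduction(material, pct):5.1f} kg CO2-eq/m3 lower GWP")
```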
Climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
End-of-life: degradation and waste
Recycling
There have been concerns about the recycling of painted concrete due to possible lead content. Studies have indicated that recycled concrete exhibits lower strength and durability compared to concrete produced using natural aggregates. This deficiency can be addressed by incorporating supplementary materials such as fly ash into the mixture.
World records
The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters held by Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped to a record vertical height.
The Polavaram dam works in Andhra Pradesh on 6 January 2019 entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture and the concrete supplier is Unibeton Ready Mix. The pour (a part of the foundation for the Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by joint Japanese and South Korean consortiums Hazama Corporation and the Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for the largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky, by design-build firm EXXCEL Project Management. The monolithic placement was completed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana, by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered to below sea level so that the construction of the Inner Harbor Navigation Canal Sill & Monolith Project can be completed in the dry.
| Technology | Materials | null |
5376 | https://en.wikipedia.org/wiki/Cladistics | Cladistics | Cladistics (from Ancient Greek 'branch') is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to delineate precisely, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group). To keep only valid clades, upon finding that the group is paraphyletic this way, either such excluded groups should be granted to the clade, or the group should be abolished.
Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. Specifically, also extinct groups are always put on a side-branch, not distinguishing whether an actual ancestor of other groupings was found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistics findings are posing a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); but the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
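One simple way to see how a character matrix and a fixed tree topology interact is small-parsimony counting: given a rooted binary tree and one character scored for each terminal taxon, the Fitch algorithm returns the minimum number of state changes that the topology requires for that character. The sketch below uses an invented topology and invented character states purely for illustration; it is not a reconstruction of any published dataset.

```python
# Toy Fitch small-parsimony count: minimum number of state changes a
# character requires on a fixed rooted binary tree. Taxa and states are
# invented for illustration only.

def fitch_changes(tree, states):
    """tree: nested 2-tuples of taxon names, e.g. (("A", "B"), ("C", "D")).
    states: dict mapping taxon name -> character state.
    Returns (candidate state set at this node, minimum changes below it)."""
    if isinstance(tree, str):                      # leaf node
        return {states[tree]}, 0
    left_set, left_cost = fitch_changes(tree[0], states)
    right_set, right_cost = fitch_changes(tree[1], states)
    intersection = left_set & right_set
    if intersection:                               # no change needed at this node
        return intersection, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

if __name__ == "__main__":
    tree = (("lizard", ("crocodile", "bird")), "turtle")              # hypothetical topology
    character = {"lizard": 0, "crocodile": 1, "bird": 1, "turtle": 0}  # 1 = derived state
    _, changes = fitch_changes(tree, character)
    print(f"Minimum changes for this character on this tree: {changes}")
```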
Until recently, for example, cladograms in which turtles branch off before the split between lizards and the crocodilian-bird lineage were generally accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds. If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, places turtles closer to crocodilians and birds. If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the two cladograms represent mutually exclusive hypotheses about the evolutionary history, at most one of them is correct.
The cladogram of the primates represents the current, universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirrhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
To a human observer, lemurs and tarsiers may look closely related, in the sense of being close to humans on the evolutionary tree. From the perspective of a tarsier, however, humans and lemurs would look close in exactly the same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historical relationships between the groups.
Terminology for character states
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds.
An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours.
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established.
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below.
Criticism
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
Issues
Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
Extinction status
An otherwise extinct group with any extant descendants is not considered (literally) extinct, and, for instance, does not have a date of extinction.
Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two. Only then can one theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take much longer than one is usually aware of. In practice, for recent radiations, cladistically guided findings give only a coarse impression of the complexity; a more detailed account will give details about fractions of introgression between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.
Horizontal gene transfer
Horizontal gene transfer is the movement of genetic information between different organisms, which can have immediate or delayed effects for the recipient host. There are several processes in nature which can cause horizontal gene transfer. This typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves, by determining the phylogeny of the individual genes using cladistics.
Naming stability
If mutual relationships are unclear, there are many possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged within them. Naming changes are the direct result of changes in the recognition of mutual relationships, which are often still in flux, especially for extinct species. Hanging on to older names and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. For example, Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus, and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which then may come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and level. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may only have lived for a short time does not affect that assessment in cladistics.
In disciplines other than biology
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale studies use cladistic methods to reconstruct the protoversions of many myths. Mythological phylogenies constructed with mythemes clearly support low rates of horizontal transmission (borrowing), historical (sometimes Palaeolithic) diffusion, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification.
| Biology and health sciences | Phylogenetics and taxonomy | Biology |
5377 | https://en.wikipedia.org/wiki/Calendar | Calendar | A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.
Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.
Etymology
The term calendar is taken from calendae, the term for the first day of the month in the Roman calendar, related to the verb calare 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern).
History
The course of the Sun and the Moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year.
The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.
During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.
A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar.
A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.
Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar.
The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.
The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke–Henry Permanent Calendar. Such ideas are promoted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.
Systems
A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years.
The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction.
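As an informal illustration (using only Python's standard library and the well-known Unix reference date of 1 January 1970), date arithmetic in such a system really is just addition and subtraction on the count:

    from datetime import datetime, timedelta, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)        # Unix reference date
    moment = datetime(2000, 1, 1, tzinfo=timezone.utc)

    seconds_since_epoch = (moment - epoch).total_seconds()    # a pure count of seconds
    print(seconds_since_epoch)                                 # 946684800.0

    # Moving forward in time is simple arithmetic on the count:
    print(epoch + timedelta(seconds=seconds_since_epoch + 86400))   # one day later: 2 Jan 2000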
Other calendars have one (or multiple) larger units of time.
Calendars that contain one level of cycles:
week and weekday – this system (without year, the week number keeps on increasing) is not very common
year and ordinal date within the year, e.g., the ISO 8601 ordinal date system
Calendars with two levels of cycles:
year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar
year, week, and weekday – e.g., the ISO week date
Cycles can be synchronized with periodic phenomena:
Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar.
Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar.
Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar.
The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month).
Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements.
Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.
Solar
Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.
The Egyptians appear to have been the first to develop a solar calendar, using as a fixed point the annual sunrise reappearance of the Dog Star—Sirius, or Sothis—in the eastern sky, which coincided with the annual flooding of the Nile River. They built a calendar with 365 days, divided into 12 months of 30 days each, with 5 extra days at the end of the year. However, they did not account for the roughly quarter of a day left over each year, and this caused their calendar to drift slowly against the seasons.
Lunar
Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar.
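A rough back-of-the-envelope check (the month and year lengths below are approximate mean values, used only for illustration) shows why a purely lunar calendar drifts against the seasons by about eleven days per year:

    synodic_month = 29.5306       # mean length of a lunation, in days (approximate)
    tropical_year = 365.2422      # mean solar year, in days (approximate)

    lunar_year = 12 * synodic_month
    print(lunar_year)                         # about 354.37 days
    print(tropical_year - lunar_year)         # about 10.9 days of drift per year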
Alexander Marshack, in a controversial reading, believed that marks on a bone baton () represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.
Lunisolar
A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendar are Hindu calendar and Buddhist calendar that are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.
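The 19-year cycle mentioned here is the Metonic cycle: 19 solar years contain almost exactly 235 lunations, which is why inserting seven extra months over each 19-year span keeps a lunisolar calendar aligned. A quick numerical check, using the same approximate mean lengths as in the previous sketch:

    synodic_month = 29.5306       # approximate mean values, for illustration only
    tropical_year = 365.2422

    print(19 * tropical_year)     # about 6939.60 days
    print(235 * synodic_month)    # about 6939.69 days; the two differ by only a few hours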
Subdivisions
Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.
Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length.
Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.
Other types
Arithmetical and astronomical
An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult.
An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar.
Other variants
The early Roman calendar, created during the reign of Romulus, lumped the 61 days of the winter period together as simply "winter". Over time, this period became January and February; through further changes over time (including the creation of the Julian calendar) this calendar became the modern Gregorian calendar, introduced in 1582.
Usage
The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season.
Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.
Gregorian
The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its solar aspect is a cycle of leap days, repeating every 400 years, designed to keep the duration of the calendar year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days).
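The 400-year leap-day cycle can be stated as a short rule, and counting the leap days reproduces the average year length quoted above; the following is a sketch of the rule itself, not of any particular date library:

    def is_gregorian_leap(year):
        # Leap if divisible by 4, except century years, unless divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    leap_days = sum(is_gregorian_leap(y) for y in range(2000, 2400))
    print(leap_days)                      # 97 leap days per 400-year cycle
    print(365 + leap_days / 400)          # 365.2425, the average Gregorian year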
The Gregorian calendar was introduced in 1582 as a refinement of the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923.
The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).
Religious
The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days.
While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes.
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season.
The Eastern Orthodox Church uses two liturgical calendars: the Julian calendar (often called the Old Calendar) and the Revised Julian calendar (often called the New Calendar). The Revised Julian calendar is nearly the same as the Gregorian calendar: years divisible by 100 are not leap years, except those leaving a remainder of 200 or 600 when divided by 900 (rather than the Gregorian exception for years divisible by 400); under both rules, for example, 2000 and 2400 are leap years.
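The Revised Julian rule can be written out and compared with the Gregorian rule directly; in the sketch below (a self-contained illustration, not a liturgical reference) the two calendars first disagree about a leap year in 2800:

    def is_gregorian_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def is_revised_julian_leap(year):
        # Century years are leap only if they leave a remainder of 200 or 600 when divided by 900.
        if year % 100 == 0:
            return year % 900 in (200, 600)
        return year % 4 == 0

    print(is_revised_julian_leap(2000), is_gregorian_leap(2000))   # True True
    print(is_revised_julian_leap(2400), is_gregorian_leap(2400))   # True True
    print(is_revised_julian_leap(2800), is_gregorian_leap(2800))   # False True: first disagreement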
The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years.
Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states.
The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar.
Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).
The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques).
Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí calendar, also known as the Badi calendar, was first established by the Bab in the Kitab-i-Asma. It is a purely solar calendar and comprises 19 months of nineteen days each.
National
The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes.
The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.
Fiscal
A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival.
In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar.
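As a brief illustration using Python's standard datetime module, which implements the ISO 8601 week-date rules: the week containing 4 January is always reported as week 1, and the first days of a Gregorian year can belong to the final ISO week of the previous year.

    from datetime import date

    print(date(2021, 1, 4).isocalendar())   # ISO year 2021, week 1, weekday 1 (Monday)
    print(date(2021, 1, 1).isocalendar())   # ISO year 2020, week 53, weekday 5 (Friday)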
Formats
The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc.
In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word.
In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain.
It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary.
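Python's standard calendar module can print such a monthly grid with a configurable first weekday, which makes the Sunday-first versus Monday-first conventions described above easy to compare; a minimal sketch (February 2026 is a 28-day month that begins on a Sunday, so the Sunday-first grid needs only four rows):

    import calendar

    # Sunday-first grid (common US convention) for February 2026.
    print(calendar.TextCalendar(firstweekday=calendar.SUNDAY).formatmonth(2026, 2))

    # The same month with a Monday-first grid (common in Britain and in ISO 8601).
    print(calendar.TextCalendar(firstweekday=calendar.MONDAY).formatmonth(2026, 2))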
When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.
Software
Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list.
Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).
| Technology | Navigation and timekeeping | null |
5378 | https://en.wikipedia.org/wiki/Physical%20cosmology | Physical cosmology | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began in 1915 with the development of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
Subject history
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed in order to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance from Earth. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
Energy of the cosmos
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later energy release, i.e. one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy.
Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
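The scaling just described can be made concrete with a short sketch (the present-day density parameters below are rough, commonly quoted values, used only for illustration): matter density falls as the cube of the scale factor, while radiation density falls as the fourth power because each photon is additionally redshifted.

    # Illustrative scaling of energy densities with the scale factor a (a = 1 today).
    omega_matter = 0.3        # rough present-day matter density parameter
    omega_radiation = 1e-4    # rough present-day radiation density parameter

    for a in (1e-4, 1e-2, 1.0):
        rho_m = omega_matter / a**3        # matter dilutes with volume
        rho_r = omega_radiation / a**4     # radiation also loses energy to redshift
        print(a, rho_r / rho_m)            # radiation dominates when this ratio exceeds 1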
History of the universe
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
Equations of motion
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
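For a flat universe containing only matter and a cosmological constant, the Friedmann acceleration equation implies that the expansion switches from decelerating to accelerating once the dark-energy density exceeds half the matter density. A small sketch with commonly quoted (approximate) present-day density parameters, intended only to illustrate the statement that this happened billions of years ago:

    # When does expansion start accelerating in a flat matter + Lambda universe?
    omega_matter = 0.3      # approximate present-day values, for illustration only
    omega_lambda = 0.7

    a_acc = (omega_matter / (2 * omega_lambda)) ** (1 / 3)   # scale factor at the transition
    z_acc = 1 / a_acc - 1                                     # corresponding redshift
    print(a_acc, z_acc)     # roughly a = 0.60, z = 0.67, several billion years in the past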
Particle physics in cosmology
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale 1/H is roughly equal to the age of the universe at each point in time.
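As a hedged numerical illustration of that timescale, converting a commonly quoted value of the Hubble parameter into a time gives a figure comparable to the present age of the universe:

    H0 = 67.7                       # Hubble parameter today, km/s/Mpc (approximate)
    km_per_Mpc = 3.0857e19          # kilometres in one megaparsec
    seconds_per_Gyr = 3.156e16      # seconds in one billion years

    hubble_time_s = km_per_Mpc / H0            # 1/H0 expressed in seconds
    print(hubble_time_s / seconds_per_Gyr)     # about 14.4 billion years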
Timeline of the Big Bang
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Areas of study
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
Very early universe
The early, hot universe appears to be well explained by the Big Bang from roughly 10^−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
Big Bang Theory
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
Standard model of Big Bang cosmology
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
Cosmic microwave background
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
Formation and evolution of large-scale structure
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.
Dark matter
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
Dark energy
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant (CC) which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe.
Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky.
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones.
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, whether it will eventually reverse, whether the universe will fade into a Big Freeze, or whether some other scenario will unfold.
Gravitational waves
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Other areas of inquiry
Cosmologists also study:
Whether primordial black holes were formed in our universe, and what happened to them.
Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies.
The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe.
Biophysical cosmology: a type of physical cosmology that studies life as an inherent part of physical cosmology, stressing that life is inherent to the universe and therefore expected to be common.
| Physical sciences | Astronomy | null |
5382 | https://en.wikipedia.org/wiki/Cosmic%20inflation | Cosmic inflation | In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the very early universe. Following the inflationary period, the universe continued to expand, but at a slower rate. The re-acceleration of this slowing expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).
Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Starobinsky, Guth, and Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
The detailed particle physics mechanism responsible for inflation is unknown. A number of inflation model predictions have been confirmed by observation; for example temperature anisotropies observed by the COBE satellite in 1992 exhibit nearly scale-invariant spectra as predicted by the inflationary paradigm and WMAP results also show strong evidence for inflation. However, some scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.
In 2002, three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the Dirac Prize "for development of the concept of inflation in cosmology". In 2012, Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.
Overview
Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their increasing separation.
Inflation may have provided this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or "special", initial conditions at the Big Bang.
Theory
An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon, which is believed to be 46 billion light years in all directions from Earth. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: Its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They could not have learned it by getting signals, because they were not previously in communication with our past light cone.
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communication. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.
Space expands
In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).
In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric (in units where c = 1):
$ds^2 = -dt^2 + e^{2\sqrt{\Lambda/3}\,t}\left(dx^2 + dy^2 + dz^2\right).$
This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ.
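A minimal numerical sketch can show how a constant vacuum energy density with equation of state w = p/ρ = −1 drives exponential growth of the scale factor. The units below are illustrative (c = 8πG = 1, so the Friedmann equation reads H² = ρ/3), and the chosen energy density is an arbitrary assumption.

```python
import numpy as np

# Friedmann equation in units with c = 8*pi*G = 1:  H^2 = rho / 3.
rho_vacuum = 3.0                    # assumed constant vacuum energy density -> H = 1
H = np.sqrt(rho_vacuum / 3.0)

t = np.linspace(0.0, 10.0, 6)
a = np.exp(H * t)                   # exact solution a(t) = exp(H t) for w = -1
for ti, ai in zip(t, a):
    print(f"t = {ti:4.1f}   a = {ai:.3e}")
# The scale factor grows by e^10 ~ 2.2e4 over ten Hubble times,
# while the comoving Hubble radius 1/(a H) shrinks by the same factor.
```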
Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.
Few inhomogeneities remain
Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes.
The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for not testable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) declines as the inverse of the volume: when linear dimensions double, the energy density declines by a factor of eight; the radiation energy density declines even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.
Reheating
Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from about 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends, the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.
Motivations
Inflation tries to resolve several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.
Horizon problem
The horizon problem is the problem of determining why the universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chace Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy.
Flatness problem
The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large-scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).
Therefore, regardless of the shape of the universe, the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). Observations of the cosmic microwave background have demonstrated that the Universe is flat to within a few percent.
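The fine-tuning can be made concrete with a deliberately crude estimate: during radiation domination the fractional deviation from the critical density, |Ω − 1|, grows roughly as the square of the scale factor, so running the expansion backwards forces it to be tiny at early times. The sketch below treats the entire history since nucleosynthesis as radiation-dominated and uses an illustrative scale factor, so the output is an order-of-magnitude illustration rather than a fit to data.

```python
# During radiation domination, |Omega - 1| grows roughly as a^2
# (curvature dilutes as a^-2 while radiation dilutes as a^-4).
omega_today_deviation = 0.01        # assume the universe is flat to ~1% today
a_today = 1.0
a_nucleosynthesis = 1e-9            # illustrative scale factor at BBN

growth = (a_today / a_nucleosynthesis) ** 2
deviation_at_bbn = omega_today_deviation / growth
print(f"|Omega - 1| needed at BBN: {deviation_at_bbn:.1e}")   # ~1e-20
```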
Magnetic-monopole problem
Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe), the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field.
Monopoles are predicted to be copiously produced in Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would have become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe.
A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written,
"Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!"
History
Precursors
In the early days of general relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.
In 1965, Erast Gliner proposed a unique assumption regarding the early Universe's pressure in the context of the Einstein–Friedmann equations. According to his idea, the pressure was negatively proportional to the energy density. This relationship between pressure and energy density served as the initial theoretical prediction of dark energy.
In the early 1970s, Yakov Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Vladimir Belinski and Isaak Khalatnikov to analyze the chaotic BKL singularity in general relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.
False vacuum
In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.
The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum.
The Causal Universe of Brout, Englert and Gunzig
In 1978 and 1979, Robert Brout, François Englert and Edgard Gunzig suggested that the universe could originate from a fluctuation of Minkowski space which would be followed by a period in which the geometry would resemble De Sitter space.
This initial period would then evolve into the standard expanding universe. They noted that their proposal makes the universe causal, as there are neither particle nor event horizons in their model.
Starobinsky inflation
In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action
$S = \frac{M_\mathrm{P}^2}{2}\int d^4x\,\sqrt{-g}\left(R + \frac{R^2}{6M^2}\right),$
which corresponds to the potential
$V(\phi) = \frac{3}{4}M_\mathrm{P}^2 M^2 \left(1 - e^{-\sqrt{2/3}\,\phi/M_\mathrm{P}}\right)^2$
in the Einstein frame. This results in the observables $n_s = 1 - \frac{2}{N}$ and $r = \frac{12}{N^2}$, where $N$ is the number of e-folds of inflation.
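Assuming the standard slow-roll results quoted above for this model, the spectral tilt and tensor-to-scalar ratio follow directly from the number of e-folds N between horizon exit and the end of inflation. The short sketch below simply evaluates them for a few typical values of N.

```python
# Slow-roll predictions of the Starobinsky (R^2) model:
#   n_s ~ 1 - 2/N,   r ~ 12/N^2,   with N the number of e-folds of inflation.
def starobinsky_observables(N):
    n_s = 1.0 - 2.0 / N
    r = 12.0 / N**2
    return n_s, r

for N in (50, 55, 60):
    n_s, r = starobinsky_observables(N)
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.4f}")
# N = 60 gives n_s ~ 0.967 and r ~ 0.003, comfortably within the Planck bounds
# quoted later in this article (n_s = 0.968 +/- 0.006, r < 0.11).
```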
Monopole problem
In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980, Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.
Early inflationary models
Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Katsuhiko Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981, Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.
Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.
Slow-roll inflation
The bubble collision problem was solved by Andrei Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.
Effects of asymmetries
Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Alan Guth and So-Young Pi; and James Bardeen, Paul Steinhardt and Michael Turner.
Observational status
Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within about half a percent, and that it is homogeneous and isotropic to one part in 100,000.
Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field, is very specific and is characterized by only a few free parameters. Two of these are the amplitude of the spectrum and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe).
Another free parameter is the tensor-to-scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1.
Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index n_s is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s is between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that n_s = 0.968 ± 0.006, and a tensor-to-scalar ratio r that is less than 0.11. These are considered an important confirmation of the theory of inflation.
Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.
Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect, remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low, and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias.
An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (~10^16 GeV) is correct. In March 2014, the BICEP2 team announced B-mode CMB polarization confirming inflation had been demonstrated. The team announced the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence yet was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation.
Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere.
Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great.
Theoretical status
In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field. One problem of this identification is the current tension with experimental data at the electroweak scale. Other models of inflation relied on the properties of Grand Unified Theories.
Fine-tuning problem
One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass.
New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory.
Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy.
However, in his model, the inflaton field necessarily takes values larger than one Planck unit: For this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation.
This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models.
While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.
Brandenberger commented on fine-tuning in another situation.
The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or about 10^-3 times the Planck energy. The natural scale is naïvely the Planck scale, so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by a factor of roughly 10^-12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
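The suppression quoted above is a one-line power-of-four estimate: the energy density scales as the fourth power of the energy scale. The sketch below uses round numbers (10^16 GeV for inflation and 10^19 GeV for the Planck energy) purely for illustration.

```python
# Order-of-magnitude estimate: energy density scales as the fourth power of the energy scale.
planck_energy_gev = 1e19        # rounded Planck energy, for illustration only
inflation_scale_gev = 1e16      # suggested scale of inflation (assumed round value)

scale_ratio = inflation_scale_gev / planck_energy_gev    # ~1e-3
density_ratio = scale_ratio ** 4                          # ~1e-12
print(f"scale ratio ~ {scale_ratio:.0e}, density suppression ~ {density_ratio:.0e}")
```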
Eternal inflation
In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.
All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.
Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic.
Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards, expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.
In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating do not. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end, or, applying the boundary condition that a local observer exists to observe it, that inflation will end as late as possible.
Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason.
Initial conditions
Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever.
Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally.
Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while some regard this as solving the initial conditions problem, others have disputed it, arguing that it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to these critics, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.
Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle–Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.
Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable.
Hybrid inflation
Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable for the second field to decay into a much lower energy state.
In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.
Relation to dark energy
Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, about 10^-12 GeV, roughly 27 orders of magnitude less than the scale of inflation.
Inflation and string cosmology
The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac–Born–Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.
Inflation and loop quantum gravity
When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back.
Alternatives and adjuncts
Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.
Big bounce
The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein–Cartan–Sciama–Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
Ekpyrotic and cyclic models
The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.
String gas cosmology
String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. The original model did not "solve the entropy and flatness problems of standard cosmology", although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario.
Varying c
Cosmological models employing a variable speed of light (VSL) have been proposed to resolve the horizon problem of standard cosmology and to provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
Criticisms
Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding,
"we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology."
As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved:
"There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. ... For, if the thermalization is actually doing anything ... then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after."
The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that
"inflation isn't falsifiable, it's falsified. ... BICEP did a wonderful service by bringing all the inflation-ists out of their shell, and giving them a black eye."
A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them:
"Not only is bad inflation more likely than good inflation, but no inflation is more likely than either ... Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation ... Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol power!"
Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite.
Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Linde, saying that
"cosmic inflation is on a stronger footing than ever before".
| Physical sciences | Physical cosmology | null |
5385 | https://en.wikipedia.org/wiki/Candela | Candela | The candela (symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminous efficiency function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured.
The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower.
Definition
The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is:
The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, Kcd, to be 683 when expressed in the unit lm W−1, which is equal to cd sr W−1, or cd sr kg−1 m−2 s3, where the kilogram, metre and second are defined in terms of h, c and ΔνCs.
Explanation
The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by
$I_\mathrm{v}(\lambda) = 683.002\ \mathrm{lm/W} \cdot \bar{y}(\lambda) \cdot I_\mathrm{e}(\lambda),$
where $I_\mathrm{v}(\lambda)$ is the luminous intensity, $I_\mathrm{e}(\lambda)$ is the radiant intensity and $\bar{y}(\lambda)$ is the photopic luminous efficiency function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity.
Examples
A common candle emits light with roughly 1 cd luminous intensity.
A 25 W compact fluorescent light bulb puts out around 1700 lumens; if that light is radiated equally in all directions (i.e. over 4π steradians), it will have an intensity of about 135 cd.
Focused into a 20° beam (0.095 steradians), the same light bulb would have an intensity of around 18,000 cd or 18 kcd within the beam.
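The two example figures above follow from dividing the luminous flux by the solid angle into which it is emitted. The sketch below reproduces them, taking the 1700 lm output and the 20° beam as given.

```python
import math

luminous_flux_lm = 1700.0                      # assumed output of the 25 W CFL above

# Radiated equally in all directions: divide by the full sphere, 4*pi steradians.
isotropic_cd = luminous_flux_lm / (4.0 * math.pi)

# Focused into a 20 degree full-angle cone: solid angle = 2*pi*(1 - cos(half-angle)).
half_angle = math.radians(20.0 / 2.0)
cone_sr = 2.0 * math.pi * (1.0 - math.cos(half_angle))
beam_cd = luminous_flux_lm / cone_sr

print(f"isotropic: {isotropic_cd:.0f} cd")                   # ~135 cd
print(f"20 deg beam: {cone_sr:.3f} sr, {beam_cd:.0f} cd")    # ~0.095 sr, ~18,000 cd
```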
History
Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp.
A better standard for luminous intensity was needed. In 1884, Jules Violle had proposed a standard based on the light emitted by 1 cm2 of platinum at its melting point (or freezing point). The resulting unit of intensity, called the "violle", was roughly equal to 60 English candlepower. Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity.
In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed.
In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a "new candle" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946:
The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre.
It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum:
The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre.
In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela:
The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminous efficiency function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminous efficiency function. An appendix to the SI Brochure makes it clear that the luminous efficiency function is not uniquely specified, but must be selected to fully define the candela.
The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition.
The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 revision of the SI, which redefined the SI base units in terms of fundamental physical constants.
SI photometric light units
Relationships between luminous intensity, luminous flux, and illuminance
If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by
Φv = Iv × 2π [1 − cos(A/2)],
where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps.
If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits 4π lumens (approximately 12.566 lumens).
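Conversely, the flux can be computed from a given intensity and beam angle; a small Python sketch of the cone formula above (the function name is ours):

```python
import math

def flux_from_intensity(intensity_cd: float, full_angle_deg: float) -> float:
    """Luminous flux (lm) radiated into a cone with the given full vertex angle."""
    solid_angle_sr = 2 * math.pi * (1 - math.cos(math.radians(full_angle_deg) / 2))
    return intensity_cd * solid_angle_sr

print(flux_from_intensity(590, 40))   # ~224 lm, the lamp example above
print(flux_from_intensity(1, 360))    # uniform 1 cd source: 4*pi, ~12.566 lm
```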
For the purpose of measuring illumination, the candela is not a practical unit, as it applies only to idealized point light sources, each approximated by a source small compared with the distance from which its luminous radiation is measured, and assuming that the measurement is made in the absence of other light sources. What a light meter directly measures is the incident light on a sensor of finite area, i.e. illuminance in lm/m2 (lux). However, when designing illumination from many point light sources, such as light bulbs, of known approximately omnidirectionally uniform intensities, the contributions to illuminance from incoherent light are additive, and the illuminance can be estimated as follows. If ri is the position of the ith source of uniform intensity Ii, and n̂ is the unit vector normal to the illuminated elemental opaque area being measured, and provided that all light sources lie in the same half-space divided by the plane of this area, the illuminance is
Ev = Σi Ii (r̂i · n̂) / |ri|².
In the case of a single point light source of intensity Iv, at a distance r and normally incident, this reduces to
Ev = Iv / r².
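A minimal Python sketch of this estimate, summing inverse-square, cosine-weighted contributions from point sources (the function and argument names are ours):

```python
import math

def illuminance(sources, point, normal):
    """Illuminance (lux) at `point` on a surface with unit `normal`,
    from point sources given as (position, intensity_cd) pairs."""
    total = 0.0
    for position, intensity in sources:
        r = [p - q for p, q in zip(position, point)]   # vector from the surface to the source
        d = math.sqrt(sum(c * c for c in r))
        cos_theta = sum(c * n for c, n in zip(r, normal)) / d
        total += intensity * max(cos_theta, 0.0) / d ** 2
    return total

# A single 100 cd source 2 m directly above the surface: 100 / 2**2 = 25 lux
print(illuminance([((0, 0, 2), 100.0)], (0, 0, 0), (0, 0, 1)))
```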
SI multiples
Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10⁻³ candela.
| Physical sciences | Light | null |
5387 | https://en.wikipedia.org/wiki/Condensed%20matter%20physics | Condensed matter physics | Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases, that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.
Etymology
According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.
| Physical sciences | Basics_8 | null |
5390 | https://en.wikipedia.org/wiki/Conversion%20of%20units | Conversion of units | Conversion of units is the conversion of the unit of measurement in which a quantity is expressed, typically through a multiplicative conversion factor that changes the unit without changing the quantity. This is also often loosely taken to include replacement of a quantity with a corresponding quantity that describes the same physical property.
Unit conversion is often easier within a metric system such as the SI than in others, due to the system's coherence and its metric prefixes that act as power-of-10 multipliers.
Overview
The definition and choice of units in which to express a quantity may depend on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as:
the precision and accuracy of measurement and the associated uncertainty of measurement
the statistical confidence interval or tolerance interval of the initial measurement
the number of significant figures of the measurement
the intended use of the measurement, including the engineering tolerances
historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot.
For some purposes, conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the expressed quantity. An adaptive conversion may not produce an exactly equivalent expression. Nominal values are sometimes allowed and used.
Factor–label method
The factor–label method, also known as the unit–factor method or the unity bracket method, is a widely used technique for unit conversions that uses the rules of algebra.
The factor–label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below:
10 mile/hour × (1609.344 m / 1 mile) × (1 hour / 3600 s) = 4.4704 m/s.
Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being rearranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and 1 mile = 1609.344 m, "mile" will need to be the denominator in the conversion factor. Dividing both sides of that equation by 1 mile yields (1609.344 m)/(1 mile) = 1, which when simplified results in the dimensionless factor 1. Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.
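A minimal Python sketch of the same chain of factors (constant and function names are ours):

```python
# Factor-label method: multiply by conversion factors that each equal 1.
METRES_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

def mph_to_mps(speed_mph: float) -> float:
    """Convert miles per hour to metres per second."""
    return speed_mph * METRES_PER_MILE / SECONDS_PER_HOUR

print(mph_to_mps(10))  # 4.4704 m/s
```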
As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below:
NOx concentration = 10 parts per million by volume = 10 ppmv = 10 volumes/106 volumes
NOx molar mass = 46 kg/kmol = 46 g/mol
Flow rate of flue gas = 20 cubic metres per minute = 20 m3/min
The flue gas exits the furnace at 0 °C temperature and 101.325 kPa absolute pressure.
The molar volume of a gas at 0 °C temperature and 101.325 kPa is 22.414 m3/kmol.
(10 m3 NOx / 10^6 m3 gas) × (20 m3 gas/min) × (60 min/h) × (46 g NOx/mol) / (22.414 × 10^-3 m3/mol) = 24.63 g NOx/h
After cancelling any dimensional units that appear both in the numerators and the denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to a mass flow rate of 24.63 grams per hour.
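The same chain of factors can be written as a short Python calculation (variable names are ours; the numbers are those listed above):

```python
PPMV = 10                        # NOx concentration, parts per million by volume
MOLAR_MASS_G_PER_MOL = 46.0      # NOx (as NO2)
FLUE_GAS_M3_PER_MIN = 20.0
MOLAR_VOLUME_M3_PER_MOL = 22.414e-3   # at 0 degC and 101.325 kPa

nox_m3_per_h = FLUE_GAS_M3_PER_MIN * 60 * PPMV / 1e6
mass_flow_g_per_h = nox_m3_per_h / MOLAR_VOLUME_M3_PER_MOL * MOLAR_MASS_G_PER_MOL
print(mass_flow_g_per_h)   # ~24.63 g/h
```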
Checking equations that involve dimensions
The factor–label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.
For example, check the universal gas law equation, PV = nRT, when:
the pressure P is in pascals (Pa)
the volume V is in cubic metres (m3)
the amount of substance n is in moles (mol)
the universal gas constant R is 8.3145 Pa⋅m3/(mol⋅K)
the temperature T is in kelvins (K)
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal undiscovered or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It acquired its quantum physical significance either in tandem with, or after, this mathematical dimensional adjustment – not earlier.
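The dimensional check itself can be mechanised by representing each unit as a vector of exponents over the base dimensions; the helpers below are an illustrative sketch, not a library API:

```python
# Dimensional check of PV = nRT using exponent vectors over (kg, m, s, mol, K).
def dims(kg=0, m=0, s=0, mol=0, K=0):
    return (kg, m, s, mol, K)

def mul(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

pascal = dims(kg=1, m=-1, s=-2)                     # Pa = kg/(m*s^2)
gas_constant = dims(kg=1, m=2, s=-2, mol=-1, K=-1)  # Pa*m^3/(mol*K)

left = mul(pascal, dims(m=3))                            # P * V
right = mul(mul(dims(mol=1), gas_constant), dims(K=1))   # n * R * T
print(left == right)  # True: both sides reduce to kg*m^2/s^2 (joules)
```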
Limitations
The factor–label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0 (ratio scale in Stevens's typology). Most conversions fit this paradigm. An example for which it cannot be used is the conversion between the Celsius scale and the Kelvin scale (or the Fahrenheit scale). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (y = ax + b), rather than a linear transform (y = ax), between them.
For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, which yields the same formula.
Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used:
T[C] = (T[F] − 32) × 5/9.
To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used:
T[F] = (T[C] × 9/5) + 32.
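Expressed as code, the two affine formulas are (function names are ours):

```python
def fahrenheit_to_celsius(t_f: float) -> float:
    return (t_f - 32) * 5 / 9

def celsius_to_fahrenheit(t_c: float) -> float:
    return t_c * 9 / 5 + 32

print(fahrenheit_to_celsius(212))  # 100.0
print(celsius_to_fahrenheit(0))    # 32.0
```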
Example
Starting with:
Z = n × [Z]i,
replace the original unit [Z]i with its meaning in terms of the desired unit [Z]j, e.g. if [Z]i = c × [Z]j, then:
Z = n × (c × [Z]j) = (n × c) × [Z]j.
Now n and c are both numerical values, so just calculate their product.
Or, which is just mathematically the same thing, multiply Z by unity; the product is still Z:
Z = Z × (c × [Z]j / [Z]i) = (n × c) × [Z]j.
For example, you have an expression for a physical value Z involving the unit feet per second (ft/s) and you want it in terms of the unit miles per hour (mph):
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:
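Worked numerically, assuming the exact definitions 1 ft = 0.3048 m and 1 mile = 5280 ft (function names are ours), the two example conversions are:

```python
def ftps_to_mph(v_ftps: float) -> float:
    """feet per second -> miles per hour (3600 s per hour, 5280 ft per mile)."""
    return v_ftps * 3600 / 5280

def l_per_100km_to_ul_per_m(x: float) -> float:
    """litres per 100 km -> microlitres per metre (1 L = 1e6 uL, 100 km = 1e5 m)."""
    return x * 1e6 / 1e5

print(ftps_to_mph(1))               # ~0.6818 mph
print(l_per_100km_to_ul_per_m(9))   # 90.0 uL/m
```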
Calculation involving non-SI Units
In cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the conversion factor, and then plugging in the numerical values of the given/known quantities.
For example, in the study of Bose–Einstein condensates, the atomic mass m is usually given in daltons, instead of kilograms, and the chemical potential μ is often given in units of the Boltzmann constant times nanokelvin. The condensate's healing length is given by:
ξ = ħ / √(2mμ).
For a 23Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps:
Calculate the factor
Assume that the mass is m = M Da (M being the numerical value of the atomic mass in daltons) and the chemical potential is μ = kB × (T nK) (T being the numerical value in nanokelvin); this gives
ξ = [ħ / √(2 Da × kB × 1 nK)] × 1/√(M T) ≈ 15.57 µm / √(M T),
which is our factor.
Calculate the numbers
Now, make use of the fact that ξ ≈ 15.57 µm / √(M T). With M = 23 and T = 128, the healing length is ξ ≈ 15.57/√(23 × 128) µm ≈ 0.287 µm.
This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values; for example, with the factor calculated above, it is very easy to see that the healing length of 174Yb with chemical potential 20.3 nK is
ξ ≈ 15.57/√(174 × 20.3) µm ≈ 0.262 µm.
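A short Python check of both numbers, working directly in SI units rather than through the pre-computed factor (constant values from CODATA; the function name is ours):

```python
import math

HBAR = 1.054571817e-34      # J*s
K_B = 1.380649e-23          # J/K
DALTON = 1.66053906660e-27  # kg

def healing_length_um(mass_daltons: float, mu_over_kb_nK: float) -> float:
    """Healing length xi = hbar / sqrt(2*m*mu), returned in micrometres."""
    xi = HBAR / math.sqrt(2 * mass_daltons * DALTON * K_B * mu_over_kb_nK * 1e-9)
    return xi * 1e6

print(healing_length_um(23, 128))    # ~0.287 um for 23Na
print(healing_length_um(174, 20.3))  # ~0.262 um for 174Yb
```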
Software tools
There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications such as mathematical, scientific and technical applications.
There are many standalone applications that offer conversions among thousands of various units. For example, the free software movement offers a command line utility GNU units for GNU and Windows. The Unified Code for Units of Measure is also a popular option.
| Physical sciences | Basics | Basics and measurement |
5394 | https://en.wikipedia.org/wiki/Chervil | Chervil | Chervil (Anthriscus cerefolium), sometimes called French parsley or garden chervil (to distinguish it from similar plants also called chervil), is a delicate annual herb related to parsley. It was formerly called myrhis due to its volatile oil with an aroma similar to the resinous substance myrrh. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes.
Name
The name chervil is from Anglo-Norman, from a Latin word meaning "leaves of joy", which was in turn formed from an Ancient Greek word.
Description
The plants grow to , with tripinnate leaves that may be curly. The small white flowers form small umbels, across. The fruit is about 1 cm long, oblong-ovoid with a slender, ridged beak.
Distribution and habitat
A member of the Apiaceae, chervil is native to the Caucasus but was spread by the Romans through most of Europe, where it is now naturalised. It is also grown frequently in the United States, where it sometimes escapes cultivation. Such escape can be recognized, however, as garden chervil is distinguished from all other Anthriscus species growing in North America (i.e., A. caucalis and A. sylvestris) by its having lanceolate-linear bracteoles and a fruit with a relatively long beak.
Cultivation
Transplanting chervil can be difficult, due to the long taproot. It prefers a cool and moist location; otherwise, it rapidly goes to seed (also known as bolting). It is usually grown as a cool-season crop, like lettuce, and should be planted in early spring and late fall or in a winter greenhouse. Regular harvesting of leaves also helps to prevent bolting. If plants bolt despite precautions, the plant can be periodically re-sown throughout the growing season, thus producing fresh plants as older plants bolt and go out of production.
Chervil grows to a height of , and a width of .
Uses
Culinary
Chervil is used, particularly in France, to season poultry, seafood, young spring vegetables (such as carrots), soups, and sauces. More delicate than parsley, it has a faint taste of liquorice or aniseed.
Chervil is one of the four traditional French fines herbes, along with tarragon, chives, and parsley, which are essential to French cooking. Unlike the more pungent, robust herbs such as thyme and rosemary, which can take prolonged cooking, the fines herbes are added at the last minute, to salads, omelettes, and soups.
Chemical constituents
Essential oil obtained via water distillation of wild Turkish Anthriscus cerefolium was analyzed by gas chromatography–mass spectrometry, identifying four compounds: methyl chavicol (83.10%), 1-allyl-2,4-dimethoxybenzene (15.15%), undecane (1.75%) and β-pinene (<0.01%).
Horticulture
According to some, slugs are attracted to chervil and the plant is sometimes used to bait them.
Health
Chervil has had various uses in folk medicine. It was claimed to be useful as a digestive aid, for lowering high blood pressure, and, infused with vinegar, for curing hiccups. Besides its digestive properties, it is used as a mild stimulant.
Chervil has also been implicated in "strimmer dermatitis", another name for phytophotodermatitis, due to spray from weed trimmers and similar forms of contact. Other plants in the family Apiaceae can have similar effects.
| Biology and health sciences | Herbs and spices | Plants |
5395 | https://en.wikipedia.org/wiki/Chives | Chives | Chives, scientific name Allium schoenoprasum, is a species of flowering plant in the family Amaryllidaceae.
A perennial plant, A. schoenoprasum is widespread in nature across much of Eurasia and North America. It is the only species of Allium native to both the New and the Old Worlds.
The leaves and flowers are edible. Chives are a commonly used herb and vegetable with a variety of culinary uses. They are also used to repel insects.
Description
Chives are a bulb-forming herbaceous perennial plant, growing to tall. The bulbs are slender, conical, long and broad, and grow in dense clusters from the roots. The scapes (or stems) are hollow and tubular, up to long and across, with a soft texture, although, prior to the emergence of a flower, they may appear stiffer than usual. The grass-like leaves, which are shorter than the scapes, are also hollow and tubular, or terete (round in cross-section).
The flowers are pale purple, and star-shaped with six petals, wide, and produced in a dense inflorescence of 10–30 together; before opening, the inflorescence is surrounded by a papery bract. The seeds are produced in a small, three-valved capsule, maturing in summer. The herb flowers from April to May in the southern parts of its habitat zones and in June in the northern parts.
Chives are the only species of Allium native to both the New and the Old Worlds. Sometimes, the plants found in North America are classified as A. schoenoprasum var. sibiricum, although this is disputed. Differences between specimens are significant. One example was found in northern Maine growing solitary, instead of in clumps, also exhibiting dingy grey flowers.
Similar species
Close relatives of chives include common onions, garlic, shallot, leek, scallion, and Chinese onion.
The terete hollow leaves distinguish the plant from Allium tuberosum (garlic chives).
Taxonomy
It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753.
The name of the species derives from the Greek σχοίνος, skhoínos (sedge or rush) and πράσον, práson (leek). Its English name, chives, derives from the French word cive, from cepa, the Latin word for onion. In the Middle Ages, it was known as 'rush leek'.
Several subspecies have been proposed, but are not accepted by Plants of the World Online, which sinks them into two subspecies:
Allium schoenoprasum subsp. gredense (Rivas Goday) Rivas Mart., Fern.Gonz. & Sánchez Mata
Allium schoenoprasum subsp. latiorifolium (Pau) Rivas Mart., Fern.Gonz. & Sánchez Mata
Varieties have also been proposed, including A. schoenoprasum var. sibiricum. The Flora of North America notes that the species is very variable, and considers recognition of varieties as "unsound".
Distribution and habitat
Chives are native to temperate areas of Europe, Asia and North America.
Range
Chives have a wide natural range across much of the Northern Hemisphere.
In Asia it is native from the Ural Mountains in Russia to Kamchatka in the far east. It grows natively in the Korean peninsula, but in Japan only on the islands of Hokkaido and Honshu. Likewise its natural range in China only extends to Xinjiang and Inner Mongolia, though it is also found in adjacent Mongolia. It is native to all the nations of the Caucasus. However, in Central Asia it is only found in Kazakhstan and Kyrgyzstan. To the south its range also extends to Afghanistan, Iran, Iraq, Pakistan, and the Western Himalayas in India.
It is native to all parts of Europe with the exception of Sicily, Sardinia, the island of Cyprus, Iceland, Crimea, and Hungary and other offshore islands. It also is not native to Belgium and Ireland, but it grows there as an introduced plant.
In North America it is native to Alaska and almost every province of Canada, but has been introduced to the island of Newfoundland. In the United States the certain native range in the lower 48 is in two separated areas. In the west its range is in Washington, Oregon, Idaho, Montana, Wyoming, and Colorado. In the east it extends from Minnesota, eastward through Wisconsin, Michigan, Ohio, Pennsylvania, and New Jersey. Then northward into New York and all of New England. The Plants of the World Online database lists it as introduced to Illinois and Maryland and the USDA Natural Resources Conservation Service PLANTS database additionally lists it as growing in Nevada, Utah, Missouri, and Virginia without information on if it is native or introduced to those states.
In other areas of the Americas chives grow as an introduced plant in Mexico, Honduras, Costa Rica, Cuba, Jamaica, Hispaniola, Trinidad, Colombia, Bolivia, and the southern part of Argentina in Tierra del Fuego.
Ecology
Chives are repulsive to most insects due to their sulfur compounds, but their flowers attract bees, and they are at times kept to increase desired insect life.
The plant provides a great deal of nectar for pollinators. It was rated in the top 10 for most nectar production (nectar per unit cover per year) in a United Kingdom plants survey conducted by the AgriLand project which is supported by the UK Insect Pollinators Initiative.
Cultivation
Chives have been cultivated in Europe since the Middle Ages (from the fifth until the 15th centuries), although their usage dates back 5,000 years.
Chives are cultivated both for their culinary uses and for their ornamental value; the violet flowers are often used in ornamental dry bouquets.
Chives thrive in well-drained soil, rich in organic matter, with a pH of 6–7 and full sun. They can be grown from seed and mature in summer, or early the following spring. Typically, chives need to be germinated at a temperature of and kept moist. They can also be planted under a cloche or germinated indoors in cooler climates, then planted out later. After at least four weeks, the young shoots should be ready to be planted out. They are also easily propagated by division.
In cold regions, chives die back to the underground bulbs in winter, with the new leaves appearing in early spring. Chives starting to look old can be cut back to about 2–5 cm. When harvesting, the needed number of stalks should be cut to the base. During the growing season, the plant continually regrows leaves, allowing for a continuous harvest.
Chives are susceptible to damage by leek moth larvae, which bore into the leaves or bulbs of the plant.
Uses
Culinary arts
Chives are grown for their scapes and leaves, which are used for culinary purposes as a flavoring herb, and provide a somewhat milder onion-like flavor than those of other Allium species. The edible flowers are used in salads, or used to make blossom vinegars. Both the scapes and the unopened, immature flower buds are diced and used as an ingredient for omelettes, fish, potatoes, soups, and many other dishes.
Chives have a wide variety of culinary uses, such as in traditional dishes in France, Sweden, and elsewhere. In his 1806 book Attempt at a Flora (Försök til en flora), Anders Jahan Retzius describes how chives are used with pancakes, soups, fish, and sandwiches. They are also an ingredient of the gräddfil sauce with the traditional herring dish served at Swedish midsummer celebrations. The flowers may also be used to garnish dishes.
In Poland and Germany, chives are served with quark. Chives are one of the fines herbes of French cuisine, the others being tarragon, chervil and parsley. Chives can be found fresh at most markets year-round, making them readily available; they can also be dry-frozen without much impairment to the taste, giving home growers the opportunity to store large quantities harvested from their own gardens.
Uses in plant cultivation
Retzius also describes how farmers would plant chives between the rocks making up the borders of their flowerbeds, to keep the plants free from pests (such as Japanese beetles). The growing plant repels unwanted insect life, and the juice of the leaves can be used for the same purpose, as well as fighting fungal infections, mildew, and scab.
In culture
In Europe, chives were sometimes referred to as "rush leeks".
It was mentioned in 80 A.D. by Marcus Valerius Martialis in his "Epigrams".
The Romans believed chives could relieve the pain from sunburn or a sore throat. They believed eating chives could increase blood pressure and act as a diuretic.
Romani have used chives in fortune telling. Bunches of dried chives hung around a house were believed to ward off disease and evil.
In the 19th century, Dutch farmers fed cattle on the herb to give a different taste to their milk.
| Biology and health sciences | Herbs and spices | Plants |
5401 | https://en.wikipedia.org/wiki/Carboniferous | Carboniferous | The Carboniferous is a geologic period and system of the Paleozoic era that spans 60 million years from the end of the Devonian Period 358.9 Ma (million years ago) to the beginning of the Permian Period, 298.9 Ma. It is the fifth and penultimate period of the Paleozoic era and the fifth period of the Phanerozoic eon. In North America, the Carboniferous is often treated as two separate geological periods, the earlier Mississippian and the later Pennsylvanian.
The name Carboniferous means "coal-bearing", from the Latin ("coal") and ("bear, carry"), and refers to the many coal beds formed globally during that time. The first of the modern "system" names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession.
Carboniferous is the period during which both terrestrial animal and land plant life was well established. Stegocephalia (four-limbed vertebrates including true tetrapods), whose forerunners (tetrapodomorphs) had evolved from lobe-finned fish during the preceding Devonian period, became pentadactylous during the Carboniferous. The period is sometimes called the Age of Amphibians because of the diversification of early amphibians such as the temnospondyls, which became dominant land vertebrates, as well as the first appearance of amniotes including synapsids (the clade to which modern mammals belong) and sauropsids (which include modern reptiles and birds) during the late Carboniferous. Land arthropods such as arachnids (e.g. trigonotarbids and Pulmonoscorpius), myriapods (e.g. Arthropleura) and especially insects (particularly flying insects) also underwent a major evolutionary radiation during the late Carboniferous. Vast swaths of forests and swamps covered the land, which eventually became the coal beds characteristic of the Carboniferous stratigraphy evident today.
The latter half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change. Atmospheric oxygen levels, originally thought to be consistently higher than today throughout the Carboniferous, have been shown to be more variable, increasing from low levels at the beginning of the Period to highs of 25–30%.
Etymology and history
The development of a Carboniferous chronostratigraphic timescale began in the late 18th century. The term "Carboniferous" was first used as an adjective by Irish geologist Richard Kirwan in 1799 and later used in a heading entitled "Coal-measures or Carboniferous Strata" by John Farey Sr. in 1811. Four units were originally ascribed to the Carboniferous, in ascending order, the Old Red Sandstone, Carboniferous Limestone, Millstone Grit and the Coal Measures. These four units were placed into a formalised Carboniferous unit by William Conybeare and William Phillips in 1822 and then into the Carboniferous System by Phillips in 1835. The Old Red Sandstone was later considered Devonian in age.
The similarity in successions between the British Isles and Western Europe led to the development of a common European timescale with the Carboniferous System divided into the lower Dinantian, dominated by carbonate deposition and the upper Silesian with mainly siliciclastic deposition. The Dinantian was divided into the Tournaisian and Viséan stages. The Silesian was divided into the Namurian, Westphalian and Stephanian stages. The Tournaisian is the same length as the International Commission on Stratigraphy (ICS) stage, but the Viséan is longer, extending into the lower Serpukhovian.
North American geologists recognised a similar stratigraphy but divided it into two systems rather than one. These are the lower carbonate-rich sequence of the Mississippian System and the upper siliciclastic and coal-rich sequence of the Pennsylvanian. The United States Geological Survey officially recognised these two systems in 1953. In Russia, in the 1840s British and Russian geologists divided the Carboniferous into the Lower, Middle and Upper series based on Russian sequences. In the 1890s these became the Dinantian, Moscovian and Uralian stages. The Serpukhovian was proposed as part of the Lower Carboniferous, and the Upper Carboniferous was divided into the Moscovian and Gzhelian. The Bashkirian was added in 1934.
In 1975, the ICS formally ratified the Carboniferous System, with the Mississippian and Pennsylvanian subsystems from the North American timescale, the Tournaisian and Visean stages from the Western European and the Serpukhovian, Bashkirian, Moscovian, Kasimovian and Gzhelian from the Russian. With the formal ratification of the Carboniferous System, the Dinantian, Silesian, Namurian, Westphalian and Stephanian became redundant terms, although the latter three are still in common use in Western Europe.
Geology
Stratigraphy
Stages can be defined globally or regionally. For global stratigraphic correlation, the ICS ratify global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. Only the boundaries of the Carboniferous System and three of the stage bases are defined by global stratotype sections and points because of the complexity of the geology. The ICS subdivisions from youngest to oldest are as follows:
Mississippian
The Mississippian was proposed by Alexander Winchell in 1870 named after the extensive exposure of lower Carboniferous limestone in the upper Mississippi River valley. During the Mississippian, there was a marine connection between the Paleo-Tethys and Panthalassa through the Rheic Ocean resulting in the near worldwide distribution of marine faunas and so allowing widespread correlations using marine biostratigraphy. However, there are few Mississippian volcanic rocks, and so obtaining radiometric dates is difficult.
The Tournaisian Stage is named after the Belgian city of Tournai. It was introduced in scientific literature by Belgian geologist André Dumont in 1832. The GSSP for the base of the Carboniferous System, Mississippian Subsystem and Tournaisian Stage is located at the La Serre section in Montagne Noire, southern France. It is defined by the first appearance of the conodont Siphonodella sulcata within the evolutionary lineage from Siphonodella praesulcata to Siphonodella sulcata. This was ratified by the ICS in 1990. However, in 2006 further study revealed the presence of Siphonodella sulcata below the boundary, and the presence of Siphonodella praesulcata and Siphonodella sulcata together above a local unconformity. This means the evolution of one species to the other, the definition of the boundary, is not seen at the La Serre site, making precise correlation difficult.
The Viséan Stage was introduced by André Dumont in 1832 and is named after the city of Visé, Liège Province, Belgium. In 1967, the base of the Visean was officially defined as the first black limestone in the Leffe facies at the Bastion Section in the Dinant Basin. These changes are now thought to be ecologically driven rather than caused by evolutionary change, and so this has not been used as the location for the GSSP. Instead, the GSSP for the base of the Visean is located in Bed 83 of the sequence of dark grey limestones and shales at the Pengchong section, Guangxi, southern China. It is defined by the first appearance of the fusulinid Eoparastaffella simplex in the evolutionary lineage Eoparastaffella ovalis – Eoparastaffella simplex and was ratified in 2009.
The Serpukhovian Stage was proposed in 1890 by Russian stratigrapher Sergei Nikitin. It is named after the city of Serpukhov, near Moscow, and currently lacks a defined GSSP. The Visean-Serpukhovian boundary coincides with a major period of glaciation. The resulting sea level fall and climatic changes led to the loss of connections between marine basins and endemism of marine fauna across the Russian margin. This means changes in biota are environmental rather than evolutionary, making wider correlation difficult. Work is underway in the Urals and Nashui, Guizhou Province, southwestern China for a suitable site for the GSSP with the proposed definition for the base of the Serpukhovian as the first appearance of conodont Lochriea ziegleri.
Pennsylvanian
The Pennsylvanian was proposed by J.J.Stevenson in 1888, named after the widespread coal-rich strata found across the state of Pennsylvania. The closure of the Rheic Ocean and formation of Pangea during the Pennsylvanian, together with widespread glaciation across Gondwana led to major climate and sea level changes, which restricted marine fauna to particular geographic areas thereby reducing widespread biostratigraphic correlations. Extensive volcanic events associated with the assembling of Pangea means more radiometric dating is possible relative to the Mississippian.
The Bashkirian Stage was proposed by Russian stratigrapher Sofia Semikhatova in 1934. It was named after Bashkiria, the then Russian name of the republic of Bashkortostan in the southern Ural Mountains of Russia. The GSSP for the base of the Pennsylvanian Subsystem and Bashkirian Stage is located at Arrow Canyon in Nevada, US and was ratified in 1996. It is defined by the first appearance of the conodont Declinognathodus noduliferus. Arrow Canyon lay in a shallow, tropical seaway which stretched from Southern California to Alaska. The boundary is within a cyclothem sequence of transgressive limestones and fine sandstones, and regressive mudstones and brecciated limestones.
The Moscovian Stage is named after shallow marine limestones and colourful clays found around Moscow, Russia. It was first introduced by Sergei Nikitin in 1890. The Moscovian currently lacks a defined GSSP. The fusulinid Aljutovella aljutovica can be used to define the base of the Moscovian across the northern and eastern margins of Pangea, however, it is restricted in geographic area, which means it cannot be used for global correlations. The first appearance of the conodonts Declinognathodus donetzianus or Idiognathoides postsulcatus have been proposed as a boundary marking species and potential sites in the Urals and Nashui, Guizhou Province, southwestern China are being considered.
The Kasimovian is the first stage in the Upper Pennsylvanian. It is named after the Russian city of Kasimov, and was originally included as part of Nikitin's 1890 definition of the Moscovian. It was first recognised as a distinct unit by A.P. Ivanov in 1926, who named it the "Tiguliferina" Horizon after a type of brachiopod. The boundary of the Kasimovian covers a period of globally low sea level, which has resulted in disconformities within many sequences of this age. This has created difficulties in finding suitable marine fauna that can be used to correlate boundaries worldwide. The Kasimovian currently lacks a defined GSSP; potential sites in the southern Urals, southwest USA and Nashui, Guizhou Province, southwestern China are being considered.
The Gzhelian is named after the Russian village of Gzhel, near Ramenskoye, not far from Moscow. The name and type locality were defined by Sergei Nikitin in 1890. The Gzhelian currently lacks a defined GSSP. The first appearance of the fusulinid Rauserites rossicus and Rauserites stuckenbergi can be used in the Boreal Sea and Paleo-Tethyan regions but not eastern Pangea or Panthalassa margins. Potential sites in the Urals and Nashui, Guizhou Province, southwestern China for the GSSP are being considered.
The GSSP for the base of the Permian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan and was ratified in 1996. The beginning of the stage is defined by the first appearance of the conodont Streptognathodus postfusus.
Cyclothems
A cyclothem is a succession of non-marine and marine sedimentary rocks, deposited during a single sedimentary cycle, with an erosional surface at its base. Whilst individual cyclothems are often only metres to a few tens of metres thick, cyclothem sequences can be many hundreds to thousands of metres thick and contain tens to hundreds of individual cyclothems. Cyclothems were deposited along continental shelves where the very gentle gradient of the shelves meant even small changes in sea level led to large advances or retreats of the sea. Cyclothem lithologies vary from mudrock and carbonate-dominated to coarse siliciclastic sediment-dominated sequences depending on the paleo-topography, climate and supply of sediments to the shelf.
The main period of cyclothem deposition occurred during the Late Paleozoic Ice Age from the Late Mississippian to early Permian, when the waxing and waning of ice sheets led to rapid changes in eustatic sea level. The growth of ice sheets led global sea levels to fall as water was locked away in glaciers. Falling sea levels exposed large tracts of the continental shelves across which river systems eroded channels and valleys and vegetation broke down the surface to form soils. The non-marine sediments deposited on this erosional surface form the base of the cyclothem. As sea levels began to rise, the rivers flowed through increasingly water-logged landscapes of swamps and lakes. Peat mires developed in these wet and oxygen-poor conditions, leading to coal formation. With continuing sea level rise, coastlines migrated landward and deltas, lagoons and estuaries developed; their sediments were deposited over the peat mires. As fully marine conditions were established, limestones succeeded these marginal marine deposits. The limestones were in turn overlain by deep water black shales as maximum sea levels were reached.
Ideally, this sequence would be reversed as sea levels began to fall again; however, sea level falls tend to be protracted, whilst sea level rises are rapid: ice sheets grow slowly but melt quickly. Therefore, the majority of the time represented by a cyclothem corresponds to falling sea levels, when rates of erosion were high and little new sediment was deposited. Erosion during sea level falls could also result in the full or partial removal of previous cyclothem sequences. Individual cyclothems are generally less than 10 m thick because the speed at which sea level rose gave only limited time for sediments to accumulate.
During the Pennsylvanian, cyclothems were deposited in shallow, epicontinental seas across the tropical regions of Laurussia (present day western and central US, Europe, Russia and central Asia) and the North and South China cratons. The rapid sea level fluctuations they represent correlate with the glacial cycles of the Late Paleozoic Ice Age. The advance and retreat of ice sheets across Gondwana followed a 100 kyr Milankovitch cycle, and so each cyclothem represents a cycle of sea level fall and rise over a 100 kyr period.
Coal formation
Coal forms when organic matter builds up in waterlogged, anoxic swamps, known as peat mires, and is then buried, compressing the peat into coal. The majority of Earth's coal deposits were formed during the late Carboniferous and early Permian. The plants from which they formed contributed to changes in the Carboniferous Earth's atmosphere.
During the Pennsylvanian, vast amounts of organic debris accumulated in the peat mires that formed across the low-lying, humid equatorial wetlands of the foreland basins of the Central Pangean Mountains in Laurussia, and around the margins of the North and South China cratons. During glacial periods, low sea levels exposed large areas of the continental shelves. Major river channels, up to several kilometres wide, stretched across these shelves feeding a network of smaller channels, lakes and peat mires. These wetlands were then buried by sediment as sea levels rose during interglacials. Continued crustal subsidence of the foreland basins and continental margins allowed this accumulation and burial of peat deposits to continue over millions of years resulting in the formation of thick and widespread coal formations. During the warm interglacials, smaller coal swamps with plants adapted to the temperate conditions formed on the Siberian craton and the western Australian region of Gondwana.
There is ongoing debate as to why this peak in the formation of Earth's coal deposits occurred during the Carboniferous. The first theory, known as the delayed fungal evolution hypothesis, is that a delay between the development of trees with the wood fibre lignin and the subsequent evolution of lignin-degrading fungi gave a period of time where vast amounts of lignin-based organic material could accumulate. Genetic analysis of basidiomycete fungi, which have enzymes capable of breaking down lignin, supports this theory by suggesting this fungi evolved in the Permian. However, significant Mesozoic and Cenozoic coal deposits formed after lignin-digesting fungi had become well established, and fungal degradation of lignin may have already evolved by the end of the Devonian, even if the specific enzymes used by basidiomycetes had not. The second theory is that the geographical setting and climate of the Carboniferous were unique in Earth's history: the co-occurrence of the position of the continents across the humid equatorial zone, high biological productivity, and the low-lying, water-logged and slowly subsiding sedimentary basins that allowed the thick accumulation of peat were sufficient to account for the peak in coal formation.
Palaeogeography
During the Carboniferous, there was an increased rate of tectonic plate movement as the supercontinent Pangea assembled. The continents themselves formed a near circle around the opening Paleo-Tethys Ocean, with the massive Panthalassic Ocean beyond. Gondwana covered the south polar region. To its northwest was Laurussia. These two continents slowly collided to form the core of Pangea. To the north of Laurussia lay Siberia and Amuria. To the east of Siberia, Kazakhstania, North China and South China formed the northern margin of the Paleo-Tethys, with Annamia lying to the south.
Variscan-Alleghanian-Ouachita orogeny
The Central Pangean Mountains were formed during the Variscan-Alleghanian-Ouachita orogeny. Today their remains stretch over 10,000 km from the Gulf of Mexico in the west to Turkey in the east. The orogeny was caused by a series of continental collisions between Laurussia, Gondwana and the Armorican terrane assemblage (much of modern-day Central and Western Europe including Iberia) as the Rheic Ocean closed and Pangea formed. This mountain building process began in the Middle Devonian and continued into the early Permian.
The Armorican terranes rifted away from Gondwana during the Late Ordovician. As they drifted northwards the Rheic Ocean closed in front of them, and they began to collide with southeastern Laurussia in the Middle Devonian. The resulting Variscan orogeny involved a complex series of oblique collisions with associated metamorphism, igneous activity, and large-scale deformation between these terranes and Laurussia, which continued into the Carboniferous.
During the mid Carboniferous, the South American sector of Gondwana collided obliquely with Laurussia's southern margin, resulting in the Ouachita orogeny. The major strike-slip faulting that occurred between Laurussia and Gondwana extended eastwards into the Appalachian Mountains, where early deformation in the Alleghanian orogeny was predominantly strike-slip. As the West African sector of Gondwana collided with Laurussia during the Late Pennsylvanian, deformation along the Alleghanian orogen became northwesterly-directed compression.
Uralian orogeny
The Uralian orogeny is a north–south trending fold and thrust belt that forms the western edge of the Central Asian Orogenic Belt. The Uralian orogeny began in the Late Devonian and continued, with some hiatuses, into the Jurassic. From the Late Devonian to early Carboniferous, the Magnitogorsk island arc, which lay between Kazakhstania and Laurussia in the Ural Ocean, collided with the passive margin of northeastern Laurussia (Baltica craton). The suture zone between the former island arc complex and the continental margin formed the Main Uralian Fault, a major structure that runs for more than 2,000 km along the orogen. Accretion of the island arc was complete by the Tournaisian, but subduction of the Ural Ocean between Kazakhstania and Laurussia continued until the Bashkirian when the ocean finally closed and continental collision began. Significant strike-slip movement along this zone indicates the collision was oblique. Deformation continued into the Permian and during the late Carboniferous and Permian the region was extensively intruded by granites.
Laurussia
The Laurussian continent was formed by the collision between Laurentia, Baltica and Avalonia during the Devonian. At the beginning of the Carboniferous, some models show it at the equator, while others place it further south. In either case, the continent drifted northwards, reaching low latitudes in the northern hemisphere by the end of the Period. The Central Pangean Mountains drew in moist air from the Paleo-Tethys Ocean, resulting in heavy precipitation and a tropical wetland environment. Extensive coal deposits developed within the cyclothem sequences that dominated the Pennsylvanian sedimentary basins associated with the growing orogenic belt.
Subduction of the Panthalassic oceanic plate along its western margin resulted in the Antler orogeny in the Late Devonian to Early Mississippian. Further north along the margin, slab roll-back, beginning in the Early Mississippian, led to the rifting of the Yukon–Tanana terrane and the opening of the Slide Mountain Ocean. Along the northern margin of Laurussia, orogenic collapse of the Late Devonian to Early Mississippian Innuitian orogeny led to the development of the Sverdrup Basin.
Gondwana
Much of Gondwana lay in the southern polar region during the Carboniferous. As the plate moved, the South Pole drifted from southern Africa in the early Carboniferous to eastern Antarctica by the end of the period. Glacial deposits are widespread across Gondwana and indicate multiple ice centres and long-distance movement of ice. The northern to northeastern margin of Gondwana (northeast Africa, Arabia, India and northeastern West Australia) was a passive margin along the southern edge of the Paleo-Tethys with cyclothem deposition including, during more temperate intervals, coal swamps in Western Australia. The Mexican terranes along the northwestern Gondwana margin were affected by the subduction of the Rheic Ocean. However, they lay to the west of the Ouachita orogeny and were not impacted by continental collision but became part of the active margin of the Pacific. The Moroccan margin was affected by periods of widespread dextral strike-slip deformation, magmatism and metamorphism associated with the Variscan orogeny.
Towards the end of the Carboniferous, extension and rifting across the northern margin of Gondwana led to the breaking away of the Cimmerian terrane during the early Permian and the opening of the Neo-Tethys Ocean. Along the southeastern and southern margin of Gondwana (eastern Australia and Antarctica), northward subduction of Panthalassa continued. Changes in the relative motion of the plates resulted in the early Carboniferous Kanimblan Orogeny. Continental arc magmatism continued into the late Carboniferous and extended round to connect with the developing proto-Andean subduction zone along the western South American margin of Gondwana.
Siberia and Amuria
Shallow seas covered much of the Siberian craton in the early Carboniferous. These retreated as sea levels fell in the Pennsylvanian and, as the continent drifted north into more temperate zones, extensive coal deposits formed in the Kuznetsk Basin. The northwest to eastern margins of Siberia were passive margins along the Mongol-Okhotsk Ocean, on the far side of which lay Amuria. From the mid Carboniferous, subduction zones with associated magmatic arcs developed along both margins of the ocean.
The southwestern margin of Siberia was the site of a long lasting and complex accretionary orogen. The Devonian to early Carboniferous Siberian and South Chinese Altai accretionary complexes developed above an east-dipping subduction zone, whilst further south, the Zharma-Saur arc formed along the northeastern margin of Kazakhstania. By the late Carboniferous, all these complexes had accreted to the Siberian craton as shown by the intrusion of post-orogenic granites across the region. As Kazakhstania had already accreted to Laurussia, Siberia was effectively part of Pangea by 310 Ma, although major strike-slip movements continued between it and Laurussia into the Permian.
Central and East Asia
The Kazakhstanian microcontinent is composed of a series of Devonian and older accretionary complexes. It was strongly deformed during the Carboniferous as its western margin collided with Laurussia during the Uralian orogen and its northeastern margin collided with Siberia. Continuing strike-slip motion between Laurussia and Siberia led the formerly elongate microcontinent to bend into an orocline.
During the Carboniferous, the Tarim craton lay along the northwestern edge of North China. Subduction along the Kazakhstanian margin of the Turkestan Ocean resulted in collision between northern Tarim and Kazakhstania during the mid Carboniferous as the ocean closed. The South Tian Shan fold and thrust belt, which extends over 2,000 km from Uzbekistan to northwest China, is the remains of this accretionary complex and forms the suture between Kazakhstania and Tarim. A continental magmatic arc above a south-dipping subduction zone lay along the northern North China margin, consuming the Paleoasian Ocean. Northward subduction of the Paleo-Tethys beneath the southern margins of North China and Tarim continued during the Carboniferous, with the South Qinling block accreted to North China during the mid to late Carboniferous. No sediments are preserved from the early Carboniferous in North China. However, bauxite deposits immediately above the regional mid Carboniferous unconformity indicate warm tropical conditions and are overlain by cyclothems including extensive coals.
South China and Annamia (Southeast Asia) rifted from Gondwana during the Devonian. During the Carboniferous, they were separated from each other and North China by the Paleoasian Ocean with the Paleo-Tethys to the southwest and Panthalassa to the northeast. Cyclothem sediments with coal and evaporites were deposited across the passive margins that surrounded both continents.
Climate
The Carboniferous climate was dominated by the Late Paleozoic Ice Age (LPIA), the most extensive and longest icehouse period of the Phanerozoic, which lasted from the Late Devonian to the Permian (365 Ma-253 Ma). Temperatures began to drop during the late Devonian, with a short-lived glaciation in the late Famennian through the Devonian–Carboniferous boundary, before the Early Tournaisian Warm Interval. Following this, a reduction in atmospheric CO2 levels, caused by the increased burial of organic matter and widespread ocean anoxia, led to climate cooling and glaciation across the south polar region. During the Visean Warm Interval glaciers nearly vanished, retreating to the proto-Andes in Bolivia and western Argentina and the Pan-African mountain ranges in southeastern Brazil and southwest Africa.
The main phase of the LPIA (c. 335–290 Ma) began in the late Visean, as the climate cooled and atmospheric CO2 levels dropped. Its onset was accompanied by a global fall in sea level and widespread multimillion-year unconformities. This main phase consisted of a series of discrete several million-year-long glacial periods during which ice expanded out from up to 30 ice centres that stretched across mid- to high latitudes of Gondwana in eastern Australia, northwestern Argentina, southern Brazil, and central and Southern Africa.
Isotope records indicate this drop in CO2 levels was triggered by tectonic factors: increased weathering of the growing Central Pangean Mountains and the influence of the mountains on precipitation and surface water flow. Closure of the oceanic gateway between the Rheic and Tethys oceans in the early Bashkirian also contributed to climate cooling by changing ocean circulation and heat flow patterns.
Warmer periods with reduced ice volume within the Bashkirian, the late Moscovian and the latest Kasimovian to mid-Gzhelian are inferred from the disappearance of glacial sediments, the appearance of deglaciation deposits and rises in sea levels.
In the early Kasimovian there was a short-lived (<1 million years) but intense period of glaciation, with atmospheric CO2 concentrations dropping as low as 180 ppm. This ended suddenly as a rapid increase in CO2 concentrations to c. 600 ppm resulted in a warmer climate. This rapid rise in CO2 may have been due to a peak in pyroclastic volcanism and/or a reduction in the burial of terrestrial organic matter.
The LPIA peaked across the Carboniferous–Permian boundary. Widespread glacial deposits are found across South America, western and central Africa, Antarctica, Australia, Tasmania, the Arabian Peninsula, India, and the Cimmerian blocks, indicating trans-continental ice sheets across southern Gondwana that reached sea level. In response to the uplift and erosion of the more mafic basement rocks of the Central Pangean Mountains at this time, CO2 levels dropped as low as 175 ppm and remained under 400 ppm for 10 Ma.
Temperatures
Temperatures across the Carboniferous reflect the phases of the LPIA. At the extremes, during the Permo-Carboniferous Glacial Maximum (299–293 Ma) the global average temperature (GAT) was c. 13 °C (55 °F), the average temperature in the tropics c. 24 °C (75 °F) and in polar regions c. -23 °C (-9 °F), whilst during the Early Tournaisian Warm Interval (358–353 Ma) the GAT was c. 22 °C (72 °F), the tropics c. 30 °C (86 °F) and polar regions c. 1.5 °C (35 °F). Overall, for the duration of the LPIA, the GAT was c. 17 °C (63 °F), with tropical temperatures c. 26 °C (79 °F) and polar temperatures c. -9.0 °C (16 °F).
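As a quick arithmetic check on the paired values above, the Fahrenheit figures follow from the standard conversion F = C × 9/5 + 32, rounded to the nearest degree. The minimal Python sketch below reproduces the pairings; the dictionary labels are just shorthand for the intervals named in the text.

```python
# Minimal sketch: reproduce the Celsius-to-Fahrenheit pairings quoted above
# using the standard conversion F = C * 9/5 + 32. The Celsius values are
# those given in the text; rounding is to the nearest whole degree Fahrenheit.

def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

estimates_degc = {
    "Glacial Maximum GAT": 13,
    "Glacial Maximum tropics": 24,
    "Glacial Maximum poles": -23,
    "Tournaisian Warm Interval GAT": 22,
    "Tournaisian Warm Interval tropics": 30,
    "Tournaisian Warm Interval poles": 1.5,
    "LPIA overall GAT": 17,
    "LPIA overall tropics": 26,
    "LPIA overall poles": -9.0,
}

for label, c in estimates_degc.items():
    print(f"{label}: {c} degC = {round(c_to_f(c))} degF")
```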
Atmospheric oxygen levels
There are a variety of methods for reconstructing past atmospheric oxygen levels, including the charcoal record, halite gas inclusions, burial rates of organic carbon and pyrite, carbon isotopes of organic material, isotope mass balance and forward modelling. Depending on the preservation of source material, some techniques represent moments in time (e.g. halite gas inclusions), whilst others cover a wider time range (e.g. the charcoal record and pyrite). Results from these different methods for the Carboniferous vary. For example, the increasing occurrence of charcoal produced by wildfires from the Late Devonian into the Carboniferous indicates rising oxygen levels, with calculations showing oxygen levels above 21% for most of the Carboniferous; halite gas inclusions from sediments dated 337–335 Ma give estimates for the Visean of c. 15.3%, although with large uncertainties; and pyrite records suggest levels of c. 15% early in the Carboniferous, rising to over 25% during the Pennsylvanian, before dropping back below 20% towards the end of the period. However, whilst exact numbers vary, all models show an overall increase in atmospheric oxygen levels from a low of 15–20% at the beginning of the Carboniferous to highs of 25–30% during the period. This was not a steady rise, but included peaks and troughs reflecting the dynamic climate conditions of the time. How atmospheric oxygen concentrations influenced the large body size of arthropods and other fauna and flora during the Carboniferous is also a subject of ongoing debate.
Effects of climate on sedimentation
The changing climate was reflected in regional-scale changes in sedimentation patterns. In the relatively warm waters of the Early to Middle Mississippian, carbonate production occurred to depth across the gently dipping continental slopes of Laurussia and North and South China (carbonate ramp architecture) and evaporites formed around the coastal regions of Laurussia, Kazakhstania, and northern Gondwana.
From the late Visean, the cooling climate restricted carbonate production to depths of less than c. 10 m forming carbonate shelves with flat-tops and steep sides. By the Moscovian, the waxing and waning of the ice sheets led to cyclothem deposition with mixed carbonate-siliciclastic sequences deposited on continental platforms and shelves.
Seasonal melting of glaciers resulted in near-freezing waters around the margins of Gondwana. This is evidenced by the occurrence of glendonite (a calcite pseudomorph after ikaite, a mineral that forms in cold, glacially influenced waters) in fine-grained, shallow marine sediments.
The glacial grinding and erosion of siliciclastic rocks across Gondwana and the Central Pangaean Mountains produced vast amounts of silt-sized sediment. Redistributed by the wind, this formed widespread deposits of loess across equatorial Pangea.
Effects of climate on biodiversity
The main phase of the LPIA was considered a crisis for marine biodiversity with the loss of many genera, followed by low biodiversity. However, recent studies of marine life suggest the rapid climate and environmental changes that accompanied the onset of the main glacial phase resulted in an adaptive radiation with a rapid increase in the number of species.
The oscillating climate conditions also led to repeated restructuring of Laurasian tropical forests between wetlands and seasonally dry ecosystems, and to the appearance and diversification of tetrapod species. There was a major restructuring of wetland forests during the Kasimovian glacial interval, with the loss of arborescent (tree-like) lycopsids and other wetland groups, and a general decline in biodiversity. These events are attributed to the drop in CO2 levels below 400 ppm. Although referred to as the Carboniferous rainforest collapse, this was a complex replacement of one type of rainforest by another, not a complete disappearance of rainforest vegetation.
Across the Carboniferous–Permian boundary interval, a rapid drop in CO2 levels and increasingly arid conditions at low-latitudes led to a permanent shift to seasonally dry woodland vegetation. Tetrapods acquired new terrestrial adaptations and there was a radiation of dryland-adapted amniotes.
Geochemistry
As the continents assembled to form Pangea, the growth of the Central Pangean Mountains led to increased weathering and carbonate sedimentation on the ocean floor, whilst the distribution of continents across the paleo-tropics meant vast areas of land were available for the spread of tropical rainforests. Together these two factors significantly increased CO2 drawdown from the atmosphere, lowering global temperatures, increasing ocean pH and triggering the Late Paleozoic Ice Age. The growth of the supercontinent also changed seafloor spreading rates and led to a decrease in the length and volume of mid-ocean ridge systems.
Magnesium/calcium isotope ratios in seawater
During the early Carboniferous, the Mg2+/Ca2+ ratio in seawater began to rise and by the Middle Mississippian aragonite seas had replaced calcite seas. The concentration of calcium in seawater is largely controlled by ocean pH, and as pH increased the calcium concentration was reduced. At the same time, the increase in weathering increased the amount of magnesium entering the marine environment. As magnesium is removed from seawater and calcium added along mid-ocean ridges, where seawater reacts with the newly formed lithosphere, the reduction in length of mid-ocean ridge systems increased the Mg2+/Ca2+ ratio further. The Mg2+/Ca2+ ratio of the seas also affects the ability of organisms to biomineralize. The Carboniferous aragonite seas favoured those that secreted aragonite, and the dominant reef builders of the time were aragonitic sponges and corals.
Strontium isotopic composition of seawater
The strontium isotopic composition (87Sr/86Sr) of seawater represents a mix of strontium derived from continental weathering, which is rich in 87Sr, and from mantle sources such as mid-ocean ridges, which are relatively depleted in 87Sr. 87Sr/86Sr ratios above 0.7075 indicate continental weathering is the main source of 87Sr, whilst ratios below this value indicate mantle-derived sources are the principal contributor.
87Sr/86Sr values varied through the Carboniferous, although they remained above 0.7075, indicating continental weathering dominated as the source of 87Sr throughout. The 87Sr/86Sr ratio was c. 0.70840 during the Tournaisian; it decreased through the Visean to 0.70771, then increased from the Serpukhovian to the lowermost Gzhelian, where it plateaued at 0.70827, before decreasing again to 0.70814 at the Carboniferous–Permian boundary. These variations reflect the changing influence of weathering and sediment supply to the oceans from the growing Central Pangean Mountains. By the Serpukhovian, basement rocks such as granite had been uplifted and exposed to weathering. The decline towards the end of the Carboniferous is interpreted as a decrease in continental weathering due to the more arid conditions.
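To make the endmember logic concrete, the seawater ratio can be treated as a simple two-endmember mixture of continental and mantle strontium. The Python sketch below is illustrative only: the endmember values (a riverine, continental figure of about 0.711 and a mid-ocean-ridge figure of about 0.703) are assumptions, not values taken from this article, and the mixture is weighted linearly, which ignores differences in strontium concentration between the two inputs.

```python
# Minimal sketch of a two-endmember mixing calculation for seawater 87Sr/86Sr.
# The endmember values below are illustrative assumptions, and the mix is
# weighted linearly, i.e. equal Sr concentrations in both inputs are assumed.

CONTINENTAL = 0.7110   # assumed riverine (continental weathering) 87Sr/86Sr
MANTLE = 0.7030        # assumed mid-ocean-ridge (mantle) 87Sr/86Sr

def seawater_ratio(continental_fraction: float) -> float:
    """87Sr/86Sr of seawater for a given fraction of continental Sr input."""
    return continental_fraction * CONTINENTAL + (1 - continental_fraction) * MANTLE

def continental_fraction(ratio: float) -> float:
    """Invert the mixing relation: fraction of Sr supplied by weathering."""
    return (ratio - MANTLE) / (CONTINENTAL - MANTLE)

# Under these assumed endmembers, the Tournaisian value quoted above
# (c. 0.70840) would imply roughly two-thirds of the Sr input was continental.
print(round(continental_fraction(0.70840), 3))   # 0.675
```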
Oxygen and carbon isotope ratios in seawater
Unlike Mg2+/Ca2+ and 87Sr/86Sr ratios, which are consistent across the world's oceans at any one time, δ18O and δ13C preserved in the fossil record can be affected by regional factors. Carboniferous δ18O and δ13C records show regional differences between the South China open-water setting and the epicontinental seas of Laurussia. These differences are due to variations in seawater salinity and evaporation in epicontinental seas relative to more open waters. However, large-scale trends can still be determined. δ13C rose rapidly from c. 0–1‰ (parts per thousand) to c. 5–7‰ in the Early Mississippian and remained high (c. 3–6‰) for the duration of the Late Paleozoic Ice Age into the early Permian. Similarly, from the Early Mississippian there was a long-term increase in δ18O values as the climate cooled.
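The δ values quoted here use the standard delta notation: the deviation of a sample's isotope ratio from that of a reference standard, expressed in parts per thousand (‰). A minimal sketch of that definition, with made-up example ratios, is:

```python
# Minimal sketch of standard delta notation, as used for the d13C and d18O
# values above: the per mil (parts per thousand) deviation of a sample's
# isotope ratio from that of a reference standard.

def delta_permil(r_sample: float, r_standard: float) -> float:
    """delta = (R_sample / R_standard - 1) * 1000, in parts per thousand."""
    return (r_sample / r_standard - 1) * 1000

# Made-up example ratios: a sample whose ratio is 0.5% higher than the
# standard has a delta value of +5 per mil.
print(delta_permil(1.005, 1.000))   # 5.0 (approximately)
```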
Both δ13C and δ18O records show significant global isotope changes (known as excursions) during the Carboniferous. The mid-Tournaisian positive δ13C and δ18O excursions lasted between 6 and 10 million years and were accompanied by a c. 6‰ positive excursion in organic matter δ15N values, a negative excursion in carbonate δ238U and a positive excursion in carbonate-associated sulphate δ34S. These changes in seawater geochemistry are interpreted as a decrease in atmospheric CO2 due to increased organic matter burial and widespread ocean anoxia, triggering climate cooling and the onset of glaciation.
The Mississippian-Pennsylvanian boundary positive δ18O excursion occurred at the same time as global sea level falls and widespread glacial deposits across southern Gondwana, indicating climate cooling and ice build-up. The rise in 87Sr/86Sr just before the δ18O excursion suggests climate cooling in this case was caused by increased continental weathering of the growing Central Pangean Mountains and the influence of the orogeny on precipitation and surface water flow rather than increased burial of organic matter. δ13C values show more regional variation, and it is unclear whether there is a positive δ13C excursion or a readjustment from previous lower values.
During the early Kasimovian there was a short (<1 Myr), intense glacial period, which came to a sudden end as atmospheric CO2 concentrations rapidly rose. There was a steady increase in arid conditions across tropical regions and a major reduction in the extent of tropical rainforests, as shown by the widespread loss of coal deposits from this time. The resulting reduction in productivity and burial of organic matter led to increasing atmospheric CO2 levels, recorded by a negative δ13C excursion and an accompanying, but smaller, decrease in δ18O values.
Life
Plants
Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time. The main early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the "seed ferns", an assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during the late Carboniferous, several other groups, Cycadophyta (cycads), the Callistophytales (another group of "seed ferns"), and the Voltziales, appeared.
The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club-mosses of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included Lepidodendron (with its cone called Lepidostrobus), Anabathra, Lepidophloios and Sigillaria. The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex, which also provided stability, instead of the xylem. The Cladoxylopsids were large trees that were ancestors of ferns, first arising in the Carboniferous.
The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and "seed ferns" include Pecopteris, Cyclopteris, Neuropteris, Alethopteris, and Sphenopteris; Megaphyton and Caulopteris were tree ferns.
The Equisetales included the common giant form Calamites, which grew to tree size. Sphenophyllum was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods.
Cordaites, a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; its catkin-like reproductive organs, which bore ovules/seeds, are called Cardiocarpus. These plants were thought to live in swamps. True coniferous trees (Walchia, of the order Voltziales) appeared later in the Carboniferous and preferred higher, drier ground.
Marine invertebrates
In the oceans, the main marine invertebrate groups were the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). The diversity of brachiopods and fusulinid foraminiferans surged beginning in the Visean and continued through the end of the Carboniferous, although cephalopod and nektonic conodont diversity declined. This evolutionary radiation is known as the Carboniferous–Earliest Permian Biodiversification Event. For the first time foraminifera took a prominent part in the marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan, and North America; other important genera include Valvulina, Endothyra, Archaediscus, and Saccammina (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The first true priapulids appeared during this period.
The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the calcisponges Cotyliscus and Girtycoelia, the demosponge Chaetetes, and the genus of unusual colonial glass sponges Titusvillia. Both reef-building and solitary corals diversified and flourished; these included rugose (for example, Caninia, Corwenia, Neozaphrentis), heterocoral, and tabulate (for example, Chladochonus, Michelinia) forms. Conulariids were well represented by Conularia.
Bryozoa are abundant in some regions; the fenestellids include Fenestella, Polypora, and Archimedes, so named because it is in the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which reached very large sizes for brachiopods and had very thick shells (for example, Gigantoproductus), while others like Chonetes were more conservative in form. Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include Discina and Crania. Some species and genera had a very wide distribution with only minor variations.
Annelids such as Serpulites are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance. Typical genera include Aviculopecten, Posidonomya, Nucula, Carbonicola, Edmondia, and Modiola. Gastropods are also numerous, including the genera Murchisonia, Euomphalus, Naticopsis. Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids such as Aenigmatoceras are common.
Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included Amphissites, Bairdia, Beyrichiopsis, Cavellina, Coryellina, Cribroconcha, Hollinella, Kirkbya, Knoxiella, and Libumella. Crinoids were highly numerous during the Carboniferous, though they suffered a gradual decline in diversity during the Middle Mississippian. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include Cyathocrinus, Woodocrinus, and Actinocrinus. Echinoids such as Archaeocidaris and Palaeechinus were also present. The blastoids, which included the Pentremitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attained their maximum development at this time.
Freshwater and lagoonal invertebrates
Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as Anthraconaia, Naiadites, and Carbonicola; diverse crustaceans such as Candona, Carbonita, Darwinula, Estheria, Acanthocaris, Dithyrocaris, and Anthrapalaemon. The eurypterids were also diverse, and are represented by such genera as Adelophthalmus, Megarachne (originally misinterpreted as a giant spider, hence its name) and the specialised very large Hibbertopterus. Many of these were amphibious. Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as Lingula, Orbiculoidea, and Productus being found in the thin beds known as marine bands.
Terrestrial invertebrates
Fossil remains of air-breathing insects, myriapods, and arachnids are known from the Carboniferous. Their diversity when they do appear shows that these arthropods were already well developed and numerous. Some arthropods grew to large sizes, with the millipede-like Arthropleura being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), among which was Meganeura, a giant dragonfly-like insect that was among the largest flying insects ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches).
Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: Archaeoptilus, from the Derbyshire coalfield, had a large wing, part of which is preserved, and some specimens (Brodia) still exhibit traces of brilliant wing colors. In the Nova Scotian tree trunks land snails (Archaeozonites, Dendropupa) have been found.
Fish
Many fish inhabited the Carboniferous seas, predominantly elasmobranchs (sharks and their relatives). These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other groups of elasmobranchs, like the ctenacanthiforms, grew to large sizes, with some genera like Saivodus reaching around 6–9 meters (20–30 feet). Other fish had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the other cartilaginous fish were marine, but others, like the Xenacanthida and several genera such as Bandringa, invaded the fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large size.
Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole. Freshwater fish were abundant, and include the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus. Chondrichthyes (especially holocephalans like the stethacanthids) underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian left many environmental niches unoccupied and allowed new organisms to evolve and fill them. As a result of this radiation, Carboniferous holocephalans assumed a wide variety of bizarre shapes, including Stethacanthus, which possessed a flat brush-like dorsal fin with a patch of denticles on its top. Stethacanthus' unusual fin may have been used in mating rituals.
Other groups, like the eugeneodonts, filled the niches left by large predatory placoderms. These fish were unique in that they possessed only one row of teeth in either the upper or lower jaw, in the form of elaborate tooth whorls. The first members of the Helicoprionidae, a family of eugeneodonts characterized by a single circular tooth whorl in the lower jaw, appeared during the early Carboniferous. Perhaps the most bizarre radiation of holocephalans at this time was that of the iniopterygiforms, an order of holocephalans that greatly resembled modern-day flying fish and could have also "flown" through the water with their massive, elongated pectoral fins. They were further characterized by their large eye sockets, club-like structures on their tails, and spines on the tips of their fins.
Tetrapods
Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included basal tetrapod groups classified in early books under the Labyrinthodontia. These had a long body, a head covered with bony plates, and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, many of which were quite small. Some Carboniferous amphibians were aquatic and lived in rivers (Loxomma, Eogyrinus, Proterogyrinus); others may have been semi-aquatic (Ophiderpeton, Amphibamus, Hyloplesion) or terrestrial (Dendrerpeton, Tuditanus, Anthracosaurus).
The Carboniferous rainforest collapse slowed the evolution of amphibians, which could not survive as well in the cooler, drier conditions. Amniotes, however, prospered because of specific key adaptations. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed the laying of eggs in a dry environment, as well as keratinized scales and claws, allowing for the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles (Hylonomus) and the earliest known synapsid (Archaeothyris). Synapsids quickly became large and diversified in the Permian, only for their dominance to end during the Mesozoic. Sauropsids (reptiles, and also, later, birds) also diversified but remained small until the Mesozoic, during which they dominated the land, as well as the water and sky, only for their dominance to end during the Cenozoic.
Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous amniotes had already diversified into a number of groups, including several families of synapsid pelycosaurs, protorothyridids, captorhinids, saurians and araeoscelids.
Fungi
As plants and animals were growing in size and abundance in this time, land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the late Carboniferous.
Extinction events
Romer's gap
The first 15 million years of the Carboniferous have a very limited terrestrial fossil record. While it has long been debated whether the gap is an artefact of poor fossil preservation or reflects a real event, recent work indicates there was a drop in atmospheric oxygen levels, suggesting some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts and the rise of the more advanced temnospondylian and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna.
Carboniferous rainforest collapse
Before the end of the Carboniferous, an extinction event occurred. On land this event is referred to as the Carboniferous rainforest collapse. Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels. The new climatic conditions were not favorable to the growth of rainforest and the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by a much less diverse tree-fern-dominated flora.
Amphibians, the dominant vertebrates at the time, fared poorly through this event with large losses in biodiversity; reptiles continued to diversify through key adaptations that let them survive in the drier habitat, specifically the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts.
| Physical sciences | Geological timescale | Earth science |
5409 | https://en.wikipedia.org/wiki/Commelinales | Commelinales | Commelinales is an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as a lack of mycorrhizal associations and tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest a range of origin between 123 and 73 million years, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others.
Taxonomy
According to the most recent classification scheme, the APG IV of 2016, the order includes five families:
Commelinaceae
Haemodoraceae
Hanguanaceae
Philydraceae
Pontederiaceae
This is unchanged from the APG III of 2009 and the APG II of 2003, but different from the older APG system of 1998, which did not include Hanguanaceae.
Previous classification systems
The older Cronquist system of 1981, which was based purely on morphological data, placed the order in subclass Commelinidae of class Liliopsida and included the families Commelinaceae, Mayacaceae, Rapateaceae and Xyridaceae. These families are now known to be only distantly related.
In the classification system of Dahlgren the Commelinales were one of four orders in the superorder Commeliniflorae (also called Commelinanae), and contained five families, of which only Commelinaceae has been retained by the Angiosperm Phylogeny Group.
| Biology and health sciences | Commelinales | Plants |
5421 | https://en.wikipedia.org/wiki/Cardiology | Cardiology | Cardiology is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a sub-specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery.
Specializations
All cardiologists in the branch of medicine study the disorders of the heart, but the study of adult and child heart disorders each require different training pathways. Therefore, an adult cardiologist (often simply called "cardiologist") is inadequately trained to take care of children, and pediatric cardiologists are not trained to treat adult heart disease. Surgical aspects outside of cardiac rhythm device implant are not included in cardiology and are in the domain of cardiothoracic surgery. For example, coronary artery bypass surgery (CABG), cardiopulmonary bypass and valve replacement are surgical procedures performed by surgeons, not cardiologists. Typically a cardiologist would first identify who is in need of cardiac surgery and refer them to a cardiac surgeon for the procedure. However, some invasive procedures such as cardiac catheterization and pacemaker implantation are performed by cardiologists.
Adult cardiology
Cardiology is a specialty of internal medicine.
To become a cardiologist in the United States, a three-year residency in internal medicine is followed by a three-year fellowship in cardiology. It is possible to specialize further in a sub-specialty. Recognized sub-specialties in the U.S. by the Accreditation Council for Graduate Medical Education are clinical cardiac electrophysiology, interventional cardiology, adult congenital heart disease, and advanced heart failure and transplant cardiology. Cardiologists may further become certified in echocardiography by the National Board of Echocardiography, in nuclear cardiology by the Certification Board of Nuclear Cardiology, in cardiovascular computed tomography by the Certification Board of Cardiovascular Computed Tomography, and in cardiovascular MRI by the Certification Board of Cardiovascular Magnetic Resonance. Recognized subspecialties in the U.S. by the American Osteopathic Association Bureau of Osteopathic Specialists include clinical cardiac electrophysiology and interventional cardiology.
In India, a three-year residency in General Medicine or Pediatrics after M.B.B.S. and then three years of residency in cardiology are needed to be a D.M. (holder of a Doctorate of Medicine [D.M.])/Diplomate of National Board (DNB) in Cardiology.
Per Doximity, adult cardiologists earn an average of $436,849 per year in the U.S.
Cardiac electrophysiology
Cardiac electrophysiology is the science of elucidating, diagnosing, and treating the electrical activities of the heart. The term is usually used to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation (PES). These studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures increasingly include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures.
Other therapeutic modalities employed in this field include antiarrhythmic drug therapy and implantation of pacemakers and automatic implantable cardioverter-defibrillators (AICD).
The cardiac electrophysiology study typically measures the response of the injured or cardiomyopathic myocardium to PES on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of electrophysiology-study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or AICD.
Clinical cardiac electrophysiology
Clinical cardiac electrophysiology is a branch of the medical specialty of cardiology and is concerned with the study and treatment of rhythm disorders of the heart. Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart. Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances (arrhythmias). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia.
The training required to become an electrophysiologist is long, requiring eight years after medical school (within the U.S.): three years of internal medicine residency, three years of cardiology fellowship, and two years of clinical cardiac electrophysiology fellowship.
Cardiogeriatrics
Cardiogeriatrics, or geriatric cardiology, is the branch of cardiology and geriatric medicine that deals with the cardiovascular disorders in elderly people.
Cardiac disorders such as coronary heart disease, including myocardial infarction, heart failure, cardiomyopathy, and arrhythmias such as atrial fibrillation, are common and are a major cause of mortality in elderly people. Vascular disorders such as atherosclerosis and peripheral arterial disease cause significant morbidity and mortality in aged people.
Imaging
Cardiac imaging includes echocardiography (echo), cardiac magnetic resonance imaging (CMR), and computed tomography of the heart.
Those who specialize in cardiac imaging may undergo more training in all imaging modes or focus on a single imaging modality.
Echocardiography (or "echo") uses standard two-dimensional, three-dimensional, and Doppler ultrasound to create images of the heart. It is used to evaluate and quantify cardiac size and function, valvular function, and can assist with diagnosis and treatment of conditions including heart failure, heart attack, valvular heart disease, congenital heart defects, pericardial disease, and aortic disease.
Those who specialize in echo may spend a significant amount of their clinical time reading echos and performing transesophageal echo, in particular using the latter during procedures such as insertion of a left atrial appendage occlusion device. Transesophageal echo provides higher spatial resolution than transthoracic echocardiography and, because the probe is located in the esophagus, it is not limited by attenuation from anterior chest structures such as the ribs, chest wall, breasts, and lungs that can hinder the quality of transthoracic echocardiography. It is indicated in a variety of situations, including: when the standard transthoracic echocardiogram is nondiagnostic; for detailed evaluation of abnormalities that are typically in the far field, such as the aorta and left atrial appendage; for evaluation of native or prosthetic heart valves, cardiac masses, endocarditis, and valvular abscesses; and for the evaluation of a cardiac source of embolus. It is frequently used in the setting of atrial fibrillation or atrial flutter to facilitate clinical decisions with regard to anticoagulation, cardioversion and/or radiofrequency ablation.
Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis.
Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries.
Interventional cardiology
Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest.
The main advantages of the interventional cardiology or radiology approach are the avoidance of the scars, pain, and long post-operative recovery associated with open surgery. Additionally, the interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. This procedure can also be done proactively, when areas of the vascular system become occluded from atherosclerosis. The cardiologist threads the sheath through the vascular system to access the heart. This sheath has a balloon and a tiny wire mesh tube wrapped around it, and if the cardiologist finds a blockage or stenosis, they can inflate the balloon at the occlusion site in the vascular system to flatten or compress the plaque against the vascular wall. Once that is complete, a stent is placed as a type of scaffold to hold the vasculature open permanently.
Cardiomyopathy/heart failure
A relatively newer specialization of cardiology is the field of heart failure and heart transplantation. Cardiomyopathy is a disease of the heart muscle that makes it larger or stiffer, sometimes making the heart worse at pumping blood. Cardiologists who narrow their practice to the cardiomyopathies often also specialize in heart transplantation and pulmonary hypertension.
Cardiooncology
A recent specialization of cardiology is that of cardiooncology.
This area specializes in the cardiac management of those with cancer, and in particular those with plans for chemotherapy or those who have experienced cardiac complications of chemotherapy.
Preventive cardiology and cardiac rehabilitation
In recent times, the focus has gradually shifted to preventive cardiology due to the increased cardiovascular disease burden at an early age. According to the WHO, 37% of all premature deaths are due to cardiovascular diseases, and out of these, 82% are in low and middle income countries. Clinical cardiology is the subspecialty of cardiology that looks after preventive cardiology and cardiac rehabilitation. Preventive cardiology also deals with routine preventive checkups through noninvasive tests, specifically electrocardiography, fasegraphy, stress tests, lipid profiles and general physical examination, to detect any cardiovascular diseases at an early age, while cardiac rehabilitation is the branch of cardiology that helps a person regain their overall strength and live a normal life after a cardiovascular event. A subspecialty of preventive cardiology is sports cardiology. Because heart disease is the leading cause of death in the world, including the United States (cdc.gov), national health campaigns and randomized controlled research have developed to improve heart health.
Pediatric cardiology
Helen B. Taussig is known as the founder of pediatric cardiology. She became famous through her work on Tetralogy of Fallot, a congenital heart defect in which oxygenated and deoxygenated blood mix and enter the circulatory system as a result of a ventricular septal defect (VSD) right beneath the aorta. This condition causes newborns to have a bluish tint (cyanosis) and a deficiency of oxygen in their tissues (hypoxemia). She worked with Alfred Blalock and Vivien Thomas at the Johns Hopkins Hospital, where they experimented with dogs to look at how they might surgically treat these "blue babies". They eventually figured out how to do just that by the anastomosis of a systemic artery to the pulmonary artery, and called this the Blalock-Taussig shunt.
Tetralogy of Fallot, pulmonary atresia, double outlet right ventricle, transposition of the great arteries, persistent truncus arteriosus, and Ebstein's anomaly are various congenital cyanotic heart diseases, in which the blood of the newborn is not oxygenated efficiently, due to the heart defect.
Adult congenital heart disease
As more children with congenital heart disease are surviving into adulthood, a hybrid of adult and pediatric cardiology has emerged called adult congenital heart disease (ACHD).
This field can be entered as either adult or pediatric cardiology.
ACHD specializes in congenital heart disease in the setting of adult diseases (e.g., coronary artery disease, COPD, diabetes), a combination that is otherwise atypical for adult or pediatric cardiology.
The heart
As the center focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries. The heart is located in the middle of the chest, with its apex pointing slightly towards the left side of the chest.
Disorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the U.S. and caused 24.95% of total deaths in 2008.
The primary responsibility of the heart is to pump blood throughout the body.
It pumps blood from the body (the systemic circulation) through the lungs (the pulmonary circulation) and then back out to the body. This means that the heart is connected to and affects the entirety of the body. Simplified, the heart is a circuit of the circulation. While plenty is known about the healthy heart, the bulk of study in cardiology is in disorders of the heart and the restoration, where possible, of function.
The heart is a muscle that squeezes blood and functions like a pump. The heart's systems can be classified as either electrical or mechanical, and both of these systems are susceptible to failure or dysfunction.
The electrical system of the heart is centered on the periodic contraction (squeezing) of the muscle cells that is caused by the cardiac pacemaker located in the sinoatrial node.
The study of the electrical aspects is a sub-field of electrophysiology called cardiac electrophysiology and is epitomized with the electrocardiogram (ECG/EKG).
The action potentials generated in the pacemaker propagate throughout the heart in a specific pattern. The system that carries this potential is called the electrical conduction system.
Dysfunction of the electrical system manifests in many ways and may include Wolff–Parkinson–White syndrome, ventricular fibrillation, and heart block.
The mechanical system of the heart is centered on the fluidic movement of blood and the functionality of the heart as a pump.
The mechanical part is ultimately the purpose of the heart and many of the disorders of the heart disrupt the ability to move blood.
Heart failure is one condition in which the mechanical properties of the heart have failed or are failing, which means insufficient blood is being circulated. Failure to move a sufficient amount of blood through the body can cause damage or failure of other organs and may result in death if severe.
Coronary circulation
Coronary circulation is the circulation of blood in the blood vessels of the heart muscle (the myocardium). The vessels that deliver oxygen-rich blood to the myocardium are known as coronary arteries. The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein and the anterior cardiac veins.
As the left and right coronary arteries run on the surface of the heart, they can be called epicardial coronary arteries. These arteries, when healthy, are capable of autoregulation to maintain coronary blood flow at levels appropriate to the needs of the heart muscle. These relatively narrow vessels are commonly affected by atherosclerosis and can become blocked, causing angina or myocardial infarction (a.k.a., a heart attack). The coronary arteries that run deep within the myocardium are referred to as subendocardial.
The coronary arteries are classified as "end circulation", since they represent the only source of blood supply to the myocardium; there is very little redundant blood supply, which is why blockage of these vessels can be so critical.
Cardiac examination
The cardiac examination (also called the "precordial exam") is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations, especially the respiratory examination.
Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation.
Heart disorders
Cardiology is concerned with the normal functionality of the heart and the deviation from a healthy heart. Many disorders involve the heart itself, but some are outside of the heart and in the vascular system. Collectively, the two are jointly termed the cardiovascular system, and diseases of one part tend to affect the other.
Coronary artery disease
Coronary artery disease, also known as "ischemic heart disease", is a group of diseases that includes stable angina, unstable angina and myocardial infarction, and is one of the causes of sudden cardiac death. It is within the group of cardiovascular diseases, of which it is the most common type. A common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. Usually symptoms occur with exercise or emotional stress, last less than a few minutes, and get better with rest. Shortness of breath may also occur and sometimes no symptoms are present. The first sign is occasionally a heart attack. Other complications include heart failure or an irregular heartbeat.
Risk factors include: high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol, among others. Other risks include depression. The underlying mechanism involves atherosclerosis of the arteries of the heart. A number of tests may help with diagnoses including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, and coronary angiogram, among others.
Prevention is by eating a healthy diet, regular exercise, maintaining a healthy weight and not smoking. Sometimes medication for diabetes, high cholesterol, or high blood pressure is also used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets (including aspirin), beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2013 CAD was the most common cause of death globally, resulting in 8.14 million deaths (16.8%) up from 5.74 million deaths (12%) in 1990. The risk of death from CAD for a given age has decreased between 1980 and 2010 especially in developed countries. The number of cases of CAD for a given age has also decreased between 1990 and 2010. In the U.S. in 2010 about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45. Rates are higher among men than women of a given age.
Cardiomyopathy
Heart failure is the impaired pumping function of the heart; cardiomyopathy, a disease of the heart muscle, is one of its causes. There are numerous causes and forms of heart failure.
The causes of cardiomyopathy can be genetic, viral, or lifestyle-related. Key symptoms of cardiomyopathy include shortness of breath, fatigue, and irregular heartbeats. Understanding the specific function of cardiac muscle is crucial, as the heart muscle's main role is to pump blood throughout the body efficiently.
Cardiac arrhythmia
Cardiac arrhythmia, also known as "cardiac dysrhythmia" or "irregular heartbeat", is a group of conditions in which the heartbeat is too fast, too slow, or irregular in its rhythm. A heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia. A heart rate that is too slow – below 60 beats per minute – is called bradycardia. Many types of arrhythmia present no symptoms. When symptoms are present, they may include palpitations, or feeling a pause between heartbeats. More serious symptoms may include lightheadedness, passing out, shortness of breath, or chest pain. While most types of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in cardiac arrest.
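As a simple illustration of the rate thresholds given above (over 100 beats per minute is tachycardia, under 60 is bradycardia in adults), the Python sketch below classifies a resting adult heart rate. The function name and the handling of the boundary values are assumptions for illustration; the sketch says nothing about rhythm irregularity, and the thresholds do not apply to children, whose normal range depends on age.

```python
# Minimal sketch of the adult resting heart-rate categories described above:
# above 100 beats per minute is tachycardia, below 60 is bradycardia.
# The function name and boundary handling are illustrative assumptions, and
# rate alone does not capture rhythm irregularity.

def classify_adult_heart_rate(bpm: float) -> str:
    """Classify a resting adult heart rate by the thresholds in the text."""
    if bpm > 100:
        return "tachycardia"
    if bpm < 60:
        return "bradycardia"
    return "normal rate"

print(classify_adult_heart_rate(110))  # tachycardia
print(classify_adult_heart_rate(72))   # normal rate
print(classify_adult_heart_rate(45))   # bradycardia
```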
There are four main types of arrhythmia: extra beats, supraventricular tachycardias, ventricular arrhythmias, and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions, and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter, and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Arrhythmias are due to problems with the electrical conduction system of the heart. Arrhythmias may occur in children; however, the normal range for the heart rate is different and depends on age. A number of tests can help diagnose arrhythmia, including an electrocardiogram and Holter monitor.
Most arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers or agents that attempt to restore a normal heart rhythm, such as procainamide. This latter group may have more significant side effects, especially if taken for a long period of time. Pacemakers are often used for slow heart rates. Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia may receive urgent treatment with a jolt of electricity in the form of cardioversion or defibrillation.
Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population. Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease or about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people.
Cardiac arrest
Cardiac arrest is a sudden stop in effective blood flow due to the failure of the heart to contract effectively. Symptoms include loss of consciousness and abnormal or absent breathing. Some people may have chest pain, shortness of breath, or nausea before this occurs. If not treated within minutes, death usually occurs.
The most common cause of cardiac arrest is coronary artery disease. Less common causes include major blood loss, lack of oxygen, very low potassium, heart failure, and intense physical exercise. A number of inherited disorders may also increase the risk including long QT syndrome. The initial heart rhythm is most often ventricular fibrillation. The diagnosis is confirmed by finding no pulse. While a cardiac arrest may be caused by heart attack or heart failure these are not the same.
Prevention includes not smoking, physical activity, and maintaining a healthy weight. Treatment for cardiac arrest is immediate cardiopulmonary resuscitation (CPR) and, if a shockable rhythm is present, defibrillation. Among those who survive targeted temperature management may improve outcomes. An implantable cardiac defibrillator may be placed to reduce the chance of death from recurrence.
In the United States, cardiac arrest outside of hospital occurs in about 13 per 10,000 people per year (326,000 cases). In-hospital cardiac arrest occurs in an additional 209,000 people per year. Cardiac arrest becomes more common with age. It affects males more often than females. The percentage of people who survive with treatment is about 8%. Many who survive have significant disability. Many U.S. television shows, however, have portrayed unrealistically high survival rates of 67%.
Hypertension
Hypertension, also known as "high blood pressure", is a long term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms. Long term high blood pressure, however, is a major risk factor for coronary artery disease, stroke, heart failure, peripheral vascular disease, vision loss, and chronic kidney disease.
Lifestyle factors can increase the risk of hypertension. These include excess salt in the diet, excess body weight, smoking, and alcohol consumption. Hypertension can also be caused by other diseases, or occur as a side-effect of drugs.
Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively. Normal blood pressure at rest is within the range of 100–140 millimetres of mercury (mmHg) systolic and 60–90 mmHg diastolic. High blood pressure is present if the resting blood pressure is persistently at or above 140/90 mmHg for most adults. Different numbers apply to children. When diagnosing high blood pressure, ambulatory blood pressure monitoring over a 24-hour period appears to be more accurate than "in-office" blood pressure measurement at a physician's office or other blood pressure screening location.
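As a simple illustration of the resting thresholds stated above, the sketch below flags a single adult reading against the 140/90 mmHg cut-off. It is a deliberate simplification: actual diagnosis relies on persistently elevated readings, often confirmed by ambulatory monitoring, and different numbers apply to children.

```python
# Minimal sketch of the adult resting blood-pressure threshold described
# above: high blood pressure if readings are persistently at or above
# 140/90 mmHg. This is a simplification for illustration only; a single
# reading is not diagnostic, and children use different thresholds.

def is_hypertensive(systolic_mmhg: float, diastolic_mmhg: float) -> bool:
    """True if a single resting adult reading meets the 140/90 mmHg threshold."""
    return systolic_mmhg >= 140 or diastolic_mmhg >= 90

print(is_hypertensive(120, 80))  # False
print(is_hypertensive(150, 85))  # True
print(is_hypertensive(135, 95))  # True
```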
Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. Lifestyle changes include weight loss, decreased salt intake, physical exercise, and a healthy diet. If changes in lifestyle are insufficient, blood pressure medications may be used. A regimen of up to three medications effectively controls blood pressure in 90% of people. The treatment of moderate to severe high arterial blood pressure (defined as >160/100 mmHg) with medication is associated with an improved life expectancy and reduced morbidity. The effect of treatment for blood pressure between 140/90 mmHg and 160/100 mmHg is less clear, with some studies finding benefits while others do not. High blood pressure affects between 16% and 37% of the population globally. In 2010, hypertension was believed to have been a factor in 18% (9.4 million) deaths.
Essential vs Secondary hypertension
Essential hypertension is the form of hypertension that by definition has no identifiable cause. It is the most common type of hypertension, affecting 95% of hypertensive patients; it tends to be familial and is likely to be the consequence of an interaction between environmental and genetic factors. The prevalence of essential hypertension increases with age, and individuals with relatively high blood pressure at younger ages are at increased risk for the subsequent development of hypertension.
Hypertension can increase the risk of cerebral, cardiac, and renal events.
Secondary hypertension is a type of hypertension which is caused by an identifiable underlying secondary cause. It is much less common than essential hypertension, affecting only 5% of hypertensive patients. It has many different causes including endocrine diseases, kidney diseases, and tumors. It also can be a side effect of many medications.
Complications of hypertension
Complications of hypertension are clinical outcomes that result from persistent elevation of blood pressure. Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries.
Congenital heart defects
A congenital heart defect, also known as a "congenital heart anomaly" or "congenital heart disease", is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem. Symptoms can vary from none to life-threatening. When present they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain. Most congenital heart problems do not occur with other diseases. Complications that can result from heart defects include heart failure.
The cause of a congenital heart defect is often unknown. Certain cases may be due to infections during pregnancy such as rubella, use of certain medications or drugs such as alcohol or tobacco, parents being closely related, or poor nutritional status or obesity in the mother. Having a parent with a congenital heart defect is also a risk factor. A number of genetic conditions are associated with heart defects including Down syndrome, Turner syndrome, and Marfan syndrome. Congenital heart defects are divided into two main groups: cyanotic heart defects and non-cyanotic heart defects, depending on whether the child has the potential to turn bluish in color. The problems may involve the interior walls of the heart, the heart valves, or the large blood vessels that lead to and from the heart.
Congenital heart defects are partly preventable through rubella vaccination, the adding of iodine to salt, and the adding of folic acid to certain food products. Some defects do not need treatment. Others may be effectively treated with catheter-based procedures or heart surgery. Occasionally a number of operations may be needed, and occasionally heart transplantation is required. With appropriate treatment, outcomes are generally good, even for complex problems.
Heart defects are the most common birth defect. In 2013, they were present in 34.3 million people globally. They affect between 4 and 75 per 1,000 live births, depending upon how they are diagnosed. About 6 to 19 per 1,000 cause a moderate to severe degree of problems. Congenital heart defects are the leading cause of birth defect-related deaths: in 2013 they resulted in 323,000 deaths, down from 366,000 deaths in 1990.
Tetralogy of Fallot
Tetralogy of Fallot is the most common cyanotic congenital heart disease, arising in 1–3 cases per 1,000 births. Its hallmark features include a ventricular septal defect (VSD), an overriding aorta, and obstruction of right ventricular outflow; together these cause deoxygenated blood to bypass the lungs and re-enter the systemic circulation. The modified Blalock-Taussig shunt is usually used to restore adequate pulmonary blood flow. This procedure is done by placing a graft between the subclavian artery and the ipsilateral pulmonary artery.
Pulmonary atresia
Pulmonary atresia occurs in 7–8 per 100,000 births and is characterized by failure of the pulmonary valve to form, so that blood cannot flow from the right ventricle into the pulmonary artery. As a result, deoxygenated blood bypasses the lungs and enters the circulatory system. Surgery can correct this by repairing the connection between the right ventricle and the pulmonary artery.
There are two types of pulmonary atresia, classified by whether or not the baby also has a ventricular septal defect.
Pulmonary atresia with an intact ventricular septum: This type of pulmonary atresia is associated with complete and intact septum between the ventricles.
Pulmonary atresia with a ventricular septal defect: This type of pulmonary atresia happens when a ventricular septal defect allows blood to flow into and out of the right ventricle.
Double outlet right ventricle
Double outlet right ventricle (DORV) is when both great arteries, the pulmonary artery and the aorta, are connected to the right ventricle. A VSD is usually present, located in different positions depending on the variation of DORV: typically about 50% are subaortic and about 30% are subpulmonary. The surgical approach varies with the particular physiology and blood flow in the affected heart. One repair consists of closing the VSD and placing conduits to restore blood flow between the left ventricle and the aorta and between the right ventricle and the pulmonary artery. Another is a systemic-to-pulmonary artery shunt in cases associated with pulmonary stenosis. A balloon atrial septostomy can also be done to relieve hypoxemia caused by DORV with the Taussig-Bing anomaly while surgical correction is awaited.
Transposition of great arteries
There are two different types of transposition of the great arteries, dextro-transposition and levo-transposition, depending on where the chambers and vessels connect. Dextro-transposition happens in about 1 in 4,000 newborns and occurs when the right ventricle pumps blood into the aorta, so that deoxygenated blood enters the systemic circulation. The temporary procedure is to create an atrial septal defect. A permanent fix is more complicated and involves redirecting the pulmonary return to the right atrium and the systemic return to the left atrium, which is known as the Senning procedure. The Rastelli procedure can also be done by rerouting the left ventricular outflow, dividing the pulmonary trunk, and placing a conduit between the right ventricle and pulmonary trunk. Levo-transposition happens in about 1 in 13,000 newborns and is characterized by the left ventricle pumping blood into the lungs and the right ventricle pumping blood into the aorta. This may not produce problems at first, but problems eventually develop because of the different pressures at which each ventricle pumps blood. Switching the left ventricle to be the systemic ventricle and the right ventricle to pump blood into the pulmonary artery can repair levo-transposition.
Persistent truncus arteriosus
Persistent truncus arteriosus is when the truncus arteriosus fails to split into the aorta and pulmonary trunk. This occurs in about 1 in 11,000 live births and allows both oxygenated and deoxygenated blood into the body. The repair consists of a VSD closure and the Rastelli procedure.
Ebstein anomaly
Ebstein's anomaly is characterized by downward displacement of the tricuspid valve, a significantly enlarged right atrium, and a heart that appears box-shaped on imaging. It is very rare, occurring in less than 1% of congenital heart disease cases. The surgical repair varies depending on the severity of the disease.
Pediatric cardiology is a sub-specialty of pediatrics. To become a pediatric cardiologist in the U.S., one must complete a three-year residency in pediatrics, followed by a three-year fellowship in pediatric cardiology. Per Doximity, pediatric cardiologists make an average of $303,917 in the U.S.
Diagnostic tests in cardiology
Diagnostic tests in cardiology are methods of identifying heart conditions and distinguishing healthy from unhealthy, pathologic heart function. The starting point is obtaining a medical history, followed by auscultation. Blood tests, electrophysiological procedures, and cardiac imaging can then be ordered for further analysis. Electrophysiological procedures include the electrocardiogram, cardiac monitoring, cardiac stress testing, and the electrophysiology study.
Trials
Cardiology is known for randomized controlled trials that guide clinical treatment of cardiac diseases. While dozens are published every year, there are landmark trials that shift treatment significantly. Trials often have an acronym of the trial name, and this acronym is used to reference the trial and its results. Some of these landmark trials include:
V-HeFT (1986) — use of vasodilators (hydralazine and isosorbide dinitrate) in heart failure
ISIS-2 (1988) — use of aspirin in myocardial infarction
CAST (1991) — use of antiarrhythmic agents after a heart attack increases mortality
SOLVD (1991) — use of ACE inhibitors in heart failure
4S (1994) — statins reduce risk of heart disease
CURE (2001) — use of dual antiplatelet therapy in NSTEMI
MIRACLE (2002) — use of cardiac resynchronization therapy in heart failure
SCD-HeFT (2005) — the use of implantable cardioverter-defibrillator in heart failure
RE-LY (2009), ROCKET-AF (2011), ARISTOTLE (2011) — use of DOACs in atrial fibrillation instead of warfarin
PARADIGM-HF (2014) — use of angiotensin-neprilysin inhibitor in heart failure
ISCHEMIA (2020) — medical therapy is as good as coronary stents in stable heart disease
EMPEROR-Preserved (2021) — SGLT2 inhibitors in heart failure
Cardiology community
Associations
American College of Cardiology
American Heart Association
European Society of Cardiology
Heart Rhythm Society
Canadian Cardiovascular Society
Indian Heart Association
National Heart Foundation of Australia
Cardiology Society of India
Journals
Acta Cardiologica
American Journal of Cardiology
Annals of Cardiac Anaesthesia
Current Research: Cardiology
Cardiology in Review
Circulation
Circulation Research
Clinical and Experimental Hypertension
Clinical Cardiology
EP – Europace
European Heart Journal
Heart
Heart Rhythm
International Journal of Cardiology
Journal of the American College of Cardiology
Pacing and Clinical Electrophysiology
Indian Heart Journal
Cardiologists
Robert Atkins (1930–2003), known for the Atkins diet
Christiaan Barnard (1922–2001), cardiac surgeon who performed the world's first human-to-human heart transplant operation
Eugene Braunwald (born 1929), editor of Braunwald's Heart Disease and 1000+ publications
Wallace Brigden (1916–2008), identified cardiomyopathy
Manoj Durairaj (1971– ), cardiologist from Pune, India who received Pro Ecclesia et Pontifice
Willem Einthoven (1860–1927), a physiologist who built the first practical ECG and won the 1924 Nobel Prize in Physiology or Medicine ("for the discovery of the mechanism of the electrocardiogram")
Werner Forssmann (1904–1979), who infamously performed the first human catheterization on himself that led to him being let go from Berliner Charité Hospital, quitting cardiology as a speciality, and then winning the 1956 Nobel Prize in Physiology or Medicine ("for their discoveries concerning heart catheterization and pathological changes in the circulatory system")
Andreas Gruentzig (1939–1985), first developed balloon angioplasty
William Harvey (1578–1657), wrote Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus that first described the closed circulatory system and whom Forssmann described as founding cardiology in his Nobel lecture
Murray S. Hoffman (1924–2018) As president of the Colorado Heart Association, he initiated one of the first jogging programs promoting cardiac health
Max Holzmann (1899–1994), co-founder of the Swiss Society of Cardiology, president from 1952 to 1955
Samuel A. Levine (1891–1966), recognized the sign known as Levine's sign as well as the current grading of the intensity of heart murmurs, known as the Levine scale
Henry Joseph Llewellyn "Barney" Marriott (1917–2007), ECG interpretation and Practical Electrocardiography
Bernard Lown (1921–2021), original developer of the defibrillator
Woldemar Mobitz (1889–1951), described and classified the two types of second-degree atrioventricular block often called "Mobitz Type I" and "Mobitz Type II"
Jacqueline Noonan (1928–2020), discoverer of Noonan syndrome that is the top syndromic cause of congenital heart disease
John Parkinson (1885–1976), known for Wolff–Parkinson–White syndrome
Helen B. Taussig (1898–1986), founder of pediatric cardiology and extensively worked on blue baby syndrome
Paul Dudley White (1886–1973), known for Wolff–Parkinson–White syndrome
Fredrick Arthur Willius (1888–1972), founder of the cardiology department at the Mayo Clinic and an early pioneer of electrocardiography
Louis Wolff (1898–1972), known for Wolff–Parkinson–White syndrome
Karel Frederik Wenckebach (1864–1940), first described what is now called type I second-degree atrioventricular block in 1898
| Biology and health sciences | Fields of medicine | null |
5439 | https://en.wikipedia.org/wiki/Capricornus | Capricornus | Capricornus is one of the constellations of the zodiac. Its name is Latin for "horned goat" or "goat horn" or "having horns like a goat's", and it is commonly represented in the form of a sea goat: a mythical creature that is half goat, half fish.
Capricornus is one of the 88 modern constellations, and was also one of the 48 constellations listed by the 2nd century astronomer Claudius Ptolemy. Its old astronomical symbol is (♑︎). Under its modern boundaries it is bordered by Aquila, Sagittarius, Microscopium, Piscis Austrinus, and Aquarius. The constellation is located in an area of sky called the Sea or the Water, consisting of many water-related constellations such as Aquarius, Pisces and Eridanus. It is the smallest constellation in the zodiac.
Notable features
Stars
Capricornus is a faint constellation, with only one star above magnitude 3; its alpha star has a magnitude of only 3.6.
The brightest star in Capricornus is δ Capricorni, also called Deneb Algedi, with a magnitude of 2.9, located 39 light-years from Earth. Like several other stars such as Denebola and Deneb, it is named for the Arabic word for "tail" (deneb); combined with al-gedi ("the kid"), its traditional name means "the tail of the goat". Deneb Algedi is a Beta Lyrae variable star (a type of eclipsing binary). It ranges by about 0.2 magnitudes with a period of 24.5 hours.
The other bright stars in Capricornus range in magnitude from 3.1 to 5.1. α Capricorni is a multiple star. The primary (α2 Cap), 109 light-years from Earth, is a yellow-hued giant star of magnitude 3.6; the secondary (α1 Cap), 690 light-years from Earth, is a yellow-hued supergiant star of magnitude 4.3. The two stars are distinguishable by the naked eye, and both are themselves multiple stars. α1 Capricorni is accompanied by a star of magnitude 9.2; α2 Capricorni is accompanied by a star of magnitude 11.0; this faint star is itself a binary star with two components of magnitude 11. Also called Algedi or Giedi, the traditional names of α Capricorni come from the Arabic word for "the kid", which references the constellation's mythology.
β Capricorni is a double star also known as Dabih. It is a yellow-hued giant star of magnitude 3.1, 340 light-years from Earth. The secondary is a blue-white hued star of magnitude 6.1. The two stars are distinguishable in binoculars. β Capricorni's traditional name comes from the Arabic phrase for "the lucky stars of the slaughterer," a reference to ritual sacrifices performed by ancient Arabs at the heliacal rising of Capricornus. Another star visible to the naked eye is γ Capricorni, sometimes called Nashira ("bringing good tidings"); it is a white-hued giant star of magnitude 3.7, 139 light-years from Earth. π Capricorni is a double star with a blue-white hued primary of magnitude 5.1 and a white-hued secondary of magnitude 8.3. It is 670 light-years from Earth and the components are distinguishable in a small telescope.
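The magnitude figures quoted throughout this section can be translated into brightness ratios with the standard apparent-magnitude (Pogson) relation, which the article itself does not state; the worked example below uses the values given above for δ Capricorni (2.9) and α2 Capricorni (3.6).

```latex
% Brightness ratio implied by a difference in apparent magnitude:
\frac{F_1}{F_2} = 10^{\,0.4\,(m_2 - m_1)}
% Worked example with the magnitudes quoted above,
% delta Capricorni (m = 2.9) versus alpha^2 Capricorni (m = 3.6):
\frac{F_{\delta\,\mathrm{Cap}}}{F_{\alpha^2\,\mathrm{Cap}}} = 10^{\,0.4\,(3.6 - 2.9)} \approx 1.9
```

In other words, the constellation's brightest star is only about twice as bright as its alpha star, consistent with the description of Capricornus as a faint constellation.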
Deep-sky objects
Several galaxies and star clusters are contained within Capricornus. Messier 30 is a globular cluster located 1 degree south of the galaxy group that contains NGC 7103. The constellation also harbors the wide spiral galaxy NGC 6907.
Messier 30 (NGC 7099) is a centrally condensed globular cluster of magnitude 7.5. At a distance of 30,000 light-years, it has chains of stars extending to the north that are resolvable in small amateur telescopes.
One galaxy group located in Capricornus is HCG 87, a group of at least three galaxies located 400 million light-years from Earth (redshift 0.0296). It contains a large elliptical galaxy, a face-on spiral galaxy, and an edge-on spiral galaxy. The face-on spiral galaxy is experiencing abnormally high rates of star formation, indicating that it is interacting with one or both members of the group. Furthermore, the large elliptical galaxy and the edge-on spiral galaxy, both of which have active nuclei, are connected by a stream of stars and dust, indicating that they too are interacting. Astronomers predict that the three galaxies may merge millions of years in the future to form a giant elliptical galaxy.
History
The constellation was first attested in depictions on a cylinder-seal from around the 21st century BCE, and it was explicitly recorded in the Babylonian star catalogues before 1000 BCE. In the Early Bronze Age the winter solstice occurred in the constellation, but due to the precession of the equinoxes, the December solstice now takes place in the constellation Sagittarius. The Sun is now in the constellation Capricornus (as distinct from the astrological sign) from late January through mid-February.
Although the solstice during the northern hemisphere's winter no longer takes place while the sun is in the constellation Capricornus, as it did until 130 BCE, the astrological sign called Capricorn is still used to denote the position of the solstice, and the latitude of the sun's most southerly position continues to be called the Tropic of Capricorn, a term which also applies to the line on the Earth at which the sun is directly overhead at local noon on the day of the December solstice.
The planet Neptune was discovered by German astronomer Johann Galle, near Deneb Algedi (δ Capricorni) on 23 September 1846, as Capricornus can be seen best from Europe at 4:00 in September (although, by modern constellation boundaries established in the early 20th century CE, Neptune lay within the confines of Aquarius at the time of its discovery).
Mythology
Despite its faintness, the constellation Capricornus has one of the oldest mythological associations, having been consistently represented as a hybrid of a goat and a fish since the Middle Bronze Age, when the Babylonians used "The Goat-Fish" as a symbol of their god Ea.
In Greek mythology, the constellation is sometimes identified as Amalthea, the goat that suckled the infant Zeus after his mother, Rhea, saved him from being devoured by his father, Cronos. Amalthea's broken horn was transformed into the cornucopia or "horn of plenty".
Capricornus is also sometimes identified as Pan, the god with a goat's horns and legs, who saved himself from the monster Typhon by giving himself a fish's tail and diving into a river.
Visualizations
Capricornus's brighter stars are found on a triangle whose vertices are α2 Capricorni (Giedi), δ Capricorni (Deneb Algiedi), and ω Capricorni. Ptolemy's method of connecting the stars of Capricornus has been influential. Capricornus is usually drawn as a goat with the tail of a fish.
H. A. Rey has suggested an alternative visualization, which graphically shows a goat. The goat's head is formed by the triangle of stars ι Cap, θ Cap, and ζ Cap. The goat's horn sticks out with stars γ Cap and δ Cap. Star δ Cap, at the tip of the horn, is of the third magnitude. The goat's tail consists of stars β Cap and α2 Cap: star β Cap being of the third magnitude. The goat's hind foot consists of stars ψ Cap and ω Cap. Both of these stars are of the fourth magnitude.
Equivalents
In Chinese astronomy, the constellation Capricornus lies in the Black Tortoise of the North.
The Nakh peoples called this constellation Roofing Towers.
In the Society Islands, the figure of Capricornus was called Rua-o-Mere, "Cavern of parental yearnings".
In Indian astronomy and Indian astrology, it is called Makara, the crocodile.
| Physical sciences | Zodiac | Astronomy |
5617 | https://en.wikipedia.org/wiki/Cretaceous | Cretaceous | The Cretaceous is a geological period that lasted from about 143.1 to 66 million years ago (Mya). It is the third and final period of the Mesozoic Era, as well as the longest. At around 77 million years, it is the ninth and longest geological period of the entire Phanerozoic. The name is derived from the Latin creta, 'chalk', which is abundant in the latter half of the period. It is usually abbreviated K, for its German translation Kreide.
The Cretaceous was a period with a relatively warm climate, resulting in high eustatic sea levels that created numerous shallow inland seas. These oceans and seas were populated with now-extinct marine reptiles, ammonites, and rudists, while dinosaurs continued to dominate on land. The world was largely ice-free, although there is some evidence of brief periods of glaciation during the cooler first half, and forests extended to the poles.
Many of the dominant taxonomic groups present in modern times can be ultimately traced back to origins in the Cretaceous. During this time, new groups of mammals and birds appeared, including the earliest relatives of placentals and marsupials (Eutheria and Metatheria, respectively), with the earliest crown-group birds appearing toward the end of the Cretaceous. Teleost fish, the most diverse group of modern vertebrates, continued to diversify during the Cretaceous with the appearance of their most diverse subgroup, Acanthomorpha, during this period. During the Early Cretaceous, flowering plants appeared and began to rapidly diversify, becoming the dominant group of plants across the Earth by the end of the Cretaceous, coincident with the decline and extinction of previously widespread gymnosperm groups.
The Cretaceous (along with the Mesozoic) ended with the Cretaceous–Paleogene extinction event, a large mass extinction in which many groups, including non-avian dinosaurs, pterosaurs, and large marine reptiles, died out, widely thought to have been caused by the impact of a large asteroid that formed the Chicxulub crater in the Gulf of Mexico. The end of the Cretaceous is defined by the abrupt Cretaceous–Paleogene boundary (K–Pg boundary), a geologic signature associated with the mass extinction that lies between the Mesozoic and Cenozoic Eras.
Etymology and history
The Cretaceous as a separate period was first defined by Belgian geologist Jean d'Omalius d'Halloy in 1822 as the Terrain Crétacé, using strata in the Paris Basin and named for the extensive beds of chalk (calcium carbonate deposited by the shells of marine invertebrates, principally coccoliths), found in the upper Cretaceous of Western Europe. The name Cretaceous was derived from the Latin creta, meaning chalk. The twofold division of the Cretaceous was implemented by Conybeare and Phillips in 1822. Alcide d'Orbigny in 1840 divided the French Cretaceous into five étages (stages): the Neocomian, Aptian, Albian, Turonian, and Senonian, later adding the Urgonian between Neocomian and Aptian and the Cenomanian between the Albian and Turonian.
Geology
Subdivisions
The Cretaceous is divided into Early and Late Cretaceous epochs, or Lower and Upper Cretaceous series. In older literature, the Cretaceous is sometimes divided into three series: Neocomian (lower/early), Gallic (middle) and Senonian (upper/late). A subdivision into 12 stages, all originating from European stratigraphy, is now used worldwide. In many parts of the world, alternative local subdivisions are still in use.
From youngest to oldest, the subdivisions of the Cretaceous period are: the Maastrichtian, Campanian, Santonian, Coniacian, Turonian, and Cenomanian stages of the Late (Upper) Cretaceous, followed by the Albian, Aptian, Barremian, Hauterivian, Valanginian, and Berriasian stages of the Early (Lower) Cretaceous.
Boundaries
The lower boundary of the Cretaceous is currently undefined, and the Jurassic–Cretaceous boundary is the only system boundary to lack a defined Global Boundary Stratotype Section and Point (GSSP). Placing a GSSP for this boundary has been difficult because of the strong regionality of most biostratigraphic markers, and the lack of any chemostratigraphic events, such as isotope excursions (large sudden changes in ratios of isotopes), that could be used to define or correlate a boundary. Calpionellids, an enigmatic group of planktonic protists with urn-shaped calcitic tests briefly abundant during the latest Jurassic to earliest Cretaceous, have been suggested as the most promising candidates for fixing the Jurassic–Cretaceous boundary. In particular, the first appearance of Calpionella alpina, coinciding with the base of the eponymous Alpina subzone, has been proposed as the definition of the base of the Cretaceous. The working definition for the boundary has often been placed as the first appearance of the ammonite Strambergella jacobi, formerly placed in the genus Berriasella, but its use as a stratigraphic indicator has been questioned, as its first appearance does not correlate with that of C. alpina. The boundary is officially considered by the International Commission on Stratigraphy to be approximately 145 million years ago, but other estimates have been proposed based on U-Pb geochronology, ranging as young as 140 million years ago.
The upper boundary of the Cretaceous is sharply defined, being placed at an iridium-rich layer found worldwide that is believed to be associated with the Chicxulub impact crater, with its boundaries circumscribing parts of the Yucatán Peninsula and extending into the Gulf of Mexico. This layer has been dated at 66.043 Mya.
At the end of the Cretaceous, the impact of a large body with the Earth may have been the punctuation mark at the end of a progressive decline in biodiversity during the Maastrichtian age. The result was the extinction of three-quarters of Earth's plant and animal species. The impact created the sharp break known as the K–Pg boundary (formerly known as the K–T boundary). Earth's biodiversity required substantial time to recover from this event, despite the probable existence of an abundance of vacant ecological niches.
Despite the severity of the K-Pg extinction event, there were significant variations in the rate of extinction between and within different clades. Species that depended on photosynthesis declined or became extinct as atmospheric particles blocked solar energy. As is the case today, photosynthesizing organisms, such as phytoplankton and land plants, formed the primary part of the food chain in the late Cretaceous, and all else that depended on them suffered as well. Herbivorous animals, which depended on plants and plankton as their food, died out as their food sources became scarce; consequently, the top predators, such as Tyrannosaurus rex, also perished. Yet only three major groups of tetrapods disappeared completely: the nonavian dinosaurs, the plesiosaurs, and the pterosaurs. The other Cretaceous groups that did not survive into the Cenozoic, such as the ichthyosaurs, the last remaining temnospondyls (Koolasuchus), and the nonmammalian cynodonts, were already extinct millions of years before the event occurred.
Coccolithophorids and molluscs, including ammonites, rudists, freshwater snails, and mussels, as well as organisms whose food chain included these shell builders, became extinct or suffered heavy losses. For example, ammonites are thought to have been the principal food of mosasaurs, a group of giant marine lizards related to snakes that became extinct at the boundary.
Omnivores, insectivores, and carrion-eaters survived the extinction event, perhaps because of the increased availability of their food sources. At the end of the Cretaceous, there seem to have been no purely herbivorous or carnivorous mammals. Mammals and birds that survived the extinction fed on insects, larvae, worms, and snails, which in turn fed on dead plant and animal matter. Scientists theorise that these organisms survived the collapse of plant-based food chains because they fed on detritus.
In stream communities, few groups of animals became extinct. Stream communities rely less on food from living plants and more on detritus that washes in from land. This particular ecological niche buffered them from extinction. Similar, but more complex patterns have been found in the oceans. Extinction was more severe among animals living in the water column than among animals living on or in the seafloor. Animals in the water column are almost entirely dependent on primary production from living phytoplankton, while animals living on or in the ocean floor feed on detritus or can switch to detritus feeding.
The largest air-breathing survivors of the event, crocodilians and champsosaurs, were semiaquatic and had access to detritus. Modern crocodilians can live as scavengers and can survive for months without food and go into hibernation when conditions are unfavorable, and their young are small, grow slowly, and feed largely on invertebrates and dead organisms or fragments of organisms for their first few years. These characteristics have been linked to crocodilian survival at the end of the Cretaceous.
Geologic formations
The high sea level and warm climate of the Cretaceous meant large areas of the continents were covered by warm, shallow seas, providing habitat for many marine organisms. The Cretaceous was named for the extensive chalk deposits of this age in Europe, but in many parts of the world, the deposits from the Cretaceous are of marine limestone, a rock type that is formed under warm, shallow marine conditions. Due to the high sea level, there was extensive space for such sedimentation. Because of the relatively young age and great thickness of the system, Cretaceous rocks are evident in many areas worldwide.
Chalk is a rock type characteristic for (but not restricted to) the Cretaceous. It consists of coccoliths, microscopically small calcite skeletons of coccolithophores, a type of algae that prospered in the Cretaceous seas.
Stagnation of deep sea currents in middle Cretaceous times caused anoxic conditions in the sea water leaving the deposited organic matter undecomposed. Half of the world's petroleum reserves were laid down at this time in the anoxic conditions of what would become the Persian Gulf and the Gulf of Mexico. In many places around the world, dark anoxic shales were formed during this interval, such as the Mancos Shale of western North America. These shales are an important source rock for oil and gas, for example in the subsurface of the North Sea.
Europe
In northwestern Europe, chalk deposits from the Upper Cretaceous are characteristic for the Chalk Group, which forms the white cliffs of Dover on the south coast of England and similar cliffs on the French Normandian coast. The group is found in England, northern France, the low countries, northern Germany, Denmark and in the subsurface of the southern part of the North Sea. Chalk is not easily consolidated and the Chalk Group still consists of loose sediments in many places. The group also has other limestones and arenites. Among the fossils it contains are sea urchins, belemnites, ammonites and sea reptiles such as Mosasaurus.
In southern Europe, the Cretaceous is usually a marine system consisting of competent limestone beds or incompetent marls. Because the Alpine mountain chains did not yet exist in the Cretaceous, these deposits formed on the southern edge of the European continental shelf, at the margin of the Tethys Ocean.
North America
During the Cretaceous, the present North American continent was isolated from the other continents. In the Jurassic, the North Atlantic had already opened, leaving a proto-ocean between Europe and North America. From north to south across the continent, the Western Interior Seaway started forming. This inland sea separated the elevated areas of Laramidia in the west and Appalachia in the east. Three dinosaur clades found in Laramidia (troodontids, therizinosaurids and oviraptorosaurs) are absent from Appalachia from the Coniacian through the Maastrichtian.
Paleogeography
During the Cretaceous, the late-Paleozoic-to-early-Mesozoic supercontinent of Pangaea completed its tectonic breakup into the present-day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin mountain building (orogenies) that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies.
Gondwana had begun to break up during the Jurassic Period, but its fragmentation accelerated during the Cretaceous and was largely complete by the end of the period. South America, Antarctica, and Australia rifted away from Africa (though India and Madagascar remained attached to each other until around 80 million years ago); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide. To the north of Africa the Tethys Sea continued to narrow. During most of the Late Cretaceous, North America was divided in two by the Western Interior Seaway, a large interior sea separating Laramidia to the west from Appalachia to the east; the seaway receded late in the period, leaving thick marine deposits sandwiched between coal beds. Bivalve palaeobiogeography also indicates that Africa was split in half by a shallow sea during the Coniacian and Santonian, connecting the Tethys with the South Atlantic by way of the central Sahara and Central Africa, which were then underwater. Yet another shallow seaway ran between what is now Norway and Greenland, connecting the Tethys to the Arctic Ocean and enabling biotic exchange between the two oceans. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged.
The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous rock record especially fine. Famous formations from North America include the rich marine fossils of Kansas's Smoky Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous exposures occur in Europe (e.g., the Weald) and China (the Yixian Formation). In the area that is now India, massive lava beds called the Deccan Traps were erupted in the very late Cretaceous and early Paleocene.
Climate
Palynological evidence indicates the Cretaceous climate had three broad phases: a Berriasian–Barremian warm-dry phase, an Aptian–Santonian warm-wet phase, and a Campanian–Maastrichtian cool-dry phase. As in the Cenozoic, the 400,000 year eccentricity cycle was the dominant orbital cycle governing carbon flux between different reservoirs and influencing global climate. The location of the Intertropical Convergence Zone (ITCZ) was roughly the same as in the present.
The cooling trend of the last epoch of the Jurassic, the Tithonian, continued into the Berriasian, the first age of the Cretaceous. The North Atlantic seaway opened and enabled the flow of cool water from the Boreal Ocean into the Tethys. There is evidence that snowfalls were common in the higher latitudes during this age, and the tropics became wetter than during the Triassic and Jurassic. Glaciation was restricted to high-latitude mountains, though seasonal snow may have existed farther from the poles. After the end of the first age, however, temperatures began to increase again, with a number of thermal excursions, such as the middle Valanginian Weissert Thermal Excursion (WTX), which was caused by the Paraná-Etendeka Large Igneous Province's activity. It was followed by the middle Hauterivian Faraoni Thermal Excursion (FTX) and the early Barremian Hauptblatterton Thermal Event (HTE). The HTE marked the ultimate end of the Tithonian-early Barremian Cool Interval (TEBCI). During this interval, precession was the dominant orbital driver of environmental changes in the Vocontian Basin. For much of the TEBCI, northern Gondwana experienced a monsoonal climate. A shallow thermocline existed in the mid-latitude Tethys. The TEBCI was followed by the Barremian-Aptian Warm Interval (BAWI). This hot climatic interval coincides with Manihiki and Ontong Java Plateau volcanism and with the Selli Event. Early Aptian tropical sea surface temperatures (SSTs) were 27–32 °C, based on TEX86 measurements from the equatorial Pacific. During the Aptian, Milankovitch cycles governed the occurrence of anoxic events by modulating the intensity of the hydrological cycle and terrestrial runoff. The early Aptian was also notable for its millennial scale hyperarid events in the mid-latitudes of Asia. The BAWI itself was followed by the Aptian-Albian Cold Snap (AACS) that began about 118 Ma. A short, relatively minor ice age may have occurred during this so-called "cold snap", as evidenced by glacial dropstones in the western parts of the Tethys Ocean and the expansion of calcareous nannofossils that dwelt in cold water into lower latitudes. The AACS is associated with an arid period in the Iberian Peninsula.
Temperatures increased drastically after the end of the AACS, which ended around 111 Ma with the Paquier/Urbino Thermal Maximum, giving way to the Mid-Cretaceous Hothouse (MKH), which lasted from the early Albian until the early Campanian. Faster rates of seafloor spreading and entry of carbon dioxide into the atmosphere are believed to have initiated this period of extreme warmth, along with high flood basalt activity. The MKH was punctuated by multiple thermal maxima of extreme warmth. The Leenhardt Thermal Event (LTE) occurred around 110 Ma, followed shortly by the l’Arboudeyesse Thermal Event (ATE) a million years later. Following these two hyperthermals was the Amadeus Thermal Maximum around 106 Ma, during the middle Albian. Then, around a million years after that, occurred the Petite Verol Thermal Event (PVTE). Afterwards, around 102.5 Ma, the Event 6 Thermal Event (EV6) took place; this event was itself followed by the Breistroffer Thermal Maximum around 101 Ma, during the latest Albian. Approximately 94 Ma, the Cenomanian-Turonian Thermal Maximum occurred, with this hyperthermal being the most extreme hothouse interval of the Cretaceous and being associated with a sea level highstand. Temperatures cooled down slightly over the next few million years, but then another thermal maximum, the Coniacian Thermal Maximum, happened, with this thermal event being dated to around 87 Ma. Atmospheric CO2 levels may have varied by thousands of ppm throughout the MKH. Mean annual temperatures at the poles during the MKH exceeded 14 °C. Such hot temperatures during the MKH resulted in a very gentle temperature gradient from the equator to the poles; the latitudinal temperature gradient during the Cenomanian-Turonian Thermal Maximum was 0.54 °C per ° latitude for the Southern Hemisphere and 0.49 °C per ° latitude for the Northern Hemisphere, in contrast to present day values of 1.07 and 0.69 °C per ° latitude for the Southern and Northern hemispheres, respectively. This meant weaker global winds, which drive the ocean currents, and resulted in less upwelling and more stagnant oceans than today. This is evidenced by widespread black shale deposition and frequent anoxic events. Tropical SSTs during the late Albian most likely averaged around 30 °C. Despite this high SST, seawater was not hypersaline at this time, as this would have required significantly higher temperatures still. On land, arid zones in the Albian regularly expanded northward in tandem with expansions of subtropical high pressure belts. The Cedar Mountain Formation's Soap Wash flora indicates a mean annual temperature of between 19 and 26 °C in Utah at the Albian-Cenomanian boundary. Tropical SSTs during the Cenomanian-Turonian Thermal Maximum were at least 30 °C, though one study estimated them as high as between 33 and 42 °C. An intermediate estimate of ~33-34 °C has also been given. Meanwhile, deep ocean temperatures were substantially warmer than today's; one study estimated that deep ocean temperatures were between 12 and 20 °C during the MKH. The poles were so warm that ectothermic reptiles were able to inhabit them.
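As a simple check on the latitudinal-gradient figures just quoted, the ratio of the Cenomanian-Turonian values to the modern ones shows how much the equator-to-pole temperature contrast was reduced. This is arithmetic on the numbers given above, not an independent estimate.

```latex
% Cenomanian-Turonian Thermal Maximum gradient relative to the present day:
\text{Southern Hemisphere: } \frac{0.54\ ^{\circ}\mathrm{C}/^{\circ}\text{lat}}{1.07\ ^{\circ}\mathrm{C}/^{\circ}\text{lat}} \approx 0.50,
\qquad
\text{Northern Hemisphere: } \frac{0.49\ ^{\circ}\mathrm{C}/^{\circ}\text{lat}}{0.69\ ^{\circ}\mathrm{C}/^{\circ}\text{lat}} \approx 0.71
```

That is, the equator-to-pole contrast was roughly half of its present value in the Southern Hemisphere and about 70% of it in the Northern Hemisphere, which is what drove the weaker winds and more stagnant oceans described above.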
Beginning in the Santonian, near the end of the MKH, the global climate began to cool, with this cooling trend continuing across the Campanian. This period of cooling, driven by falling levels of atmospheric carbon dioxide, caused the end of the MKH and the transition into a cooler climatic interval, known formally as the Late Cretaceous-Early Palaeogene Cool Interval (LKEPCI). Tropical SSTs declined from around 35 °C in the early Campanian to around 28 °C in the Maastrichtian. Deep ocean temperatures declined to 9 to 12 °C, though the shallow temperature gradient between tropical and polar seas remained. Regional conditions in the Western Interior Seaway changed little between the MKH and the LKEPCI. During this period of relatively cool temperatures, the ITCZ became narrower, while the strength of both summer and winter monsoons in East Asia was directly correlated to atmospheric CO2 concentrations. Laramidia likewise had a seasonal, monsoonal climate. The Maastrichtian was a time of chaotic, highly variable climate. Two upticks in global temperatures are known to have occurred during the Maastrichtian, bucking the trend of overall cooler temperatures during the LKEPCI. Between 70 and 69 Ma and 66–65 Ma, isotopic ratios indicate elevated atmospheric CO2 pressures with levels of 1000–1400 ppmV, along with elevated mean annual temperatures in west Texas. Atmospheric CO2 and temperature relations indicate a doubling of pCO2 was accompanied by a ~0.6 °C increase in temperature. The latter warming interval, occurring at the very end of the Cretaceous, was triggered by the activity of the Deccan Traps. The LKEPCI lasted into the Late Palaeocene, when it gave way to another supergreenhouse interval.
The production of large quantities of magma, variously attributed to mantle plumes or to extensional tectonics, further pushed sea levels up, so that large areas of the continental crust were covered with shallow seas. The Tethys Sea connecting the tropical oceans east to west also helped to warm the global climate. Warm-adapted plant fossils are known from localities as far north as Alaska and Greenland, while dinosaur fossils have been found within 15 degrees of the Cretaceous south pole. It was suggested that there was Antarctic marine glaciation in the Turonian Age, based on isotopic evidence. However, this has subsequently been suggested to be the result of inconsistent isotopic proxies, with evidence of polar rainforests during this time interval at 82° S. Rafting by ice of stones into marine environments occurred during much of the Cretaceous, but evidence of deposition directly from glaciers is limited to the Early Cretaceous of the Eromanga Basin in southern Australia.
Flora
Flowering plants (angiosperms) make up around 90% of living plant species today. Prior to the rise of angiosperms, during the Jurassic and the Early Cretaceous, the higher flora was dominated by gymnosperm groups, including cycads, conifers, ginkgophytes, gnetophytes and close relatives, as well as the extinct Bennettitales. Other groups of plants included pteridosperms or "seed ferns", a collective term that refers to disparate groups of extinct seed plants with fern-like foliage, including groups such as Corystospermaceae and Caytoniales. The exact origins of angiosperms are uncertain, although molecular evidence suggests that they are not closely related to any living group of gymnosperms.
The earliest widely accepted evidence of flowering plants are monosulcate (single-grooved) pollen grains from the late Valanginian (~134 million years ago) found in Israel and Italy, initially at low abundance. Molecular clock estimates conflict with fossil estimates, suggesting the diversification of crown-group angiosperms during the Late Triassic or the Jurassic, but such estimates are difficult to reconcile with the heavily sampled pollen record and the distinctive tricolpate to tricolporoidate (triple-grooved) pollen of eudicot angiosperms. Among the oldest records of angiosperm macrofossils are Montsechia from the Barremian-aged Las Hoyas beds of Spain and Archaefructus from the Barremian-Aptian boundary Yixian Formation in China. Tricolpate pollen distinctive of eudicots first appears in the Late Barremian, while the earliest remains of monocots are known from the Aptian. Flowering plants underwent a rapid radiation beginning during the middle Cretaceous, becoming the dominant group of land plants by the end of the period, coincident with the decline of previously dominant groups such as conifers. The oldest known fossils of grasses are from the Albian, with the family having diversified into modern groups by the end of the Cretaceous. The oldest large angiosperm trees are known from the Turonian (c. 90 Mya) of New Jersey, based on a large preserved trunk.
During the Cretaceous, ferns in the order Polypodiales, which make up 80% of living fern species, would also begin to diversify.
Terrestrial fauna
On land, mammals were generally small, but were an important component of the fauna, with cimolodont multituberculates outnumbering dinosaurs in some sites. Neither true marsupials nor placentals existed until the very end, but a variety of non-marsupial metatherians and non-placental eutherians had already begun to diversify greatly, occupying niches as carnivores (Deltatheroida), aquatic foragers (Stagodontidae), and herbivores (Schowalteria, Zhelestidae). Various "archaic" groups like eutriconodonts were common in the Early Cretaceous, but by the Late Cretaceous northern mammalian faunas were dominated by multituberculates and therians, with dryolestoids dominating South America.
The apex predators were archosaurian reptiles, especially dinosaurs, which were at their most diverse stage. Avians, including the ancestors of modern-day birds, also diversified. They inhabited every continent, and were even found in cold polar latitudes. Pterosaurs were common in the early and middle Cretaceous, but as the Cretaceous proceeded they declined for poorly understood reasons (once thought to be due to competition with early birds, but it is now understood that avian adaptive radiation is not consistent with pterosaur decline). By the end of the period only three highly specialized families remained: Pteranodontidae, Nyctosauridae, and Azhdarchidae.
The Liaoning lagerstätte (Yixian Formation) in China is an important site, full of preserved remains of numerous types of small dinosaurs, birds and mammals, that provides a glimpse of life in the Early Cretaceous. The coelurosaur dinosaurs found there represent types of the group Maniraptora, which includes modern birds and their closest non-avian relatives, such as dromaeosaurs, oviraptorosaurs, therizinosaurs, troodontids along with other avialans. Fossils of these dinosaurs from the Liaoning lagerstätte are notable for the presence of hair-like feathers.
Insects diversified during the Cretaceous, and the oldest known ants, termites and some lepidopterans, akin to butterflies and moths, appeared. Aphids, grasshoppers and gall wasps appeared.
Rhynchocephalians
Rhynchocephalians (which today only includes the tuatara) disappeared from North America and Europe after the Early Cretaceous, and were absent from North Africa and northern South America by the early Late Cretaceous. The cause of the decline of Rhynchocephalia remains unclear, but has often been suggested to be due to competition with advanced lizards and mammals. They appear to have remained diverse in high-latitude southern South America during the Late Cretaceous, where lizards remained rare, with their remains outnumbering terrestrial lizards 200:1.
Choristodera
Choristoderes, a group of freshwater aquatic reptiles that first appeared during the preceding Jurassic, underwent a major evolutionary radiation in Asia during the Early Cretaceous, which represents the high point of choristoderan diversity, including long necked forms such as Hyphalosaurus and the first records of the gharial-like Neochoristodera, which appear to have evolved in the regional absence of aquatic neosuchian crocodyliformes. During the Late Cretaceous the neochoristodere Champsosaurus was widely distributed across western North America. Due to the extreme climatic warmth in the Arctic, choristoderans were able to colonise it too during the Late Cretaceous.
Marine fauna
In the seas, rays, modern sharks and teleosts became common. Marine reptiles included ichthyosaurs in the early and mid-Cretaceous (becoming extinct during the late Cretaceous Cenomanian-Turonian anoxic event), plesiosaurs throughout the entire period, and mosasaurs appearing in the Late Cretaceous. Sea turtles in the form of Cheloniidae and Panchelonioidea lived during the period and survived the extinction event. Panchelonioidea is today represented by a single species: the leatherback sea turtle. The Hesperornithiformes were flightless, marine diving birds that swam like grebes.
Baculites, an ammonite genus with a straight shell, flourished in the seas along with reef-building rudist clams. Inoceramids were also particularly notable among Cretaceous bivalves, and they have been used to identify major biotic turnovers such as at the Turonian-Coniacian boundary. Predatory gastropods with drilling habits were widespread. Globotruncanid foraminifera and echinoderms such as sea urchins and starfish (sea stars) thrived. Ostracods were abundant in Cretaceous marine settings; ostracod species characterised by high male sexual investment had the highest rates of extinction and turnover. Thylacocephala, a class of crustaceans, went extinct in the Late Cretaceous. The first radiation of the diatoms (generally siliceous shelled, rather than calcareous) in the oceans occurred during the Cretaceous; freshwater diatoms did not appear until the Miocene. Calcareous nannoplankton were important components of the marine microbiota and important as biostratigraphic markers and recorders of environmental change.
The Cretaceous was also an important interval in the evolution of bioerosion, the production of borings and scrapings in rocks, hardgrounds and shells.
| Physical sciences | Geological periods | null |
5617 | https://en.wikipedia.org/wiki/Creutzfeldt%E2%80%93Jakob%20disease | Creutzfeldt–Jakob disease | Creutzfeldt–Jakob disease (CJD), also known as subacute spongiform encephalopathy or neurocognitive disorder due to prion disease, is a fatal neurodegenerative disease. Early symptoms include memory problems, behavioral changes, poor coordination, and visual disturbances. Later symptoms include dementia, involuntary movements, blindness, weakness, and coma. About 70% of people die within a year of diagnosis. The name "Creutzfeldt–Jakob disease" was introduced by Walther Spielmeyer in 1922, after the German neurologists Hans Gerhard Creutzfeldt and Alfons Maria Jakob.
CJD is caused by abnormal folding of a protein known as a prion. Infectious prions are misfolded proteins that can cause normally folded proteins to also become misfolded. About 85% of cases of CJD occur for unknown reasons, while about 7.5% of cases are inherited in an autosomal dominant manner. Exposure to brain or spinal tissue from an infected person may also result in spread. There is no evidence that sporadic CJD can spread among people via normal contact or blood transfusions, although this is possible in variant Creutzfeldt–Jakob disease. Diagnosis involves ruling out other potential causes. An electroencephalogram, spinal tap, or magnetic resonance imaging may support the diagnosis.
There is no specific treatment for CJD. Opioids may be used to help with pain, while clonazepam or sodium valproate may help with involuntary movements. CJD affects about one person per million people per year. Onset is typically around 60 years of age. The condition was first described in 1920. It is classified as a type of transmissible spongiform encephalopathy. Inherited CJD accounts for about 10% of prion disease cases. Sporadic CJD is different from bovine spongiform encephalopathy (mad cow disease) and variant Creutzfeldt–Jakob disease (vCJD).
Signs and symptoms
The first symptom of CJD is usually rapidly progressive dementia, leading to memory loss, personality changes, and hallucinations. Myoclonus (jerky movements) typically occurs in 90% of cases, but may be absent at initial onset. Other frequently occurring features include anxiety, depression, paranoia, obsessive-compulsive symptoms, and psychosis. This is accompanied by physical problems such as speech impairment, balance and coordination dysfunction (ataxia), changes in gait, and rigid posture. In most people with CJD, these symptoms are accompanied by involuntary movements. The duration of the disease varies greatly, but sporadic (non-inherited) CJD can be fatal within months or even weeks. Most affected people die six months after initial symptoms appear, often of pneumonia due to impaired coughing reflexes. About 15% of people with CJD survive for two or more years.
The symptoms of CJD are caused by the progressive death of the brain's nerve cells, which are associated with the build-up of abnormal prion proteins forming in the brain. When brain tissue from a person with CJD is examined under a microscope, many tiny holes can be seen where the nerve cells have died. Parts of the brain may resemble a sponge where the prions were infecting the areas of the brain.
Cause
CJD is a type of transmissible spongiform encephalopathy (TSE), which are caused by prions. Prions are misfolded proteins that occur in the neurons of the central nervous system (CNS). They are thought to affect signaling processes, damaging neurons and resulting in degeneration that causes the spongiform appearance in the affected brain.
The CJD prion is dangerous because it promotes refolding of the cellular prion protein into the diseased state. The number of misfolded protein molecules will increase exponentially and the process leads to a large quantity of insoluble protein in affected cells. This mass of misfolded proteins disrupts neuronal cell function and causes cell death. Mutations in the gene for the prion protein can cause a misfolding of the dominantly alpha helical regions into beta pleated sheets. This change in conformation disables the ability of the protein to undergo digestion. Once the prion is transmitted, the defective proteins invade the brain and induce other prion protein molecules to misfold in a self-sustaining feedback loop. These neurodegenerative diseases are commonly called prion diseases.
Transmission
The defective protein can be transmitted by contaminated harvested human brain products, corneal grafts, dural grafts, or electrode implants, as well as by cadaver-derived pituitary human growth hormone, which has since been replaced by recombinant human growth hormone that poses no such risk.
It can be familial (fCJD); or it may appear without clear risk factors (sporadic form: sCJD). In the familial form, a mutation has occurred in the gene for PrP, PRNP, in that family. All types of CJD are transmissible irrespective of how they occur in the person.
It is thought that humans can contract the variant form of the disease by eating food from animals infected with bovine spongiform encephalopathy (BSE), the bovine form of TSE also known as mad cow disease. However, it can also cause sCJD in some cases.
Cannibalism has also been implicated as a transmission mechanism for abnormal prions, causing the disease known as kuru, once found primarily among women and children of the Fore people in Papua New Guinea, who previously engaged in funerary cannibalism. While the men of the tribe ate the muscle tissue of the deceased, women and children consumed other parts, such as the brain, and were more likely than men to contract kuru from infected tissue.
Prions, the infectious agent of CJD, may not be inactivated by means of routine surgical instrument sterilization procedures. The World Health Organization and the US Centers for Disease Control and Prevention recommend that instrumentation used in such cases be immediately destroyed after use; short of destruction, it is recommended that heat and chemical decontamination be used in combination to process instruments that come in contact with high-infectivity tissues. Thermal depolymerization also destroys prions in infected organic and inorganic matter, since the process chemically attacks protein at the molecular level, although more effective and practical methods involve destruction by combinations of detergents and enzymes similar to biological washing powders.
Genetics
People can also develop CJD because they carry a mutation of the gene that codes for the prion protein (PRNP), located on the short arm of chromosome 20 (20pter-p12). This occurs in only 10–15% of all CJD cases. In sporadic cases, the misfolding of the prion protein is a process that is hypothesized to occur as a result of the effects of aging on cellular machinery, explaining why the disease often appears later in life. An EU study determined that "87% of cases were sporadic, 8% genetic, 5% iatrogenic and less than 1% variant."
Diagnosis
Testing for CJD has historically been problematic, due to the nonspecific nature of early symptoms and the difficulty of safely obtaining brain tissue for confirmation. The diagnosis may initially be suspected in a person with rapidly progressing dementia, particularly when they also show characteristic medical signs and symptoms such as involuntary muscle jerking, difficulty with coordination/balance and walking, and visual disturbances. Further testing can support the diagnosis and may include:
Electroencephalography – may have characteristic generalized periodic sharp wave pattern. Periodic sharp wave complexes develop in half of the people with sporadic CJD, particularly in the later stages.
Cerebrospinal fluid (CSF) analysis for elevated levels of 14-3-3 protein and tau protein can support the diagnosis of sCJD. The two proteins are released into the CSF by damaged nerve cells. Increased levels of tau or 14-3-3 proteins are seen in 90% of prion diseases. The markers have a specificity of 95% in people with clinical symptoms suggestive of CJD, but a specificity of only 70% in other, less characteristic cases. 14-3-3 and tau proteins may also be elevated in the CSF after ischemic strokes, inflammatory brain diseases, or seizures. The protein markers are also less specific in early CJD, genetic CJD or the bovine variant. However, a positive result should not be regarded as sufficient for the diagnosis. The Real-Time Quaking-Induced Conversion (RT-QuIC) assay has a diagnostic sensitivity of more than 80% and a specificity approaching 100% when used to detect PrPSc in CSF samples of people with CJD. It is therefore suggested as a high-value diagnostic method for the disease.
MRI with diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) shows a high signal intensity in certain parts of the cortex (a cortical ribboning appearance), the basal ganglia, and the thalami. The most common presenting patterns are simultaneous involvement of the cortex and striatum (60% of cases), cortical involvement without the striatum (30%), thalamus (21%), cerebellum (8%) and striatum without cortical involvement (7%). In populations with a rapidly progressive dementia (early in the disease process), MRI has a sensitivity of 91% and specificity of 97% for diagnosing CJD (a worked example combining such figures with a pretest probability is sketched after this list). The MRI changes characteristic of CJD may also be seen in the immediate aftermath (hours after the event) of autoimmune encephalitis or focal seizures.
In recent years, studies have shown that the tumour marker neuron-specific enolase (NSE) is often elevated in CJD cases; however, its diagnostic utility is seen primarily when combined with a test for the 14-3-3 protein. Screening tests to identify infected asymptomatic individuals, such as blood donors, are not yet available, though methods have been proposed and evaluated.
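Sensitivity and specificity figures like those quoted for MRI above (91% and 97% in a rapidly progressive dementia population) only translate into a probability of disease once a pretest probability is assumed. The following sketch applies Bayes' theorem to those figures; the pretest probabilities are hypothetical values chosen purely for illustration, not estimates from the literature.

```python
# Post-test probability of CJD from a positive MRI, via Bayes' theorem.
# Sensitivity/specificity are the MRI figures quoted above; the pretest
# probabilities below are hypothetical, chosen purely for illustration.

def post_test_probability(pretest, sensitivity, specificity):
    """P(disease | positive test) for the given pretest probability."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

SENSITIVITY = 0.91   # MRI, rapidly progressive dementia population (quoted above)
SPECIFICITY = 0.97

for pretest in (0.05, 0.30, 0.60):            # assumed pretest probabilities
    p = post_test_probability(pretest, SENSITIVITY, SPECIFICITY)
    print(f"pretest {pretest:.0%} -> post-test {p:.0%}")
```

With these assumed pretest values, a positive scan raises the probability of disease from 5% to about 62%, from 30% to about 93%, and from 60% to about 98%, which is why imaging findings are interpreted alongside the clinical picture rather than in isolation.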
Imaging
Imaging of the brain may be performed during medical evaluation, both to rule out other causes and to obtain supportive evidence for diagnosis. Imaging findings are variable in their appearance, and also variable in sensitivity and specificity. While imaging plays a lesser role in diagnosis of CJD, characteristic findings on brain MRI in some cases may precede onset of clinical manifestations.
Brain MRI is the most useful imaging modality for changes related to CJD. Of the MRI sequences, diffusion-weighted imaging sequences are the most sensitive. Characteristic findings are as follows:
Focal or diffuse diffusion-restriction involving the cerebral cortex and/or basal ganglia. The most characteristic and striking cortical abnormality has been called "cortical ribboning" or the "cortical ribbon sign" due to hyperintensities resembling ribbons appearing in the cortex on MRI. Involvement of the thalamus can be found in sCJD and is even more pronounced and consistent in vCJD.
Varying degree of symmetric T2 hyperintense signal changes in the basal ganglia (i.e., caudate and putamen), and to a lesser extent globus pallidus and occipital cortex.
Brain FDG PET-CT tends to be markedly abnormal, and is increasingly used in the investigation of dementias.
Patients with CJD will normally have hypometabolism on FDG PET.
Histopathology
Testing of tissue remains the most definitive way of confirming the diagnosis of CJD, although it must be recognized that even biopsy is not always conclusive.
In one-third of people with sporadic CJD, deposits of "prion protein (scrapie)", PrPSc, can be found in the skeletal muscle and/or the spleen. Diagnosis of vCJD can be supported by biopsy of the tonsils, which harbor significant amounts of PrPSc; however, biopsy of brain tissue is the definitive diagnostic test for all other forms of prion disease. Due to its invasiveness, biopsy will not be done if clinical suspicion is sufficiently high or low. A negative biopsy does not rule out CJD, since the disease may predominate in a specific part of the brain.
The classic histologic appearance is spongiform change in the gray matter: the presence of many round vacuoles from one to 50 micrometers in the neuropil, in all six cortical layers in the cerebral cortex or with diffuse involvement of the cerebellar molecular layer. These vacuoles appear glassy or eosinophilic and may coalesce. Neuronal loss and gliosis are also seen. Plaques of amyloid-like material can be seen in the neocortex in some cases of CJD.
However, extra-neuronal vacuolization can also be seen in other disease states. Diffuse cortical vacuolization occurs in Alzheimer's disease, and superficial cortical vacuolization occurs in ischemia and frontotemporal dementia. These vacuoles appear clear and punched-out. Larger vacuoles encircling neurons, vessels, and glia are a possible processing artifact.
Classification
Types of CJD include:
Sporadic (sCJD), caused by the spontaneous misfolding of prion-protein in an individual. This accounts for 85% of cases of CJD.
Familial (fCJD), caused by an inherited mutation in the prion-protein gene. This accounts for the majority of the other 15% of cases of CJD.
Acquired CJD, caused by contamination with tissue from an infected person, usually as the result of a medical procedure (iatrogenic CJD). Medical procedures that are associated with the spread of this form of CJD include blood transfusion from the infected person, use of human-derived pituitary growth hormones, gonadotropin hormone therapy, and corneal and meningeal transplants. Variant Creutzfeldt–Jakob disease (vCJD) is a type of acquired CJD potentially acquired from bovine spongiform encephalopathy or caused by consuming food contaminated with prions.
Treatment
As of 2025, there is no cure or effective treatment for CJD. Some symptoms, such as twitching, can be managed, but otherwise treatment is palliative care. Psychiatric symptoms like anxiety and depression can be treated with sedatives and antidepressants. Myoclonic jerks can be handled with clonazepam or sodium valproate. Opiates can help with pain. Seizures are very uncommon but can nevertheless be treated with antiepileptic drugs.
Prognosis
Life expectancy is greatly reduced for people with Creutzfeldt–Jakob disease, with the average being less than 6 months. As of 1981, no one was known to have lived longer than 2.5 years after the onset of CJD symptoms. In 2011, Jonathan Simms, a Northern Irish man who lived 10 years after his diagnosis, was reported to be one of the world's longest survivors of variant Creutzfeldt–Jakob disease (vCJD).
Epidemiology
CDC monitors the occurrence of CJD in the United States through periodic reviews of national mortality data. According to the CDC:
CJD occurs worldwide at a rate of about 1 case per million population per year.
On the basis of mortality surveillance from 1979 to 1994, the annual incidence of CJD remained stable at approximately 1 case per million people in the United States.
In the United States, CJD deaths among people younger than 30 years of age are extremely rare (fewer than five deaths per billion per year).
The disease is found most frequently in people 55–65 years of age, but cases can occur in people older than 90 years and younger than 55 years of age.
In more than 85% of cases, the duration of CJD is less than one year (median: four months) after the onset of symptoms.
Further information from the CDC:
Risk of developing CJD increases with age.
CJD incidence was 3.5 cases per million among those over 50 years of age between 1979 and 2017.
Approximately 85% of CJD cases are sporadic and 10–15% of CJD cases are due to inherited mutations of the prion protein gene.
CJD deaths and age-adjusted death rate in the United States indicate an increasing trend in the number of deaths between 1979 and 2017.
Although the reasons are not fully understood, additional information suggests that CJD rates in African American and other nonwhite groups are lower than in whites. While the mean onset is approximately 67 years of age, cases of sCJD have been reported in people as young as 17 and older than 80 years of age. Mental capabilities rapidly deteriorate, and the average time from onset of symptoms to death is 7 to 9 months.
According to a 2020 systematic review on the international epidemiology of CJD:
Surveillance studies from 2005 and later show the estimated global incidence is 1–2 cases per million population per year.
Sporadic CJD (sCJD) incidence increased from the years 1990–2018 in the UK.
Probable or definite sCJD deaths also increased from the years 1996–2018 in twelve additional countries.
CJD incidence is greatest in those over the age of 55 years old, with an average age of 67 years old.
The intensity of CJD surveillance increases the number of reported cases, often in countries where CJD epidemics have occurred in the past and where surveillance resources are greatest. An increase in surveillance and reporting of CJD is most likely in response to BSE and vCJD. Possible factors contributing to an increase of CJD incidence are an aging population, population increase, clinician awareness, and more accurate diagnostic methods. Since CJD symptoms are similar to other neurological conditions, it is also possible that CJD is mistaken for stroke, acute nephropathy, general dementia, and hyperparathyroidism.
History
The disease was first described by German neurologist Hans Gerhard Creutzfeldt in 1920 and shortly afterward by Alfons Maria Jakob, giving it the name Creutzfeldt–Jakob disease. Some of the clinical findings described in their first papers do not match current criteria for Creutzfeldt–Jakob disease, and it has been speculated that at least two of the people in initial studies had a different ailment. An early description of familial CJD stems from the German psychiatrist and neurologist Friedrich Meggendorfer (1880–1953). A study published in 1997 counted more than 100 cases worldwide of transmissible CJD and new cases continued to appear at the time.
The first report of suspected iatrogenic CJD was published in 1974. Animal experiments showed that corneas of infected animals could transmit CJD, and that the causative agent spreads along visual pathways. A second case of CJD associated with a corneal transplant was reported without details. In 1977, CJD transmission caused by silver electrodes previously used in the brain of a person with CJD was first reported. Transmission occurred despite the decontamination of the electrodes with ethanol and formaldehyde. Retrospective studies identified four other cases likely of similar cause. The rate of transmission from a single contaminated instrument is unknown, although it is not 100%. In some cases, the exposure occurred weeks after the instruments were used on a person with CJD. In the 1980s, Lyodura, a dura mater transplant product, was shown to transmit CJD from donor to recipient. This led to the product being banned in Canada, but it was used in other countries such as Japan until 1993. A review article published in 1979 indicated that 25 dura mater cases had occurred by that date in Australia, Canada, Germany, Italy, Japan, New Zealand, Spain, the United Kingdom, and the United States.
By 1985, a series of case reports in the United States showed that when injected, cadaver-extracted pituitary human growth hormone could transmit CJD to humans.
In 1992, it was recognized that human gonadotropin administered by injection could also transmit CJD from person to person.
Stanley B. Prusiner of the University of California, San Francisco (UCSF) was awarded the Nobel Prize in Physiology or Medicine in 1997 "for his discovery of Prions—a new biological principle of infection".
Yale University neuropathologist Laura Manuelidis has challenged the prion protein (PrP) explanation for the disease. In January 2007, she and her colleagues reported that they had found a virus-like particle in naturally and experimentally infected animals. "The high infectivity of comparable, isolated virus-like particles that show no intrinsic PrP by antibody labeling, combined with their loss of infectivity when nucleic acid–protein complexes are disrupted, make it likely that these 25-nm particles are the causal TSE virions".
Australia
Australia has documented 10 cases of healthcare-acquired CJD (iatrogenic or ICJD). Five of the deaths occurred after patients, who were being treated either for infertility or short stature, received contaminated pituitary extract hormone; no new cases have been noted since 1991. The other five deaths occurred due to dura grafting procedures performed during brain surgery, in which the covering of the brain is repaired. There have been no other ICJD deaths documented in Australia due to transmission during healthcare procedures.
New Zealand
A case was reported in 1989 in a 25-year-old man from New Zealand, who had also received a dura mater transplant. In 2012, five New Zealanders were confirmed to have died of the sporadic form of Creutzfeldt–Jakob disease (CJD).
United States
In 1988, there was a confirmed death from CJD of a person from Manchester, New Hampshire. Massachusetts General Hospital believed the person acquired the disease from a surgical instrument at a podiatrist's office. In 2007, Michael Homer, a former vice president of Netscape, was diagnosed with the disease after experiencing persistent memory problems. In September 2013, another person in Manchester was posthumously determined to have died of the disease. The person had undergone brain surgery at Catholic Medical Center three months before his death, and a surgical probe used in the procedure was subsequently reused in other operations. Public health officials identified thirteen people at three hospitals who may have been exposed to the disease through the contaminated probe but said the risk of anyone contracting CJD is "extremely low". In January 2015, former speaker of the Utah House of Representatives Rebecca D. Lockhart died of the disease within a few weeks of diagnosis. John Carroll, former editor of The Baltimore Sun and Los Angeles Times, died of CJD in Kentucky in June 2015, after having been diagnosed in January. American actress Barbara Tarbuck (General Hospital, American Horror Story) died of the disease on December 26, 2016. José Baselga, a clinical oncologist who had headed the AstraZeneca oncology division, died of CJD in Cerdanya on March 21, 2021. In April 2024, a report was published regarding two hunters from the same lodge who, in 2022, were found to be afflicted with sporadic CJD after eating deer meat infected with chronic wasting disease (CWD), suggesting a potential link between CWD and CJD.
Research
Diagnosis
In 2010, a team from New York described detection of PrPSc in sheep's blood, even when initially present at only one part in one hundred billion (10⁻¹¹) in sheep's brain tissue. The method combines amplification with a novel technology called surround optical fiber immunoassay (SOFIA) and specific antibodies against PrPSc. The technique improved detection of PrPSc and shortened testing time.
In 2014, a human study showed a nasal brushing method that can accurately detect PrP in the olfactory epithelial cells of people with CJD.
Treatment
Pentosan polysulfate (PPS) may slow the progression of the disease, and may have contributed to the longer-than-expected survival of the seven people studied. The CJD Therapy Advisory Group to the UK Health Departments advises that data are not sufficient to support claims that pentosan polysulfate is an effective treatment and suggests that further research in animal models is appropriate. A 2007 review of the treatment of 26 people with PPS found no proof of efficacy because of the lack of accepted objective criteria, though it was unclear to the authors whether any apparent benefit was caused by PPS itself. In 2012 it was claimed that the lack of significant benefit was likely because the drug was administered very late in the disease in many patients.
Use of RNA interference to slow the progression of scrapie has been studied in mice. The RNA blocks production of the protein that the CJD process transforms into prions.
Both amphotericin B and doxorubicin have been investigated as treatments for CJD, but as yet there is no strong evidence that either drug is effective in stopping the disease. Other drugs have also been studied, but none have proven effective. However, anticonvulsants and anxiolytic agents, such as valproate or a benzodiazepine, may be administered to relieve associated symptoms.
Quinacrine, a medicine originally created for malaria, has been evaluated as a treatment for CJD. Its efficacy was assessed in a rigorous clinical trial in the UK; the results, published in Lancet Neurology, concluded that quinacrine had no measurable effect on the clinical course of CJD.
Astemizole, a medication approved for human use, has been found to have anti-prion activity and may lead to a treatment for Creutzfeldt–Jakob disease.
A monoclonal antibody (code name PRN100) targeting the prion protein (PrP) was given to six people with Creutzfeldt–Jakob disease in an early-stage clinical trial conducted from 2018 to 2022. The treatment appeared to be well-tolerated and was able to access the brain, where it might have helped to clear PrPC. While the treated patients still showed progressive neurological decline, and while none of them survived longer than expected from the normal course of the disease, the scientists at University College London who conducted the study see these early-stage results as encouraging and suggest to conduct a larger study, ideally at the earliest possible intervention.
| Biology and health sciences | Prion diseases | Health |
5623 | https://en.wikipedia.org/wiki/Canal | Canal | Canals or artificial waterways are waterways or engineered channels built for drainage management (e.g. flood control and irrigation) or for conveying water transport vehicles (e.g. water taxi). They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers.
In most cases, a canal has a series of dams and locks that create reservoirs of low speed current flow. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river and shares part of the latter's discharges and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water levels while staying in its valley.
A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal.
Many canals have been built at elevations above surrounding valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals.
The term was once used to describe linear features seen on the surface of Mars, Martian canals, an optical illusion.
Types of artificial waterways
A navigation is a series of channels that run roughly parallel to the valley and stream bed of an unimproved river. A navigation always shares the drainage basin of the river. A vessel uses the calm parts of the river itself as well as improvements, traversing the same changes in height.
A true canal is a channel that cuts across a drainage divide, making a navigable channel connecting two different drainage basins.
Structures used in artificial waterways
Both navigations and canals use engineered structures to improve navigation:
weirs and dams to raise river water levels to usable depths;
looping descents to create a longer and gentler channel around a stretch of rapids or falls;
locks to allow ships and barges to ascend/descend.
Since they cut across drainage divides, canals are more difficult to construct and often need additional improvements, like viaducts and aqueducts to bridge waters over streams and roads, and ways to keep water in the channel.
Types of canals
There are two broad types of canal:
Waterways: canals and navigations used for carrying vessels transporting goods and people. These can be subdivided into two kinds:
Those connecting existing lakes, rivers, other canals or seas and oceans.
Those connected in a city network: such as the Canal Grande and others of Venice; the grachten of Amsterdam or Utrecht, and the waterways of Bangkok.
Aqueducts: water supply canals used for the conveyance and delivery of potable water, for municipal uses, for hydropower generation, and for agricultural irrigation.
Importance
Historically, canals were of immense importance to the commerce, development, growth and vitality of a civilization. The movement of bulk raw materials such as coal and ores, practically a prerequisite for further urbanization and industrialization, was difficult and only marginally affordable without water transport. The movement of bulk raw materials, facilitated by canals, fueled the Industrial Revolution, leading to new research disciplines, new industries and economies of scale, raising the standard of living for industrialized societies.
The few canals still in operation in the 21st century are a fraction of the number that were once maintained during the earlier part of the Industrial Revolution. Their replacement was gradual, beginning in the United Kingdom in the 1840s, where canal shipping was first augmented by, and later superseded by, the much faster, less geographically constrained, and generally cheaper-to-maintain railways.
By the early 1880s, many canals which had little ability to compete with rail transport were abandoned. In the 20th century, oil was increasingly used as the heating fuel of choice, and the growth of coal shipments began to decrease. After the First World War, technological advances in motor trucks as well as expanding road networks saw increasing amounts of freight being transported by road, and the last small U.S. barge canals saw a steady decline in cargo ton-miles.
The once critical smaller inland waterways conceived and engineered as boat and barge canals have largely been supplanted and filled in, abandoned and left to deteriorate, or kept in service under a park service and staffed by government employees, where dams and locks are maintained for flood control or pleasure boating. Today, most ship canals (intended for larger, oceangoing vessels) primarily serve the bulk cargo and large ship transportation industries.
The longest extant canal today, the Grand Canal in northern China, still remains in heavy use, especially the portion south of the Yellow River. It stretches from Beijing to Hangzhou at 1,794 kilometres (1,115 miles).
Construction
Canals are built in one of three ways, or a combination of the three, depending on available water and available path:
Human made streams
A canal can be created where no stream presently exists. Either the body of the canal is dug or the sides of the canal are created by making dykes or levees by piling dirt, stone, concrete or other building materials. The finished shape of the canal as seen in cross section is known as the canal prism. The water for the canal must be provided from an external source, like streams or reservoirs. Where the new waterway must change elevation, engineering works like locks, lifts or elevators are constructed to raise and lower vessels. Examples include canals that connect valleys over a higher body of land, like the Canal du Midi, the Canal de Briare and the Panama Canal.
A canal can be constructed by dredging a channel in the bottom of an existing lake. When the channel is complete, the lake is drained and the channel becomes a new canal, serving both drainage of the surrounding polder and providing transport there. Examples include the . One can also build two parallel dikes in an existing lake, forming the new canal in between, and then drain the remaining parts of the lake. The eastern and central parts of the North Sea Canal were constructed in this way. In both cases pumping stations are required to keep the land surrounding the canal dry, either pumping water from the canal into surrounding waters, or pumping it from the land into the canal.
Canalization and navigations
A stream can be canalized to make its navigable path more predictable and easier to maneuver. Canalization modifies the stream to carry traffic more safely by controlling the flow of the stream by dredging, damming and modifying its path. This frequently includes the incorporation of locks and spillways that make the river a navigation. Examples include the Lehigh Canal in Northeastern Pennsylvania's Coal Region, Basse Saône, Canal de Mines de Fer de la Moselle, and canal Aisne. Riparian zone restoration may be required.
Lateral canals
When a stream is too difficult to modify with canalization, a second stream can be created next to or at least near the existing stream. This is called a lateral canal, and it may meander in a large horseshoe bend or series of curves some distance from the source stream's bed, lengthening the effective length in order to lower the ratio of rise over run (slope or pitch). The existing stream usually acts as the water source, and the landscape around its banks provides a path for the new body. Examples include the Chesapeake and Ohio Canal, Canal latéral à la Loire, Garonne Lateral Canal, Welland Canal and Juliana Canal.
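The point about lowering the ratio of rise over run is simple arithmetic: the same total fall spread over a longer channel gives a gentler gradient. A minimal sketch of that calculation, using invented figures rather than data from any particular canal:

```python
# Gradient = total fall / channel length.  All figures are invented, purely
# to show why a longer, meandering lateral canal is gentler than the river.

def gradient(fall_m, length_m):
    """Slope as metres of fall per metre of channel."""
    return fall_m / length_m

fall = 30.0                  # metres of descent between two points (assumed)
river_length = 20_000.0      # direct river course, metres (assumed)
canal_length = 35_000.0      # meandering lateral canal, metres (assumed)

print(f"river slope: {gradient(fall, river_length):.5f} m per m")
print(f"canal slope: {gradient(fall, canal_length):.5f} m per m")
```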
Smaller transportation canals can carry barges or narrowboats, while ship canals allow seagoing ships to travel to an inland port (e.g., Manchester Ship Canal), or from one sea or ocean to another (e.g., Caledonian Canal, Panama Canal).
Features
At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling.
Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered connecting either two pieces of canal at a different level or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used.
Prior to the development of the pound lock in 984 AD in China by Chiao Wei-Yo and later in Europe in the 15th century, either flash locks consisting of a single gate were used or ramps, sometimes equipped with rollers, were used to change the level. Flash locks were only practical where there was plenty of water available.
Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes where a caisson is hauled up a steep railway.
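The water cost of locks is easy to estimate: each lockage passes roughly one chamber-volume of water (plan area times lift) from the upper to the lower level. The dimensions in the sketch below are assumptions, loosely typical of a small narrow-boat lock, not figures taken from this article.

```python
# Rough water cost of a pound lock: one lockage passes about
# chamber_length * chamber_width * lift of water to the lower level.
# The dimensions are assumptions, loosely typical of a small narrow lock.

chamber_length_m = 22.0   # assumed
chamber_width_m = 2.2     # assumed
lift_m = 2.5              # assumed rise/fall at this lock

water_per_lockage_m3 = chamber_length_m * chamber_width_m * lift_m

print(f"water used per lockage: {water_per_lockage_m3:.0f} m^3 "
      f"(~{water_per_lockage_m3 * 1000:,.0f} litres)")
print(f"per 20 lockages a day: {20 * water_per_lockage_m3:.0f} m^3")
```

Under these assumptions a single lockage releases roughly 120 cubic metres of water downhill, which is why summit pounds need reservoirs, back pumping, or the water-saving devices described here.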
To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee.
Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals.
Some canals attempted to keep changes in level down to a minimum. These canals known as contour canals would take longer, winding routes, along which the land was a uniform altitude. Other, generally later, canals took more direct routes requiring the use of various methods to deal with the change in level.
Canals have various features to tackle the problem of water supply. In some cases, such as the Suez Canal, the canal is open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal.
Where large amounts of goods are loaded or unloaded such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods.
When a section of the canal needs to be sealed off so it can be drained for maintenance stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank. On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance, or to prevent a major loss of water due to a canal breach.
Canal falls
A canal fall, or canal drop, is a vertical drop in the canal bed. These are built when the natural ground slope is steeper than the desired canal gradient. They are constructed so the falling water's kinetic energy is dissipated in order to prevent it from scouring the bed and sides of the canal.
A canal fall is constructed by cut and fill. It may be combined with a regulator, bridge, or other structure to save costs.
There are various types of canal falls, based on their shape. One type is the ogee fall, where the drop follows an s-shaped curve to create a smooth transition and reduce turbulence. However, this smooth transition does not dissipate the water's kinetic energy, which leads to heavy scouring. As a result, the canal needs to be reinforced with concrete or masonry to protect it from eroding.
Another type of canal fall is the vertical fall, which is "simple and economical". These feature a "cistern", or depressed area just downstream from the fall, to "cushion" the water by providing a deep pool for its kinetic energy to be diffused in. Vertical falls work for drops of up to 1.5 m in height, and for discharge of up to 15 cubic meters per second.
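The energy a fall must dissipate can be estimated from the discharge and the drop using the standard hydraulic relation P = ρgQh. The sketch below applies it to the upper limits quoted above for a vertical fall (a 1.5 m drop at 15 cubic metres per second); the formula is general hydraulics rather than anything specific to this article.

```python
# Hydraulic power dissipated at a canal fall: P = rho * g * Q * h.
# Applied to the upper limits quoted above for a vertical fall.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def fall_power_kw(discharge_m3_s, drop_m):
    """Power of the falling water in kilowatts."""
    return RHO * G * discharge_m3_s * drop_m / 1000.0

print(f"{fall_power_kw(15.0, 1.5):.0f} kW must be dissipated in the cistern")
```

At those limits the cistern has to absorb on the order of 220 kW of falling-water power, which is why unprotected beds scour so quickly.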
History
The transport capacity of pack animals and carts is limited. A mule can carry a maximum load of about an eighth of a ton over a journey measured in days and weeks, though much more for shorter distances and periods with appropriate rest. Besides, carts need roads. Transport over water is much more efficient and cost-effective for large cargoes.
Ancient canals
The oldest known canals were irrigation canals, built in Mesopotamia, in what is now Iraq. The Indus Valley civilization of ancient India had developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC. This is the first time that such a planned civil engineering project had taken place in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan.
In ancient China, large canals for river transport were established as far back as the Spring and Autumn period (8th–5th centuries BC), the longest one of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. It is long and was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal existing since at least 486 BC. Even in its narrowest urban sections it is rarely less than wide.
In the 5th century BC, Achaemenid king Xerxes I of Persia ordered the construction of the Xerxes Canal through the base of Mount Athos peninsula, Chalkidiki, northern Greece. It was constructed as part of his preparations for the Second Persian invasion of Greece, a part of the Greco-Persian Wars. It is one of the few monuments left by the Persian Empire in Europe.
Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC.
There was little experience moving bulk loads by carts, while a pack-horse would [i.e. 'could'] carry only an eighth of a ton. On a soft road a horse might be able to draw 5/8ths of a ton. But if the load were carried by a barge on a waterway, then up to 30 tons could be drawn by the same horse.— technology historian Ronald W. Clark referring to transport realities before the industrial revolution and the Canal age.
Hohokam was a society in the North American Southwest in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major archaeological dig in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals.
The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water.
The Sinhalese constructed the 87 km (54 mi) Yodha Ela in 459 A.D. as part of their extensive irrigation network. It functioned as a kind of moving reservoir: its single-bank design helped manage the pressure of the inflowing water. It was also designed as an elongated reservoir passing through traps, creating 66 mini catchments as it flows from Kala Wewa to Thissa Wawa. The canal was not designed to convey water quickly from Kala Wewa to Thissa Wawa but to create a mass of water between the two reservoirs, which in turn provided for agriculture and for the use of humans and animals.
The builders also achieved a rather low gradient for the time. The canal is still in use after renovation.
Middle Ages
In the Middle Ages, water transport was several times cheaper and faster than transport overland. Overland transport by animal drawn conveyances was used around settled areas, but unimproved roads required pack animal trains, usually of mules to carry any degree of mass, and while a mule could carry an eighth ton, it also needed teamsters to tend it and one man could only tend perhaps five mules, meaning overland bulk transport was also expensive, as men expect compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests, marshy or muddy quagmires as often as unimproved but dry footing. In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina built at the end of the 8th century under personal supervision of Charlemagne.
In Britain, the Glastonbury Canal is believed to be the first post-Roman canal and was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about . Its initial purpose is believed to be the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century. More lasting and of more economic impact were canals like the Naviglio Grande, built between 1127 and 1257 to connect Milan with the river Ticino. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe. Later, canals were built in the Netherlands and Flanders to drain the polders and assist transportation of goods and people.
Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single, or flash, locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which was, it is presumed, introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks.
To break out of the limitations caused by river valleys, the first summit level canals were developed with the Grand Canal of China in 581–617 AD whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398.
Africa
In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad I between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate conquest of the city but his progress was halted when he went to war with the Mossi Kingdoms.
Early modern period
Around 1500–1800, the first summit-level canal to use pound locks in Europe was the Briare Canal, connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683), connecting the Atlantic to the Mediterranean. The latter included a staircase of 8 locks at Béziers, a tunnel, and three major aqueducts.
Canal building progressed steadily in Germany in the 17th and 18th centuries with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first early modern period canal built appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566.
The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook, between Dedham and the Boston, Massachusetts neighbourhood of Hyde Park, connecting the higher waters of the Charles River with the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills.
In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718.
Industrial Revolution
The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities.
By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be.
The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741.
The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals.
In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal.
The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard.
The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater Canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, earning back what had been spent on its construction within just a few years.
This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal.
The new canal system was both cause and effect of the rapid industrialization of The Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals.
For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods.
In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length.
Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other.
Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four had been completed when the War of 1812 broke out: the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs about from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft. (169 m). The Erie Canal, with its easy connections to most of the U.S. mid-west and New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the mid-west of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. With farmers assured of a market for their products, settlement of the U.S. mid-west was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution, and it was there that Samuel Slater built his first textile mill.
Power canals
A power canal refers to a canal used for hydraulic power generation, rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be "The Cradle of the American Industrial Revolution," has of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts, Holyoke, Massachusetts, Manchester, New Hampshire, and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company.
19th century
Competition, from railways from the 1830s and roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew from to over 4,000, with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way.
In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, long with 36 locks, opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions serviced by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions because of their reduction in transportation costs, which allowed Americans to ship and buy goods from farther distances much cheaper. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until replaced by a channelized river waterway.
Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie, bypassing Niagara Falls and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce. The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario on Lake Ontario. The Rideau Canal was built as a result of the War of 1812 to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to part of the St. Lawrence River, which was susceptible to blockade by the United States.
In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century.
Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914.
In the 19th century, a number of canals were built in Japan including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries.
A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing.
The second choice for a Central American canal was a Panama canal. The De Lesseps company, which ran the Suez Canal, first attempted to build a Panama Canal in the 1880s. The difficulty of the terrain and the weather (rain) encountered caused the company to go bankrupt. High worker mortality from disease also discouraged further investment in the project. De Lesseps' abandoned excavating equipment remains where it was left, isolated decaying machines that are today tourist attractions.
Twenty years later, an expansionist United States, which had just acquired colonies after defeating Spain in the 1898 Spanish–American War and whose navy had grown in importance, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already considering independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took Panama province away from Colombia and set up a puppet republic (Panama). Its currency, the Balboa – a name that suggests the country began as a way to get from one hemisphere to the other – was a replica of the US dollar. The US dollar was and remains legal tender (used as currency). A U.S. military zone, the Canal Zone, wide, with U.S. military stationed there (bases, 2 TV stations, channels 8 and 10, PXs, a U.S.-style high school), split Panama in half. The Canal – a major engineering project – was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979. The withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980.
Modern uses
Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals. Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016. The new set of locks allow transit of larger, Post-Panamax and New Panamax ships.
The narrow early industrial canals, however, have ceased to carry significant amounts of trade and many have been abandoned to navigation, but may still be used as a system for transportation of untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal.
A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years.
The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands.
Canals have found another use in the 21st century, as easements for the installation of fibre optic telecommunications network cabling, avoiding having them buried in roadways while facilitating access and reducing the hazard of being damaged from digging equipment.
Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area.
Cities on water
Canals are so deeply identified with Venice that many canal cities have been nicknamed "the Venice of…". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state.
Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the "Venice of the North" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium.
Suzhou was dubbed the "Venice of the East" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou are located along the lower mouth of the Yangtze River and Lake Tai, yet another source of small rivers and creeks, which have been canalized and developed for centuries.
Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan.
Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use.
Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks.
Boats
Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to long and wide and was primarily built for the British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama Canal, where Panamax ships were limited to a length of and a beam of until 26 June 2016, when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to . At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels.
Lists of canals
Africa
Bahr Yussef
El Salam Canal (Egypt)
Ibrahimiya Canal (Egypt)
Mahmoudiyah Canal (Egypt)
Suez Canal (Egypt)
Asia
see List of canals in India
see List of canals in Pakistan
see History of canals in China
King Abdullah Canal (Jordan)
Qanat al-Jaish (Iraq)
Europe
Danube–Black Sea Canal (Romania)
North Crimean Canal (Ukraine)
Canals of France
Canals of Amsterdam
Canals of Germany
Canals of Ireland
Canals of Russia
Canals of the United Kingdom
List of canals in the United Kingdom
Great Bačka Canal (Serbia)
North America
Canals of Canada
Canals of the United States
Panama Canal
Lists of proposed canals
Eurasia Canal
Istanbul Canal
Nicaragua Canal
Salwa Canal
Thai Canal
Sulawesi Canal
Two Seas Canal
Northern river reversal
Balkan Canal or Danube–Morava–Vardar–Aegean Canal
Iranrud
| Technology | Transportation | null |
5636 | https://en.wikipedia.org/wiki/Chemist | Chemist | A chemist (from Greek chēm(ía) alchemy; replacing chymist from Medieval Latin alchemist) is a graduated scientist trained in the study of chemistry, or an officially enrolled student in the field. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists.
Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.
History of chemistry
The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. The word chemist is derived from the Neo-Latin noun chimista, an abbreviation of alchimista (alchemist). Alchemists discovered many chemical processes that led to the development of modern chemistry.
Chemistry as we know it today was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table by Dmitri Mendeleev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery since the start of the 20th century.
At the Washington Academy of Sciences during World War I, it was said that the side with the best chemists would win the war.
Education
Formal education
Jobs for chemists generally require at least a bachelor's degree in chemistry, which takes four years. However, many positions, especially those in research, require a Master of Science or a Doctor of Philosophy (PhD). Most undergraduate programs emphasize mathematics and physics as well as chemistry, partly because chemistry is also known as "the central science", so chemists ought to have a well-rounded knowledge of science. At the master's level and higher, students tend to specialize in a particular field. Fields of specialization include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions.
Workers whose work involves chemistry, but not at a complexity requiring an education with a chemistry degree, are commonly referred to as chemical technicians. Such technicians, who typically hold an associate degree, commonly carry out simpler, routine analyses for quality control or in clinical laboratories. A chemical technologist has more education or experience than a chemical technician but less than a chemist, often having a bachelor's degree in a different field of science together with an associate degree in chemistry (or many credits related to chemistry), or having the same education as a chemical technician but more experience. There are also degrees specific to becoming a chemical technologist, which are somewhat distinct from those required when a student intends to become a professional chemist. A chemical technologist is more involved than a chemical technician in the management and operation of the equipment and instrumentation necessary to perform chemical analyses. Technologists are part of the team of a chemical laboratory in which the quality of raw materials, intermediate products and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant.
Training
In addition to all the training usually given to chemical technologists in their respective degree (or one given via an associate degree), a chemist is also trained to understand more details related to chemical phenomena, so that the chemist can plan the steps needed to achieve a distinct goal in a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, has so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, the routine level of the task, the current needs of a particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, and economic factors such as recession or economic depression. This makes it difficult to categorize the exact roles of these chemistry-related workers as standard for a given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while others might begin doing more complicated tasks, such as those that also involve formal applied research, management, or supervision within the responsibilities of the same job title. The level of supervision given to a chemist varies in a similar manner, depending on factors similar to those that affect the tasks demanded of a particular chemist.
It is important that those interested in a chemistry degree understand the variety of roles available to them, which vary depending on education and job experience. Chemists who hold a bachelor's degree are most commonly involved in positions related to research assistance (working under the guidance of senior chemists in a research-oriented activity), or they may work on distinct chemistry-related aspects of a business, organization or enterprise, including quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visits to troubleshoot chemistry-related instruments, regulatory affairs, "on-demand" technical services, and chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other roles include sales and marketing of chemical products and chemistry-related instruments, and technical writing. The more experience obtained, the more independence and leadership or management roles these chemists may take on in those organizations. Some chemists with relatively more experience might change jobs or positions to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.).
In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a closely related discipline may find chemist roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers, with fewer years of experience than those whose highest degree is a bachelor's. Sometimes, M.S. chemists are given more complex duties than chemists with a bachelor's degree as their highest academic degree and the same or nearly the same years of job experience. There are positions that are open only to those who hold at least a master's-level degree related to chemistry. Although good chemists without a Ph.D. but with relatively many years of experience may be allowed some applied research positions, the general rule is that Ph.D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions in large enterprises involved in chemistry-related activities. Some positions, especially research-oriented ones, are open only to Ph.D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned funds and resources, or jobs that seek to develop new scientific theories, more often than not require a Ph.D. Chemists with a Ph.D. as their highest academic degree are typically found in the research-and-development department of an enterprise and can also hold university positions as professors. Professors at research universities or at large universities usually have a Ph.D., and some research-oriented institutions might require postdoctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with an M.S. as professors too (and, rarely, some large universities that need part-time or temporary instructors or temporary staff), but when positions are scarce and applicants are many, they might prefer Ph.D. holders instead.
Skills
Skills that a chemist may need on the job include:
Knowledge of chemistry
Familiarity with product development
Using scientific rules, strategies, or concepts to solve problems
Putting together small parts using hands and fingers with dexterity
Employment
Most chemists begin their careers in research laboratories. Many chemists continue working at universities. Other chemists may start companies, teach at high schools or colleges, take samples in the field (as environmental chemists), or work in medical examiner offices or police departments (as forensic chemists).
Some software that chemists may find themselves using include:
ChemSW Buffer Maker
LabTrack Electronic Lab Notebook
Agilent ChemStation
Waters Empower Chromatography Data Software
Microsoft Excel
Increasingly, chemists may also find themselves using artificial intelligence, such as for drug discovery.
Subdisciplines
Chemistry typically is divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines.
Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, for example, in medicinal chemistry.
Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Inorganic chemistry also encompasses the study of atomic and molecular structure and bonding.
Medicinal chemistry is the science involved with designing, synthesizing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships.
Organic chemistry is the study of the structure, properties, composition, mechanisms, and chemical reaction of carbon compounds.
Physical chemistry is the study of the physical fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, quantum chemistry, statistical mechanics, and spectroscopy. Physical chemistry has a large overlap with theoretical chemistry and molecular physics. Physical chemistry involves the use of calculus in deriving equations.
Theoretical chemistry is the study of chemistry via theoretical reasoning (usually within mathematics or physics). In particular, the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with condensed matter physics and molecular physics. See reductionism.
All the above major areas of chemistry employ chemists. Other fields where chemical degrees are useful include astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemo-informatics, electrochemistry, environmental science, forensic science, geochemistry, green chemistry, history of chemistry, materials science, medical science, molecular biology, molecular genetics, nanotechnology, nuclear chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, polymer chemistry, supramolecular chemistry and surface chemistry.
Professional societies
Chemists may belong to professional societies specifically for professionals and researchers within the field of chemistry, such as the Royal Society of Chemistry in the United Kingdom, the American Chemical Society (ACS) in the United States, or the Institution of Chemists in India.
Ethics
The "Global Chemists' Code of Ethics" suggests several ethical principles that all chemists should follow:
Promoting the general public's appreciation of chemistry
The importance of sustainability and protecting the environment
The importance of scientific research and publications
Respecting safety, such as by using proper personal protective equipment
Respecting chemical security throughout the chemical supply chain, especially for labs and industrial facilities
This code of ethics was codified in a 2016 conference held in Kuala Lumpur, Malaysia, run by the American Chemical Society. The points listed are inspired by the 2015 Hague Ethical Guidelines.
Honors and awards
The highest honor awarded to chemists is the Nobel Prize in Chemistry, awarded since 1901, by the Royal Swedish Academy of Sciences.
| Physical sciences | Basics: General | Chemistry |
5638 | https://en.wikipedia.org/wiki/Combustion | Combustion | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed as smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. The study of combustion is known as combustion science.
Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):
2H_2(g) + O_2(g) \rightarrow 2H_2O(g)
Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel: there is no remaining fuel and, ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, and the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law.
Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.
Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
Types
Complete and incomplete
Complete
In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant.
Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts when the flame temperature is sufficiently high, and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess.
In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.77 moles of nitrogen. Nitrogen does not take part in combustion, but at high temperatures some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air therefore requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.
The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.
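A simple way to make these percentages precise (a standard definition rather than one specific to this article) is to express the excess relative to the stoichiometric requirement:

\text{excess air } (\%) = \left(\frac{\text{actual air supplied}}{\text{stoichiometric air}} - 1\right) \times 100

So, for example, a natural gas boiler run at 5% excess air supplies 1.05 times its stoichiometric air requirement.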
Incomplete
Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.
For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.
The designs of combustion devices can improve the quality of combustion, such as burners and internal combustion engines. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.
The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.
Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than formation of carbon dioxide so complete combustion is greatly preferred especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.
Problems associated with incomplete combustion
Environmental problems
These oxides combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain". Acid deposition harms aquatic organisms and kills trees. By converting certain nutrients that plants need, such as calcium and phosphorus, into forms that are less available to them, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog.
Human health problems
Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and then binds with hemoglobin in the red blood cells, reducing their capacity to carry oxygen throughout the body.
Smoldering
Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.
Spontaneous
Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition.
For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.
Turbulent
Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.
Micro-gravity
The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others).
Micro-combustion
Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.
Chemical equations
Stoichiometric combustion of a hydrocarbon in oxygen
Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:
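In the standard general form, with x and y denoting the numbers of carbon and hydrogen atoms in the fuel:

C_xH_y + \left(x + \frac{y}{4}\right)O_2 \rightarrow x\,CO_2 + \frac{y}{2}\,H_2O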
For example, the stoichiometric combustion of methane in oxygen is:
CH_4 + 2O_2 \rightarrow CO_2 + 2H_2O
Stoichiometric combustion of a hydrocarbon in air
If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% − 20.95%) / 20.95%, where 20.95% vol is the oxygen content of dry air:
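In the standard general form, with the nitrogen carried through unreacted, this is:

C_xH_y + z\left(O_2 + 3.77\,N_2\right) \rightarrow x\,CO_2 + \frac{y}{2}\,H_2O + 3.77z\,N_2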
where z = x + y/4.
For example, the stoichiometric combustion of methane in air is:
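Using the same 3.77 nitrogen-to-oxygen ratio, the usual balanced form is:

CH_4 + 2\left(O_2 + 3.77\,N_2\right) \rightarrow CO_2 + 2\,H_2O + 7.54\,N_2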
The stoichiometric composition of methane in air is 1 / (1 + 2 + 7.54) = 9.49% vol.
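The same arithmetic can be generalized to any hydrocarbon CxHy; the short sketch below (the function name and structure are illustrative, not taken from any particular source) reproduces the 9.49% figure for methane:

```python
def stoichiometric_fuel_fraction(x, y, n2_per_o2=3.77):
    """Mole (volume) fraction of fuel CxHy in a stoichiometric fuel-air mixture."""
    z = x + y / 4.0               # moles of O2 needed per mole of fuel
    air = z * (1.0 + n2_per_o2)   # moles of air = O2 plus accompanying N2
    return 1.0 / (1.0 + air)      # fuel / (fuel + air)

# Methane (CH4), x = 1, y = 4: reproduces the 9.49% vol figure above.
print(round(100 * stoichiometric_fuel_fraction(1, 4), 2))  # 9.49
```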
The same approach yields stoichiometric combustion reactions for fuels that also contain oxygen, sulfur, nitrogen or fluorine (CxHyOz, CxHyOzSw, and so on), with the additional elements appearing in the corresponding oxidized products (for example, fuel sulfur leaving as SO2).
Trace combustion products
Various other substances begin to appear in significant amounts in combustion products when the flame temperature is sufficiently high. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O.
For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% .
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).
Incomplete combustion of a hydrocarbon in oxygen
The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:
\underset{fuel}{C_xH_y} + \underset{oxygen}{zO_2} \rightarrow \underset{carbon\ dioxide}{aCO_2} + \underset{carbon\ monoxide}{bCO} + \underset{water}{cH_2O} + \underset{hydrogen}{dH_2}
When z falls below roughly 50% of the stoichiometric value, CH4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.
The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:
Carbon: a + b = 3
Hydrogen: 2c + 2d = 8
Oxygen: 2a + b + c = 8
These three equations are insufficient in themselves to calculate the combustion gas composition.
However, at the equilibrium position, the water-gas shift reaction gives another equation:
CO + H_2O \rightarrow CO_2 + H_2
For example, at the temperature of interest the value of K is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at that temperature and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% CO and H2 and about 0.5% CH4.
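A minimal numerical sketch of this calculation is given below. It assumes the usual equilibrium expression K = [CO2][H2] / ([CO][H2O]) for the water-gas shift reaction as written above, uses the quoted K = 0.728, and solves the element balances for propane burned with four moles of O2; the variable names are illustrative:

```python
import math

K = 0.728  # water-gas shift equilibrium constant quoted in the text

# Unknown moles in the product gas: a = CO2, b = CO, c = H2O, d = H2 (7 mol in total).
# Element balances for C3H8 + 4 O2 (carbon, hydrogen, oxygen):
#   a + b = 3,   2c + 2d = 8,   2a + b + c = 8
# give b = 3 - a, c = 5 - a, d = a - 1.  Substituting into K = (a*d)/(b*c)
# yields a quadratic in a: (1 - K) a^2 + (8K - 1) a - 15K = 0.
A, B, C = 1 - K, 8 * K - 1, -15 * K
a = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)  # the physically meaningful (positive) root
b, c, d = 3 - a, 5 - a, a - 1
total = a + b + c + d  # = 7 mol
for name, moles in (("H2O", c), ("CO2", a), ("H2", d), ("CO", b)):
    print(f"{name}: {100 * moles / total:.1f}%")
# Prints approximately 42.4% H2O, 29.0% CO2, 14.7% H2 and 13.8% CO,
# matching the composition quoted above to within rounding.
```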
Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.
Liquid fuels
Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.
Gaseous fuels
Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.
Solid fuels
The act of combustion consists of three relatively distinct but overlapping phases:
Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation.
Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours.
Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders.
Combustion management
Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss.
In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.
The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.
Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.
Reaction mechanism
Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.
Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.
Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.
The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).
Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involve hundreds of chemical species reacting according to thousands of reactions.
The inclusion of such mechanisms within computational flow solvers still represents a very challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a disparate range of time scales, which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers.
Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:
The Relaxation Redistribution Method (RRM)
The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments
The invariant-constrained equilibrium edge preimage curve method.
A few variational approaches
The Computational Singular perturbation (CSP) method and further developments.
The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach.
The G-Scheme.
The Method of Invariant Grids (MIG).
Kinetic modelling
Kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials, using, for instance, thermogravimetric analysis.
Temperature
Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).
In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following:
the heating value;
the stoichiometric air to fuel ratio ;
the specific heat capacity of fuel and air;
the air and fuel inlet temperatures.
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.
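As a rough illustration of how these quantities combine, the following sketch estimates the adiabatic flame temperature by assuming that the whole lower heating value goes into a product gas with a constant average specific heat. This is a simplified model (it neglects dissociation and the temperature dependence of the specific heat, both of which lower the real value), and the numbers used for methane are typical round figures rather than values taken from this article:

```python
def adiabatic_flame_temp_c(lhv_kj_per_kg, afr_stoich, cp_kj_per_kg_k=1.3, t_in_c=25.0):
    """Very rough constant-cp estimate of the adiabatic flame temperature in deg C.

    lhv_kj_per_kg  : lower heating value of the fuel
    afr_stoich     : stoichiometric air-to-fuel mass ratio
    cp_kj_per_kg_k : assumed constant average specific heat of the product gas
    t_in_c         : common inlet temperature of fuel and air
    """
    products_per_kg_fuel = 1.0 + afr_stoich  # kg of product gas per kg of fuel burned
    temperature_rise = lhv_kj_per_kg / (products_per_kg_fuel * cp_kj_per_kg_k)
    return t_in_c + temperature_rise

# Methane with typical round-number properties (LHV ~ 50 MJ/kg, AFR ~ 17.2):
print(round(adiabatic_flame_temp_c(50_000, 17.2)))
# about 2140 deg C; the true value (~1960 deg C) is lower because cp rises with
# temperature and dissociation absorbs part of the heat.
```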
Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas.
In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.
Instabilities
Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of NOx emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the NOx emissions; however, running the combustion lean makes it very susceptible to combustion instability.
The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability
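In its commonly used form, the index is the time average of the product of the two fluctuating quantities over one period T of the oscillation at a point x in the combustor:

G(x) = \frac{1}{T}\int_T q'(x,t)\, p'(x,t)\, dt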
where q' is the heat release rate perturbation and p' is the pressure fluctuation.
When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.
| Physical sciences | Chemical reactions | null |
5659 | https://en.wikipedia.org/wiki/Chemical%20element | Chemical element | A chemical element is a chemical substance whose atoms all have the same number of protons. The number of protons is called the atomic number of that element. For example, oxygen has an atomic number of 8, meaning each oxygen atom has 8 protons in its nucleus. Atoms of the same element can have different numbers of neutrons in their nuclei, known as isotopes of the element. Two or more atoms can combine to form molecules. Some elements are formed from molecules of identical atoms, e. g. atoms of hydrogen (H) form diatomic molecules (H2). Chemical compounds are substances made of atoms of different elements; they can have molecular or non-molecular structure. Mixtures are materials containing different chemical substances; that means (in case of molecular substances) that they contain different types of molecules. Atoms of one element can be transformed into atoms of a different element in nuclear reactions, which change an atom's atomic number.
Historically, the term "chemical element" meant a substance that cannot be broken down into constituent substances by chemical reactions, and for most practical purposes this definition still has validity. There was some controversy in the 1920s over whether isotopes deserved to be recognized as separate elements if they could be separated by chemical means.
The term "(chemical) element" is used in two different but closely related meanings: it can mean a chemical substance consisting of a single kind of atoms, or it can mean that kind of atoms as a component of various chemical substances. For example, molecules of water (H2O) contain atoms of hydrogen (H) and oxygen (O), so water can be described as a compound consisting of the elements hydrogen (H) and oxygen (O) even though it does not contain the chemical substances (di)hydrogen (H2) and (di)oxygen (O2), as H2O molecules are different from H2 and O2 molecules. For the meaning "chemical substance consisting of a single kind of atoms", the terms "elementary substance" and "simple substance" have been suggested, but they have not gained much acceptance in English chemical literature, whereas in some other languages their equivalent is widely used. For example, French and Russian chemical terminology each distinguish between the term for the kind of atoms and the term for the chemical substance consisting of a single kind of atoms.
Almost all baryonic matter in the universe is composed of elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a few elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is mostly a mixture of molecular nitrogen and oxygen, though it does contain compounds including carbon dioxide and water, as well as atomic argon, a noble gas which is chemically inert and therefore does not undergo chemical reactions.
The history of the discovery and use of elements began with early human societies that discovered native minerals like carbon, sulfur, copper and gold (though the modern concept of an element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and similar theories throughout history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about elements not yet discovered, and potential new compounds.
By November 2016, the International Union of Pure and Applied Chemistry (IUPAC) had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radioelements) which decay quickly, nearly all elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study.
Description
The lightest elements are hydrogen and helium, both created by Big Bang nucleosynthesis in the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay.
Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10^19 years, over a billion times longer than the estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any isotope, and is almost always considered on par with the 80 stable elements. The heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.
There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium.
The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: all are radioactive, with short half-lives; if any of these elements were present at the formation of Earth, they are certain to have completely decayed, and if present in novae, are in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, though trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements.
Lists of the elements are available by name, atomic number, density, melting point, boiling point and chemical symbol, as well as by ionization energy. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures).
Atomic number
The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element.
The number of protons in the nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties, except for hydrogen (for which the kinetic isotope effect is significant). Thus, all carbon isotopes have nearly identical chemical properties because they all have six electrons, even though they may have 6 to 8 neutrons. That is why atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of an element.
The symbol for atomic number is Z.
Isotopes
Isotopes are atoms of the same element (that is, with the same number of protons in their nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, said three isotopes are known as carbon-12, carbon-13, and carbon-14 (12C, 13C, and 14C). Natural carbon is a mixture of 12C (about 98.9%), 13C (about 1.1%) and about 1 atom per trillion of 14C.
Most (54 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable.
All elements have radioactive isotopes (radioisotopes); most of these radioisotopes do not occur naturally. Radioisotopes typically decay into other elements via alpha decay, beta decay, or inverse beta decay; some isotopes of the heaviest elements also undergo spontaneous fission. Isotopes that are not radioactive, are termed "stable" isotopes. All known stable isotopes occur naturally (see primordial nuclide). The many radioisotopes that are not found in nature have been characterized after being artificially produced. Certain elements have no stable isotopes and are composed only of radioisotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic number greater than 82.
Of the 80 elements with at least one stable isotope, 26 have only one stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes for a single element is 10 (for tin, element 50).
Isotopic mass and atomic mass
The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass number, which is written as a superscript on the left hand side of the chemical symbol (e.g., ²³⁵U for uranium-235). The mass number is always an integer and has units of "nucleons". Thus, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons).
Whereas the mass number simply counts the total number of neutrons and protons and is thus an integer, the atomic mass of a particular isotope (or "nuclide") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), or universal atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs in value slightly from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is ¹²C, which has a mass of 12 Da, because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state.
The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than ~1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element.
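As a worked illustration of this weighted averaging, the short Python sketch below recomputes the chlorine figure from the isotopic masses quoted above; the abundance values 75.77% and 24.23% are assumed here as a more precise version of the "about 76%/24%" split mentioned in the text.

# Standard atomic weight as an abundance-weighted average of isotopic masses.
# Masses (Da) are those quoted in the text; the abundances are assumed values
# close to the "about 76% / 24%" split described above.
isotopes = {
    "Cl-35": (34.969, 0.7577),   # (atomic mass in Da, fractional abundance)
    "Cl-37": (36.966, 0.2423),
}

standard_atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
print(f"standard atomic weight of Cl ~ {standard_atomic_weight:.3f} u")  # ~35.453 u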
Chemically pure and isotopically pure
Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one isotope.
For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However it is not isotopically pure since ordinary copper consists of two stable isotopes, 69% ⁶³Cu and 31% ⁶⁵Cu, with different numbers of neutrons. However, pure gold would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, ¹⁹⁷Au.
Allotropes
Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'.
The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically at 298.15 K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope, and the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state.
Properties
Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins.
General properties
Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group (the metalloids), which have intermediate properties and often behave as semiconductors.
A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals.
States of matter
Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at standard temperature and pressure (STP). Most elements are solids at STP, while several are gases. Only bromine and mercury are liquid at 0 degrees Celsius (32 degrees Fahrenheit) and 1 atmosphere pressure; caesium and gallium are solid at that temperature, but melt at 28.4°C (83.2°F) and 29.8°C (85.6°F), respectively.
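The Fahrenheit figures above follow from the standard conversion °F = °C × 9/5 + 32; a minimal Python check using the melting points quoted in the text (any small discrepancy is only rounding of the quoted values):

def celsius_to_fahrenheit(celsius):
    # Standard Celsius-to-Fahrenheit conversion.
    return celsius * 9 / 5 + 32

for element, melting_point_c in [("caesium", 28.4), ("gallium", 29.8)]:
    print(f"{element}: {melting_point_c} °C = {celsius_to_fahrenheit(melting_point_c):.1f} °F")
# caesium: 28.4 °C = 83.1 °F (quoted above as 83.2 °F after rounding)
# gallium: 29.8 °C = 85.6 °F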
Melting and boiling points
Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations.
Densities
The density at selected standard temperature and pressure (STP) is often used in characterizing the elements. Density is often expressed in grams per cubic centimetre (g/cm³). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.
When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively.
Crystal structures
The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures.
Occurrence and origin on Earth
Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially via human-made nuclear reactions.
Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The longest-lived isotopes of the remaining 11 elements have half lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, five (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining six transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements.
Elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium), each have at least one isotope for which no radioactive decay has been observed. Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with extremely long half-lives: the half-lives predicted for the observationally stable lead isotopes, for example, are many orders of magnitude longer than the current age of the universe. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can be detected. Three of these elements, bismuth (element 83), thorium (90), and uranium (92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10¹⁹ years, over a billion times longer than the estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any isotope. The last 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.
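As a rough sanity check of the comparison above, the ratio of bismuth-209's half-life to the age of the universe can be computed directly; the sketch below (Python) assumes approximate values of 1.9 × 10¹⁹ years and 1.38 × 10¹⁰ years respectively.

# Rough check that bismuth-209's alpha-decay half-life exceeds the age of
# the universe by more than a factor of a billion (both values approximate).
half_life_bi209_years = 1.9e19
age_of_universe_years = 1.38e10

ratio = half_life_bi209_years / age_of_universe_years
print(f"half-life / age of universe ~ {ratio:.2e}")  # ~1.4e9, i.e. over a billion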
Periodic table
The properties of the elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The table contains 118 confirmed elements as of 2021.
Although earlier precursors to this presentation exist, its invention is generally credited to Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.
Use of the periodic table is now ubiquitous in chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering.
Nomenclature and symbols
The various chemical elements are formally identified by their unique atomic numbers, their accepted names, and their chemical symbols.
Atomic numbers
The known elements have atomic numbers from 1 to 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", though the atomic masses of the elements do not always increase monotonically with their atomic numbers.
Element names
The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, though at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the element names either for convenience, linguistic niceties, or nationalism. For example, German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen"; English and some other languages use "sodium" for "natrium", and "potassium" for "kalium"; and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen".
For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over British "sulphur". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names.
According to IUPAC, element names are not proper nouns; therefore, the full name of an element is not capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below).
In the second half of the 20th century, physics laboratories became able to produce elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy).
Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950).
Chemical symbols
Specific elements
Before chemistry became a science, alchemists designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules.
The current system of chemical notation was invented by Jöns Jacob Berzelius in 1814. In this system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets.
Since Latin was the common language of science at Berzelius' time, his symbols were abbreviations based on the Latin names of elements (they may be Classical Latin names of elements known since antiquity or Neo-Latin coinages for later elements). The symbols are not followed by a period (full stop) as with abbreviations. In most cases, Latin names of elements as used by Berzelius have the same roots as the modern English name. For example, hydrogen has the symbol "H" from Neo-Latin hydrogenium, which has the same Greek roots as English hydrogen. However, in eleven cases Latin (as used by Berzelius) and English names of elements have different roots. Eight of them are the seven metals of antiquity and a metalloid also known since antiquity: "Fe" (Latin ferrum) for iron, "Hg" (Latin hydrargyrum) for mercury, "Sn" (Latin stannum) for tin, "Au" (Latin aurum) for gold, "Ag" (Latin argentum) for silver, "Pb" (Latin plumbum) for lead, "Cu" (Latin cuprum) for copper, and "Sb" (Latin stibium) for antimony. The three other mismatches between Neo-Latin (as used by Berzelius) and English names are "Na" (Neo-Latin natrium) for sodium, "K" (Neo-Latin kalium) for potassium, and "W" (Neo-Latin wolframium) for tungsten. These mismatches came from different suggestions for naming the elements in the Modern era. Initially Berzelius had suggested "So" and "Po" for sodium and potassium, but he changed the symbols to "Na" and "K" later in the same year.
Elements discovered after 1814 were also assigned unique chemical symbols, based on the name of the element. The use of Latin as the universal language of science was fading, but chemical names of newly discovered elements came to be borrowed from language to language with little or no modification. Symbols of elements discovered after 1814 match their names in English, French (ignoring the acute accent on ⟨é⟩), and German (though German often allows alternate spellings with ⟨c⟩ or ⟨z⟩ instead of ⟨k⟩: e.g., the name of calcium may be spelled Calcium or Kalzium in German, but its symbol is always "Ca"). Other languages sometimes modify element name spellings: Spanish iterbio (ytterbium), Italian afnio (hafnium), Swedish moskovium (moscovium); but those modifications do not affect chemical symbols: Yb, Hf, Mc.
Chemical symbols are understood internationally when element names might require translation. There have been some differences in the past. For example, Germans in the past have used "J" (for the name Jod) for iodine, but now use "I" and Iod.
The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case. Thus, the symbols for californium and einsteinium are Cf and Es.
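A minimal sketch of this capitalisation convention as a validation check is shown below (Python); the regular expression is an illustration of the rule described here, not an official IUPAC specification, and it does not check whether a symbol is actually assigned to a known element.

import re

# One upper-case letter followed by up to two lower-case letters
# (covering one-letter symbols like "H", two-letter symbols like "Cf",
# and three-letter systematic placeholders such as "Uue").
SYMBOL_PATTERN = re.compile(r"^[A-Z][a-z]{0,2}$")

def looks_like_element_symbol(text):
    # Checks only the capitalisation/length convention, not assignment.
    return bool(SYMBOL_PATTERN.match(text))

print(looks_like_element_symbol("Cf"))   # True
print(looks_like_element_symbol("cf"))   # False: must start with a capital letter
print(looks_like_element_symbol("CF"))   # False: subsequent letters are lower case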
General chemical symbols
There are also symbols in chemical equations for groups of elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, though it is also the symbol of yttrium. "Z" is also often used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal.
At least two other, two-letter generic chemical symbols are also in informal use, "Ln" for any lanthanide and "An" for any actinide. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and "Rg" now refers to roentgenium.
Isotope symbols
Isotopes of an element are distinguished by mass number (total protons and neutrons), with this number combined with the element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example ¹²C and ²³⁵U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used.
As a special case, the three naturally occurring isotopes of hydrogen are often specified as H for ¹H (protium), D for ²H (deuterium), and T for ³H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number each time. Thus, the formula for heavy water may be written D2O instead of ²H₂O.
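As a small illustration of these notations, the sketch below (Python) builds the hyphenated form (e.g. U-235) from an element name and mass number, and substitutes the special one-letter symbols for the hydrogen isotopes; the lookup table contains only a few example entries and is not a complete element list.

# Illustrative only: a tiny symbol table, not a complete list of elements.
SYMBOLS = {"hydrogen": "H", "carbon": "C", "uranium": "U"}
HYDROGEN_SPECIAL = {1: "H", 2: "D", 3: "T"}  # protium, deuterium, tritium

def isotope_label(name, mass_number):
    """Return e.g. 'U-235', or the one-letter symbol for a hydrogen isotope."""
    if name == "hydrogen" and mass_number in HYDROGEN_SPECIAL:
        return HYDROGEN_SPECIAL[mass_number]
    return f"{SYMBOLS[name]}-{mass_number}"

print(isotope_label("uranium", 235))   # U-235
print(isotope_label("carbon", 12))     # C-12
print(isotope_label("hydrogen", 2))    # D (deuterium)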
Origin of the elements
Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy).
The 94 naturally occurring elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen.
In the early phases of the Big Bang, nucleosynthesis of hydrogen resulted in the production of hydrogen-1 (protium, ¹H) and helium-4 (⁴He), as well as a smaller amount of deuterium (²H) and tiny amounts (on the order of 10⁻¹⁰ relative to hydrogen) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% hydrogen and 25% helium by mass, with 0.01% deuterium and only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means.
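Because these percentages are mass fractions, the corresponding share of atoms can be estimated by dividing each by the approximate atomic mass (1 for ¹H, 4 for ⁴He); a rough Python sketch of that arithmetic:

# Convert the approximate primordial mass fractions into fractions of atoms.
mass_fraction = {"H": 0.75, "He": 0.25}
atomic_mass = {"H": 1.0, "He": 4.0}    # approximate masses of 1H and 4He

relative_atoms = {el: mass_fraction[el] / atomic_mass[el] for el in mass_fraction}
total = sum(relative_atoms.values())
for el, n in relative_atoms.items():
    print(f"{el}: ~{100 * n / total:.0f}% of atoms")
# H: ~92% of atoms, He: ~8% of atoms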
On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (¹⁴C) are continually produced in the air by cosmic rays impacting nitrogen atoms, and argon-40 (⁴⁰Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (⁴⁰K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable elements such as radium and radon, which are transiently present in any sample containing these metals. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes.
Besides the 94 naturally occurring elements, several artificial elements have been produced by nuclear physics technology. By 2016, these experiments had produced all elements up to atomic number 118.
Abundance
The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the 12 most common elements in our galaxy (estimated spectroscopically), as measured in parts per million by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance.
The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas. Iron-56 is particularly common, since it is the most stable nuclide that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number.
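The buildup from alpha particles described above can be sketched by repeatedly adding two protons and two neutrons, which is why elements with even atomic numbers dominate; the element symbols in the small Python example below are included only to illustrate that pattern.

# Walk the "alpha ladder" from carbon-12 up to nickel-56 by repeatedly
# adding an alpha particle (Z += 2, A += 4). Symbols are for illustration.
symbols = {6: "C", 8: "O", 10: "Ne", 12: "Mg", 14: "Si", 16: "S",
           18: "Ar", 20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}

z, a = 6, 12                      # start at carbon-12
while z <= 28:
    print(f"{symbols[z]}-{a} (Z={z})")
    z, a = z + 2, a + 4           # capture one more alpha particle
# Every nuclide reached this way has an even atomic number; nickel-56, the
# last rung printed, later decays to the especially abundant iron-56.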
The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and massive planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the Solar System. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass) more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core.
The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrates' red blood cells.
History
Evolving definitions
The concept of an "element" as an indivisible substance has developed through three major historical phases: Classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions.
Classical definitions
Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science.
The term 'elements' (stoicheia) was first used by Greek philosopher Plato around 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).
Aristotle also used the term stoicheia and added a fifth element, aether, which formed the heavens. He defined an element as one of the simple bodies into which other bodies can be decomposed and which cannot itself be divided into others.
Chemical definitions
Robert Boyle
In 1661, in The Sceptical Chymist, Robert Boyle proposed his theory of corpuscularism which favoured the analysis of matter as constituted of irreducible units of matter (atoms); and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. Boyle argued against a pre-determined number of elements—directly against Paracelsus' three principles (sulfur, mercury, and salt), indirectly against the "Aristotelian" elements (earth, water, air, and fire), for Boyle felt that the arguments against the former were at least as valid against the latter.
Then Boyle stated his view in four propositions. In the first and second, he suggests that matter consists of particles, but that these particles may be difficult to separate. Boyle used the concept of "corpuscles"—or "atomes", as he also called them—to explain how a limited number of elements could combine into a vast number of compounds.
Boyle explained that gold reacts with aqua regia, and mercury with nitric acid, sulfuric acid, and sulfur to produce various "compounds", and that they could be recovered from those compounds, just as would be expected of elements. Yet, Boyle did not consider gold, mercury, or lead elements, but rather—together with wine—"perfectly mixt bodies".
Even though Boyle is primarily regarded as the first modern chemist, The Sceptical Chymist still contains old ideas about the elements, alien to a contemporary viewpoint. Sulfur, for example, is not only the familiar yellow non-metal but also an inflammable "spirit".
Isaac Watts
In 1724, in his book Logick, the English minister and logician Isaac Watts enumerated the elements then recognized by chemists. Watts' list of elements included two of Paracelsus' principles (sulfur and salt) and two classical elements (earth and water) as well as "spirit". Watts did, however, note a lack of consensus among chemists.
Antoine Lavoisier, Jöns Jacob Berzelius, and Dmitri Mendeleev
The first modern list of elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained 33 elements, including light and caloric. By 1818, Jöns Jacob Berzelius had determined atomic weights for 45 of the 49 then-accepted elements. Dmitri Mendeleev had 63 elements in his 1869 periodic table.
From Boyle until the early 20th century, an element was defined as a pure substance that cannot be decomposed into any simpler substance and cannot be transformed into other elements by chemical processes. Elements at the time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques.
Atomic definitions
The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for the atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers) and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10⁻¹⁴ seconds it takes the nucleus to form an electronic cloud.
By 1914, eighty-seven elements were known, all naturally occurring (see Discovery of chemical elements). The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D. I. Mendeleev, the first to arrange the elements periodically.
Discovery and recognition of various elements
Ten materials familiar to various prehistoric cultures are now known to be elements: Carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances before 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750.
Most of the remaining naturally occurring elements were identified and characterized by 1900, including:
Such now-familiar industrial materials as aluminium, silicon, nickel, chromium, magnesium, and tungsten
Reactive metals such as lithium, sodium, potassium, and calcium
The halogens fluorine, chlorine, bromine, and iodine
Gases such as hydrogen, oxygen, nitrogen, helium, argon, and neon
Most of the rare-earth elements, including cerium, lanthanum, gadolinium, and neodymium
The more common radioactive elements, including uranium, thorium, and radium
Elements isolated or produced since 1900 include:
The three remaining undiscovered stable elements: hafnium, lutetium, and rhenium
Plutonium, which was first produced synthetically in 1940 by Glenn T. Seaborg, but is now also known from a few long-persisting natural occurrences
The three incidentally occurring natural elements (neptunium, promethium, and technetium), which were all first produced synthetically but later discovered in trace amounts in geological samples
Four scarce decay products of uranium or thorium (astatine, francium, actinium, and protactinium), and
All synthetic transuranic elements, beginning with americium and curium
Recently discovered elements
The first transuranium element (element with an atomic number greater than 92) discovered was neptunium in 1940. Since 1999, the IUPAC/IUPAP Joint Working Party has considered claims for the discovery of new elements. As of January 2016, all 118 elements have been confirmed as discovered by IUPAC. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the chemical symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element that is believed to have been synthesized to date is element 118, oganesson, first produced on 9 October 2006 by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, IUPAC officially recognized the names for the four newest elements, with atomic numbers 113, 115, 117, and 118.
List of the 118 known chemical elements
The following sortable table shows the 118 known elements.
Atomic number, Element, and Symbol all serve independently as unique identifiers.
Element names are those accepted by IUPAC.
Block indicates the periodic table block for each element: red = s-block, yellow = p-block, blue = d-block, green = f-block.
Group and period refer to an element's position in the periodic table. Group numbers here show the currently accepted numbering; for older numberings, see Group (periodic table).
| Physical sciences | Science and medicine | null |
5667 | https://en.wikipedia.org/wiki/Chlorine | Chlorine | Chlorine is a chemical element; it has symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine.
Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and aqua regia. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός (chlōros, "pale green") because of its colour.
Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen (after fluorine) and 20th most abundant element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater.
Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chloralkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and to its use as a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon.
In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria.
History
The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC.
Early discoveries
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi ( 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts", an eleventh- or twelfth century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont.
Isolation
The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl:
4 HCl + MnO2 → MnCl2 + 2 H2O + Cl2
Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green colour, and the smell similar to aqua regia. He called it "dephlogisticated muriatic acid air" since it is a gas (then called "airs") and it came from hydrochloric acid (then known as "muriatic acid"). He failed to establish chlorine as an element.
Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: Sauerstoff and zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum.
In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced.
In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element "chlorine", from the Greek word χλωρος (chlōros, "green-yellow"), in reference to its colour. The name "halogen", meaning "salt producer", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as "solid chlorine" had a structure of chlorine hydrate (Cl2·H2O).
Later uses
Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite ("chlorinated lime"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 the Chemische Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888.
Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's "Javel water" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water.
Chlorine gas was first used as a weapon on April 22, 1915, at the Second Battle of Ypres by the German Army. The effect on the allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed.
Properties
Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s²3p⁵, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.)
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless.
Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable.
Isotopes
Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0×10⁵ y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine.
The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl.
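As a short numerical illustration of what a half-life of roughly 3.0 × 10⁵ years means, the Python sketch below evaluates the standard decay law N(t)/N0 = (1/2)^(t / t_half) for a few time spans; the half-life value is the one quoted above.

# Fraction of chlorine-36 remaining after t years, given its ~3.0e5-year half-life.
HALF_LIFE_CL36_YEARS = 3.0e5

def fraction_remaining(t_years, half_life=HALF_LIFE_CL36_YEARS):
    # Exponential decay law expressed in terms of the half-life.
    return 0.5 ** (t_years / half_life)

for t in (3.0e5, 1.0e6, 3.0e6):
    print(f"after {t:.1e} years: {fraction_remaining(t):.3%} remaining")
# after one half-life: 50%; after 1 million years: ~9.9%; after 3 million years: ~0.1%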
Chemistry and compounds
Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds.
Given that E°(O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine.
Hydrogen chloride
The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process:
NaCl + H2SO4 → NaHSO4 + HCl
NaCl + NaHSO4 → Na2SO4 + HCl
In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O).
At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen-chlorine bonds are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation.
Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and HCl2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates nucleophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution:
Ph3SnCl + HCl ⟶ Ph2SnCl2 + PhH (solvolysis)
Ph3COH + 3 HCl ⟶ Ph3C+HCl2− + H3O+Cl− (solvolysis)
Me4N+HCl2− + BCl3 ⟶ Me4N+BCl4− + HCl (ligand replacement)
PCl3 + Cl2 + HCl ⟶ PCl4+HCl2− (oxidation)
Other binary chlorides
Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though nitrogen in NCl3 is bearing a negative charge, the compound is usually called nitrogen trichloride.
Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows:
2 EuCl3 + H2 ⟶ 2 EuCl2 + 2 HCl
ReCl5 ⟶ ReCl3 + Cl2
AuCl3 ⟶ AuCl + Cl2
Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). Silver chloride is very insoluble in water and is thus often used as a qualitative test for chlorine.
Polychlorine compounds
Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the Cl2+ cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow Cl3+ cation is more stable and may be produced as follows:
Cl2 + ClF + AsF5 ⟶ [Cl3]+[AsF6]−
This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, , has also been characterised; it is analogous to triiodide.
Chlorine fluorides
The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as ClF2−, ClF4−, ClF2+, and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3).
Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water:
H2O + 2 ClF ⟶ 2 HF + Cl2O
Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, it sets a diverse list of elements on fire, including hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals such as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engines, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enrichment and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2+ and ClF4− ions.
Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows:
2 H2O + ClF5 ⟶ 4 HF + FClO2
The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents.
Chlorine oxides
The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements.
Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas.
Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows:
2 ClO3− + 2 Cl− + 4 H+ ⟶ 2 ClO2 + Cl2 + 2 H2O
Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules results in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows:
Cl• + O3 ⟶ ClO• + O2
ClO• + O• ⟶ Cl• + O2
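Summing these two steps shows that the chlorine atom is regenerated and therefore acts catalytically; the net change per cycle is the destruction of odd oxygen:
O3 + O• ⟶ 2 O2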
Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion.
Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides.
Chlorine oxoacids and oxyanions
Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As can be seen from the redox potentials given in the adjacent table, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions:
Cl2 + H2O ⇌ HOCl + H+ + Cl−   Kac = 4.2 × 10⁻⁴ mol² l⁻²
Cl2 + 2 OH− ⇌ OCl− + H2O + Cl−   Kalk = 7.5 × 10¹⁵ mol⁻¹ l
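These equilibrium constants are related to the underlying standard electrode potentials by the general thermodynamic identity
ΔG° = −nFE°cell = −RT ln K, i.e. log10 K ≈ nE°cell/0.0592 at 25 °C,
so each 0.0592 V of cell potential for a one-electron process corresponds to roughly one power of ten in K; modest differences between the potentials in acidic and alkaline media therefore translate into the many orders of magnitude separating Kac and Kalk.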
The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⟶ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 10²⁷. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⟶ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 10²⁰. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases.
Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows:
ClO3− + 5 Cl− + 6 H+ ⟶ 3 Cl2 + 3 H2O
Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature, a kinetic effect arising from the high activation energies of these reactions. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering off the precipitated chloride and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is shock-sensitive, explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire, and even oxidises silver and gold. Although perchlorate is a weak ligand, weaker than water, a few compounds involving coordinated ClO4− are known. The table below presents typical oxidation states for chlorine as commonly taught in secondary schools and colleges. More complex compounds exist whose structures can only be explained using modern quantum chemical methods, for example the cluster technetium chloride [(CH3)4N]3[Tc6Cl14], in which six of the fourteen chlorine atoms are formally divalent and the oxidation states are fractional. In addition, all the above chemical regularities hold for "normal" or near-normal conditions, while at ultra-high pressures (for example, in the cores of large planets) chlorine can exhibit an oxidation state of −3, forming the compound Na3Cl with sodium, which does not fit into traditional concepts of chemistry.
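A representative summary of these typical oxidation states, with species chosen here only as illustrations, is:
−1: chlorides (Cl−, as in NaCl and HCl)
0: elemental chlorine (Cl2)
+1: hypochlorous acid and hypochlorites (HOCl, ClO−)
+3: chlorous acid and chlorites (HOClO, ClO2−)
+4: chlorine dioxide (ClO2)
+5: chloric acid and chlorates (HOClO2, ClO3−)
+7: perchloric acid and perchlorates (HOClO3, ClO4−)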
Organochlorine compounds
Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group.
Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones, and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out.
Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes.
Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Due to the ready homolytic fission of the C–Cl bond to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out due to the harm they do to the ozone layer.
Occurrence
Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the 20th most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel.
Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation:
2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH
Production
Chlorine is primarily produced by the chloralkali process, although non-chloralkali processes exist. Global 2022 production was estimated to be 97 million tonnes. The most visible use of chlorine is in water disinfection. About 35–40% of chlorine produced is used to make poly(vinyl chloride) through ethylene dichloride and vinyl chloride. The chlorine produced is available in cylinders ranging in size from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes).
Due to the difficulty and hazards in transporting elemental chlorine, production is typically located near where it is consumed. As examples, vinyl chloride producers such as Westlake Chemical and Formosa Plastics have integrated chloralkali assets.
Chloralkali processes
The electrolysis of chloride solutions proceeds according to the following equations:
Cathode: 2 H2O + 2 e− → H2 + 2 OH−
Anode: 2 Cl− → Cl2 + 2 e−
In the conventional case where sodium chloride is electrolyzed, sodium hydroxide and chlorine are coproducts.
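As an illustrative estimate, not taken from the sources above, Faraday's law fixes the minimum charge, and hence a lower bound on the electrical energy, needed per tonne of chlorine; the 3.0 V cell voltage assumed below is a hypothetical round figure, since real operating voltages vary with cell type and current density.
# Sketch: lower bound on electrolysis energy per tonne of Cl2 (illustrative assumptions).
F = 96485.0                  # Faraday constant, C/mol
M_CL2 = 70.90                # molar mass of Cl2, g/mol
moles_cl2 = 1.0e6 / M_CL2    # moles of Cl2 in one tonne
charge = 2 * moles_cl2 * F   # two electrons are transferred per Cl2 molecule formed
cell_voltage = 3.0           # assumed operating voltage in volts (hypothetical)
energy_kwh = charge * cell_voltage / 3.6e6   # convert joules to kilowatt-hours
print(round(energy_kwh))     # roughly 2300 kWh per tonne under these assumptions
Actual plants consume more than this lower bound because of overpotentials and resistive losses, which is where the differences between the three cell designs show up.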
Industrially, there are three chloralkali processes:
The Castner–Kellner process that utilizes a mercury electrode
The diaphragm cell process that utilizes an asbestos diaphragm that separates the cathode and anode
The membrane cell process that uses an ion exchange membrane in place of the diaphragm
The Castner–Kellner process was the first method used at the end of the nineteenth century to produce chlorine on an industrial scale. Mercury, which is toxic, was used as an electrode to amalgamate the sodium product, preventing undesirable side reactions.
In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient.
Membrane cell electrolysis employs a permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving it at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations.
Because the membrane process has lower energy requirements and avoids the use of large volumes of mercury, new chloralkali installations almost exclusively employ it, and older plants are being converted to the membrane process.
Non-chloralkali processes
In the Deacon process, hydrogen chloride recovered from the production of organochlorine compounds is converted back into chlorine. The process relies on oxidation using oxygen:
4 HCl + O2 → 2 Cl2 + 2 H2O
The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts.
Applications
Sodium chloride is the most common chlorine compound and is the main source of chlorine for the chemical industry's demand. About 15,000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methanes and ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, and the chlorides of magnesium, titanium, zirconium, and hafnium, which are precursors for producing the pure forms of those elements.
Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, and 18% in the manufacture of inorganic chlorine compounds. About 15,000 chlorine compounds are used commercially. The remaining 19% of chlorine produced is used for bleaches and disinfection products. The most significant of organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2.
Sanitation, disinfection, and antisepsis
Combating putrefaction
In France (as elsewhere), animal intestines were processed to make musical instrument strings, Goldbeater's skin and other products. This was done in "gut factories" (boyauderies), and it was an odiferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions ("Eau de Javel") not only destroyed the smell of putrefaction of animal tissue decomposition, but also actually retarded the decomposition.
Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle.
Disinfection
Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called "contagious infection", presumed to be transmitted by "miasmas"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of "contagious infection". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later.
During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These "putrid miasmas" were thought by many to cause the spread of "contagion" and "infection" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and "putrid matter". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England.
Semmelweis and experiments with antisepsis
Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that "cadaveric particles" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known "Labarraque's solutions" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever ("puerperal fever") in the maternity wards of Vienna General Hospital in Austria in 1847.
Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals).
Public sanitation
The first continuous application of chlorination to U.S. drinking water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated.
Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive "chlorine aroma" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination.
It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on.
Use as a weapon
World War I
Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. It was pioneered by a German scientist later to be a Nobel laureate, Fritz Haber of the Kaiser Wilhelm Institute in Berlin, in collaboration with the German chemical conglomerate IG Farben, which developed methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas.
Middle East
Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population.
On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul.
Another country in the Middle East, Syria, has used chlorine as a chemical weapon delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government used chlorine as a chemical weapon in three separate attacks. Later investigations from the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018.
Biological role
The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt, or sodium chloride. Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities. It is sometimes associated with hypoventilation. It can be associated with chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms. When symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration which results in cerebral edema. Hyperchloremia can affect oxygen transport.
Hazards
Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials.
Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl).
When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. Other materials present in the water may generate disinfection by-products that are associated with negative effects on human health.
In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m3. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes.
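The equivalence between the two limits quoted above can be checked with the usual ideal-gas conversion mg/m3 ≈ ppm × molar mass / 24.45 at 25 °C and 1 atm; the short sketch below is only illustrative.
# Convert a gas-phase chlorine concentration from ppm (by volume) to mg/m3, assuming ideal-gas behaviour.
M_CL2 = 70.90            # molar mass of Cl2, g/mol
MOLAR_VOLUME_L = 24.45   # litres per mole at 25 degrees C and 1 atm
def ppm_to_mg_per_m3(ppm):
    return ppm * M_CL2 / MOLAR_VOLUME_L
print(round(ppm_to_mg_per_m3(1.0), 1))   # about 2.9, consistent with the quoted 3 mg/m3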
In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals.
Chlorine-induced cracking in structural materials
Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s.
Chlorine-iron fire
The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel.
Calcium
Calcium is a chemical element; it has symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilized remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone.
Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries.
Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca2+) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation.
Characteristics
Classification
Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, with electron configuration [Ar]4s2. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon.
Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca2+ cation compared to the hypothetical Ca+ cation.
Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them.
Physical properties
Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium and barium; at higher temperatures it changes to a body-centered cubic arrangement. Its density of 1.526 g/cm3 (at 20 °C) is the lowest in its group.
Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered.
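The volume-versus-mass comparison can be made concrete with rounded handbook-style values; the conductivities and densities below are approximate assumptions, not figures from this article.
# Compare electrical conductance per unit mass (conductivity / density) for calcium, aluminium, and copper.
conductivity_S_per_m = {"Ca": 2.9e7, "Al": 3.8e7, "Cu": 6.0e7}   # approximate room-temperature values
density_kg_per_m3 = {"Ca": 1550.0, "Al": 2700.0, "Cu": 8960.0}   # approximate densities
for metal in conductivity_S_per_m:
    ratio = conductivity_S_per_m[metal] / density_kg_per_m3[metal]
    print(metal, round(ratio))   # calcium comes out highest on this per-mass measure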
Chemical properties
The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. Bulk calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature.
Besides the simple oxide CaO, calcium peroxide, CaO2, can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution.
Due to the large size of the calcium ion (Ca2+), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and in removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed.
Though calcium is in the same group as magnesium and organomagnesium compounds are very widely used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, though they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb2+ (102 pm) and Ca2+ (100 pm).
Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favour stability. For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability.
Isotopes
Natural calcium is a mixture of five stable isotopes (40Ca, 42Ca, 43Ca, 44Ca, and 46Ca) and one isotope with a half-life so long that it is for all practical purposes stable (48Ca, with a half-life of about 4.3 × 10¹⁹ years). Calcium is the first (lightest) element to have six naturally occurring isotopes.
By far the most common isotope of calcium in nature is 40Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial 40K. Adding another alpha particle leads to unstable 44Ti, which decays via two successive electron captures to stable 44Ca; this makes up 2.086% of all natural calcium and is the second-most common isotope.
The other four natural isotopes, 42Ca, 43Ca, 46Ca, and 48Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. 46Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived 45Ca to capture a neutron. 48Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival.
46Ca and 48Ca are the first "classically stable" nuclides with a 6-neutron or 8-neutron excess respectively. Although extremely neutron-rich for such a light element, 48Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to 48Sc is very hindered because of the gross mismatch of nuclear spin: 48Ca has zero nuclear spin, being even–even, while 48Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of 48Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when 48Ca does decay, it does so by double beta decay to 48Ti instead, being the lightest nuclide known to undergo double beta decay.
46Ca can also theoretically undergo double beta decay to 46Ti, but this has never been observed. The most common isotope 40Ca is also doubly magic and could undergo double electron capture to 40Ar, but this has likewise never been observed. Calcium is the only element with two primordial doubly magic isotopes. The experimental lower limits for the half-lives of 40Ca and 46Ca are 5.9 × 10²¹ years and 2.8 × 10¹⁵ years respectively.
Apart from the practically stable 48Ca, the longest lived radioisotope of calcium is 41Ca. It decays by electron capture to stable 41K with a half-life of about 10⁵ years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of 41K: traces of 41Ca also still exist today, as it is a cosmogenic nuclide, continuously produced through neutron activation of natural 40Ca.
Many other calcium radioisotopes are known, ranging from 35Ca to 60Ca. They are all much shorter-lived than 41Ca, the most stable being 45Ca (half-life 163 days) and 47Ca (half-life 4.54 days). Isotopes lighter than the stable isotopes usually undergo beta plus decay to isotopes of potassium, and those heavier usually undergo beta minus decay to isotopes of scandium, though near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well.
Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually 44Ca/40Ca) in a sample compared to the same ratio in a standard reference material. 44Ca/40Ca varies by about 1–2‰ among organisms on Earth.
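In the delta notation conventionally used for such measurements (the choice of reference standard varies between laboratories), the ratio is reported as
δ44/40Ca (‰) = [(44Ca/40Ca)sample / (44Ca/40Ca)standard − 1] × 1000,
so the 1–2‰ spread quoted above corresponds to differences of only about 0.1–0.2% in the raw 44Ca/40Ca ratio.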
History
Calcium compounds were known for millennia, though their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia.
About the same time, gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx "lime".
Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognized by the ancient Romans.
In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths", i.e. ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide). About these "elements", Lavoisier reasoned that they might not be simple substances at all, but rather the oxides of metals that had yet to be isolated.
Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later.
Occurrence and production
At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH, Cl, or F).
The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year.
In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures.
Geochemical cycling
Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, mountain-building exposes calcium-bearing rocks such as basalt and granodiorite to chemical weathering and releases Ca2+ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are termed "dissolved inorganic carbon" (DIC).
The actual reaction is more complicated and involves the bicarbonate ion (HCO3−) that forms when CO2 reacts with water at seawater pH:
Ca2+ + 2 HCO3− ⟶ CaCO3 + CO2 + H2O
At seawater pH, most of the dissolved CO2 is immediately converted back into HCO3−. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca2+ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate.
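Using the idealised calcium silicate CaSiO3 as a stand-in for the calcium-bearing rocks mentioned above (the real minerals are more complex), the weathering–precipitation couple can be summarised as:
CaSiO3 + 2 CO2 + H2O ⟶ Ca2+ + 2 HCO3− + SiO2 (weathering)
Ca2+ + 2 HCO3− ⟶ CaCO3 + CO2 + H2O (precipitation)
Net: CaSiO3 + CO2 ⟶ CaCO3 + SiO2
which makes explicit that one CO2 molecule ends up in carbonate rock for each calcium ion weathered.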
Applications
The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging.
Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys. These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains.
Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, vanadium and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted.
Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral.
In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the 44Ca/40Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis.
A similar system exists in seawater, where 44Ca/40Ca tends to rise when the rate of removal of Ca2+ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater 44Ca/40Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca2+ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle.
Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins.
Calcium is on the World Health Organization's List of Essential Medicines.
Food sources
Foods rich in calcium include dairy products such as milk and yogurt, cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals.
Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs.
Biological and pathological role
Function
Calcium is an essential element needed in large quantities. The Ca2+ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone in the form of hydroxyapatite; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca2+ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton.
Binding
Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third.
Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface.
Solubility
As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM.
Nutrition
Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability, which varies with the solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less so. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys.
Hormonal regulation of bone formation and serum levels
Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and parathyroid hormone is secreted. The hormone then stimulates the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells; its bone-forming action is antagonized by calcitonin, whose secretion increases with increasing plasma calcium levels.
Abnormal serum levels
Excess intake of calcium may cause hypercalcemia. However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease.
Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue.
Bone disease
As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia.
Safety
Metallic calcium
Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects.
| Physical sciences | Chemical elements_2 | null |
5669 | https://en.wikipedia.org/wiki/Chromium | Chromium | Chromium is a chemical element; it has symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal.
Chromium is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored.
Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium.
Trivalent chromium (Cr(III)) occurs naturally in many foods and is sold as a dietary supplement, although there is insufficient evidence that dietary chromium provides nutritional benefit to people. In 2014, the European Food Safety Authority concluded that research on dietary chromium did not justify recognizing it as an essential nutrient.
While chromium metal and Cr(III) ions are considered non-toxic, chromate and its derivatives, often called "hexavalent chromium", are toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC).
Physical properties
Atomic
Gaseous chromium has a ground-state electron configuration of [Ar] 3d5 4s1. It is the first element in the periodic table whose configuration violates the Aufbau principle. Exceptions to the principle also occur later in the periodic table for elements such as copper, niobium and molybdenum.
Chromium is the first element in the 3d series where the 3d electrons start to sink into the core; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides.
Bulk
Chromium is the third hardest element after carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium.
Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, having the fourth lowest boiling point out of the Period 4 transition metals alone behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters.
Chromium has a high specular reflection in comparison to other transition metals. In the infrared, at 425 μm, chromium has a maximum reflectance of about 72%, reducing to a minimum of 62% at 750 μm before rising again to 90% at 4000 μm. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. Chromium's unusually high reflectance, especially the 90% in the infrared, can be attributed to its magnetic properties. Chromium has unique magnetic properties; it is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, it becomes paramagnetic. The antiferromagnetic ordering, in which chromium atoms temporarily ionize and bond with themselves, arises because the magnetic ordering of the body-centered cubic structure is incommensurate with the lattice periodicity: the magnetic moments at the cube's corners and the cube centers are unequal but antiparallel. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance.
Passivation
Chromium metal in air is passivated: it forms a thin, protective surface layer of chromium oxide with the corundum structure. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids.
The surface chromia scale is adherent to the metal. In contrast, iron forms a more porous oxide which is weak, flakes easily, and exposes fresh metal to the air, causing continued rusting. At room temperature, the chromia scale is a few atomic layers thick, growing in thickness by outward diffusion of metal ions across the scale. Above 950 °C, volatile chromium trioxide forms from the chromia scale, limiting the scale thickness and oxidation protection.
Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.
Isotopes
Naturally occurring chromium is composed of four stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3×10^18 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 70Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay.
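For the radioisotope figures above, the fraction of a sample surviving after a given time follows the standard exponential decay law. The sketch below applies it to 51Cr using the 27.7-day half-life quoted in the paragraph; the formula is standard textbook material, not taken from this article.

```python
import math

# Fraction of a 51Cr sample remaining after t days, using the 27.7-day
# half-life quoted above and the decay law N(t)/N0 = exp(-ln2 * t / T_half).

HALF_LIFE_DAYS = 27.7

def fraction_remaining(t_days: float) -> float:
    return math.exp(-math.log(2.0) * t_days / HALF_LIFE_DAYS)

print(round(fraction_remaining(27.7), 3))  # 0.5 after one half-life
print(round(fraction_remaining(90.0), 3))  # ~0.105 after about three months
```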
53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. 53Cr has been posited as a proxy for atmospheric oxygen concentration.
Chemistry and compounds
Chromium is a member of group 6 of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; charges of +1, +4 and +5 for chromium are rare, but do nevertheless occasionally exist.
Common oxidation states
Chromium(0)
Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry.
Chromium(II)
Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution created from dissolving chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond.
Chromium(III)
A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum.
Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water. This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5–.
Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum.
Chromium(VI)
Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist in an equilibrium determined by pH:
2 [CrO4]2− + 2 H+ ⇌ [Cr2O7]2− + H2O
Chromium(VI) oxyhalides are also known and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020.
Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The change in equilibrium is visible by a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible.
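A rough sketch of how the chromate–dichromate balance shifts with pH for the equilibrium written above. The equilibrium constant used is a commonly cited textbook value and should be treated as an assumption; protonated species such as HCrO4− and activity corrections are ignored.

```python
import math

# Approximate Cr(VI) speciation between chromate and dichromate versus pH, for
# 2 CrO4^2- + 2 H+ <=> Cr2O7^2- + H2O. LOG_K is an assumed textbook value;
# HCrO4^- and activity effects are neglected, so this is only a rough sketch.

LOG_K = 14.56               # assumed log10 of the equilibrium constant
TOTAL_CR_MOLAR = 0.01       # assumed total Cr(VI), mol/L, counted as CrO4 units

def chromate_fraction(ph: float, c_total: float = TOTAL_CR_MOLAR) -> float:
    """Fraction of Cr(VI) present as monomeric chromate at the given pH."""
    k = 10.0 ** LOG_K
    h = 10.0 ** (-ph)
    a = 2.0 * k * h * h                     # from c_total = x + 2*K*h^2*x^2
    x = (-1.0 + math.sqrt(1.0 + 4.0 * a * c_total)) / (2.0 * a)
    return x / c_total

for ph in (2, 4, 6, 8, 10):
    print(ph, round(chromate_fraction(ph), 3))  # dichromate dominates at low pH
```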
Both the chromate and dichromate anions are strong oxidizing reagents at low pH:
Cr2O72− + 14 H3O+ + 6 e− → 2 Cr3+ + 21 H2O (ε0 = 1.33 V)
They are, however, only moderately oxidizing at high pH:
CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V)
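The strong pH dependence of these potentials follows from the Nernst equation. The sketch below applies the standard 25 °C correction to the dichromate half-reaction above, assuming all species other than H+ are at unit activity; it is textbook arithmetic, not a calculation taken from the article.

```python
# Nernst-equation sketch for Cr2O7^2- + 14 H3O+ + 6 e- -> 2 Cr3+ + 21 H2O
# (E0 = 1.33 V, quoted above), with every species except H+ at unit activity
# and T = 25 C. Illustrative only.

E0_VOLT = 1.33
ELECTRONS = 6
PROTONS = 14
NERNST_SLOPE_VOLT = 0.05916   # volts per decade at 25 C

def dichromate_potential(ph: float) -> float:
    """Reduction potential (V) of the dichromate/Cr(III) couple at a given pH."""
    return E0_VOLT - (NERNST_SLOPE_VOLT / ELECTRONS) * PROTONS * ph

for ph in (0, 4, 7):
    print(ph, round(dichromate_potential(ph), 2))  # 1.33, 0.78, 0.36 V
```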
Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct CrO5·OR2.
Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent.
Other oxidation states
Compounds of chromium(V) are rather rare; the +5 oxidation state is realized in only a few compounds, although such species are intermediates in many reactions involving oxidation by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) ion is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C.
Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing chromium in the +4 state, such as chromium tetra-tert-butoxide, are also known.
Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described. Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions.
Occurrence
Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore.
About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds.
The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 μg/L of total chromium, of which 30 μg/L is Cr(VI).
History
Early applications
The ancient Chinese are credited with the first ever use of chromium to prevent rusting. Modern archaeologists discovered that bronze-tipped crossbow bolts at the tomb of Qin Shi Huang showed no sign of corrosion after more than 2,000 years, because they had been coated in chromium. Chromium was not used anywhere else until the experiments of French pharmacist and chemist Louis Nicolas Vauquelin (1763–1829) in the late 1790s. In multiple Warring States period tombs, sharp jians and other weapons were also found to be coated with 10 to 15 micrometers of chromium oxide, which left them in pristine condition to this day.
Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later.
In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald.
During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased.
Chromium is also famous for its reflective, metallic luster when polished. It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924.
Production
Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium."
The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production.
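Combining these shares with the 2013 ore tonnage gives a rough country-by-country split; the arithmetic below is a simple illustration using only the figures quoted above, with rounding.

```python
# Rough split of the 2013 marketable chromite ore output (28.8 Mt) by the
# country shares quoted above; purely illustrative arithmetic with rounding.

TOTAL_ORE_MT = 28.8
shares = {"South Africa": 0.48, "Kazakhstan": 0.13, "Turkey": 0.11, "India": 0.10}

for country, share in shares.items():
    print(f"{country}: ~{TOTAL_ORE_MT * share:.1f} Mt")
print(f"rest of world: ~{TOTAL_ORE_MT * (1.0 - sum(shares.values())):.1f} Mt")
```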
The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced on a large scale in an electric arc furnace, or in smaller smelters with either aluminium or silicon in an aluminothermic reaction.
For the production of pure chromium, the iron must be separated from the chromium in a two-step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. Subsequent leaching at elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate.
4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2
2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O
The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium.
Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO
Cr2O3 + 2 Al → Al2O3 + 2 Cr
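Following the reaction sequence above, each mole of chromite ultimately yields at most two moles of chromium metal. The sketch below turns that stoichiometry into an upper-bound mass yield, assuming pure FeCr2O4 and 100% conversion at every step, which real plants do not achieve; molar masses are standard rounded values.

```python
# Stoichiometric upper bound on chromium metal from chromite via the reaction
# sequence above: 1 mol FeCr2O4 carries 2 mol Cr. Assumes pure ore and 100%
# yield at every step (real yields are lower). Molar masses are rounded.

M_CR = 52.00        # g/mol
M_FECR2O4 = 223.84  # g/mol = Fe 55.85 + 2*Cr 52.00 + 4*O 16.00

def max_cr_mass(ore_mass_kg: float) -> float:
    """Upper-bound chromium mass (kg) obtainable from a given mass of FeCr2O4."""
    return ore_mass_kg * (2.0 * M_CR) / M_FECR2O4

print(round(max_cr_mass(1000.0), 1))  # ~464.6 kg Cr per tonne of pure chromite
```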
Applications
The creation of metal alloys accounts for 85% of chromium usage. The remainder of chromium is used in the chemical, refractory, and foundry industries.
Metallurgy
The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain 3–5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency". The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany.
The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 μm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used.
In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development.
Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers.
The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds.
Pigment
The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of red pigment chrome red, which is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 μm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide.
Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves.
Other uses
Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color.
Chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996.
Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain 4–5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage.
The high heat resistance and high melting point make chromite and chromium(III) oxide useful materials for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI).
Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst.
Uses of compounds
Chromium(IV) oxide (CrO2) is a magnetic compound. Its ideal shape anisotropy, which imparts high coercivity and remanent magnetization, made it a compound superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes.
Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge.
Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used instead because of its higher solubility (200 g/L, versus 50 g/L for potassium dichromate). The use of dichromate cleaning solutions has been phased out due to the high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free.
Potassium dichromate is a chemical reagent, used as a titrating agent.
Chromates are added to drilling muds to prevent corrosion of steel under wet conditions.
Chrome alum is chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning.
Biological role
The possible nutritional value of chromium(III) is unproven. Although chromium is regarded as a trace element and dietary mineral, its suspected roles in the action of insulin – a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein – have not been adequately established. The mechanism of its actions in the body is undefined, leaving in doubt whether chromium has a biological role in healthy people.
In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis.
"Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is not accepted as a medical condition, as it has no symptoms and healthy people do not require chromium supplementation. Some studies suggest that the biologically active form of chromium(III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (chromodulin), which might play a role in the insulin signaling pathway.
The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect.
Dietary recommendations
There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, and Japan consider chromium as essential, while the United States and European Food Safety Authority of the European Union do not.
The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intake (AI). From a 2001 assessment, AI of chromium for women ages 14 through 50 is 25 μg/day, and the AI for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AI is 45 μg/day. The AI for men ages 14 through 50 is 35 μg/day, and the AI for men ages 50 and above is 30 μg/day. For children ages 1 through 13, the AI increases with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI).
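A small sketch encoding the adult AI values listed in this paragraph; pregnancy and lactation override the age-based value, and the child values, which rise gradually from 0.2 to 25 μg/day, are not enumerated here. The function name and grouping are hypothetical.

```python
# U.S. Adequate Intake (AI) values for chromium, in micrograms per day, as
# quoted above. Child AIs (ages 1-13) rise with age and are not enumerated in
# the paragraph, so they are rejected here.

def chromium_ai_ug(age: int, sex: str, pregnant: bool = False, lactating: bool = False) -> int:
    if age < 14:
        raise ValueError("child AIs are not enumerated in the text")
    if lactating:
        return 45
    if pregnant:
        return 30
    if sex == "female":
        return 25 if age <= 50 else 20
    if sex == "male":
        return 35 if age <= 50 else 30
    raise ValueError("sex must be 'male' or 'female'")

print(chromium_ai_ug(30, "female"))                  # 25
print(chromium_ai_ug(60, "male"))                    # 30
print(chromium_ai_ug(28, "female", lactating=True))  # 45
```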
Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set.
The EFSA does not consider chromium to be an essential nutrient.
Labeling
For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of 27 May 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values in the United States is provided at Reference Daily Intake.
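The labeling arithmetic is a straightforward percentage: the same chromium dose is expressed against whichever Daily Value applies. The example dose below is hypothetical.

```python
# Percent Daily Value arithmetic implied by the labeling rules above. The same
# (hypothetical) 35 ug dose labels very differently under the old 120 ug Daily
# Value and the revised 35 ug value.

def percent_dv(dose_ug: float, daily_value_ug: float) -> float:
    return 100.0 * dose_ug / daily_value_ug

DOSE_UG = 35.0  # hypothetical chromium content per serving
print(round(percent_dv(DOSE_UG, 120.0)))  # ~29% under the pre-2016 Daily Value
print(round(percent_dv(DOSE_UG, 35.0)))   # 100% under the revised Daily Value
```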
After evaluation of research on the potential nutritional value of chromium, the European Food Safety Authority concluded that there was no evidence of benefit by dietary chromium in healthy people, thereby declining to establish recommendations in Europe for dietary intake of chromium.
Food sources
Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium.
Supplementation
Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven.
Initiation of research on glucose
The notion of chromium as a potential regulator of glucose metabolism began in the 1950s when scientists performed a series of experiments controlling the diet of rats. The experimenters subjected the rats to a chromium deficient diet, and witnessed an inability to respond effectively to increased levels of blood glucose. A chromium-rich Brewer's yeast was provided in the diet, enabling the rats to effectively metabolize glucose, and so giving evidence that chromium may have a role in glucose management.
Approved and disapproved health claims
In 2005, the U.S. Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for specific label wording:
"One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain."
In other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. As of March 2024, this ruling on chromium remains in effect.
In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue.
However, in a 2014 reassessment of studies to determine whether a Dietary Reference Intake value could be established for chromium, EFSA stated:
"The Panel concludes that no Average Requirement and no Population Reference Intake for chromium for the performance of physiological functions can be defined." and
"The Panel considered that there is no evidence of beneficial effects associated with chromium intake in healthy subjects. The Panel concludes that the setting of an Adequate Intake for chromium is also not appropriate."
Diabetes
Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in fasting blood glucose and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome.
Body weight
Two systematic reviews looked at chromium supplements as a means of managing body weight in overweight and obese people. One, limited to chromium picolinate, a common supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim.
Sports
Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat.
Fresh-water fish
Irrigation water standards for chromium are 0.1 mg/L, but some rivers in Bangladesh are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification.
Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions.
There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks.
Precautions
Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity of chromium(III) ranges between 1900 and 3300 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3.
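The PEL and REL above are time-weighted averages over a working day. The sketch below shows how an 8-hour TWA is computed and compared with the 1 mg/m3 PEL; the exposure profile is invented for illustration.

```python
# 8-hour time-weighted average (TWA) exposure compared against the OSHA PEL of
# 1 mg/m3 quoted above. The exposure profile below is invented for illustration.

PEL_MG_M3 = 1.0

def twa_8h(segments):
    """segments: iterable of (hours, concentration in mg/m3); averaged over 8 h."""
    return sum(hours * conc for hours, conc in segments) / 8.0

shift = [(2.0, 0.4), (4.0, 1.2), (2.0, 0.2)]  # hypothetical workday
exposure = twa_8h(shift)
print(round(exposure, 2), "mg/m3,", "over PEL" if exposure > PEL_MG_M3 else "within PEL")
```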
Chromium(VI) toxicity
The acute oral toxicity for chromium(VI) ranges between 50 and 150 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism by which sulfate and phosphate ions also enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. Aggressive dialysis can be therapeutic.
The carcinogenicity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism involves highly reactive hydroxyl radicals and other reactive radicals that are by-products of the reduction of chromium(VI) to chromium(III). The second process involves the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributes the genotoxicity to the binding to the DNA of the end product of the chromium(III) reduction.
Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers that have been exposed to strong chromate solutions in electroplating, tanning and chrome-producing manufacturers.
Environmental issues
Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications.
In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of list; 25 cities had levels that exceeded California's proposed limit.
The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium(VI) are greater than those of chromium(III), the oxidation-reduction conversions between the two oxidation states have implications for the movement and bioavailability of chromium in soils, groundwater, and plants.
| Physical sciences | Chemical elements_2 | null |
5672 | https://en.wikipedia.org/wiki/Cadmium | Cadmium | Cadmium is a chemical element; it has symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate.
Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. It was used for a long time in the 1900s as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium's use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. Due to it being a neutron poison, cadmium is also used as a component of control rods in nuclear fission reactors. One of its few new uses is in cadmium telluride solar panels.
Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms.
Characteristics
Physical properties
Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes.
Chemical properties
Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is a dark red which changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd22+ cation, which is similar to the Hg22+ cation in mercury(I) chloride.
Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2
The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined.
Isotopes
Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not measurably done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay) and 116Cd (two-neutrino double beta decay), both with extremely long half-lives. The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, with the most stable being 113mCd (t1⁄2 = 14.1 years), 115mCd (t1⁄2 = 44.6 days), and 117mCd (t1⁄2 = 3.36 hours).
The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium).
One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons.
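A toy classifier applying the roughly 0.5 eV cadmium cut-off described above; the room-temperature thermal neutron energy used as an example (~0.025 eV) is a standard reference value rather than a figure from this article.

```python
# Toy classification of neutrons against the ~0.5 eV cadmium cut-off quoted
# above: below the cut-off they are very likely absorbed by 113Cd, above it
# they are mostly transmitted. The ~0.025 eV thermal example is a standard
# room-temperature value, not from the article.

CADMIUM_CUTOFF_EV = 0.5

def absorbed_by_cadmium(energy_ev: float) -> bool:
    return energy_ev < CADMIUM_CUTOFF_EV

for energy in (0.025, 0.4, 1.0, 1.0e6):
    label = "absorbed" if absorbed_by_cadmium(energy) else "mostly transmitted"
    print(f"{energy} eV: {label}")
```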
Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay.
History
Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. Additionally Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as pigment was recognized in the 1840s, but the lack of cadmium limited this application.
Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains".
In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton.
After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium.
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments.
At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel–cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006.
Occurrence
Cadmium makes up about 0.1 ppm of Earth's crust and is the 65th most abundant element. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I.
Metallic cadmium can be found in the Vilyuy River basin in Siberia.
Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash.
Cadmium in soil can be absorbed by crops such as rice and cocoa. In 2002, the Chinese ministry of agriculture measured that 28% of rice it sampled had excess lead and 10% had excess cadmium above limits defined by law. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022, and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level.
Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil.
Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere, 2 mg/kg in soil, 1 μg/L in freshwater, and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes.
Production
Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution.
The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan.
Applications
Cadmium is a common component of electric batteries, pigments, coatings, and electroplating.
Batteries
In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. Nickel–cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver–cadmium battery.
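For context, the overall discharge reaction usually quoted for the nickel–cadmium cell (standard electrochemistry, not stated in the text above) is:

Cd + 2NiO(OH) + 2H2O -> Cd(OH)2 + 2Ni(OH)2

The nominal 1.2 V cell potential arises from this couple; the reaction runs in reverse during charging.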
Electroplating
Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition).
Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium.
Nuclear technology
Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium.
Televisions
Some QLED TVs contain cadmium in their construction, and some manufacturers have been looking to reduce the environmental impact of human exposure to, and pollution from, the material during television production.
Anticancer drugs
Complexes based on cadmium and other heavy metals have potential for the treatment of cancer, but their use is often limited due to toxic side effects.
Compounds
Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums.
Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%.
In PVC, cadmium compounds were used as heat, light, and weathering stabilizers. Cadmium stabilizers have since been completely replaced by barium-zinc, calcium-zinc and organotin stabilizers. Cadmium is used in many kinds of solder and bearing alloys because of its low coefficient of friction and its fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal.
Semiconductors
Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors.
Laboratory uses
Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory uses requiring laser light at these wavelengths.
Cadmium selenide quantum dots emit bright luminescence under UV excitation (He–Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope.
In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α.
Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry. By employing a self-assembled monolayer one can obtain a cadmium selective electrode with a ppt-level sensitivity.
Biological role
Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant hazardous to living organisms. A cadmium-dependent carbonic anhydrase has been found in some marine diatoms, which live in environments with low zinc concentrations.
Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence.
Cadmium is under research for its potential toxicity to increase the risk of cancer, cardiovascular disease, and osteoporosis.
Environmental impact
The biogeochemistry of cadmium and its release to the environment is under research.
Safety
Cadmium's bioinorganic chemistry and the mechanisms of its toxicity have been reviewed by individuals and organizations. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death.
Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables.
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into cadmium's possible estrogen mimicry, which may induce breast cancer, is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because those populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in itai-itai disease in Japan, most researchers have concluded that it was one of several factors.
Cadmium is one of ten substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law.
The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium in low environmental exposure. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former smoking females.
Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha, and affect signal transduction along the estrogen and MAPK signaling pathways at low doses.
The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut. As much as 50% of the cadmium inhaled in cigarette smoke may be absorbed.
On average, cadmium concentrations in the blood of smokers is 4 to 5 times greater than non-smokers and in the kidney, 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking.
In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium; when composted to form organic fertilizers, they yield a product that can often contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bioavailable and toxic only if the soil pH is low (i.e., in acidic soils). In the European Union, an analysis of almost 22,000 topsoil samples from the LUCAS survey concluded that 5.5% of samples have concentrations higher than 1 mg kg−1.
Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, although it is not easily reversed.
Regulations
Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation.
The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder. The European Commission has put in place the EU regulation (2019/1009) on fertilizing products (EU, 2019), adopted in June 2019 and fully applicable as of July 2022. It sets a Cd limit value in phosphate fertilizers to 60 mg kg−1 of P2O5.
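A worked example of what these limits imply, assuming an illustrative 70 kg adult (the body weight is an assumption for the calculation, not a figure from the regulations):

# Weekly cadmium intake ceilings implied by the quoted limits, for an assumed 70 kg adult.
body_weight_kg = 70            # illustrative assumption

efsa_limit_ug_per_kg = 2.5     # tolerable weekly intake (EFSA)
jecfa_limit_ug_per_kg = 7.0    # provisional tolerable weekly intake (JECFA)

print(body_weight_kg * efsa_limit_ug_per_kg)    # 175 µg per week
print(body_weight_kg * jecfa_limit_ug_per_kg)   # 490 µg per week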
The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m3.
In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries.
Product recalls
In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores.
In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA.
| Physical sciences | Chemical elements_2 | null |
5675 | https://en.wikipedia.org/wiki/Curium | Curium | Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.
Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.
All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. The most stable isotope, 247Cm, has a half-life of 15.6 million years; the longest-lived curium isotopes predominantly emit alpha particles. Radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.
History
Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron.
Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.
The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron:
^{239}_{94}Pu + ^{4}_{2}He -> ^{242}_{96}Cm + ^{1}_{0}n
Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:
^{242}_{96}Cm -> ^{238}_{94}Pu + ^{4}_{2}He
The half-life of this alpha decay was first measured as 5 months (150 days) and then corrected to 162.8 days.
Another isotope 240Cm was produced in a similar reaction in March 1945:
^{239}_{94}Pu + ^{4}_{2}He -> ^{240}_{96}Cm + 3^{1}_{0}n
The α-decay half-life of 240Cm was determined as 26.8 days and later revised to 30.4 days.
The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.
The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin:
As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored.
The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 μg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.
Characteristics
Physical
A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm-3m and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.
Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering.
In accordance with magnetic data, the electrical resistivity of curium increases with temperature – roughly doubling between 4 and 60 K – and then is nearly constant up to room temperature. Resistivity also increases significantly over time due to self-damage of the crystal lattice by alpha decay, which makes the true resistivity of curium uncertain. Curium's resistivity is similar to that of gadolinium and of the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.
Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm, depending on their environment. The fluorescence originates from transitions from the first excited state (6D7/2) to the ground state (8S7/2). Analysis of this fluorescence allows monitoring of interactions of Cm(III) ions in organic and inorganic complexes.
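The quoted emission range can be translated into photon energies with E = hc/λ; the constants below are standard values, and the two wavelengths are simply the end points of the 590–640 nm band mentioned above.

# Photon energies for the 590-640 nm Cm(III) emission band, E = h*c / lambda.
h = 6.626e-34       # J*s
c = 2.998e8         # m/s
eV = 1.602e-19      # J per electronvolt

for wavelength_nm in (590, 640):
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(wavelength_nm, "nm ->", round(energy_eV, 2), "eV")   # ~2.10 eV and ~1.94 eV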
Chemical
Curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. The chemical behavior of curium differs from that of the actinides thorium and uranium, and is similar to that of americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; the Cm4+ ion is pale yellow. The optical absorption of the Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm, and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion (CmO22+): this was prepared from the beta decay of americium-242 in the americium(V) ion AmO2+. Failure to obtain Cm(VI) by oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V).
Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry.
Isotopes
About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.11 years, respectively.
All isotopes ranging from 242Cm to 248Cm, as well as 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.
The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for 245Cm, 155 grams for 243Cm and 1550 grams for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.
Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than plutonium-239 (used in many existing nuclear weapons).
Occurrence
The longest-lived isotope, 247Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of 242Cm may occur naturally in uranium minerals due to neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am → 242Cm), though the quantities would be tiny and this has not been confirmed: even with "extremely generous" estimates for neutron absorption possibilities, the quantity of 242Cm present in 1 × 108 kg of 18% uranium pitchblende would not even be one atom. Traces of 247Cm are also probably brought to Earth in cosmic rays, but this also has not been confirmed. There is also the possibility of 244Cm being produced as the double beta decay daughter of natural 244Pu.
Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm.
Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.
The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star.
Synthesis
Isotope preparation
Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu.
Further neutron capture followed by β−-decay gives americium (241Am) which further becomes 242Cm:
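The capture/decay chain this sentence refers to did not survive extraction; in the notation used elsewhere in this article, the standard route is presumably:

^{239}_{94}Pu ->[2(n,\gamma)] ^{241}_{94}Pu ->[\beta^-] ^{241}_{95}Am ->[(n,\gamma)] ^{242}_{95}Am ->[\beta^-] ^{242}_{96}Cm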
For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation that results in a different reaction chain and formation of 244Cm:
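Again the equation itself is missing here; the usual chain from plutonium under a high neutron flux, written in the same notation, is:

^{239}_{94}Pu ->[4(n,\gamma)] ^{243}_{94}Pu ->[\beta^-] ^{243}_{95}Am ->[(n,\gamma)] ^{244}_{95}Am ->[\beta^-] ^{244}_{96}Cm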
Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, so small amounts of heavier curium isotopes are also formed. Of those, 247Cm and 248Cm are popular in scientific research due to their long half-lives. But the production rate of 247Cm in thermal-neutron reactors is low because it is prone to fission induced by thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β− decays to the berkelium isotope 249Bk.
The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced in this way per year. The associated reaction produces 248Cm with an isotopic purity of 97%.
Another isotope, 245Cm, can be obtained for research, from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk.
Metal preparation
Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. Bis-triazinyl bipyridine complex has been recently proposed as such reagent which is highly selective to curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.
Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.
Compounds and reactions
Oxides
Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate, nitrate, or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:
4CmO2 ->[\Delta T] 2Cm2O3 + O2.
Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:
2CmO2 + H2 -> Cm2O3 + H2O
Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.
Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similar to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.
Halides
The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine:
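The fluorination equation implied by the colon above is presumably the straightforward one:

2CmF3 + F2 -> 2CmF4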
A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).
The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C:
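The metathesis implied here, sketched for the iodide case with ammonium iodide as the assumed halide source, would be:

CmCl3 + 3NH4I -> CmI3 + 3NH4Cl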
Or, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride:
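The oxychloride-forming hydrolysis referred to by the colon is presumably:

CmCl3 + H2O -> CmOCl + 2HCl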
Chalcogenides and pnictides
Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Organocurium compounds and biological aspects
Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet.
Formation of curium complexes with BTP-type ligands (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from the lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence.
There are a few reports on biosorption of Cm3+ by bacteria and archaea, and in the laboratory both americium and curium were found to support the growth of methylotrophs.
Applications
Radionuclides
Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm, with a ~30-year half-life and a good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus considerable neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a lead shield about 20 times thicker for a 1 kW source than 238Pu does. Therefore, this use of curium is currently considered impractical.
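The quoted heat outputs follow directly from the half-lives via P = λ(N_A/M)E_α; the decay energies used below (~6.1 MeV for 242Cm, ~5.8 MeV for 244Cm) are assumed typical α energies rather than values from this article.

import math

# Specific thermal power of an alpha emitter: P = lambda * (N_A / M) * E_alpha.
N_A = 6.022e23
MeV = 1.602e-13   # joules per MeV

def watts_per_gram(half_life_s, molar_mass_g, decay_energy_MeV):
    lam = math.log(2) / half_life_s
    return lam * (N_A / molar_mass_g) * decay_energy_MeV * MeV

print(watts_per_gram(162.8 * 86400, 242, 6.1))           # ~120 W/g for 242Cm
print(watts_per_gram(18.1 * 365.25 * 86400, 244, 5.8))   # ~3 W/g for 244Cm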
A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley:
^{242}_{96}Cm + ^{4}_{2}He -> ^{245}_{98}Cf + ^{1}_{0}n
Only about 5,000 atoms of californium were produced in this experiment.
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel.
X-ray spectrometer
The most practical application of 244Cm—though rather limited in total volume—is as an α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner rover, the Mars 96 mission, the Mars Exploration Rovers and the Philae comet lander, as well as the Mars Science Laboratory, to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes, but with a 242Cm source.
An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.
Safety
Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer.
Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
| Physical sciences | Chemical elements_2 | null |