AutoCAD LT
AutoCAD LT is the lower-cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT as an entry-level CAD package to compete at the lower price level. Priced at $495, it became the first AutoCAD product priced below $1000. It was sold directly by Autodesk and in computer stores, unlike the full version of AutoCAD, which must be purchased from official Autodesk dealers. AutoCAD LT 2015 introduced a Desktop Subscription service from $360 per year; as of 2018, three subscription plans were available, from $50 a month to a 3-year, $1170 license. Since AutoCAD LT 2024, AutoCAD LT supports LISP customization.
While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features:
3D capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models, as well as support for 3D printing.
Network licensing: AutoCAD LT cannot be used on multiple machines over a network.
Customization: AutoCAD LT does not support customization with LISP, ARX, .NET or VBA (LISP support was introduced with the 2024 release).
Management and automation capabilities with Sheet Set Manager and Action Recorder.
CAD standards management tools.
AutoCAD Mobile and AutoCAD Web
AutoCAD Mobile and AutoCAD Web (formerly AutoCAD WS and AutoCAD 360) is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile device and web using a limited AutoCAD feature set — and using cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels, including various amounts of storage, tools, and online access to drawings. 360 includes new features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD Web uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome.
AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android).
In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layer, and measurements — in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support for unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files on-line — or off-line in the absence of an Internet connection.
In 2011, Autodesk announced plans to migrate the majority of its software to "the cloud", starting with the AutoCAD WS mobile application.
According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS product manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space. In 2013, AutoCAD WS was renamed to AutoCAD 360. Later, it was renamed to AutoCAD Web App.
Student versions
AutoCAD is licensed, for free, to students, educators, and educational institutions, with a 12-month renewable license available. Licenses acquired before March 25, 2020, were 36-month licenses, with the last renewal possible on March 24, 2020. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) older than AutoCAD 2014 SP1, or by AutoCAD 2019 and newer, the output includes a plot stamp/banner on all four sides. Objects created in the student version cannot be used commercially. Student-version objects also "infect" a commercial DWG file if they are imported in versions older than AutoCAD 2015 or newer than AutoCAD 2018.
Version history
Asexual reproduction is a type of reproduction that does not involve the fusion of gametes or change in the number of chromosomes. The offspring that arise by asexual reproduction from either unicellular or multicellular organisms inherit the full set of genes of their single parent and thus the newly created individual is genetically and physically similar to the parent or an exact clone of the parent. Asexual reproduction is the primary form of reproduction for single-celled organisms such as archaea and bacteria. Many eukaryotic organisms including plants, animals, and fungi can also reproduce asexually. In vertebrates, the most common form of asexual reproduction is parthenogenesis, which is typically used as an alternative to sexual reproduction in times when reproductive opportunities are limited. Some monitor lizards, including Komodo dragons, can reproduce asexually.
While all prokaryotes reproduce without the formation and fusion of gametes, mechanisms for lateral gene transfer such as conjugation, transformation and transduction can be likened to sexual reproduction in the sense of genetic recombination in meiosis.
Types of asexual reproduction
Fission
Prokaryotes (Archaea and Bacteria) reproduce asexually through binary fission, in which the parent organism divides in two to produce two genetically identical daughter organisms. Eukaryotes (such as protists and unicellular fungi) may reproduce in a functionally similar manner by mitosis; most of these are also capable of sexual reproduction.
Multiple fission at the cellular level occurs in many protists, e.g. sporozoans and algae. The nucleus of the parent cell divides several times by mitosis, producing several nuclei. The cytoplasm then separates, creating multiple daughter cells.
In apicomplexans, multiple fission, or schizogony, appears either as merogony, sporogony or gametogony. Merogony results in merozoites, multiple daughter cells that originate within the same cell membrane; sporogony results in sporozoites; and gametogony results in microgametes.
Budding
Some cells divide by budding (for example baker's yeast), resulting in a "mother" and a "daughter" cell that is initially smaller than the parent. Budding is also known on a multicellular level; an animal example is the hydra, which reproduces by budding. The buds grow into fully matured individuals which eventually break away from the parent organism.
Internal budding is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two (endodyogeny) or more (endopolygeny) daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation.
Budding (external or internal) also occurs in some worms, such as Taenia or Echinococcus; these worms produce cysts and then produce (invaginated or evaginated) protoscolices by budding.
Vegetative propagation
Vegetative propagation is a type of asexual reproduction found in plants where new individuals are formed without the production of seeds or spores, and thus without syngamy or meiosis. Examples of vegetative reproduction include the formation of miniaturized plants called plantlets on specialized leaves, for example in kalanchoe (Bryophyllum daigremontianum); many plants produce new individuals from rhizomes or stolons (for example, strawberry). Some plants reproduce by forming bulbs or tubers, for example tulip bulbs and dahlia tubers. In these examples, all the individuals are clones, and the clonal population may cover a large area.
Spore formation
Many multicellular organisms produce spores during their biological life cycle in a process called sporogenesis. Exceptions are animals and some protists, which undergo meiosis immediately followed by fertilization. Plants and many algae on the other hand undergo sporic meiosis where meiosis leads to the formation of haploid spores rather than gametes. These spores grow into multicellular individuals called gametophytes, without a fertilization event. These haploid individuals produce gametes through mitosis. Meiosis and gamete formation therefore occur in separate multicellular generations or "phases" of the life cycle, referred to as alternation of generations. Since sexual reproduction is often more narrowly defined as the fusion of gametes (fertilization), spore formation in plant sporophytes and algae might be considered a form of asexual reproduction (agamogenesis) despite being the result of meiosis and undergoing a reduction in ploidy. However, both events (spore formation and fertilization) are necessary to complete sexual reproduction in the plant life cycle.
Fungi and some algae can also utilize true asexual spore formation, which involves mitosis giving rise to reproductive cells called mitospores that develop into a new organism after dispersal. This method of reproduction is found for example in conidial fungi and the red alga Polysiphonia, and involves sporogenesis without meiosis. Thus the chromosome number of the spore cell is the same as that of the parent producing the spores. However, mitotic sporogenesis is an exception and most spores, such as those of plants and many algae, are produced by meiosis.
Fragmentation
Fragmentation is a form of asexual reproduction where a new organism grows from a fragment of the parent. Each fragment develops into a mature, fully grown individual. Fragmentation is seen in many organisms. Animals that reproduce asexually include planarians, many annelid worms including polychaetes and some oligochaetes, turbellarians and sea stars. Many fungi and plants reproduce asexually. Some plants have specialized structures for reproduction via fragmentation, such as gemmae in mosses and liverworts. Most lichens, which are a symbiotic union of a fungus and photosynthetic algae or cyanobacteria, reproduce through fragmentation to ensure that new individuals contain both symbionts. These fragments can take the form of soredia, dust-like particles consisting of fungal hyphae wrapped around photobiont cells.
Clonal fragmentation in multicellular or colonial organisms is a form of asexual reproduction or cloning in which an organism is split into fragments, each of which develops into a mature, fully grown individual that is a clone of the original organism. In echinoderms, this method of reproduction is usually known as fissiparity. Due to many environmental and epigenetic differences, clones originating from the same ancestor might actually be genetically and epigenetically different.
Agamogenesis
Agamogenesis is any form of reproduction that does not involve a male gamete. Examples are parthenogenesis and apomixis.
Parthenogenesis
Parthenogenesis is a form of agamogenesis in which an unfertilized egg develops into a new individual. It has been documented in over 2,000 species. Parthenogenesis occurs in the wild in many invertebrates (e.g. water fleas, rotifers, aphids, stick insects, some ants, bees and parasitic wasps) and vertebrates (mostly reptiles, amphibians, and fish). It has also been documented in domestic birds and in genetically altered lab mice. Plants can engage in parthenogenesis as well, through a process called apomixis; however, many consider this not an independent method of reproduction but a breakdown of the mechanisms behind sexual reproduction. Parthenogenetic organisms can be split into two main categories: facultative and obligate.
Facultative parthenogenesis
In facultative parthenogenesis, females can reproduce both sexually and asexually. Because of the many advantages of sexual reproduction, most facultative parthenotes only reproduce asexually when forced to. This typically occurs in instances when finding a mate becomes difficult. For example, female zebra sharks will reproduce asexually if they are unable to find a mate in their ocean habitats.
Parthenogenesis was previously believed to occur rarely in vertebrates, and to be possible only in very small animals. However, it has been discovered in many more species in recent years. Today, the largest species documented reproducing parthenogenetically is the Komodo dragon, at 10 feet long and over 300 pounds.
Heterogony is a form of facultative parthenogenesis in which females alternate between sexual and asexual reproduction at regular intervals (see Alternation between sexual and asexual reproduction). Aphids are one group of organisms that engage in this type of reproduction. They use asexual reproduction to reproduce quickly and create winged offspring that can colonize new plants, and reproduce sexually in the fall to lay eggs for the next season. However, some aphid species are obligate parthenotes.
Obligate parthenogenesis
In obligate parthenogenesis, females only reproduce asexually. One example of this is the desert grassland whiptail lizard, a hybrid of two other species. Typically hybrids are infertile but through parthenogenesis this species has been able to develop stable populations.
Gynogenesis is a form of obligate parthenogenesis where a sperm cell is used to initiate reproduction. However, the sperm's genes never get incorporated into the egg cell. The best known example of this is the Amazon molly. Because they are obligate parthenotes, there are no males in their species so they depend on males from a closely related species (the Sailfin molly) for sperm.
Apomixis and nucellar embryony
Apomixis in plants is the formation of a new sporophyte without fertilization. It is important in ferns and in flowering plants, but is very rare in other seed plants. In flowering plants, the term "apomixis" is now most often used for agamospermy, the formation of seeds without fertilization, but was once used to include vegetative reproduction. An example of an apomictic plant would be the triploid European dandelion. Apomixis mainly occurs in two forms: in gametophytic apomixis, the embryo arises from an unfertilized egg within a diploid embryo sac that was formed without completing meiosis; in nucellar embryony, the embryo is formed from the diploid nucellus tissue surrounding the embryo sac. Nucellar embryony occurs in some citrus seeds. Male apomixis can occur in rare cases, such as in the Saharan cypress Cupressus dupreziana, where the genetic material of the embryo is derived entirely from pollen.
Androgenesis
Androgenesis occurs when a zygote is produced with only paternal nuclear genes. During standard sexual reproduction, one female and one male parent each produce haploid gametes (such as a sperm or egg cell, each containing only a single set of chromosomes), which recombine to create offspring with genetic material from both parents. However, in androgenesis, there is no recombination of maternal and paternal chromosomes, and only the paternal chromosomes are passed down to the offspring (the inverse of this is gynogenesis, where only the maternal chromosomes are inherited, which is more common than androgenesis). The offspring produced in androgenesis will still have maternally inherited mitochondria, as is the case with most sexually reproducing species.
Androgenesis occurs in nature in many invertebrates (for example, clams, stick insects, some ants, bees, flies and parasitic wasps) and vertebrates (mainly amphibians and fish). Androgenesis has also been seen in genetically modified laboratory mice.
One of two things can occur to produce offspring with exclusively paternal genetic material: the maternal nuclear genome can be eliminated from the zygote, or the female can produce an egg with no nucleus, resulting in an embryo developing with only the genome of the male gamete.
Male apomixis
Another type of androgenesis is male apomixis, or paternal apomixis, a reproductive process in which a plant develops from a sperm cell (male gamete) without the participation of a female cell (ovum). In this process, the zygote is formed solely with genetic material from the father, resulting in offspring genetically identical to the male organism. This has been noted in many plants, including Nicotiana, Capsicum frutescens, Cicer arietinum, Poa arachnifera, Solanum verrucosum, Phaeophyceae, Tripsacum dactyloides and Zea mays, and occurs as the regular reproductive method in Cupressus dupreziana. This contrasts with the more common apomixis, where development occurs without fertilization but with genetic material only from the mother.
There are also clonal species that reproduce through vegetative reproduction, such as Lomatia tasmanica and Pando, in which the genetic material is exclusively male.
Other species where androgenesis has been observed naturally are the stick insects Bacillus rossius and Bacillus grandii, the little fire ant Wasmannia auropunctata, Vollenhovia emeryi, Paratrechina longicornis, occasionally Apis mellifera, the Hypseleotris carp gudgeons, the parasitoid Venturia canescens, and occasionally fruit flies (Drosophila melanogaster) carrying a specific mutant allele. It has also been induced in many crops and fish via irradiation of an egg cell to destroy the maternal nuclear genome.
Obligate androgenesis
Obligate androgenesis is the process in which males are capable of producing both eggs and sperm; however, the eggs make no genetic contribution and the offspring come only from the sperm, which allows these individuals to self-fertilize and produce clonal offspring without the need for females. They are also capable of interbreeding with sexual and other androgenetic lineages in a phenomenon known as "egg parasitism". This method of reproduction has been found in several species of the clam genus Corbicula, in plants such as Cupressus dupreziana, Lomatia tasmanica and Pando, and recently in the fish Squalius alburnoides.
Alternation between sexual and asexual reproduction
Some species can alternate between sexual and asexual strategies, an ability known as heterogamy, depending on many conditions. Alternation is observed in several rotifer species (cyclical parthenogenesis e.g. in Brachionus species) and a few types of insects.
One example of this is aphids which can engage in heterogony. In this system, females are born pregnant and produce only female offspring. This cycle allows them to reproduce very quickly. However, most species reproduce sexually once a year. This switch is triggered by environmental changes in the fall and causes females to develop eggs instead of embryos. This dynamic reproductive cycle allows them to produce specialized offspring with polyphenism, a type of polymorphism where different phenotypes have evolved to carry out specific tasks.
The cape bee Apis mellifera subsp. capensis can reproduce asexually through a process called thelytoky. The freshwater crustacean Daphnia reproduces by parthenogenesis in the spring to rapidly populate ponds, then switches to sexual reproduction as the intensity of competition and predation increases. Monogonont rotifers of the genus Brachionus reproduce via cyclical parthenogenesis: at low population densities females reproduce asexually, and at higher densities a chemical cue accumulates and induces the transition to sexual reproduction. Many protists and fungi alternate between sexual and asexual reproduction. A few species of amphibians, reptiles, and birds have a similar ability.
The slime mold Dictyostelium undergoes binary fission (mitosis) as single-celled amoebae under favorable conditions. However, when conditions turn unfavorable, the cells aggregate and follow one of two different developmental pathways, depending on conditions. In the social pathway, they form a multi-cellular slug which then forms a fruiting body with asexually generated spores. In the sexual pathway, two cells fuse to form a giant cell that develops into a large cyst. When this macrocyst germinates, it releases hundreds of amoebic cells that are the product of meiotic recombination between the original two cells.
The hyphae of the common mold (Rhizopus) are capable of producing both mitotic and meiotic spores. Many algae similarly switch between sexual and asexual reproduction. A number of plants use both sexual and asexual means to produce new plants; some species alter their primary mode of reproduction from sexual to asexual under varying environmental conditions.
Inheritance in asexual species
In the rotifer Brachionus calyciflorus, asexual reproduction (obligate parthenogenesis) can be inherited by a recessive allele, which leads to loss of sexual reproduction in homozygous offspring.
Inheritance of asexual reproduction by a single recessive locus has also been found in the parasitoid wasp Lysiphlebus fabarum.
Examples in animals
Asexual reproduction is found in nearly half of the animal phyla. Parthenogenesis occurs in the hammerhead shark and the blacktip shark. In both cases, the sharks had reached sexual maturity in captivity in the absence of males, and in both cases the offspring were shown to be genetically identical to the mothers. The New Mexico whiptail is another example.
Some reptiles use the ZW sex-determination system, which produces either males (with ZZ sex chromosomes) or females (with ZW or WW sex chromosomes). Until 2010, it was thought that the ZW chromosome system used by reptiles was incapable of producing viable WW offspring, but a (ZW) female boa constrictor was discovered to have produced viable female offspring with WW chromosomes. The female boa could have chosen any number of male partners (and had successfully in the past) but on this occasion she reproduced asexually, creating 22 female babies with WW sex-chromosomes.
Polyembryony is a widespread form of asexual reproduction in animals, whereby the fertilized egg or a later stage of embryonic development splits to form genetically identical clones. Within animals, this phenomenon has been best studied in the parasitic Hymenoptera. In the nine-banded armadillo, this process is obligatory and usually gives rise to genetically identical quadruplets. In other mammals, monozygotic twinning has no apparent genetic basis, though its occurrence is common. There are at least 10 million identical human twins and triplets in the world today.
Bdelloid rotifers reproduce exclusively asexually, and all individuals in the class Bdelloidea are females. Asexuality evolved in these animals millions of years ago and has persisted since. There is evidence to suggest that asexual reproduction has allowed the animals to evolve new proteins through the Meselson effect that have allowed them to survive better in periods of dehydration. Bdelloid rotifers are extraordinarily resistant to damage from ionizing radiation due to the same DNA-preserving adaptations used to survive dormancy. These adaptations include an extremely efficient mechanism for repairing DNA double-strand breaks. This repair mechanism was studied in two Bdelloidea species, Adineta vaga and Philodina roseola, and appears to involve mitotic recombination between homologous DNA regions within each species.
Molecular evidence strongly suggests that several species of the stick insect genus Timema have used only asexual (parthenogenetic) reproduction for millions of years, the longest period known for any insect. Similar findings suggest that the mite species Oppiella nova may have reproduced entirely asexually for millions of years.
In the grass thrips genus Aptinothrips there have been several transitions to asexuality, likely due to different causes.
Adaptive significance of asexual reproduction
A complete lack of sexual reproduction is relatively rare among multicellular organisms, particularly animals. It is not entirely understood why the ability to reproduce sexually is so common among them. Current hypotheses suggest that asexual reproduction may have short term benefits when rapid population growth is important or in stable environments, while sexual reproduction offers a net advantage by allowing more rapid generation of genetic diversity, allowing adaptation to changing environments. Developmental constraints may underlie why few animals have relinquished sexual reproduction completely in their life-cycles. Almost all asexual modes of reproduction maintain meiosis either in a modified form or as an alternative pathway. Facultatively apomictic plants increase frequencies of sexuality relative to apomixis after abiotic stress. Another constraint on switching from sexual to asexual reproduction would be the concomitant loss of meiosis and the protective recombinational repair of DNA damage afforded as one function of meiosis.
In organic chemistry, an alkene, or olefin, is a hydrocarbon containing a carbon–carbon double bond. The double bond may be internal or in the terminal position. Terminal alkenes are also known as α-olefins.
The International Union of Pure and Applied Chemistry (IUPAC) recommends using the name "alkene" only for acyclic hydrocarbons with just one double bond; alkadiene, alkatriene, etc., or polyene for acyclic hydrocarbons with two or more double bonds; cycloalkene, cycloalkadiene, etc. for cyclic ones; and "olefin" for the general class – cyclic or acyclic, with one or more double bonds.
Acyclic alkenes, with only one double bond and no other functional groups (also known as mono-enes) form a homologous series of hydrocarbons with the general formula CnH2n, with n a natural number greater than 1 (which is two hydrogens fewer than the corresponding alkane). When n is four or more, isomers are possible, distinguished by the position and conformation of the double bond.
Alkenes are generally colorless non-polar compounds, somewhat similar to alkanes but more reactive. The first few members of the series are gases or liquids at room temperature. The simplest alkene, ethylene (C2H4) (or "ethene" in the IUPAC nomenclature) is the organic compound produced on the largest scale industrially.
Aromatic compounds are often drawn as cyclic alkenes, however their structure and properties are sufficiently distinct that they are not classified as alkenes or olefins. Hydrocarbons with two overlapping double bonds (C=C=C) are called allenes—the simplest such compound is itself called allene—and those with three or more overlapping bonds (C=C=C=C, C=C=C=C=C, etc.) are called cumulenes.
Structural isomerism
Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow:
C2H4: ethylene only
C3H6: propylene only
C4H8: 3 isomers: 1-butene, 2-butene, and isobutylene
C5H10: 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene
C6H12: 13 isomers: 1-hexene, 2-hexene, 3-hexene, 2-methyl-1-pentene, 3-methyl-1-pentene, 4-methyl-1-pentene, 2-methyl-2-pentene, 3-methyl-2-pentene, 4-methyl-2-pentene, 2,3-dimethyl-1-butene, 3,3-dimethyl-1-butene, 2,3-dimethyl-2-butene, 2-ethyl-1-butene
Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbon atoms particularly within the larger molecules (from ). The number of potential isomers increases rapidly with additional carbon atoms.
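The isomer counts above lend themselves to a quick computational check. The following sketch assumes the open-source RDKit cheminformatics toolkit is available (an assumption; RDKit is not mentioned in the article) and verifies that the three C4H8 structural isomers, plus the cis/trans pair of 2-butene, all share one molecular formula while canonicalizing to distinct SMILES:

```python
# Hypothetical check of the C4H8 isomers listed above (requires RDKit).
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

isomers = {
    "1-butene":     "C=CCC",
    "(Z)-2-butene": "C/C=C\\C",   # cis stereoisomer of 2-butene
    "(E)-2-butene": "C/C=C/C",    # trans stereoisomer of 2-butene
    "isobutylene":  "C=C(C)C",
}

for name, smiles in isomers.items():
    mol = Chem.MolFromSmiles(smiles)
    # CalcMolFormula should report C4H8 for every entry; the canonical
    # SMILES differ, confirming the structures are distinct isomers.
    print(f"{name:14s} {CalcMolFormula(mol)} {Chem.MolToSmiles(mol)}")
```

Note that 2-butene counts once among the three structural isomers; its cis and trans forms are stereoisomers, the cis–trans isomerism discussed above.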
Structure and bonding
Bonding
A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond.
Each carbon atom of the double bond uses its three sp2 hybrid orbitals to form sigma bonds to three atoms (the other carbon atom and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp2 hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and a half on the other. With a strength of 65 kcal/mol, the pi bond is significantly weaker than the sigma bond.
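A quick arithmetic check ties these figures together (a back-of-the-envelope estimate derived from the numbers quoted above, not an independent source): subtracting the single-bond from the double-bond energy estimates the pi contribution as

611 kJ/mol − 347 kJ/mol = 264 kJ/mol

which is consistent with the quoted pi-bond strength of 65 kcal/mol × 4.184 kJ/kcal ≈ 272 kJ/mol.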
Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. Consequently cis or trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties.
Shape
As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon atom in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbon atoms of the double bond. For example, the C–C–C bond angle in propylene is 123.9°.
For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, bicyclic systems require S ≥ 7 for stability and tricyclic systems require S ≥ 11.
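As a worked application of this counting rule (the atom counts are standard for these ring systems; the isolability claims are textbook generalizations, not from the original text): bicyclo[2.2.1]heptane has 7 ring atoms, of which 2 are bridgeheads, so S = 7 − 2 = 5 < 7 and a bridgehead double bond, as in bicyclo[2.2.1]hept-1-ene, is not viable. Bicyclo[3.3.1]nonane gives S = 9 − 2 = 7, and bicyclo[3.3.1]non-1-ene is indeed an isolable bridgehead alkene.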
Isomerism
In organic chemistry, the prefixes cis- and trans- are used to describe the positions of functional groups attached to carbon atoms joined by a double bond. In Latin, cis and trans mean "on this side of" and "on the other side of" respectively. Therefore, if the functional groups are both on the same side of the carbon chain, the bond is said to have cis- configuration, otherwise (i.e. the functional groups are on the opposite side of the carbon chain), the bond is said to have trans- configuration.
For there to be cis- and trans- configurations, there must be a carbon chain, or at least one functional group attached to each carbon must be the same for both. E- and Z- configuration can be used instead in the more general case where all four functional groups attached to the carbon atoms in a double bond are different. E- and Z- are abbreviations of the German words entgegen (opposite) and zusammen (together), respectively. In E- and Z-isomerism, each functional group is assigned a priority based on the Cahn–Ingold–Prelog priority rules. If the two groups with higher priority are on the same side of the double bond, the bond is assigned Z- configuration; otherwise (i.e. the two groups with higher priority are on opposite sides of the double bond), the bond is assigned E- configuration. Cis- and trans- configurations do not have a fixed relationship with E- and Z-configurations.
Physical properties
Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbon atoms are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increase in molecular mass.
Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. Strained alkenes, in particular, like norbornene and trans-cyclooctene are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper.
Boiling and melting points
Below is a list of the boiling and melting points of various alkenes with the corresponding alkane and alkyne analogues.
Infrared spectroscopy
In the IR spectrum, the stretching/compression of the C=C bond gives a peak at 1670–1600 cm−1. The band is weak in symmetrical alkenes. The bending of the C=C bond absorbs between 1000 and 650 cm−1.
NMR spectroscopy
In 1H NMR spectroscopy, hydrogens bonded directly to the double-bond carbons give a δH of 4.5–6.5 ppm. The double bond also deshields hydrogens attached to the carbons adjacent to the sp2 carbons, generating peaks at δH = 1.6–2.6 ppm. Cis/trans isomers are distinguishable due to different J-coupling effects: cis vicinal hydrogens have coupling constants in the range of 6–14 Hz, whereas trans hydrogens have coupling constants of 11–18 Hz.
In the 13C NMR spectra of alkenes, double bonds also deshield the carbons, giving them a downfield shift. C=C double-bond carbons usually have a chemical shift of about 100–170 ppm.
Combustion
Like most other hydrocarbons, alkenes combust to give carbon dioxide and water.
The combustion of alkenes releases less energy than burning the same molar amount of saturated hydrocarbons with the same number of carbon atoms.
This trend can be clearly seen in the list of standard enthalpy of combustion of hydrocarbons.
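The underlying stoichiometry for a mono-ene follows directly from the general formula (a standard balanced equation, added here for illustration):

CnH2n + (3n/2) O2 → n CO2 + n H2O

For propylene, for example: 2 C3H6 + 9 O2 → 6 CO2 + 6 H2O.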
Reactions
Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to this pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation. Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the presence of allylic CH centers. The former dominates but the allylic sites are important too.
Addition to the unsaturated bonds
Hydrogenation involves the addition of H2, resulting in an alkane. The equation of hydrogenation of ethylene to form ethane is:
H2C=CH2 + H2 → H3C−CH3
Hydrogenation reactions usually require catalysts to increase their reaction rate. The total number of hydrogens that can be added to an unsaturated hydrocarbon depends on its degree of unsaturation.
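The degree of unsaturation can be read off the molecular formula (a standard relation, added here for illustration): for a hydrocarbon CcHh it equals (2c + 2 − h)/2. Ethylene, C2H4, gives (2×2 + 2 − 4)/2 = 1, so each mole takes up one mole of H2; a diene or alkyne, with two degrees of unsaturation, takes up two.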
Similarly, halogenation involves the addition of a halogen molecule, such as Br2, resulting in a dihaloalkane. The equation of bromination of ethylene to form 1,2-dibromoethane is:
H2C=CH2 + Br2 → H2CBr−CH2Br
Unlike hydrogenation, these halogenation reactions do not require catalysts. The reaction occurs in two steps, with a halonium ion as an intermediate.
The bromine test is used to test the saturation of hydrocarbons, and can also serve as an indication of the degree of unsaturation. Bromine number is defined as the grams of bromine able to react with 100 g of product. As with hydrogenation, the extent of bromination depends on the number of π bonds: a higher bromine number indicates a higher degree of unsaturation.
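As a worked example (illustrative arithmetic, not from the original text): pure 1-hexene (C6H12, molar mass ≈ 84.2 g/mol) consumes one mole of Br2 (molar mass ≈ 159.8 g/mol) per mole, so its bromine number is roughly 159.8/84.2 × 100 ≈ 190 g of Br2 per 100 g of sample.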
The π bonds of alkenes hydrocarbons are also susceptible to hydration. The reaction usually involves strong acid as catalyst. The first step in hydration often involves formation of a carbocation. The net result of the reaction will be an alcohol. The reaction equation for hydration of ethylene is:
H2C=CH2 + H2O → CH3CH2OH
Hydrohalogenation involves addition of H−X to unsaturated hydrocarbons. This reaction results in new C−H and C−X σ bonds. The formation of the intermediate carbocation is selective and follows Markovnikov's rule. The hydrohalogenation of alkene will result in haloalkane. The reaction equation of HBr addition to ethylene is:
H2C=CH2 + HBr → CH3CH2Br
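For an unsymmetrical alkene, Markovnikov's rule fixes the regiochemistry; propylene is the standard illustration (example substrate chosen here, not taken from the original text):

CH3CH=CH2 + HBr → CH3CHBrCH3 (major)

The proton adds to the terminal CH2 so that the intermediate carbocation is the more stable secondary one; the anti-Markovnikov product CH3CH2CH2Br forms only in minor amounts in the absence of radical initiators.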
Cycloaddition
Alkenes add to dienes to give cyclohexenes. This conversion is an example of a Diels–Alder reaction. Such reactions proceed with retention of stereochemistry. The rates are sensitive to electron-withdrawing or electron-donating substituents. When irradiated by UV light, alkenes dimerize to give cyclobutanes. Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide.
Oxidation
Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides.
For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of silver-based catalysts:
2 H2C=CH2 + O2 → 2 C2H4O (ethylene oxide)
Alkenes react with ozone, leading to the scission of the double bond, a process called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide ((CH3)2S), with the overall result:
RCH=CHR′ + O3 + (CH3)2S → RCHO + R′CHO + (CH3)2S=O
When treated with a hot concentrated, acidified solution of KMnO4, alkenes are cleaved to form ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and the ozonolysis can be used to determine the position of a double bond in an unknown alkene.
The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants:
R′CH=CR2 + 1/2 O2 + H2O → R′CH(OH)−C(OH)R2
This reaction is called dihydroxylation.
In the presence of an appropriate photosensitiser, such as methylene blue, and light, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide.
Polymerization
Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkenes are usually referred to as polyolefins, although they contain no olefins. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber.
Allylic substitution
The presence of a C=C π bond in unsaturated hydrocarbons lowers the dissociation energy of the allylic C−H bonds. Thus, these groupings are susceptible to free-radical substitution at the C−H sites as well as addition reactions at the C=C site. In the presence of radical initiators, allylic C−H bonds can be halogenated. The presence of two C=C bonds flanking one methylene, i.e., doubly allylic, results in particularly weak C−H bonds. The high reactivity of these situations is the basis for certain free-radical reactions, manifested in the chemistry of drying oils.
Metathesis
Alkenes undergo olefin metathesis, which cleaves and interchanges the substituents of the alkene. A related reaction is ethenolysis, in which ethylene cleaves an internal alkene into two terminal alkenes:
RCH=CHR′ + H2C=CH2 → RCH=CH2 + R′CH=CH2
Metal complexation
In transition metal alkene complexes, alkenes serve as ligands for metals. In this case, the π electron density is donated to the metal d orbitals. The stronger the donation, the stronger the back bonding from the metal d orbital to the π* anti-bonding orbital of the alkene. This effect lowers the bond order of the alkene and increases the C−C bond length. One example is the complex Zeise's salt, K[PtCl3(C2H4)]·H2O. These complexes are related to the mechanisms of metal-catalyzed reactions of unsaturated hydrocarbons.
Reaction overview
Synthesis
Industrial methods
Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural-gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower molecular weight alkanes. The mixture is feedstock and temperature dependent, and separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons).
Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. This is the reverse of the catalytic hydrogenation of alkenes.
This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy.
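A representative case (example substrate chosen here for illustration) is the catalytic dehydrogenation of propane:

CH3CH2CH3 → CH2=CHCH3 + H2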
Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum.
Elimination reactions
One of the principal methods for alkene synthesis in the laboratory is the elimination reaction of alkyl halides, alcohols, and similar compounds. Most common is the β-elimination via the E2 or E1 mechanism. A commercially significant example is the production of vinyl chloride.
The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule); a typical example is sketched below. Where possible, the H is anti to the leaving group, even though this may lead to the less stable Z-isomer.
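A sketch of a Zaitsev-controlled dehydrohalogenation (substrate chosen here for illustration; not the example from the original figure):

CH3CHBrCH2CH3 + KOH → CH3CH=CHCH3 (2-butene, major) + CH2=CHCH2CH3 (1-butene, minor) + KBr + H2O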
Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene:
CH3CH2OH → H2C=CH2 + H2O
An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis). A thioketone and a phosphite ester combined (the Corey-Winter olefination) or diphosphorus tetraiodide will deoxygenate glycols to alkenes.
Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. The Cope reaction is a syn-elimination that occurs at or below 150 °C.
The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product.
Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate.
Synthesis from carbonyl compounds
Another important class of methods for alkene synthesis involves construction of a new carbon–carbon double bond by coupling or condensation of a carbonyl compound (such as an aldehyde or ketone) to a carbanion or its equivalent. Pre-eminent is the aldol condensation. Knoevenagel condensations are a related class of reactions that convert carbonyls into alkenes. Well-known methods are called olefinations. The Wittig reaction is illustrative, but other related methods are known, including the Horner–Wadsworth–Emmons reaction.
The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide.
Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react.
A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction.
A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction).
Synthesis from alkenes
The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as they are for the production of surfactants, processes incorporating an olefin metathesis step, such as the Shell higher olefin process, are important.
Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used in this process:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3
Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkene itself. It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond.
From alkynes
Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene.
For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives.
Rearrangements and related reactions
Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used such as the ene reaction and the Cope rearrangement.
In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene.
Application
Unsaturated hydrocarbons are widely used to produce plastics, medicines, and other useful materials.
Natural occurrence
Alkenes are prevalent in nature.
Plants are the main natural source of alkenes in the form of terpenes. Many of the most vivid natural pigments are terpenes; e.g. lycopene (red in tomatoes), carotene (orange in carrots), and xanthophylls (yellow in egg yolk). The simplest of all alkenes, ethylene is a signaling molecule that influences the ripening of plants.
IUPAC Nomenclature
Although the nomenclature is not followed widely, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes.
To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe.
For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply:
Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise:
Number the carbons in that chain starting from the end that is closest to the double bond.
Define the location k of the double bond as being the number of its first carbon.
Name the side groups (other than hydrogen) according to the appropriate rules.
Define the position of each side group as the number of the chain carbon it is attached to.
Write the position and name of each side group.
Write the name of the alkane with the same chain, replacing the "-ane" suffix by "k-ene".
The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene").
The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: CH3–C(CH3)2–CH2–CH2–CH3 is "2,2-dimethylpentane", whereas CH2=CH–C(CH3)2–CH2–CH3 is "3,3-dimethyl-1-pentene".
More complex rules apply for polyenes and cycloalkenes.
Cis–trans isomerism
If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin "on this side of") or trans- ("across", "on the other side of") before the name, respectively; as in cis-2-pentene or trans-2-butene.
More generally, cis–trans isomerism will exist if each of the two carbons in the double bond has two different atoms or groups attached to it. Accounting for these cases, the IUPAC recommends the more general E–Z notation, instead of the cis and trans prefixes. This notation considers the group with highest CIP priority on each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen, meaning "opposite"); if they are on the same side, it is labeled Z (from German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'".
Groups containing C=C double bonds
IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group.
Examples of simple alkynes: acetylene, propyne, 1-butyne.
In organic chemistry, an alkyne is an unsaturated hydrocarbon containing at least one carbon–carbon triple bond. The simplest acyclic alkynes with only one triple bond and no other functional groups form a homologous series with the general chemical formula CnH2n−2. Alkynes are traditionally known as acetylenes, although the name acetylene also refers specifically to C2H2, known formally as ethyne using IUPAC nomenclature. Like other hydrocarbons, alkynes are generally hydrophobic.
Structure and bonding
In acetylene, the H–C≡C bond angles are 180°. By virtue of this bond angle, alkynes are rod-like. Correspondingly, cyclic alkynes are rare. Benzyne cannot be isolated. The C≡C bond distance of 118 picometers (for C2H2) is much shorter than the C=C distance in alkenes (132 pm, for C2H4) or the C–C bond in alkanes (153 pm).
The triple bond is very strong, with a bond strength of 839 kJ/mol. The sigma bond contributes 369 kJ/mol, the first pi bond contributes 268 kJ/mol, and the second pi bond 202 kJ/mol. Bonding is usually discussed in the context of molecular orbital theory, which recognizes the triple bond as arising from overlap of s and p orbitals. In the language of valence bond theory, the carbon atoms in an alkyne bond are sp hybridized: they each have two unhybridized p orbitals and two sp hybrid orbitals. Overlap of an sp orbital from each atom forms one sp–sp sigma bond. Each p orbital on one atom overlaps one on the other atom, forming two pi bonds, giving a total of three bonds. The remaining sp orbital on each atom can form a sigma bond to another atom, for example to hydrogen atoms in the parent acetylene. The two sp orbitals project on opposite sides of the carbon atom.
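The component energies quoted above can be checked by simple addition: 369 + 268 + 202 = 839 kJ/mol. Each successive bond contributes less than the last, which is why the triple bond, though very strong, is well under three times the strength of a typical C–C single bond (347 kJ/mol, as quoted in the alkene section above).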
Terminal and internal alkynes
Internal alkynes feature carbon substituents on each acetylenic carbon. Symmetrical examples include diphenylacetylene and 3-hexyne. They may also be asymmetrical, such as in 2-pentyne.
Terminal alkynes have the formula RC≡CH, where at least one end of the alkyne is a hydrogen atom. An example is methylacetylene (propyne using IUPAC nomenclature). They are often prepared by alkylation of monosodium acetylide. Terminal alkynes, like acetylene itself, are mildly acidic, with pKa values of around 25. They are far more acidic than alkenes and alkanes, which have pKa values of around 40 and 50, respectively. The acidic hydrogen on terminal alkynes can be replaced by a variety of groups resulting in halo-, silyl-, and alkoxoalkynes. The carbanions generated by deprotonation of terminal alkynes are called acetylides. Internal alkynes are also considerably more acidic than alkenes and alkanes, though not nearly as acidic as terminal alkynes. The C–H bonds at the α position of alkynes (propargylic C–H bonds) can also be deprotonated using strong bases, with an estimated pKa of 35. This acidity can be used to isomerize internal alkynes to terminal alkynes using the alkyne zipper reaction.
Naming alkynes
In systematic chemical nomenclature, alkynes are named with the Greek prefix system without any additional letters. Examples include ethyne or octyne. In parent chains with four or more carbons, it is necessary to say where the triple bond is located. For octyne, one can either write 3-octyne or oct-3-yne when the bond starts at the third carbon. The lowest number possible is given to the triple bond. When no superior functional groups are present, the parent chain must include the triple bond even if it is not the longest possible carbon chain in the molecule. Ethyne is commonly called by its trivial name acetylene.
In chemistry, the suffix -yne is used to denote the presence of a triple bond. In organic chemistry, the suffix often follows IUPAC nomenclature. However, inorganic compounds featuring unsaturation in the form of triple bonds may be denoted by substitutive nomenclature with the same methods used with alkynes (i.e. the name of the corresponding saturated compound is modified by replacing the "-ane" ending with "-yne"). "-diyne" is used when there are two triple bonds, and so on. The position of unsaturation is indicated by a numerical locant immediately preceding the "-yne" suffix, or 'locants' in the case of multiple triple bonds. Locants are chosen so that the numbers are as low as possible. "-yne" is also used as a suffix to name substituent groups that are triply bound to the parent compound.
Sometimes a number between hyphens is inserted before it to state which atoms the triple bond is between. This suffix arose as a collapsed form of the end of the word "acetylene". The final "-e" disappears if it is followed by another suffix that starts with a vowel.
Structural isomerism
Alkynes having four or more carbon atoms can form different structural isomers by having the triple bond in different positions or having some of the carbon atoms be substituents rather than part of the parent chain. Other non-alkyne structural isomers are also possible.
C2H2: acetylene only
C3H4: propyne only
C4H6: 2 isomers: 1-butyne, and 2-butyne
C5H8: 3 isomers: 1-pentyne, 2-pentyne, and 3-methyl-1-butyne
C6H10: 7 isomers: 1-hexyne, 2-hexyne, 3-hexyne, 4-methyl-1-pentyne, 4-methyl-2-pentyne, 3-methyl-1-pentyne, 3,3-dimethyl-1-butyne
Synthesis
From calcium carbide
Classically, acetylene was prepared by hydrolysis (protonation) of calcium carbide (Ca2+[:C≡C:]2–):
CaC2 + 2 H2O → HC≡CH + Ca(OH)2
which was in turn synthesized by combining quicklime and coke in an electric arc furnace at 2200 °C:
CaO + 3 C (amorphous) → CaC2 + CO
This was an industrially important process which provided access to hydrocarbons from coal resources for countries like Germany and China. However, the energy-intensive nature of this process is a major disadvantage and its share of the world's production of acetylene has steadily decreased relative to hydrocarbon cracking.
Cracking
Commercially, the dominant alkyne is acetylene itself, which is used as a fuel and a precursor to other compounds, e.g., acrylates. Hundreds of millions of kilograms are produced annually by partial oxidation of natural gas:
2 CH4 + 3/2 O2 → HC≡CH + 3 H2O
Propyne, also industrially useful, is also prepared by thermal cracking of hydrocarbons.
Alkylation and arylation of terminal alkynes
Terminal alkynes (RC≡CH, including acetylene itself) can be deprotonated by bases like NaNH2, BuLi, or EtMgBr to give acetylide anions (RC≡C:–M+, M = Na, Li, MgBr) which can be alkylated by addition to carbonyl groups (Favorskii reaction), ring opening of epoxides, or SN2-type substitution of unhindered primary alkyl halides.
In the presence of transition metal catalysts, classically a combination of Pd(PPh3)2Cl2 and CuI, terminal acetylenes (RC≡CH) can react with aryl iodides and bromides (ArI or ArBr) in the presence of a secondary or tertiary amine like Et3N to give arylacetylenes (RC≡CAr) in the Sonogashira reaction.
The availability of these reliable reactions makes terminal alkynes useful building blocks for preparing internal alkynes.
Dehydrohalogenation and related reactions
Alkynes are prepared from 1,1- and 1,2-dihaloalkanes by double dehydrohalogenation. The reaction provides a means to generate alkynes from alkenes, which are first halogenated and then dehydrohalogenated. For example, phenylacetylene can be generated from styrene by bromination followed by treatment of the resulting 1,2-dibromo-1-phenylethane with sodium amide in ammonia:
Via the Fritsch–Buttenberg–Wiechell rearrangement, alkynes are prepared from vinyl bromides. Alkynes can be prepared from aldehydes using the Corey–Fuchs reaction and from aldehydes or ketones by the Seyferth–Gilbert homologation.
Vinyl halides are susceptible to dehydrohalogenation.
Reactions, including applications
Featuring a reactive functional group, alkynes participate in many organic reactions. Such use was pioneered by Ralph Raphael, who in 1955 wrote the first book describing their versatility as intermediates in synthesis. In spite of their kinetic stability (persistence) due to their strong triple bonds, alkynes are a thermodynamically unstable functional group, as can be gleaned from the highly positive heats of formation of small alkynes. For example, acetylene has a heat of formation of +227.4 kJ/mol (+54.2 kcal/mol), indicating a much higher energy content compared to its constituent elements. The highly exothermic combustion of acetylene is exploited industrially in oxyacetylene torches used in welding. Other reactions involving alkynes are often highly thermodynamically favorable (exothermic/exergonic) for the same reason.
Hydrogenation
Being more unsaturated than alkenes, alkynes characteristically undergo reactions that show that they are "doubly unsaturated". Alkynes are capable of adding two equivalents of H2, whereas an alkene adds only one equivalent. Depending on catalysts and conditions, alkynes add one or two equivalents of hydrogen. Partial hydrogenation, stopping after the addition of only one equivalent to give the alkene, is usually more desirable since alkanes are less useful:
The largest scale application of this technology is the conversion of acetylene to ethylene in refineries (the steam cracking of alkanes yields a few percent acetylene, which is selectively hydrogenated in the presence of a palladium/silver catalyst). For more complex alkynes, the Lindlar catalyst is widely recommended to avoid formation of the alkane, for example in the conversion of phenylacetylene to styrene. Without such a moderated catalyst, hydrogenation proceeds through the alkene to the alkane:
RCH=CR′H + H2 → RCH2CR′H2
The addition of one equivalent of H2 to internal alkynes gives cis-alkenes.
Addition of halogens and related reagents
Alkynes characteristically are capable of adding two equivalents of halogens and hydrogen halides.
RC≡CR′ + 2 Br2 → RCBr2CR′Br2
The addition of nonpolar E–H bonds across the C≡C triple bond is general for silanes, boranes, and related hydrides. The hydroboration of alkynes gives vinylic boranes which oxidize to the corresponding aldehyde or ketone. In the thiol-yne reaction the substrate is a thiol.
Addition of hydrogen halides has long been of interest. In the presence of mercuric chloride as a catalyst, acetylene and hydrogen chloride react to give vinyl chloride. While this method has been abandoned in the West, it remains the main production method in China.
Hydration
The hydration reaction of acetylene gives acetaldehyde. The reaction proceeds by formation of vinyl alcohol, which tautomerizes to form the aldehyde. This reaction was once a major industrial process but it has been displaced by the Wacker process. This reaction occurs in nature, the catalyst being acetylene hydratase.
Hydration of phenylacetylene gives acetophenone:
PhC≡CH + H2O → PhCOCH3
Catalytic hydration converts 1,8-nonadiyne to 2,8-nonanedione:
HC≡C(CH2)5C≡CH + 2 H2O → CH3CO(CH2)5COCH3
Isomerization to allenes
Alkynes can be isomerized by strong base or transition metals to allenes. Due to their comparable thermodynamic stabilities, the equilibrium constant of alkyne/allene isomerization is generally within several orders of magnitude of unity. For example propyne can be isomerized to give an equilibrium mixture with propadiene:
HC≡C–CH3 ⇌ CH2=C=CH2
Cycloadditions and oxidation
Alkynes undergo diverse cycloaddition reactions. The Diels–Alder reaction with 1,3-dienes gives 1,4-cyclohexadienes. This general reaction has been extensively developed. Electrophilic alkynes are especially effective dienophiles. The "cycloadduct" derived from the addition of alkynes to 2-pyrone eliminates carbon dioxide to give the aromatic compound. Other specialized cycloadditions include multicomponent reactions such as alkyne trimerisation to give aromatic compounds and the [2+2+1]-cycloaddition of an alkyne, alkene and carbon monoxide in the Pauson–Khand reaction. Non-carbon reagents also undergo cyclization, e.g. the azide–alkyne Huisgen cycloaddition to give triazoles. Cycloaddition processes involving alkynes are often catalyzed by metals, e.g. enyne metathesis and alkyne metathesis, which allows the scrambling of carbyne (RC) centers:
RC≡CR + R′C≡CR′ ⇌ 2 RC≡CR′
Oxidative cleavage of alkynes proceeds via cycloaddition to metal oxides. Most famously, potassium permanganate converts alkynes to a pair of carboxylic acids.
Reactions specific for terminal alkynes
Terminal alkynes are readily converted to many derivatives, e.g. by coupling reactions and condensations. Condensation of acetylene with formaldehyde produces butynediol:
2 CH2O + HC≡CH → HOCH2C≡CCH2OH
In the Sonogashira reaction, terminal alkynes are coupled with aryl or vinyl halides:
This reactivity exploits the fact that terminal alkynes are weak acids, whose typical pKa values of around 25 place them between those of ammonia (35) and ethanol (16):
RC≡CH + MX → RC≡CM + HX
where MX = NaNH2, LiBu, or RMgX.
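The pKa values above fix the position of such deprotonation equilibria. For example, using the figures quoted earlier for a terminal alkyne (≈25) and ammonia (≈35), deprotonation by amide ion is essentially quantitative:

    Keq = 10^(pKa(NH3) − pKa(RC≡CH)) = 10^(35 − 25) = 10^10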
The reactions of alkynes with certain metal cations, e.g. Ag+ and Cu+, also give acetylides. Thus, a few drops of diamminesilver(I) hydroxide (Ag(NH3)2OH) react with terminal alkynes, signaled by formation of a white precipitate of the silver acetylide. This reactivity is the basis of alkyne coupling reactions, including the Cadiot–Chodkiewicz coupling, Glaser coupling, and the Eglinton coupling shown below:
2 R–C≡C–H → R–C≡C–C≡C–R (Cu(OAc)2, pyridine)
In the Favorskii reaction and in alkynylations in general, terminal alkynes add to carbonyl compounds to give the hydroxyalkyne.
Metal complexes
Alkynes form complexes with transition metals. Such complexes occur also in metal catalyzed reactions of alkynes such as alkyne trimerization. Terminal alkynes, including acetylene itself, react with water to give aldehydes. The transformation typically requires metal catalysts to give this anti-Markovnikov addition result.
Alkynes in nature and medicine
According to Ferdinand Bohlmann, the first naturally occurring acetylenic compound, dehydromatricaria ester, was isolated from an Artemisia species in 1826. In the nearly two centuries that have followed, well over a thousand naturally occurring acetylenes have been discovered and reported. Polyynes, a subset of this class of natural products, have been isolated from a wide variety of plant species, cultures of higher fungi, bacteria, marine sponges, and corals. Some acids like tariric acid contain an alkyne group. Diynes and triynes, species with the linkage RC≡C–C≡CR′ and RC≡C–C≡C–C≡CR′ respectively, occur in certain plants (Ichthyothere, Chrysanthemum, Cicuta, Oenanthe and other members of the Asteraceae and Apiaceae families). Some examples are cicutoxin, oenanthotoxin, and falcarinol. These compounds are highly bioactive, e.g. as nematocides. 1-Phenylhepta-1,3,5-triyne is illustrative of a naturally occurring triyne.
Alkynes occur in some pharmaceuticals, including the contraceptive noretynodrel. A carbon–carbon triple bond is also present in marketed drugs such as the antiretroviral efavirenz and the antifungal terbinafine. Molecules called ene-diynes feature a ring containing an alkene ("ene") between two alkyne groups ("diyne"). These compounds, e.g. calicheamicin, are some of the most aggressive antitumor drugs known, so much so that the ene-diyne subunit is sometimes referred to as a "warhead". Ene-diynes undergo rearrangement via the Bergman cyclization, generating highly reactive radical intermediates that attack DNA within the tumor.
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. It is intended as a tool for monitoring drug use and for research aimed at improving the quality of medication use; it does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976.
Coding system
This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code: for example, acetylsalicylic acid (aspirin) has A01AD05 as a drug for local oral treatment, B01AC06 as a platelet inhibitor, and N02BA01 as an analgesic and antipyretic. Conversely, one code can represent more than one active ingredient: for example, C09BB04 is the combination of perindopril with amlodipine, two active ingredients that have their own codes (C09AA04 and C08CA01 respectively) when prescribed alone.
The ATC classification system is a strict hierarchy, meaning that each code necessarily has one and only one parent code, except for the 14 codes at the topmost level, which have no parents. The codes are semantic identifiers, meaning they carry information beyond serving as identifiers: each code depicts its complete lineage of parent codes. As of 7 May 2020, there are 6,331 codes in ATC.
History
The ATC system is based on the earlier Anatomical Classification System, which is intended as a tool for the pharmaceutical industry to classify pharmaceutical products (as opposed to their active ingredients). This system, confusingly also called ATC, was initiated in 1971 by the European Pharmaceutical Market Research Association (EphMRA) and is being maintained by the EphMRA and Intellus. Its codes are organised into four levels. The WHO's system, having five levels, is an extension and modification of the EphMRA's. It was first published in 1976.
Classification
In this system, drugs are classified into groups at five different levels:
First level
The first level of the code indicates the anatomical main group and consists of one letter. There are 14 main groups:
Example: C Cardiovascular system
Second level
The second level of the code indicates the therapeutic subgroup and consists of two digits.
Example: C03 Diuretics
Third level
The third level of the code indicates the therapeutic/pharmacological subgroup and consists of one letter.
Example: C03C High-ceiling diuretics
Fourth level
The fourth level of the code indicates the chemical/therapeutic/pharmacological subgroup and consists of one letter.
Example: C03CA Sulfonamides
Fifth level
The fifth level of the code indicates the chemical substance and consists of two digits.
Example: C03CA01 furosemide
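Because the code format is fixed (one letter, two digits, one letter, one letter, two digits), the five levels can be read off a complete code mechanically. The short Python sketch below illustrates this; the function and field names are invented for the example:

    def split_atc(code: str) -> dict:
        """Split a complete 7-character ATC code into its five levels."""
        if len(code) != 7:
            raise ValueError("a complete ATC code has exactly 7 characters")
        return {
            "anatomical_main_group": code[0],      # level 1, e.g. 'C'
            "therapeutic_subgroup": code[:3],      # level 2, e.g. 'C03'
            "pharmacological_subgroup": code[:4],  # level 3, e.g. 'C03C'
            "chemical_subgroup": code[:5],         # level 4, e.g. 'C03CA'
            "chemical_substance": code,            # level 5, e.g. 'C03CA01'
        }

    print(split_atc("C03CA01"))  # furosemide, per the examples above

An ATCvet code (see below) can be handled the same way after stripping its leading Q.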
Other ATC classification systems
ATCvet
The Anatomical Therapeutic Chemical Classification System for veterinary medicinal products (ATCvet) is used to classify veterinary drugs. ATCvet codes can be created by placing the letter Q in front of the ATC code of most human medications. For example, furosemide for veterinary use has the code QC03CA01.
Some codes are used exclusively for veterinary drugs, such as QI Immunologicals, QJ51 Antibacterials for intramammary use or QN05AX90 amperozide.
Herbal ATC (HATC)
The Herbal ATC system (HATC) is an ATC classification of herbal substances; it differs from the regular ATC system by using 4 digits instead of 2 at the 5th level group.
The herbal classification is not adopted by WHO. The Uppsala Monitoring Centre is responsible for the Herbal ATC classification, and it is part of the WHODrug Global portfolio available by subscription.
Defined daily dose
The ATC system also includes defined daily doses (DDDs) for many drugs. This is a measurement of drug consumption based on the usual daily dose for a given drug. According to the definition, "[t]he DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults."
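As a hedged illustration of how a DDD turns raw dispensing data into a consumption figure (the 40 mg dose and quantities below are assumed values chosen for the example, not official WHO assignments):

    # Illustrative arithmetic only; assumes a DDD of 40 mg for the drug.
    dispensed_mg = 400_000                 # total amount dispensed over a period
    assumed_ddd_mg = 40                    # assumed defined daily dose
    print(dispensed_mg / assumed_ddd_mg)   # 10000.0 DDDs consumed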
Adaptations and updates
National issues of the ATC classification, such as the German Anatomisch-therapeutisch-chemische Klassifikation mit Tagesdosen, may include additional codes and DDDs not present in the WHO version.
ATC follows guidelines in creating new codes for newly approved drugs. An application is submitted to the WHO for ATC classification and DDD assignment. A preliminary or temporary code is assigned and published on the website and in WHO Drug Information for comment or objection. New ATC/DDD codes are discussed at the semi-annual Working Group meeting. If accepted, the code becomes a final decision, is published semi-annually on the website and in WHO Drug Information, and is implemented in the annual print/on-line ATC/DDD Index on January 1.
Changes to existing ATC/DDD codes follow a similar process: they become temporary codes and, if accepted, become final decisions as ATC/DDD alterations. ATC and DDD alterations become valid and are implemented only with the coming annual update; the original codes remain in use until the end of the year. An updated version of the complete on-line/print ATC index with DDDs is published annually on January 1.
Parallel ATA (PATA), originally AT Attachment, also known as Integrated Drive Electronics (IDE), is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, optical disc drives, and tape drives in computers.
The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards.
The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of SATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short.
Parallel ATA cables have a maximum allowable length of 18 in (457 mm). Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems.
History and terminology
The standard was originally conceived as the "AT Bus Attachment", officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology" so ATA has also been referred to as "Advanced Technology Attachment". When a newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short.
Physical ATA interfaces became a standard component in all PCs, initially on host bus adapters, sometimes on a sound card but ultimately as two physical interfaces embedded in a Southbridge chip on a motherboard. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems. They were replaced by SATA interfaces.
IDE and ATA-1
The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Compaq (the initial customer), they worked with various disk drive manufacturers to develop and ship early products with the goal of remaining software compatible with the existing IBM PC hard drive interface. The first such drives appeared internally in Compaq PCs in 1986 and were first separately offered by Conner Peripherals as the CP342 in June 1987.
The term Integrated Drive Electronics refers to the drive controller being integrated into the drive, as opposed to a separate controller situated at the other side of the connection cable to the drive. On an IBM PC compatible, CP/M machine, or similar, this was typically a card installed on a motherboard. The interface cards used to connect a parallel ATA drive to, for example, an ISA Slot, are not drive controllers: they are merely bridges between the host bus and the ATA interface. Since the original ATA interface is essentially just a 16-bit ISA bus, the bridge was especially simple in case of an ATA connector being located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only to ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it.
The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1".
A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus. It has been referred to as "XT-IDE", "XTA" or "XT Attachment".
EIDE and ATA-2
In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1 such as "Fast ATA" and "Fast ATA-2".
The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants.
ATA-2 also was the first to note that devices other than hard drives could be attached to the interface:
ATAPI
ATA was originally designed for, and worked only with, hard disk drives and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee (SFF) allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disk drives. For example, any removable media device needs a "media eject" command, and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol.
ATAPI is a protocol allowing the ATA interface to carry SCSI commands and responses; therefore, all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI.
ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI.
ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. Some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on.
The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.) are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview. One commonly used set is defined in the MMC SCSI command set.
ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4).
UDMA and ATA-4
The ATA/ATAPI-4 standard also introduced several "Ultra DMA" transfer modes. These initially supported speeds from 16 to 33 MB/s. In later versions, faster Ultra DMA modes were added, requiring new 80-wire cables to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MB/s.
Ultra ATA
Ultra ATA, abbreviated UATA, is a designation that has been primarily used by Western Digital for different speed enhancements to the ATA/ATAPI standards. For example, in 2000 Western Digital published a document describing "Ultra ATA/100", which brought performance improvements for the then-current ATA/ATAPI-5 standard by improving maximum speed of the Parallel ATA interface from 66 to 100 MB/s. Most of Western Digital's changes, along with others, were included in the ATA/ATAPI-6 standard (2002).
x86 BIOS size limitations
Initially, the size of an ATA drive was stored in the system x86 BIOS using a type number (1 through 45) that predefined the C/H/S parameters and also often the landing zone, in which the drive heads are parked while not in use. Later, a "user definable" format called C/H/S or cylinders, heads, sectors was made available. These numbers were important for the earlier ST-506 interface, but were generally meaningless for ATA—the CHS parameters for later ATA large drives often specified impossibly high numbers of heads or sectors that did not actually define the internal physical layout of the drive at all. From the start, and up to ATA-2, every user had to specify explicitly how large every attached drive was. From ATA-2 on, an "identify drive" command was implemented that can be sent and which will return all drive parameters.
Owing to a lack of foresight by motherboard manufacturers, the system BIOS was often hobbled by artificial C/H/S size limitations due to the manufacturer assuming certain values would never exceed a particular numerical maximum.
The first of these BIOS limits occurred when ATA drives reached sizes in excess of 504 MiB, because some motherboard BIOSes would not allow C/H/S values above 1024 cylinders, 16 heads, and 63 sectors. Multiplied by 512 bytes per sector, this totals 528,482,304 bytes which, divided by 1,048,576 bytes per MiB, equals 504 MiB (528 MB).
The second of these BIOS limitations occurred at 1024 cylinders, 256 heads, and 63 sectors, while a problem in MS-DOS limited the number of heads to 255. This totals 8,422,686,720 bytes (8032.5 MiB), commonly referred to as the 8.4 gigabyte barrier. This is again a limit imposed by x86 BIOSes, and not a limit imposed by the ATA interface.
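Both barriers follow directly from multiplying the maximum C/H/S values by the sector size; a minimal sketch reproducing the arithmetic (the helper name is invented for this example):

    SECTOR = 512  # bytes per sector

    def chs_bytes(cylinders: int, heads: int, sectors: int) -> int:
        return cylinders * heads * sectors * SECTOR

    print(chs_bytes(1024, 16, 63) / 2**20)   # 504.0 MiB: the first barrier
    print(chs_bytes(1024, 255, 63) / 2**20)  # 8032.5 MiB: the 8.4 GB barrier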
It was eventually determined that these size limitations could be overridden with a small program loaded at startup from a hard drive's boot sector. Some hard drive manufacturers, such as Western Digital, started including these override utilities with large hard drives to help overcome these problems. However, if the computer was booted in some other manner without loading the special utility, the invalid BIOS settings would be used and the drive could either be inaccessible or appear to the operating system to be damaged.
Later, an extension to the x86 BIOS disk services called the "Enhanced Disk Drive" (EDD) was made available, which makes it possible to address drives as large as 2^64 sectors.
Interface size limitations
The first drive interface used a 22-bit addressing mode, which resulted in a maximum drive capacity of two gigabytes. Later, the first formalized ATA specification used a 28-bit addressing mode through LBA28, allowing for the addressing of 2^28 (268,435,456) sectors (blocks) of 512 bytes each, resulting in a maximum capacity of 128 GiB (137 GB).
ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 PB). As a consequence, any ATA drive of capacity larger than about 137 GB must be an ATA-6 or later drive. Connecting such a drive to a host with an ATA-5 or earlier interface will limit the usable capacity to the maximum of the interface.
Some operating systems, including Windows XP pre-SP1, and Windows 2000 pre-SP3, disable LBA48 by default, requiring the user to take extra steps to use the entire capacity of an ATA drive larger than about 137 gigabytes.
Older operating systems, such as Windows 98, do not support 48-bit LBA at all. However, members of the third-party group MSFN have modified the Windows 98 disk drivers to add unofficial support for 48-bit LBA to Windows 95 OSR2, Windows 98, Windows 98 SE and Windows ME.
Some 16-bit and 32-bit operating systems supporting LBA48 may still not support disks larger than 2 TiB due to using 32-bit arithmetic only; a limitation also applying to many boot sectors.
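Each of these addressing ceilings is a power of two multiplied by the 512-byte sector size; a short sketch (helper name invented for the example):

    SECTOR = 512  # bytes per sector

    def lba_capacity(address_bits: int) -> int:
        return 2**address_bits * SECTOR

    print(lba_capacity(28) / 2**30)  # 128.0 GiB (LBA28, about 137 GB)
    print(lba_capacity(48) / 2**50)  # 128.0 PiB (LBA48, about 144 PB)
    print(lba_capacity(32) / 2**40)  # 2.0 TiB (32-bit arithmetic ceiling)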
Primacy and obsolescence
Parallel ATA (then simply called ATA or IDE) became the primary storage device interface for PCs soon after its introduction. In some systems, a third and fourth motherboard interface was provided, allowing up to eight ATA devices to be attached to the motherboard. Often, these additional connectors were implemented by inexpensive RAID controllers.
Soon after the introduction of Serial ATA (SATA) in 2003, use of Parallel ATA declined. Some PCs and laptops of the era have a SATA hard disk and an optical drive connected to PATA.
As of 2007, some PC chipsets, for example the Intel ICH10, had removed support for PATA. Motherboard vendors still wishing to offer Parallel ATA with those chipsets must include an additional interface chip. In more recent computers, the Parallel ATA interface is rarely used even if present, as four or more Serial ATA connectors are usually provided on the motherboard and SATA devices of all types are common.
With Western Digital's withdrawal from the PATA market, hard disk drives with the PATA interface were no longer in production after December 2013 for other than specialty applications.
Interface
Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin female insulation displacement connectors (IDC) attached to a 40- or 80-conductor ribbon cable. Each cable has two or three connectors, one of which plugs into a host adapter interfacing with the rest of the computer system. The remaining connector(s) plug into storage devices, most commonly hard disk drives or optical drives. Each connector has 39 physical pins arranged into two rows (2.54 mm, 0.1-inch pitch), with a gap or key at pin 20. Earlier connectors may not have that gap, with all 40 pins available. Thus, later cables with the gap filled in are incompatible with earlier connectors, although earlier cables are compatible with later connectors.
Round parallel ATA cables (as opposed to ribbon cables) were eventually made available for 'case modders' for cosmetic reasons, as well as claims of improved computer cooling; they were also easier to handle. However, only ribbon cables are supported by the ATA specifications.
Pin 20: In the ATA standard, pin 20 is defined as a mechanical key and is not used. The pin's socket on the female connector is often blocked, requiring pin 20 to be omitted from the male cable or drive connector; it is thus impossible to plug it in the wrong way round. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20.
Pin 28: Pin 28 of the gray (slave/middle) connector of an 80-conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors. This enables cable select functionality.
Pin 34: Pin 34 is connected to ground inside the blue connector of an 80-conductor cable but not attached to any conductor of the cable, allowing for detection of such a cable. It is attached normally on the gray and black connectors.
44-pin variant
A 44-pin variant PATA connector is used for 2.5 inch drives inside laptops. The pins are closer together (2.0 mm pitch) and the connector is physically smaller than the 40-pin connector. The extra pins carry power.
80-conductor variant
ATA's cables have had 40 conductors for most of its history (44 conductors for the smaller form-factor version used for 2.5" drives—the extra four for power), but an 80-conductor version appeared with the introduction of the UDMA/66 mode. All of the additional conductors in the new cable are grounds, interleaved with the signal conductors to reduce the effects of capacitive coupling between neighboring signal conductors, reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables.
Though the number of conductors doubled, the number of connector pins and the pinout remain the same as 40-conductor cables, and the external appearance of the connectors is identical. Internally, the connectors are different; the connectors for the 80-conductor cable connect a larger number of ground conductors to the ground pins, while the connectors for the 40-conductor cable connect ground conductors to ground pins one-to-one. 80-conductor cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive respectively) as opposed to uniformly colored 40-conductor cable's connectors (commonly all gray). The gray connector on 80-conductor cables has pin 28 CSEL not connected, making it the slave position for drives configured cable select.
Multiple devices on a cable
If two devices are attached to a single cable, one must be designated as Device 0 (in the past, commonly designated master) and the other as Device 1 (in the past, commonly designated slave). This distinction is necessary to allow both drives to share the cable without conflict. The Device 0 drive is the drive that usually appears "first" to the computer's BIOS and/or operating system. In most personal computers the drives are often designated as "C:" for Device 0 and "D:" for Device 1, referring to an active primary partition on each.
The mode that a device must use is often set by a jumper setting on the device itself, which must be manually set to Device 0 (Master) or Device 1 (Slave). If there is a single device on a cable, it should be configured as Device 0. However, some drives of a certain era (Western Digital drives, in particular) have a special setting called Single for this configuration. Also, depending on the hardware and software available, a single drive on a cable will often work reliably even though configured as the Device 1 drive (most often seen where an optical drive is the only device on the secondary ATA interface).
The words primary and secondary typically refer to the two IDE cables, which can have two drives each (primary master, primary slave, secondary master, secondary slave).
There are many debates about how much a slow device can impact the performance of a faster device on the same cable. On early ATA host adapters, both devices' data transfers can be constrained to the speed of the slower device, if two devices of different speed capabilities are on the same cable. For all modern ATA host adapters, this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed. Even with earlier adapters without independent timing, this effect applies only to the data transfer phase of a read or write operation. This is caused by the omission of both overlapped and queued feature sets from most parallel ATA products. Only one device on a cable can perform a read or write operation at one time; therefore, a fast device on the same cable as a slow device under heavy use will find it has to wait for the slow device to complete its task first. However, most modern devices will report write operations as complete once the data is stored in their onboard cache memory, before the data is written to the (slow) magnetic storage. This allows commands to be sent to the other device on the cable, reducing the impact of the "one operation at a time" limit. The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably will not matter. Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive.
Cable select
A drive mode called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as Device 0 or Device 1, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the Device 0 (master) device; if it sees that pin 28 is open, the device becomes the Device 1 (slave) device.
This setting is usually chosen by a jumper setting on the drive called "cable select", usually marked CS, which is separate from the Device 0/1 setting.
If two drives are configured as Device 0 and Device 1 manually, this configuration does not need to correspond to their position on the cable. Pin 28 is only used to let the drives know their position on the cable; it is not used by the host when communicating with the drives. In other words, the manual master/slave setting using jumpers on the drives takes precedence and allows them to be freely placed on either connector of the ribbon cable.
With the 40-conductor cable, it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors; putting the slave Device 1 device at the end of the cable, and the master Device 0 on the middle connector. This arrangement eventually was standardized in later versions. However, it had one drawback: if there is just one master device on a 2-drive cable, using the middle connector, this results in an unused stub of cable, which is undesirable for physical convenience and electrical reasons. The stub causes signal reflections, particularly at higher transfer rates.
Starting with the 80-conductor cable defined for use in ATAPI5/UDMA4, the master Device 0 device goes at the far-from-the-host end of the cable on the black connector, the slave Device 1 goes on the grey middle connector, and the blue connector goes to the host (e.g. motherboard IDE connector, or IDE card). So, if there is only one (Device 0) device on a two-drive cable, using the black connector, there is no cable stub to cause reflections (the unused connector is now in the middle of the ribbon). Also, cable select is now implemented in the grey middle device connector, usually simply by omitting the pin 28 contact from the connector body.
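The rules above reduce to a small decision procedure, sketched below in Python for illustration (the jumper labels are simplified; real drives vary by manufacturer):

    def device_role(jumper: str, pin28_grounded: bool = False) -> int:
        """Return 0 (Device 0/master) or 1 (Device 1/slave)."""
        if jumper == "master":        # manual setting takes precedence
            return 0
        if jumper == "slave":
            return 1
        if jumper == "cable_select":  # role follows pin 28 at the drive's connector
            return 0 if pin28_grounded else 1
        raise ValueError("unknown jumper setting")

    print(device_role("cable_select", pin28_grounded=True))  # 0: master position
    print(device_role("slave"))  # 1: regardless of cable position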
Serialized, overlapped, and queued operations
The parallel ATA protocols up through ATA-3 require that once a command has been given on an ATA interface, it must complete before any subsequent command may be given. Operations on the devices must be serialized, with only one operation in progress at a time, with respect to the ATA host interface. A useful mental model is that the host ATA interface is busy with the first request for its entire duration, and therefore can not be told about another request until the first one is complete. The function of serializing requests to the interface is usually performed by a device driver in the host operating system.
The ATA-4 and subsequent versions of the specification have included an "overlapped feature set" and a "queued feature set" as optional features, both being given the name "Tagged Command Queuing" (TCQ), a reference to a set of features from SCSI which the ATA version attempts to emulate. However, support for these is extremely rare in actual parallel ATA products and device drivers because these feature sets were implemented in such a way as to maintain software compatibility with its heritage as originally an extension of the ISA bus. This implementation resulted in excessive CPU utilization which largely negated the advantages of command queuing. By contrast, overlapped and queued operations have been common in other storage buses; in particular, SCSI's version of tagged command queuing had no need to be compatible with APIs designed for ISA, allowing it to attain high performance with low overhead on buses which supported first party DMA like PCI. This has long been seen as a major advantage of SCSI.
The Serial ATA standard has supported native command queueing (NCQ) since its first release, but it is an optional feature for both host adapters and target devices. Many obsolete PC motherboards do not support NCQ, but modern SATA hard disk drives and SATA solid-state drives usually support NCQ, which is not the case for removable (CD/DVD) drives because the ATAPI command set used to control them prohibits queued operations.
HDD passwords and security
ATA devices may support an optional security feature which is defined in an ATA specification, and thus not specific to any brand or device. The security feature can be enabled and disabled by sending special ATA commands to the drive. If a device is locked, it will refuse all access until it is unlocked. A device can have two passwords: a User Password and a Master Password; either or both may be set. There is a Master Password identifier feature which, if supported and used, can identify the current Master Password (without disclosing it). The Master Password, if set, can be used by the administrator to reset the User Password if the end user has forgotten it. On some laptops and business computers, the BIOS can manage the ATA passwords.
A device can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response shows which mode the disk is in: 0 = High, 1 = Maximum. In High security mode, the device can be unlocked with either the User or Master password, using the "SECURITY UNLOCK DEVICE" ATA command. There is an attempt limit, normally set to 5, after which the disk must be power cycled or hard-reset before unlocking can be attempted again. Also in High security mode, the SECURITY ERASE UNIT command can be used with either the User or Master password. In Maximum security mode, the device can be unlocked only with the User password. If the User password is not available, the only remaining way to get at least the bare hardware back to a usable state is to issue the SECURITY ERASE PREPARE command, immediately followed by SECURITY ERASE UNIT. In Maximum security mode, the SECURITY ERASE UNIT command requires the Master password and will completely erase all data on the disk. Word 89 in the IDENTIFY response indicates how long the operation will take. While the ATA lock is intended to be impossible to defeat without a valid password, there are purported workarounds to unlock a device.
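A minimal sketch decoding the mode flag described above from word 128 of an IDENTIFY response (the input values are made-up examples):

    def security_mode(identify_word_128: int) -> str:
        # Bit 8 of IDENTIFY word 128: 0 = High security, 1 = Maximum security
        return "Maximum" if (identify_word_128 >> 8) & 1 else "High"

    print(security_mode(0x0101))  # Maximum (bit 8 set in this example value)
    print(security_mode(0x0001))  # High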
For NVMe drives, the security features, including lock passwords, were defined in the OPAL standard.
For sanitizing entire disks, the built-in Secure Erase command is effective when implemented correctly. There have been a few reported instances of failures to erase some or all data. On some laptops and business computers, the BIOS can invoke Secure Erase to erase all data on the disk.
External parallel ATA devices
Due to a short cable length specification and shielding issues it is extremely uncommon to find external PATA devices that directly use PATA for connection to a computer. A device connected externally needs additional cable length to form a U-shaped bend so that the external device may be placed alongside, or on top of the computer case, and the standard cable length is too short to permit this. For ease of reach from motherboard to device, the connectors tend to be positioned towards the front edge of motherboards, for connection to devices protruding from the front of the computer case. This front-edge position makes extension out the back to an external device even more difficult. Ribbon cables are poorly shielded, and the standard relies upon the cabling to be installed inside a shielded computer case to meet RF emissions limits.
External hard disk drives or optical disk drives that have an internal PATA interface use some other interface technology to bridge the distance between the external device and the computer. USB is the most common external interface, followed by FireWire. A bridge chip inside the external devices converts from the USB interface to PATA, and typically supports only a single external device without cable select or master/slave.
Specifications
The following table shows the names of the versions of the ATA standards and the transfer modes and rates supported by each. Note that the transfer rate for each mode (for example, 66.7 MB/s for UDMA4, commonly called "Ultra-DMA 66", defined by ATA-5) gives its maximum theoretical transfer rate on the cable. This is simply two bytes multiplied by the effective clock rate, and presumes that every clock cycle is used to transfer end-user data. In practice, of course, protocol overhead reduces this value.
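For example, the quoted 66.7 MB/s for UDMA4 corresponds to:

    66.7 MB/s ÷ 2 bytes per transfer ≈ 33.3 million transfers per second

on the 16-bit cable, before any protocol overhead is accounted for.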
Congestion on the host bus to which the ATA adapter is attached may also limit the maximum burst transfer rate. For example, the maximum data transfer rate for conventional PCI bus is 133 MB/s, and this is shared among all active devices on the bus.
In addition, no ATA hard drives existed in 2005 that were capable of measured sustained transfer rates of above 80 MB/s. Furthermore, sustained transfer rate tests do not give realistic throughput expectations for most workloads: They use I/O loads specifically designed to encounter almost no delays from seek time or rotational latency. Hard drive performance under most workloads is limited first and second by those two factors; the transfer rate on the bus is a distant third in importance. Therefore, transfer speed limits above 66 MB/s really affect performance only when the hard drive can satisfy all I/O requests by reading from its internal cache—a very unusual situation, especially considering that such data is usually already buffered by the operating system.
Mechanical hard disk drives have since reached transfer rates of up to 524 MB/s, which is far beyond the capabilities of the PATA/133 specification. High-performance solid state drives can transfer data at up to 7000–7500 MB/s.
Only the Ultra DMA modes use CRC to detect errors in data transfer between the controller and drive. This is a 16-bit CRC, and it is used for data blocks only. Transmission of command and status blocks does not use the fast signaling methods that would necessitate CRC. For comparison, in Serial ATA, 32-bit CRC is used for both commands and data.
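For illustration, a generic bit-serial CRC-16 over a data block is sketched below; the polynomial 0x8005 and zero initial value are common textbook choices used here for the example, not necessarily the parameters the Ultra DMA specification mandates:

    def crc16(data: bytes, poly: int = 0x8005, crc: int = 0x0000) -> int:
        # Generic bit-serial CRC-16; illustrative parameters, not the ATA spec's.
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    print(hex(crc16(bytes(512))))  # checksum over one 512-byte data block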
Features introduced with each ATA revision
Speed of defined transfer modes
Related standards, features, and proposals
ATAPI Removable Media Device (ARMD)
ATAPI devices with removable media, other than CD and DVD drives, are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system. These can be supported as bootable devices by a BIOS complying with the ATAPI Removable Media Device BIOS Specification, originally developed by Compaq Computer Corporation and Phoenix Technologies. It specifies provisions in the BIOS of a personal computer to allow the computer to be bootstrapped from devices such as Zip drives, Jaz drives, SuperDisk (LS-120) drives, and similar devices.
These devices have removable media like floppy disk drives, but capacities more commensurate with hard drives, and programming requirements unlike either. Due to limitations in the floppy controller interface most of these devices were ATAPI devices, connected to one of the host computer's ATA interfaces, similarly to a hard drive or CD-ROM device. However, existing BIOS standards did not support these devices. An ARMD-compliant BIOS allows these devices to be booted from and used under the operating system without requiring device-specific code in the OS.
A BIOS implementing ARMD allows the user to include ARMD devices in the boot search order. Usually an ARMD device is configured earlier in the boot order than the hard drive. Similarly to a floppy drive, if bootable media is present in the ARMD drive, the BIOS will boot from it; if not, the BIOS will continue in the search order, usually with the hard drive last.
There are two variants of ARMD, ARMD-FDD and ARMD-HDD. Originally ARMD caused the devices to appear as a sort of very large floppy drive, either the primary floppy drive device 00h or the secondary device 01h. Some operating systems required code changes to support floppy disks with capacities far larger than any standard floppy disk drive. Also, standard-floppy disk drive emulation proved to be unsuitable for certain high-capacity floppy disk drives such as Iomega Zip drives. Later the ARMD-HDD, ARMD-"Hard disk device", variant was developed to address these issues. Under ARMD-HDD, an ARMD device appears to the BIOS and the operating system as a hard drive.
ATA over Ethernet
In August 2004, Sam Hopkins and Brantley Coile of Coraid specified a lightweight ATA over Ethernet protocol to carry ATA commands over Ethernet instead of directly connecting them to a PATA host adapter. This permitted the established block protocol to be reused in storage area network (SAN) applications.
Compact Flash
Compact Flash in its IDE mode is essentially a miniaturized ATA interface, intended for use on devices that use flash memory storage. No interfacing chips or circuitry are required, other than to directly adapt the smaller CF socket onto the larger ATA connector. (However, most CF cards support IDE mode only up to PIO4, making them much slower in IDE mode than their rated CF speed.)
The ATA connector specification does not include pins for supplying power to a CF device, so power is inserted into the connector from a separate source. The exception to this is when the CF device is connected to a 44-pin ATA bus designed for 2.5-inch hard disk drives, commonly found in notebook computers, as this bus implementation must provide power to a standard hard disk drive.
CF devices can be designated as devices 0 or 1 on an ATA interface, though since most CF devices offer only a single socket, it is not necessary to offer this selection to end users. Although CF can be hot-pluggable with additional design methods, by default when wired directly to an ATA interface, it is not intended to be hot-pluggable.
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth.
Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth.
The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline.
Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications.
The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions.
Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications. | Astrobiology | Wikipedia | 480 | 2787 | https://en.wikipedia.org/wiki/Astrobiology | Physical sciences | Astronomy basics | Astronomy |
Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research.
Overview
The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον, astron, "star"; βίος, bios, "life"; and -λογία, -logia, "study". A close synonym is exobiology, from the Greek ἔξω, exo, "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth. Another associated term is xenobiology, from the Greek ξένος, xenos, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin.
While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory. | Astrobiology | Wikipedia | 326 | 2787 | https://en.wikipedia.org/wiki/Astrobiology | Physical sciences | Astronomy basics | Astronomy |
The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive.
In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field.
The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included ESA's Beagle 2, which failed minutes after landing on Mars; NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water; and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars.
Theoretical foundations
Planetary habitability
Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability. | Astrobiology | Wikipedia | 441 | 2787 | https://en.wikipedia.org/wiki/Astrobiology | Physical sciences | Astronomy basics | Astronomy |
Carbon and organic compounds: Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds.
Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry.
Environmental stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant because red dwarfs are extremely common.
| Astrobiology | Wikipedia | 407 | 2787 | https://en.wikipedia.org/wiki/Astrobiology | Physical sciences | Astronomy basics | Astronomy |
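The stellar-mass trade-off described above can be made quantitative with standard main-sequence scaling relations. These approximations are supplied here for illustration, not drawn from the text: luminosity scales roughly as M^3.5, lifetime as M/L, and the habitable-zone distance as the square root of luminosity:

```python
# Illustration of the stellar-mass trade-off sketched above, using standard
# textbook main-sequence scalings (approximate assumptions, not from the
# text): L ~ M**3.5 and lifetime ~ M / L, both relative to the Sun; the
# habitable-zone distance scales as sqrt(L).

def star_properties(mass_solar):
    lum = mass_solar ** 3.5       # luminosity in solar units (approx.)
    lifetime = mass_solar / lum   # main-sequence lifetime in solar lifetimes
    hz_dist = lum ** 0.5          # habitable-zone distance, Sun-Earth = 1
    return lum, lifetime, hz_dist

for m in (0.2, 1.0, 10.0):        # red dwarf, Sun, massive star
    lum, t, d = star_properties(m)
    print(f"M={m:>5} Msun  L={lum:10.3g}  lifetime={t:10.3g} x Sun  HZ={d:8.3g}")

# A 10 Msun star burns out roughly 300x faster than the Sun; a 0.2 Msun red
# dwarf lasts ~50x longer, but its habitable zone sits only ~0.06x as far
# out, close enough for tidal locking.
```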
The anthropic principle, also known as the observation selection effect, is the proposition that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved. | Anthropic principle | Wikipedia | 415 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. Critics of the weak anthropic principle point out that its lack of falsifiability entails that it is non-scientific and therefore inherently not useful. Stronger variants of the anthropic principle which are not tautologies can still make claims considered controversial by some; these would be contingent upon empirical verification.
Anthropic observations
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one-tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory.
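One standard concrete form of the large-number coincidence Dicke addressed is the usual Dirac pair (supplied here for illustration; the specific quantities are not drawn from the text): the electric-to-gravitational force ratio between a proton and an electron, and the age of the universe measured in atomic time units, are both enormous and of comparable order:

```latex
% Standard Dirac-style large numbers (illustrative; values approximate):
\[
  \frac{e^2}{4\pi\varepsilon_0\, G\, m_p m_e} \sim 10^{39},
  \qquad
  \frac{t_{\text{universe}}}{\,e^2 / (4\pi\varepsilon_0\, m_e c^3)\,} \sim 10^{40}.
\]
```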
Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life. | Anthropic principle | Wikipedia | 503 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
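Stated explicitly, the mismatch Weinberg addressed is usually written as a ratio of energy densities (a standard formulation of the cosmological constant problem, added here for concreteness; values are approximate):

```latex
% The ~120-orders-of-magnitude discrepancy in explicit form: the observed
% vacuum energy density versus the naive quantum-field-theory estimate at
% the Planck scale.
\[
  \rho_\Lambda^{\text{obs}} \sim 10^{-120}\, \rho_{\text{Planck}},
  \qquad
  \rho_{\text{Planck}} = \frac{c^7}{\hbar G^2} \approx 10^{113}\ \mathrm{J/m^3}.
\]
```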
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
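For reference, the fine-structure constant named above is defined as follows (standard definition, not quoted from the text); the fine-tuning claims concern small fractional shifts in dimensionless quantities like this one:

```latex
% Definition of the fine-structure constant, the dimensionless strength of
% the electromagnetic interaction:
\[
  \alpha = \frac{e^2}{4\pi\varepsilon_0\,\hbar c} \approx \frac{1}{137.036}.
\]
```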
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics.
Roger Penrose explained the weak form as follows: | Anthropic principle | Wikipedia | 411 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?"
Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem." | Anthropic principle | Wikipedia | 460 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant, topics that fall under Carter's SAP.
| Anthropic principle | Wikipedia | 474 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'."
This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves.
"Observers are necessary to bring the Universe into being."
Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
"An ensemble of other different universes is necessary for the existence of our Universe."
By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes: | Anthropic principle | Wikipedia | 385 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice.
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary.
Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe":
Character of anthropic reasoning
Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. | Anthropic principle | Wikipedia | 503 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.
The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle."
The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design. | Anthropic principle | Wikipedia | 478 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate:
The absurd universe: Our universe just happens to be the way it is.
The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded.
The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist.
Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence.
The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind.
The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP).
The fake universe: Humans live inside a virtual reality simulation.
Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).
Clearly each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994).
The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links. | Anthropic principle | Wikipedia | 465 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Observational evidence
No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following:
Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants;
Various theories for generating multiple universes will prove robust;
Evidence that the universe is fine tuned will continue to accumulate;
No life with a non-carbon chemistry will be discovered;
Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe.
Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life.
Probabilistic predictions of parameter values can be made given:
a particular multiverse with a "measure", i.e. a well-defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX0 that X is in the range X0 < X < X0 + dX0), and
an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
| Anthropic principle | Wikipedia | 505 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
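A toy numerical version of this observer weighting makes the "not over-tuned" point concrete. Every functional form below is invented for illustration (a flat prior and an exponentially decaying observer count), not drawn from any actual multiverse measure:

```python
import numpy as np

# Toy observer-weighted probability: P_obs(X) is proportional to N(X) * P(X).
# The prior P and the observer count N below are invented for illustration.

X = np.linspace(0.0, 1.0, 1001)   # parameter value; "perfectly tuned" = 0
dx = X[1] - X[0]
P_prior = np.ones_like(X)         # flat prior density of universes
N = np.exp(-10.0 * X)             # observers become scarce as X grows

P_obs = P_prior * N
P_obs /= (P_obs * dx).sum()       # normalise to a probability density

expected_X = (X * P_obs * dx).sum()
print(f"expected observed X: {expected_X:.3f}")   # ~0.1: small but nonzero
# The expectation sits away from the "tuned" value 0 by roughly as much as
# observer viability allows, which is the pattern called "not over-tuned".
```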
One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers.
Applications of the principle
The nucleosynthesis of carbon-12
Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
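For context, the reaction chain in question is the triple-alpha process (standard nuclear astrophysics, added here for concreteness): two alpha particles form short-lived beryllium-8, which must capture a third alpha before decaying, and Hoyle's resonance in carbon-12 makes that final capture fast enough to account for the observed carbon abundance:

```latex
% Triple-alpha process; the starred carbon-12 is the excited (Hoyle) state,
% about 7.6 MeV above the ground state.
\[
  {}^{4}\mathrm{He} + {}^{4}\mathrm{He} \rightleftharpoons {}^{8}\mathrm{Be},
  \qquad
  {}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \to {}^{12}\mathrm{C}^{*} \to {}^{12}\mathrm{C} + 2\gamma .
\]
```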
However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance.
Cosmic inflation | Anthropic principle | Wikipedia | 299 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require.
String theory
String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.
Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Luboš Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. | Anthropic principle | Wikipedia | 485 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe.
Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their result suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.
Dimensions of spacetime
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204). | Anthropic principle | Wikipedia | 385 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 3 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
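A compact way to see why three spatial dimensions are singled out (a standard argument, supplied here rather than taken from the text): Gauss's law in N spatial dimensions spreads field lines over an (N-1)-sphere, and the stability analysis of circular orbits then picks out N = 3:

```latex
% Gauss's law in N spatial dimensions gives an inverse (N-1)-power force law:
\[
  F(r) \propto \frac{1}{r^{\,N-1}}.
\]
% For an attractive central force F ~ r^{-n}, an effective-potential analysis
% shows circular orbits are stable only if n < 3. Stability therefore
% requires N - 1 < 3, consistent with Ehrenfest's result that planetary
% orbits are unstable for N > 3.
```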
Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if N > 3, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N = 2, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.
On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10^21 solar masses, due to the small positivity of the cosmological constant observed.
In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks. | Anthropic principle | Wikipedia | 493 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Metaphysical interpretations
Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the elder notion of creatio continua. From a strictly secular, humanist perspective, it also allows human beings to be put back in the center, an anthropogenic shift in cosmology. Karl W. Giberson has laconically stated that
William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point.
The anthropic cosmological principle
A thorough extant study of the anthropic principle is the book The Anthropic Cosmological Principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way.
The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks. | Anthropic principle | Wikipedia | 480 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.
Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas.
In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP):
Reception and controversies
Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects.
A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another. | Anthropic principle | Wikipedia | 417 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.
Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa.
Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc. | Anthropic principle | Wikipedia | 492 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe.
The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours: | Anthropic principle | Wikipedia | 176 | 2792 | https://en.wikipedia.org/wiki/Anthropic%20principle | Physical sciences | Physical cosmology | Astronomy |
Aerodynamics (from Greek ἀήρ, aero, "air" + δυναμική, "dynamics") is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
History
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.
In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. | Aerodynamics | Wikipedia | 491 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
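The relationship Bernoulli described is usually written as follows for steady, incompressible, inviscid flow along a streamline (standard form, not quoted from the text):

```latex
% Bernoulli's principle: p is static pressure, rho the (constant) density,
% v the flow speed, g the gravitational acceleration, h the height.
\[
  p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant}.
\]
% Faster flow over a surface implies lower pressure there, which is one
% route to estimating aerodynamic lift.
```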
In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903.
During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1, where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft.
| Aerodynamics | Wikipedia | 496 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations.
Fundamental concepts
Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields.
Flow classification
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. | Aerodynamics | Wikipedia | 510 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
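These regimes amount to threshold checks on the Mach number. A minimal sketch in Python, with the caveat that the transonic bounds (roughly Mach 0.8–1.2) and the hypersonic cutoff (Mach 5) are conventions discussed later in this article, not sharp physical limits:

def classify_flow_regime(mach: float) -> str:
    # Conventional regime boundaries; transonic flow contains both
    # subsonic and supersonic regions, so the 0.8-1.2 band is a
    # rule of thumb rather than an exact definition.
    if mach < 0.8:
        return "subsonic"
    if mach < 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

For example, classify_flow_regime(0.85) returns "transonic", matching the mixed subsonic/supersonic character described above.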
Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results.
Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect them. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine).
Continuum assumption
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules with each other and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow.
The validity of the continuum assumption depends on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers while the aircraft's length scale ranges from a few meters to a few tens of meters; the body is therefore orders of magnitude larger than the mean free path, and the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics. | Aerodynamics | Wikipedia | 507 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
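The Knudsen number makes this criterion quantitative: Kn = λ/L, the ratio of mean free path to characteristic length. A minimal sketch in Python; the example values and the regime cutoffs are illustrative rules of thumb, not figures from this article:

def knudsen_number(mean_free_path_m: float, length_scale_m: float) -> float:
    # Kn = lambda / L: mean free path relative to the body's size.
    return mean_free_path_m / length_scale_m

# Airliner at sea level: mean free path ~7e-8 m, fuselage ~40 m.
kn = knudsen_number(7e-8, 40.0)   # ~2e-9, deep in the continuum regime
# Rule of thumb: Kn < ~0.01 -> continuum equations apply;
# Kn > ~10 -> free-molecular flow, better treated statistically.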
Conservation laws
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used:
Conservation of mass: mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation.
Conservation of momentum: the mathematical formulation of this principle can be considered an application of Newton's second law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x, y, z components).
Conservation of energy: energy is neither created nor destroyed within a flow, and any addition or subtraction of energy to a volume in the flow is caused by heat transfer or by work into and out of the region of interest.
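For reference, one common differential form of these three laws for a compressible Newtonian fluid is sketched below; the notation is standard, and the exact form varies by author:

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0 \quad \text{(mass)} \]
\[ \rho \frac{D \mathbf{u}}{D t} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{f} \quad \text{(momentum)} \]
\[ \rho \frac{D e}{D t} = -p\, \nabla \cdot \mathbf{u} + \nabla \cdot (k \nabla T) + \Phi \quad \text{(energy)} \]

where ρ is density, u the flow velocity, p pressure, τ the viscous stress tensor, f the body force per unit mass, e the specific internal energy, k the thermal conductivity, T the temperature, and Φ the viscous dissipation.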
Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because high-speed computers were not historically available, and because solving these complex equations remains computationally expensive even now that they are, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations.
The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables.
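As an illustration of how an equation of state closes the system, a minimal sketch for low-speed flow: density from the ideal gas law, then stagnation pressure from the incompressible Bernoulli equation. The sea-level values are illustrative assumptions, not data from this article:

R_AIR = 287.05  # specific gas constant for air, J/(kg*K)

def density_ideal_gas(p_pa: float, t_k: float) -> float:
    # Ideal gas law solved for density: rho = p / (R * T).
    return p_pa / (R_AIR * t_k)

def stagnation_pressure(p_pa: float, rho: float, v_ms: float) -> float:
    # Incompressible Bernoulli equation: p0 = p + (1/2) * rho * v^2.
    return p_pa + 0.5 * rho * v_ms ** 2

rho = density_ideal_gas(101_325.0, 288.15)       # ~1.225 kg/m^3
p0 = stagnation_pressure(101_325.0, rho, 50.0)   # ~102.9 kPa at 50 m/s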
Branches of aerodynamics | Aerodynamics | Wikipedia | 410 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe.
Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic.
The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
Incompressible aerodynamics
An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether incompressibility can be assumed; otherwise, the effects of compressibility must be included. | Aerodynamics | Wikipedia | 439 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
Subsonic flow
Subsonic (or low-speed) aerodynamics describes fluid motion in flows whose speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow, but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions.
In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the flow should be described using compressible aerodynamics.
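The quoted speed can be recovered from the ideal-gas speed of sound, a = sqrt(γRT); a minimal sketch assuming the standard values γ = 1.4 and R = 287.05 J/(kg·K) for air:

import math

def speed_of_sound(t_kelvin: float, gamma: float = 1.4, r: float = 287.05) -> float:
    # a = sqrt(gamma * R * T) for an ideal gas.
    return math.sqrt(gamma * r * t_kelvin)

a = speed_of_sound(288.71)   # 60 F = 288.71 K -> ~340.7 m/s (~1118 ft/s)
v = 0.3 * a                  # ~102 m/s (~335 ft/s), matching the text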
Compressible aerodynamics
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows.
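The quoted 5% figure can be checked against the standard isentropic stagnation-density relation (standard compressible-flow theory, not derived in this article); a minimal sketch assuming γ = 1.4 for air:

def stagnation_density_ratio(mach: float, gamma: float = 1.4) -> float:
    # rho0 / rho = (1 + (gamma - 1)/2 * M^2) ** (1 / (gamma - 1))
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (1.0 / (gamma - 1.0))

ratio = stagnation_density_ratio(0.3)  # ~1.046
# i.e. ~4.6% density rise at the stagnation point for Mach 0.3,
# consistent with the <5% change quoted above.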
Transonic flow | Aerodynamics | Wikipedia | 444 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some is not.
Supersonic flow
Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise is an example of a supersonic aerodynamic problem.
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object, it strikes it, and the fluid is forced to change its properties (temperature, density, pressure, and Mach number) in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-velocity flows (see Mach number), is the central difference between the supersonic and subsonic aerodynamics regimes.
Hypersonic flow
In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. | Aerodynamics | Wikipedia | 511 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
Associated terminology
The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence.
Boundary layers
The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air are approximated as being significant only in this thin layer near the surface. This assumption makes the description of such aerodynamics much more tractable mathematically.
Turbulence
In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow.
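Whether a flow is likely to be laminar or turbulent is commonly estimated with the Reynolds number, the ratio of inertial to viscous forces (a standard fluid-mechanics tool rather than something defined in this article). A minimal sketch with illustrative values:

def reynolds_number(rho: float, velocity: float, length: float, mu: float) -> float:
    # Re = rho * V * L / mu: inertial forces relative to viscous forces.
    return rho * velocity * length / mu

# Water flowing at 1 m/s through a 0.05 m pipe (illustrative values):
re = reynolds_number(1000.0, 1.0, 0.05, 1.0e-3)  # 50,000
# Pipe-flow rule of thumb: Re below ~2300 tends to stay laminar;
# larger values are increasingly likely to be turbulent.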
Aerodynamics in other fields
Engineering design
Aerodynamics is a significant element of vehicle design, including road cars and trucks, where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers draw on aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines.
The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine.
Environmental design
Urban aerodynamics is studied by town planners and designers seeking to improve amenity in outdoor spaces and to create urban microclimates that reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems.
Aerodynamic equations are used in numerical weather prediction.
Ball-control in sports
Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect". | Aerodynamics | Wikipedia | 393 | 2819 | https://en.wikipedia.org/wiki/Aerodynamics | Physical sciences | Fluid mechanics | null |
Ash is the solid remnant of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, where it is used to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue left after complete combustion.
Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, produced by wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. Ashes are of different types: some contain natural compounds that make soil fertile, while others contain chemical compounds that can be toxic but may break down in soil through chemical changes and microorganism activity.
Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available. Before industrialization, ash soaked in water was the primary means of obtaining potash.
Natural occurrence
Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal.
Composition
The composition of the ash varies depending on the product burned and its origin. The "ash content" or "mineral content" of a product is determined by incinerating it at controlled high temperatures.
Wood and plant matter
The composition of ash derived from wood and other plant matter varies based on plant species, parts of the plants (such as bark, trunk, or young branches with foliage), type of soil, and time of year. The composition of these ashes also differs greatly depending on the mode of combustion.
Wood ashes, in addition to residual carbonaceous materials (unconsumed embers, activated carbons impregnated with carbonaceous particles, tars, various gases, etc.), contain between 20% and 50% calcium in the form of calcium oxide and are generally rich in potassium carbonate. Ashes derived from grasses, and the Gramineae family in particular, are rich in silica. The color of the ash comes from small proportions of inorganic minerals such as iron oxides and manganese. The oxidized metal elements that constitute wood ash are mostly considered alkaline. | Ash | Wikipedia | 496 | 2822 | https://en.wikipedia.org/wiki/Ash | Physical sciences | Salts and ions: General | Chemistry |
For example, ash collected from wood boilers is composed of
17–33% calcium in the form of calcium oxide (CaO)
2–6% potassium in the form of potassium oxide (K₂O)
2.5–4.6% magnesium in the form of magnesium oxide (MgO)
1–6% phosphorus in the form of phosphorus pentoxide (P₂O₅)
3% in total of oxides such as iron oxide, manganese oxide, and sodium oxide
The pH of the ash is between 10 and 13, mostly because the oxides of calcium, potassium, and sodium are strong bases. Acidic components such as carbon dioxide, phosphoric acid, silicic acid, and sulfuric acid are rarely present as free acids and, in the presence of the aforementioned bases, are generally found in the form of salts: carbonates, phosphates, silicates, and sulphates, respectively.
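Elemental content can be recovered from oxide mass fractions with molar-mass ratios. A minimal sketch, treating the boiler-ash percentages above as oxide mass fractions (an assumption; the midpoint values below are illustrative) and using standard atomic weights:

# Mass fraction of each oxide that is the element itself:
ELEMENT_IN_OXIDE = {
    "CaO":  40.08 / 56.08,    # Ca / CaO   ~ 0.71
    "K2O":  78.20 / 94.20,    # 2*K / K2O  ~ 0.83
    "MgO":  24.31 / 40.31,    # Mg / MgO   ~ 0.60
    "P2O5": 61.95 / 141.94,   # 2*P / P2O5 ~ 0.44
}

# Midpoints of the ranges quoted above for wood-boiler ash:
boiler_ash_oxides = {"CaO": 0.25, "K2O": 0.04, "MgO": 0.035, "P2O5": 0.035}

elemental = {oxide: frac * ELEMENT_IN_OXIDE[oxide]
             for oxide, frac in boiler_ash_oxides.items()}
# e.g. 25% CaO corresponds to ~17.9% elemental calcium.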
Strictly speaking, calcium and potassium salts yield the aforementioned calcium oxide (also known as quicklime) and potassium oxide during the combustion of organic matter. In practice, however, quicklime is only obtained via a lime kiln, while potash (from potassium carbonate) or soda ash (from sodium carbonate) is extracted from the ashes.
Other substances such as sulfur, chlorine, iron or sodium appear only in small quantities. Still others, such as aluminum, zinc, and boron, are rarely found in wood, depending on the trace elements drawn from the soil by the incinerated plants.
Mineral content in ash depends on the species of tree burned, even in the same soil conditions. More chloride is found in conifers than in broadleaf trees, with seven times as much found in spruces as in oaks. There is twice as much phosphoric acid in the European aspen as in oaks, and twice as much magnesium in elms as in the Scotch pine.
Ash composition also varies by which part of the tree was burnt. Silicon and calcium salts are more abundant in bark than in wood, while potassium salts are primarily found in wood. Composition also varies with the season in which the tree died.
Specific types | Ash | Wikipedia | 431 | 2822 | https://en.wikipedia.org/wiki/Ash | Physical sciences | Salts and ions: General | Chemistry |
Cremation ashes
Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations.
Food ashes
In food processing, mineral and ash content is used to characterize the presence of organic and inorganic components in food for monitoring quality, nutritional quantification and labeling, analyzing microbiological stability, and more. This process can be used to measure minerals like calcium, sodium, potassium, and phosphorus as well as metal content such as lead, mercury, cadmium, and aluminum.
Joss paper ash
Analysis of the contents of ash samples shows that joss paper burning can emit many pollutants detrimental to air quality. There are significant amounts of heavy metals in the dust fume and bottom ash, e.g., aluminium, iron, manganese, copper, lead, zinc and cadmium.
"Burning of joss paper accounted for up to 42% of the atmospheric rBC [refractory black carbon] mass, higher than traffic (14-17%), crop residue (10-17%), coal (18-20%) during the Hanyi festival in northwest China", according to a 2022 study, "the overall air quality can be worsened due to the practice of uncontrolled burning of joss paper during the festival, which is not just confined to the people who do the burning," and "burning joss paper during worship activities is common in China and most Asian countries with similar traditions."
Slash-and-burn ash
Wildfire ash
High levels of heavy metals, including lead, arsenic, cadmium, and copper were found in the ash debris following the 2007 Californian wildfires. A national clean-up campaign was organised ... In the devastating California Camp Fire (2018) that killed 85 people, lead levels increased by around 50 times in the hours following the fire at a site nearby (Chico). Zinc concentration also increased significantly in Modesto, 150 miles away. Heavy metals such as manganese and calcium were found in numerous California fires as well. | Ash | Wikipedia | 491 | 2822 | https://en.wikipedia.org/wiki/Ash | Physical sciences | Salts and ions: General | Chemistry |
Others
Ashes from
Stubble burning
Open burning of waste
Cigarette or cigar ash
Incinerator bottom ash, a form of ash produced in incinerators
Products of coal combustion
Bottom ash
Fly ash
Volcanic ash, which consists of fragmented glass, rock, and minerals produced during an eruption.
Wood ash
Other properties
Aging process
Global distillation
Uses
Fertilizer
Ashes have been used since the Neolithic period as fertilizer because they are rich in minerals, especially potash and essential nutrients. They are the main fertilizer in slash-and-burn agriculture, which eventually evolved into controlled burn and forest clearing practices. People in ancient history already possessed extensive knowledge of the nutrients supplied by different ashes. For clay soil in particular, it was necessary to use ash without modification or to use leached ash, whose minerals have been washed out with water.
Laundry
Because ashes contain potash, they can be used to make biodegradable laundry detergent. The demand for organic products has led to renewed interest in laundry using ash derived from wood. The French word for laundry, lessive, comes from the Latin lixiva, meaning a substance made from ash and used to wash laundry. This usage also gave rise to a small, traditional architectural structure found west of the Rhône mainstem: a masonry structure built with stone or cob, shaped like a cabinet, that holds dirty laundry and fireplace ash; when it is full, the laundry and ash are moved to a laundry container and boiled in water.
Laundry using ash derived from wood has the benefit of being free, easy to produce, sustainable, and as efficient as standard laundry washing methods.
Health effects
Effect on precipitation
"Particles of dust or smoke in the atmosphere are essential for precipitation. These particles, called 'condensation nuclei,' provide a surface for water vapor to condense upon. This helps water droplets gather together and become large enough to fall to the earth"
Effect on climate change | Ash | Wikipedia | 391 | 2822 | https://en.wikipedia.org/wiki/Ash | Physical sciences | Salts and ions: General | Chemistry |
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a continuous function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference.
Examples
The function F(x) = x^3/3 is an antiderivative of f(x) = x^2, since the derivative of x^3/3 is x^2. Since the derivative of a constant is zero, x^2 will have an infinite number of antiderivatives, such as x^3/3, x^3/3 + 1, x^3/3 − 2, etc. Thus, all the antiderivatives of x^2 can be obtained by changing the value of c in F(x) = x^3/3 + c, where c is an arbitrary constant known as the constant of integration. The graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value of c.
More generally, the power function f(x) = x^n has antiderivative F(x) = x^(n+1)/(n+1) + c if n ≠ −1, and F(x) = ln|x| + c if n = −1.
In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement:
\[ \int a \,\mathrm{d}t = v + C, \qquad \int v \,\mathrm{d}t = s + C. \]
Uses and properties
Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the continuous function f over the interval [a, b], then:
\[ \int_a^b f(x)\,\mathrm{d}x = F(b) - F(a). \]
Because of this, each of the infinitely many antiderivatives of a given function f may be called the "indefinite integral" of f and written using the integral symbol with no bounds:
\[ \int f(x)\,\mathrm{d}x \] | Antiderivative | Wikipedia | 495 | 2823 | https://en.wikipedia.org/wiki/Antiderivative | Mathematics | Integral calculus | null |
If F is an antiderivative of f, and f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number c such that G(x) = F(x) + c for all x. Here c is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance
\[ F(x) = \begin{cases} -\dfrac{1}{x} + c_1 & x < 0 \\ -\dfrac{1}{x} + c_2 & x > 0 \end{cases} \]
is the most general antiderivative of f(x) = 1/x^2 on its natural domain (−∞, 0) ∪ (0, ∞).
Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary:
\[ F(x) = \int_a^x f(t)\,\mathrm{d}t, \]
for any a in the domain of f. Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus.
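This construction can also be carried out numerically; a minimal sketch using cumulative trapezoidal sums (a generic quadrature choice, not a method from this article):

import numpy as np

def numeric_antiderivative(f, a: float, b: float, n: int = 10_000):
    # Approximate F(x) = integral from a to x of f(t) dt by
    # accumulating trapezoid areas on a uniform grid.
    x = np.linspace(a, b, n)
    y = f(x)
    dx = x[1] - x[0]
    F = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * dx / 2.0)))
    return x, F

x, F = numeric_antiderivative(np.cos, 0.0, np.pi)
err = np.max(np.abs(F - np.sin(x)))  # tiny (~1e-8): F tracks sin(x)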
There are many elementary functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions. Elementary functions are polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations under composition and linear combination. Examples of these nonelementary integrals are
the error function: ∫ e^(−x²) dx
the Fresnel function: ∫ sin(x²) dx
the sine integral: ∫ (sin x)/x dx
the logarithmic integral function: ∫ 1/(ln x) dx, and
sophomore's dream: ∫ x^(−x) dx
For a more detailed discussion, see also Differential Galois theory.
Techniques of integration
Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral.
There exist many properties and techniques for finding antiderivatives. These include, among others: | Antiderivative | Wikipedia | 365 | 2823 | https://en.wikipedia.org/wiki/Antiderivative | Mathematics | Integral calculus | null |
The linearity of integration (which breaks complicated integrals into simpler ones)
Integration by substitution, often combined with trigonometric identities or the natural logarithm
The inverse chain rule method (a special case of integration by substitution)
Integration by parts (to integrate products of functions)
Inverse function integration (a formula that expresses the antiderivative of the inverse f⁻¹ of an invertible and continuous function f, in terms of f⁻¹ and the antiderivative of f).
The method of partial fractions in integration (which allows us to integrate all rational functions—fractions of two polynomials)
The Risch algorithm
Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem)
Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of ∫ e^(−x²) dx; see the sketch after this list)
Algebraic manipulation of integrand (so that other integration techniques, such as integration by substitution, may be used)
Cauchy formula for repeated integration (to calculate the n-times antiderivative of a function)
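As a sketch of that numerical route for ∫₀¹ e^(−x²) dx: composite Simpson's rule (one generic quadrature choice) compared against the closed form in terms of the error function from Python's standard library:

import math

def simpson(f, a: float, b: float, n: int = 1000) -> float:
    # Composite Simpson's rule on n subintervals (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3.0

approx = simpson(lambda x: math.exp(-x ** 2), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2.0 * math.erf(1.0)  # ~0.746824
# approx matches exact to ~1e-12, even though exp(-x^2) has no
# elementary antiderivative.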
Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals.
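For instance, a minimal sketch with SymPy (one illustrative computer algebra system, not one named in this article):

import sympy as sp

x = sp.symbols('x')

# A routine case (integration by parts handled automatically):
F = sp.integrate(x * sp.exp(x), x)                 # -> x*exp(x) - exp(x)
assert sp.simplify(sp.diff(F, x) - x * sp.exp(x)) == 0

# A nonelementary case is expressed via special functions:
sp.integrate(sp.exp(-x ** 2), x)                   # -> sqrt(pi)*erf(x)/2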
Of non-continuous functions
Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that:
Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives.
In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable.
Assuming that the domains of the functions are open intervals: | Antiderivative | Wikipedia | 376 | 2823 | https://en.wikipedia.org/wiki/Antiderivative | Mathematics | Integral calculus | null |
A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and y is any real number between f(a) and f(b), then there exists a c between a and b such that f(c) = y. This is a consequence of Darboux's theorem.
The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function f having an antiderivative, which has the given set as its set of discontinuities.
If f has an antiderivative, is bounded on closed finite subintervals of the domain, and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative.
If f has an antiderivative F on a closed interval [a, b], then for any choice of partition a = x_0 < x_1 < ... < x_n = b, if one chooses sample points x_i* in [x_{i−1}, x_i] as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value F(b) − F(a):
\[ \sum_{i=1}^n f(x_i^*)(x_i - x_{i-1}) = \sum_{i=1}^n \left[ F(x_i) - F(x_{i-1}) \right] = F(b) - F(a). \]
However, if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
Some examples
Basic formulae
If h(x) = c f(x) + d g(x) for constants c and d, then ∫ h(x) dx = c ∫ f(x) dx + d ∫ g(x) dx. | Antiderivative | Wikipedia | 364 | 2823 | https://en.wikipedia.org/wiki/Antiderivative | Mathematics | Integral calculus | null |
Acrylic paint is a fast-drying paint made of pigment suspended in acrylic polymer emulsion and plasticizers, silicone oils, defoamers, stabilizers, or metal soaps. Most acrylic paints are water-based, but become water-resistant when dry. Depending on how much the paint is diluted with water, or modified with acrylic gels, mediums, or pastes, the finished acrylic painting can resemble a watercolor, a gouache, or an oil painting, or it may have its own unique characteristics not attainable with other media.
Water-based acrylic paints are used as latex house paints, as latex is the technical term for a suspension of polymer microparticles in water. Interior latex house paints tend to be a combination of binder (sometimes acrylic, vinyl, PVA, and others), filler, pigment, and water. Exterior latex house paints may also be a co-polymer blend, but the best exterior water-based paints are 100% acrylic, because of its elasticity and other factors. Vinyl, however, costs half of what 100% acrylic resins cost, and polyvinyl acetate (PVA) is even cheaper, so paint companies make many different combinations of them to match the market.
History
Otto Röhm invented acrylic resin, which was quickly transformed into acrylic paint. As early as 1934, the first usable acrylic resin dispersion was developed by German chemical company BASF, and patented by Rohm and Haas. The synthetic paint was first used in the 1940s, combining some of the properties of oil and watercolor. Between 1946 and 1949, Leonard Bocour and Sam Golden invented a solution acrylic paint under the brand Magna paint. These were mineral spirit-based paints.
Water-based acrylic paints were subsequently sold as latex house paints. | Acrylic paint | Wikipedia | 400 | 2838 | https://en.wikipedia.org/wiki/Acrylic%20paint | Technology | Artist's and drafting tools | null |
Soon after the water-based acrylic binders were introduced as house paints, artists and companies alike began to explore the potential of the new binders. The Mexican muralists Diego Rivera, David Alfaro Siqueiros, and José Clemente Orozco were among the first artists to experiment with acrylic paint, impressed by its durability, and Politec Acrylic Artists' Colors began to be produced in Mexico in 1953. According to The Times newspaper, Lancelot Ribeiro pioneered the use of acrylic paints in the UK because of his "increasing impatience" by the 1960s over the time it took for oil paints to dry, as well as over oil's "lack of brilliance in its colour potential." He took to the new synthetic plastic bases that commercial paints were beginning to use and soon got help from manufacturers like ICI, Courtaulds, and Geigy. The companies supplied him samples of their latest paints in quantities that he was using three decades later, according to the paper. Initially, the firms thought the PVA compounds would not be needed in commercially viable quantities, but they quickly recognised the potential demand and "so Ribeiro became the godfather of generations of artists using acrylics as an alternative to oils."
In 1956, José L. Gutiérrez produced Politec Acrylic Artists' Colors in Mexico, and Henry Levison of Cincinnati-based Permanent Pigments Co. produced Liquitex colors. These two product lines were the first acrylic emulsion artists' paints, with modern high-viscosity paints becoming available in the early 1960s. Meanwhile, on the other side of the globe, 1958 saw the inception of Vynol Paints Pty Ltd (now Derivan) in Australia, who started producing a water-based artist acrylic called Vynol Colour, followed by Matisse Acrylics in the 1960s. Following that development, Golden came up with a waterborne acrylic paint called "Aquatec". In 1963, George Rowney (part of Daler-Rowney since 1983) was the first manufacturer to introduce artists' acrylic paints in Europe, under the brand name "Cryla". | Acrylic paint | Wikipedia | 458 | 2838 | https://en.wikipedia.org/wiki/Acrylic%20paint | Technology | Artist's and drafting tools | null |
Painting with acrylics
Acrylic painters can modify the appearance, hardness, flexibility, texture, and other characteristics of the paint surface by using acrylic medium or simply by adding water. Watercolor and oil painters also use various mediums, but the range of acrylic mediums is much greater. Acrylics have the ability to bond to many different surfaces, and mediums can be used to modify their binding characteristics. Acrylics can be used on paper, canvas, and a range of other materials; however, their use on engineered woods such as medium-density fiberboard can be problematic because of the porous nature of those surfaces. In these cases, it is recommended that the surface first be sealed with an appropriate sealer. Applying a protective coat to an acrylic painting is called varnishing. Artists use removable varnishes over an isolation coat to protect paintings from dust, UV, scratches, etc. This process is similar to varnishing an oil painting.
Acrylics can be applied in thin layers or washes to create effects that resemble watercolors and other water-based mediums. They can also be used to build thick layers of paint; gel and molding paste are sometimes used to create paintings with relief features. Acrylic paints are also used in hobbies such as model trains, cars, houses, and human figures, as well as in DIY projects. People who make such models use acrylic paint to build facial features on dolls or raised details on other types of models. Wet acrylic paint is easily removed from paintbrushes and skin with water, whereas oil paints require the use of a hydrocarbon solvent.
Acrylics are the most common paints used in grattage, a surrealist technique that began to be used with the advent of this type of paint. Acrylics are used for this purpose because they easily scrape or peel from a surface.
Painting techniques | Acrylic paint | Wikipedia | 386 | 2838 | https://en.wikipedia.org/wiki/Acrylic%20paint | Technology | Artist's and drafting tools | null |
Acrylic artists' paints may be thinned with water or acrylic medium and used as washes in the manner of watercolor paints, but unlike watercolor the washes are not rehydratable once dry. For this reason, acrylics do not lend themselves to the color lifting techniques of gum arabic-based watercolor paints. Instead, the paint is applied in layers, sometimes diluting with water or acrylic medium to allow layers underneath to partially show through. Using an acrylic medium gives the paint more of a rich and glossy appearance, whereas using water makes the paint look more like watercolor and have a matte finish.
Acrylic paints with gloss or matte finishes are common, although a satin (semi-matte) sheen is most common. Some brands exhibit a range of finishes (e.g. heavy-body paints from Golden, Liquitex, Winsor & Newton and Daler-Rowney); Politec acrylics are fully matte. As with oils, pigment amounts and particle size or shape can affect the paint sheen. Matting agents can also be added during manufacture to dull the finish. If desired, the artist can mix different media with their paints and use topcoats or varnishes to alter or unify sheen.
When dry, acrylic paint is generally non-removable from a solid surface if it adheres to the surface. Water or mild solvents do not re-solubilize it, although isopropyl alcohol can lift some fresh paint films off. Toluene and acetone can remove paint films, but they do not lift paint stains very well and are not selective. The use of a solvent to remove paint may result in removal of all of the paint layers (acrylic gesso, et cetera). Oils and warm, soapy water can remove acrylic paint from skin. Acrylic paint can be removed from nonporous plastic surfaces such as miniatures or models using cleaning products such as Dettol (containing chloroxylenol 4.8% v/w). | Acrylic paint | Wikipedia | 434 | 2838 | https://en.wikipedia.org/wiki/Acrylic%20paint | Technology | Artist's and drafting tools | null |
An acrylic sizing should be used to prime canvas in preparation for painting with acrylic paints, to prevent Support Induced Discoloration (SID). Acrylic paint contains surfactants that can pull up discoloration from a raw canvas, especially in transparent glazed or translucent gelled areas. Gesso alone will not stop SID; a sizing must be applied before using a gesso.
The viscosity of acrylic can be successfully reduced by using suitable extenders that maintain the integrity of the paint film. There are retarders to slow drying and extend workability time, and flow releases to increase color-blending ability.
Properties
Grades
Commercial acrylic paints come in two grades by manufacturers:
Artist acrylics (professional acrylics) are created and designed to resist chemical reactions from exposure to water, ultraviolet light, and oxygen. Professional-grade acrylics have the most pigment, which allows for more medium manipulation and limits the color shift when mixed with other colors or after drying.
Student acrylics have working characteristics similar to artist acrylics, but with lower pigment concentrations, less-expensive formulas, and fewer available colors. More expensive pigments are generally replicated by hues. Colors are designed to be mixed even though color strength is lower. Hues may not have exactly the same mixing characteristics as full-strength colors. | Acrylic paint | Wikipedia | 278 | 2838 | https://en.wikipedia.org/wiki/Acrylic%20paint | Technology | Artist's and drafting tools | null |