Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above.
This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.
The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the principal quantum number n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements: | Atomic orbital | Wikipedia | 494 | 1206 | https://en.wikipedia.org/wiki/Atomic%20orbital | Physical sciences | Atomic physics | null |
Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element also depend upon additional details of the atoms (see electron configuration).
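The filling order quoted above can be reproduced by the Madelung (n + ℓ) rule itself: subshells fill in order of increasing n + ℓ, with ties broken by lower n. The following is a minimal sketch, not taken from the article, that generates the sequence this way; the orbital letter table and the cutoff at n = 7 are my own choices for illustration.

```python
# Generate the Madelung filling order by sorting subshells on (n + l, n):
# lower n + l fills first, and ties go to the lower n.
L_LABELS = "spdfghik"  # orbital letters for l = 0, 1, 2, ...

def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(0, n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LABELS[l]}" for n, l in subshells]

if __name__ == "__main__":
    # Begins 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, ...
    print(", ".join(madelung_order()))
```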
The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Relativistic effects
For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high-speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison with corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy.
Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium.
In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, which result in a potential seen by inner electrons that is effectively less than Z. The critical value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron–positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron–positron production from these effects has been claimed to be observed.
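As a rough numerical illustration of the v = Zαc estimate (my own sketch, not from the article), the following snippet computes v/c and the corresponding Lorentz factor for hydrogen, gold, and the Z = 137 limit; the value of α is the standard approximate one.

```python
# Bohr-model estimate v = Z * alpha * c for a 1s electron, and the Lorentz factor.
import math

ALPHA = 1 / 137.035999  # fine-structure constant (approximate)

def bohr_1s_speed_fraction(Z):
    """Return v/c for a 1s electron in the Bohr model."""
    return Z * ALPHA

for Z, name in [(1, "H"), (79, "Au"), (137, "limit")]:
    beta = bohr_1s_speed_fraction(Z)
    gamma = 1 / math.sqrt(1 - beta**2) if beta < 1 else float("inf")
    print(f"{name:>5}: v/c = {beta:.3f}, Lorentz factor = {gamma:.2f}")

# For gold (Z = 79) this gives v/c of roughly 0.58, i.e. a relativistic mass
# increase of about 22%, the qualitative origin of the 6s contraction above.
```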
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes.
pp hybridization (conjectured)
In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon.
Transitions between orbitals
Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states.
Consider two states of the hydrogen atom:
State 1: n = 1, ℓ = 0, mℓ = 0, and ms = +1/2
State 2: n = 2, ℓ = 0, mℓ = 0, and ms = +1/2
By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom with an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2.
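A quick numerical sketch of this idea for hydrogen follows. It assumes the standard Rydberg result E_n = −13.6 eV / n² (a textbook value, not stated in the text above) and computes the photon energy and wavelength for the n = 1 → n = 2 jump.

```python
# Photon energy and wavelength for the hydrogen n = 1 -> n = 2 transition,
# assuming the simple Rydberg level formula E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy in eV (approx.)
HC_EV_NM = 1239.84193    # h*c in eV·nm (approx.)

def level_energy_ev(n):
    return -RYDBERG_EV / n**2

delta_e = level_energy_ev(2) - level_energy_ev(1)   # E2 - E1
wavelength_nm = HC_EV_NM / delta_e
print(f"E2 - E1 = {delta_e:.2f} eV, photon wavelength = {wavelength_nm:.0f} nm")
# About 10.2 eV and 122 nm: the Lyman-alpha absorption line in the ultraviolet.
```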
The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model.
The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life.
Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in the emergence of life on Earth.
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of the fictitious "neutral" structure shown in the illustration. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3−CH(NH2)−COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth.
The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, Aminosäure, was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid and the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide".
General structure
2-, alpha-, or α-amino acids have the generic formula H2N−CHR−COOH in most cases, where R is an organic substituent known as a "side chain".
Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis.
Chirality
The carbon atom next to the carboxyl group is called the α-carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid, as well as a hydrogen atom. With the exception of glycine, for which the side chain is also a hydrogen atom, the α-carbon is stereogenic. All chiral proteinogenic amino acids have the L configuration. They are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon.
A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification.
Side chains
Polar charged side chains
Five amino acids possess a charge at neutral pH. Often these side chains appear at the surfaces on proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins bind metal into their structures specifically, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts.
The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids.
There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group; both are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in both its basic and its conjugate acid forms, it often participates in catalytic proton transfers in enzyme reactions.
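The "around 10% protonated" figure for histidine follows from the Henderson–Hasselbalch equation, a standard relation not quoted in the text above. A small sketch of the arithmetic:

```python
# Fraction of a group in its protonated (conjugate-acid) form, from the
# Henderson-Hasselbalch equation: pH = pKa + log10([base]/[acid]).
def fraction_protonated(pH, pKa):
    return 1.0 / (1.0 + 10 ** (pH - pKa))

print(f"Histidine imidazole at pH 7.0: {fraction_protonated(7.0, 6.0):.0%} protonated")
# -> about 9%, consistent with the "around 10%" stated above.
```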
Polar uncharged side chains
The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine.
Hydrophobic side chains
Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three-dimensional structures. None of these amino acids' side chains ionizes easily, and therefore they lack side-chain pKas, with the exception of tyrosine (Tyr, Y). The hydroxyl of tyrosine can deprotonate at high pH, forming the negatively charged phenolate. Because of this, one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well.
Special case side chains
Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases.
β- and γ-amino acids
Amino acids with the structure H2N−CXY−CH2−COOH, such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure H2N−CXY−CH2−CH2−COOH are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H).
Zwitterions
The common natural forms of amino acids have a zwitterionic structure, with −NH3+ (−NH2+− in the case of proline) and −CO2− functional groups attached to the same C atom, and are thus α-amino acids, and are the only ones found in proteins during translation in the ribosome.
In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both −NH3+ and −CO2− in charged states, so the overall structure is NH3+−CHR−CO2−. At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero, it is misleading to call a species with a net charge of zero "uncharged".
In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, NH3+−CHR−CO2H. This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give NH2−CHR−CO2−.
Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the groups in the above illustration. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as Brønsted acids. Histidine under these conditions can act both as a Brønsted acid and as a base.
Isoelectric point
For amino acids with uncharged side-chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. At the midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions balance, so that the average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2).
For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in the charged form −NH3+, but this positive charge needs to be balanced by the state in which just one C-terminal carboxylate group is negatively charged. This occurs halfway between the two carboxylate pKa values: pI = ½(pKa1 + pKa(R)), where pKa(R) is the side chain pKa.
Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine, histidine, lysine, tyrosine and arginine with positive side chains.
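The two pI rules above are simple averages and are easy to compute. The sketch below illustrates them; the pKa values used are approximate textbook-style values chosen here for illustration, not figures taken from the article.

```python
# Isoelectric point from the averaging rules described above.
def pI_neutral_side_chain(pKa1, pKa2):
    """Uncharged side chain: midpoint of the two backbone pKa values."""
    return 0.5 * (pKa1 + pKa2)

def pI_acidic_side_chain(pKa1, pKa_R):
    """Aspartate/glutamate-like: midpoint of the two carboxylate pKa values."""
    return 0.5 * (pKa1 + pKa_R)

print(f"glycine   pI = {pI_neutral_side_chain(2.34, 9.60):.2f}")   # about 5.97
print(f"aspartate pI = {pI_acidic_side_chain(1.88, 3.65):.2f}")    # about 2.77
```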
Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point.
Physicochemical properties
The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues.
Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline forms a cycle to the polypeptide backbone, and glycine is more flexible than other amino acids.
Glycine and proline are strongly present within low complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine, isoleucine, which are highly reactive, or complex, or hydrophobic.
Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains sometimes producing lipoproteins (that are hydrophobic), or glycoproteins (that are hydrophilic) allowing the protein to attach temporarily to a membrane. For example, a signaling protein can attach and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed.
Table of standard amino acid abbreviations and properties
Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences".
The one-letter notation was chosen by IUPAC-IUB based on the following rules:
Initial letters are used where there is no ambiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine,
Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A Alanine, G glycine, L leucine, P proline, T threonine,
F PHenylalanine and R aRginine are assigned by being phonetically suggestive,
W tryptophan is assigned based on the double ring being visually suggestive to the bulky letter W,
K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed,
D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence being larger by merely one methylene –CH2– group,
N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine.
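The outcome of the rules above can be summarized in a single lookup table. The following sketch lists the 20 standard one-letter symbols with their amino acid names (standard IUPAC-IUBMB assignments), grouped by the rule that produced each symbol:

```python
# The 20 standard one-letter codes produced by the assignment rules above.
ONE_LETTER = {
    # unambiguous initials
    "C": "cysteine", "H": "histidine", "I": "isoleucine",
    "M": "methionine", "S": "serine", "V": "valine",
    # structurally simpler amino acid takes the shared initial
    "A": "alanine", "G": "glycine", "L": "leucine", "P": "proline", "T": "threonine",
    # phonetically or visually suggestive
    "F": "phenylalanine", "R": "arginine", "W": "tryptophan",
    # alphabetically nearest, or arbitrary with a mnemonic
    "K": "lysine", "Y": "tyrosine", "D": "aspartate", "E": "glutamate",
    "N": "asparagine", "Q": "glutamine",
}
assert len(ONE_LETTER) == 20
```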
Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons: selenocysteine and pyrrolysine (see below).
In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases.
Unk is sometimes used instead of Xaa, but is less standard.
Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all.
In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet).
Occurrence and functions in biochemistry
Proteinogenic amino acids
Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes.
Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms.
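As a toy illustration of the context dependence just described, the sketch below reads codons from an mRNA string and recodes UGA as selenocysteine when a SECIS element is present. The codon table is a deliberately tiny subset of the standard genetic code, and the boolean flag is a gross simplification of the real SECIS machinery; both are my own illustrative choices.

```python
# Toy codon readout: UGA is normally "stop", but selenocysteine when recoded.
CODON_TABLE = {"AUG": "Met", "UGG": "Trp", "UGU": "Cys", "UGC": "Cys",
               "UAA": "Stop", "UAG": "Stop", "UGA": "Stop"}

def translate(mrna, secis_present=False):
    residues = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        aa = CODON_TABLE[codon]
        if codon == "UGA" and secis_present:
            aa = "Sec"                     # selenocysteine recoding
        if aa == "Stop":
            break
        residues.append(aa)
    return "-".join(residues)

print(translate("AUGUGUUGAUGG"))                      # Met-Cys (UGA read as stop)
print(translate("AUGUGUUGAUGG", secis_present=True))  # Met-Cys-Sec-Trp
```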
Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions of the genetic code.
Standard vs nonstandard amino acids
The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code.
The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by stop codon and SECIS element.
N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids.
Non-proteinogenic amino acids
Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline is synthesised from proline. Another example is selenomethionine.
Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. Examples:
the carboxylation of glutamate allows for better binding of calcium cations,
Hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen.
Hypusine, in the translation initiation factor EIF5A, is formed by modification of a lysine residue.
Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A.
In mammalian nutrition
Amino acids are not a typical component of food: animals eat proteins. The protein is broken down into amino acids in the process of digestion. The amino acids are then used to synthesize new proteins and other biomolecules, or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food.
Semi-essential and conditionally essential amino acids, and juvenile requirements
In addition, cysteine, tyrosine, and arginine are considered semi-essential amino acids, and taurine a semi-essential aminosulfonic acid, in children, because the metabolic pathways that synthesize them are not yet fully developed. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species.
Non-protein functions
Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Plant defenses against herbivores sometimes employ amino acids as well. Examples:
Standard amino acids
Tryptophan is a precursor of the neurotransmitter serotonin.
Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine and norepinephrine and various trace amines.
Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism.
Glycine is a precursor of porphyrins such as heme.
Arginine is a precursor of nitric oxide.
Ornithine and S-adenosylmethionine are precursors of polyamines.
Aspartate, glycine, and glutamine are precursors of nucleotides.
Roles for nonstandard amino acids
Carnitine is used in lipid transport.
gamma-aminobutyric acid is a neurotransmitter.
5-HTP (5-hydroxytryptophan) is used for experimental treatment of depression.
L-DOPA (L-dihydroxyphenylalanine) is used in the treatment of Parkinson's disease.
Eflornithine inhibits ornithine decarboxylase and is used in the treatment of sleeping sickness.
Canavanine, an analogue of arginine found in many legumes, is an antifeedant, protecting the plant from predators.
Mimosine, found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants.
However, not all of the functions of other abundant nonstandard amino acids are known.
Uses in industry
Animal feed
Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements.
Food
The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation.
Chemical building blocks
Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks.
Amino acids are used in the synthesis of some cosmetics.
Aspirational uses
Fertilizer
The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants.
Biodegradable plastics
Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor.
Synthesis
Chemical synthesis
The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates: 2-aminothiazoline-4-carboxylic acid, for example, is an intermediate in one industrial synthesis of L-cysteine. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
Biosynthesis
In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too.
Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post-translational modification of proline.
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptidic lantibiotics such as alamethicin. However, in plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene.
Primordial synthesis
The formation of amino acids and peptides is assumed to have preceded and perhaps induced the emergence of life on Earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life.
In the famous Urey-Miller experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produces a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids.
According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." For a chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved.
Reactions
Amino acids undergo the reactions expected of the constituent functional groups.
Peptide bond formation
As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus.
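Because each peptide bond formed by condensation releases one molecule of water, a peptide's molecular weight is the sum of its free amino acid weights minus one water per bond. The back-of-the-envelope sketch below illustrates this bookkeeping for the glutathione tripeptide discussed in the next paragraph; the molecular weights are approximate average values I have assumed for illustration.

```python
# Mass bookkeeping for peptide-bond condensation: each bond releases one water.
MW = {"Gly": 75.07, "Ala": 89.09, "Cys": 121.16, "Glu": 147.13}  # free amino acids, g/mol (approx.)
WATER = 18.02

def peptide_mass(residues):
    return sum(MW[r] for r in residues) - (len(residues) - 1) * WATER

# Glutathione is Glu-Cys-Gly (joined through an unusual gamma linkage, see below).
print(f"Glu-Cys-Gly = {peptide_mass(['Glu', 'Cys', 'Gly']):.1f} g/mol")  # about 307.3
```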
However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione.
In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening.
The combination of functional groups allow amino acids to be effective polydentate ligands for metal–amino acid chelates.
The multiple side chains of amino acids can also undergo chemical reactions.
Catabolism
Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle.
Complexation
Amino acids are bidentate ligands, forming transition metal amino acid complexes.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
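In practice, a TKN measurement is often converted to a "crude protein" estimate with the conventional nitrogen-to-protein factor of 6.25 (which assumes protein is roughly 16% nitrogen by mass). That factor is a widespread convention I am assuming here, not a value given in the article; a minimal sketch:

```python
# Conventional crude-protein estimate from Kjeldahl nitrogen.
def crude_protein_percent(tkn_percent, factor=6.25):
    # factor = 6.25 assumes protein is ~16% nitrogen by mass (a common convention)
    return tkn_percent * factor

print(f"{crude_protein_percent(2.0):.1f}% crude protein from 2.0% TKN")  # 12.5%
```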
Area is the measure of a region's size on a surface. The area of a plane region or plane area refers to the area of a shape or planar lamina, while surface area refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area".
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable if one supposes the axiom of choice. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition
An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties:
For all S in M, a(S) ≥ 0.
If S and T are in M then so are S ∪ T and S ∩ T, and also a(S ∪ T) = a(S) + a(T) − a(S ∩ T).
If S and T are in M with S ⊆ T then T − S is in M and a(T − S) = a(T) − a(S).
If a set S is in M and S is congruent to T then T is also in M and a(S) = a(T).
Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk.
Let Q be a set enclosed between two step regions S and T. A step region is formed from a finite union of adjacent rectangles resting on a common base, i.e. S ⊆ Q ⊆ T. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c.
It can be proved that such an area function actually exists.
Units
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The SI unit of area is the square metre, which is considered an SI derived unit.
Conversions
Calculation of the area of a square whose length and width are 1 metre would be:
1 metre × 1 metre = 1 m2
and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as:
3 metres × 2 metres = 6 m2. This is equivalent to 6 million square millimetres. Other useful conversions are:
1 square kilometre = 1,000,000 square metres
1 square metre = 10,000 square centimetres = 1,000,000 square millimetres
1 square centimetre = 100 square millimetres.
Non-metric units
In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units.
For example, since 1 foot = 12 inches,
the relationship between square feet and square inches is
1 square foot = 144 square inches,
where 144 = 12² = 12 × 12. Similarly:
1 square yard = 9 square feet
1 square mile = 3,097,600 square yards = 27,878,400 square feet
In addition, conversion factors include:
1 square inch = 6.4516 square centimetres
1 square foot = 0.09290304 square metres
1 square yard = 0.83612736 square metres
1 square mile = 2.589988110336 square kilometres
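The factors above follow directly from the rule that a square-unit conversion factor is the square of the corresponding length conversion factor. A minimal sketch deriving them from the exact metric definitions of the foot, yard, and mile:

```python
# Square-unit conversion factors as squares of the length conversion factors.
FOOT_M, YARD_M, MILE_M = 0.3048, 0.9144, 1609.344   # exact SI definitions

print(f"1 sq ft = {FOOT_M**2:.8f} m2")               # 0.09290304
print(f"1 sq yd = {YARD_M**2:.8f} m2")               # 0.83612736
print(f"1 sq mi = {(MILE_M / 1000)**2:.12f} km2")    # 2.589988110336
```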
Other units including historical
There are several other common units for area. The are was the original unit of area in the metric system, with:
1 are = 100 square metres
Though the are has fallen out of use, the hectare is still commonly used to measure land:
1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres
Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
The acre is also commonly used to measure land areas, where
1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
On the atomic scale, area is measured in units of barns, such that:
1 barn = 10⁻²⁸ square metres.
The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics.
In South Asia (mainly in India), although the countries officially use SI units, many people still use traditional units. Each administrative division has its own area unit, and some units share a name while having different values. There is no official consensus on the values of the traditional units, so conversions between SI units and traditional units may give different results, depending on the reference used.
Some traditional South Asian units that have fixed value:
1 Killa = 1 acre
1 Ghumaon = 1 acre
1 Kanal = 0.125 acre (1 acre = 8 kanal)
1 Decimal = 48.4 square yards
1 Chatak = 180 square feet
History
Circle area
In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared.
Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
Triangle area
Quadrilateral area
In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
General polygon area
The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century.
Areas determined using calculus
The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
Area formulas
Polygon formulas
For a non-self-intersecting (simple) polygon whose n vertices have known Cartesian coordinates (xi, yi) (i = 0, 1, ..., n−1), the area is given by the surveyor's formula:
A = ½ |Σ from i = 0 to n−1 of (xi yi+1 − xi+1 yi)|,
where when i = n−1, the index i+1 is taken modulo n and so refers to 0.
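A direct implementation of the surveyor's (shoelace) formula as stated above, with two small sanity checks; the vertex lists are my own examples.

```python
# Surveyor's (shoelace) formula for a simple polygon with vertices listed in order.
def polygon_area(vertices):
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_j, y_j = vertices[(i + 1) % n]   # index i+1 taken modulo n
        s += x_i * y_j - x_j * y_i
    return abs(s) / 2.0

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))   # unit square -> 1.0
print(polygon_area([(0, 0), (4, 0), (0, 3)]))           # 3-4-5 right triangle -> 6.0
```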
Rectangles
The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is:
A = lw (rectangle).
That is, the area of the rectangle is the length multiplied by the width. As a special case, as l = w in the case of a square, the area of a square with side length s is given by the formula:
A = s² (square).
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection, parallelograms, and triangles
Most other simple formulas for area follow from the method of dissection.
This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
For example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of a rectangle with the same base b and height h:
A = bh (parallelogram).
However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:
A = ½bh (triangle).
Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons.
Area of curved shapes
Circles
The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr²:
A = πr² (circle).
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral:
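For example, the definite integral above can be checked numerically; the Python sketch below uses a simple midpoint rule (the step count is arbitrary) and should approach πr²:

import math

def circle_area_numeric(r, steps=100_000):
    # midpoint-rule approximation of the integral of 2*sqrt(r^2 - x^2) over [-r, r]
    dx = 2 * r / steps
    total = 0.0
    for i in range(steps):
        x = -r + (i + 0.5) * dx
        total += 2 * math.sqrt(r * r - x * x) * dx
    return total

print(circle_area_numeric(1.0))  # ≈ 3.14159...
print(math.pi)                   # exact value for comparison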
Ellipses
The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes a and b the formula is: A = πab.
Non-planar surface area
Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see: developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:
A = 4πr² (sphere),
where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulas
Areas of 2-dimensional figures | Area | Wikipedia | 488 | 1209 | https://en.wikipedia.org/wiki/Area | Mathematics | Geometry and topology | null |
A triangle: ½Bh (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: √(s(s − a)(s − b)(s − c)), where a, b, c are the sides of the triangle, and s = ½(a + b + c) is half of its perimeter. If an angle and its two included sides are given, the area is ½ab sin(C), where C is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used and is simplified to the absolute value of ½(x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x1,y1), (x2,y2), and (x3,y3). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use calculus to find the area. (A short numerical sketch of some of these formulas follows after this list.)
A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem.
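A short Python sketch of two of the formulas above, assuming the side lengths (for Heron's formula) or the grid-point counts i and b (for Pick's theorem) have already been obtained:

import math

def heron_area(a, b, c):
    # Heron's formula; s is half the perimeter
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def pick_area(i, b):
    # Pick's theorem: i interior grid points, b boundary grid points
    return i + b / 2 - 1

print(heron_area(3, 4, 5))  # 6.0, the 3-4-5 right triangle
print(pick_area(0, 4))      # 1.0, the unit lattice square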
Area in calculus
The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve: A = ∫ f(x) dx, with the integral taken from x = a to x = b.
The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x): A = ∫ (f(x) − g(x)) dx,
where f(x) is the curve with the greater y-value.
An area bounded by a function r = r(θ) expressed in polar coordinates is: A = ½ ∫ r² dθ.
The area enclosed by a parametric curve u(t) = (x(t), y(t)) with endpoints u(t0) = u(t1) is given by the line integrals: ∮ x dy = −∮ y dx = ½ ∮ (x dy − y dx),
or the z-component of ½ ∮ u × du.
(For details, see Green's theorem.) This is the principle of the planimeter mechanical device.
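As an illustration of the line-integral form A = ½ ∮ (x dy − y dx), the Python sketch below integrates numerically over one traversal of an ellipse with semi-axes 2 and 1, whose exact area is 2π (the function and parameter names are chosen here for illustration):

import math

def parametric_area(x, y, dx, dy, t0, t1, steps=100_000):
    # A = 1/2 * integral of (x*dy/dt - y*dx/dt) dt over one closed traversal
    dt = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt
        total += (x(t) * dy(t) - y(t) * dx(t)) * dt
    return abs(total) / 2

area = parametric_area(lambda t: 2 * math.cos(t), lambda t: math.sin(t),
                       lambda t: -2 * math.sin(t), lambda t: math.cos(t),
                       0.0, 2 * math.pi)
print(area, 2 * math.pi)  # both ≈ 6.2831...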
Bounded area between two quadratic functions
To find the bounded area between two quadratic functions, we first subtract one from the other, writing the difference as | Area | Wikipedia | 485 | 1209 | https://en.wikipedia.org/wiki/Area | Mathematics | Geometry and topology | null |
f(x) − g(x) = ax² + bx + c = a(x − α)(x − β), where f(x) is the quadratic upper bound, g(x) is the quadratic lower bound, and α and β are the two points where the graphs intersect.
By the area integral formulas above and Vieta's formula, we can obtain that A = (b² − 4ac)√(b² − 4ac) / (6a²).
The above remains valid if one of the bounding functions is linear instead of quadratic.
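A quick numerical check of this result, under the assumption that f lies above g between the two intersection points, so that f(x) − g(x) = ax² + bx + c has two real roots (the example below uses f(x) = −x² + 4 and g(x) = 0, for which the exact area is 32/3):

import math

def quadratic_gap_area(a, b, c):
    # area enclosed when f(x) - g(x) = a*x^2 + b*x + c changes sign at two real roots
    disc = b * b - 4 * a * c
    if disc <= 0:
        raise ValueError("the graphs do not enclose a region")
    return disc * math.sqrt(disc) / (6 * a * a)

print(quadratic_gap_area(-1, 0, 4))  # 10.666..., i.e. 32/3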
Surface area of 3-dimensional figures
Cone: πr(r + √(r² + h²)), where r is the radius of the circular base, and h is the height. That can also be rewritten as πr² + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr² is the base area while πrl is the lateral surface area of the cone. (A numerical check of some of these formulas follows after this list.)
Cube: 6s², where s is the length of an edge.
Cylinder: 2πr(r + h), where r is the radius of a base and h is the height. The 2πr can also be rewritten as πd, where d is the diameter.
Prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism.
Pyramid: B + PL/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant.
Rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height.
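A few of the formulas above written out as a Python sketch; the variable names are chosen here for illustration (r is a radius, h a height):

import math

def cone_surface(r, h):
    # base area pi*r^2 plus lateral area pi*r*l, with slant height l = sqrt(r^2 + h^2)
    return math.pi * r * (r + math.sqrt(r * r + h * h))

def cylinder_surface(r, h):
    # two circular ends plus the lateral surface of width 2*pi*r and height h
    return 2 * math.pi * r * (r + h)

def sphere_surface(r):
    return 4 * math.pi * r * r

print(cone_surface(3, 4))      # 24*pi ≈ 75.40 (slant height 5)
print(cylinder_surface(1, 2))  # 6*pi ≈ 18.85
print(sphere_surface(1))       # 4*pi ≈ 12.57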
General formula for surface area
The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) ∈ D and D is a region in the xy-plane with the smooth boundary: A = ∬ √((∂f/∂x)² + (∂f/∂y)² + 1) dx dy, taken over D.
An even more general formula for the area of the graph of a parametric surface in the vector form r = r(u, v), where r is a continuously differentiable vector function of (u, v) ∈ D, is: A = ∬ |∂r/∂u × ∂r/∂v| du dv, taken over D.
List of formulas
The above calculations show how to find the areas of many common shapes.
The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula).
Relation of area to perimeter
The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for area A of the region that it encloses, 4πA ≤ L²,
and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°. | Area | Wikipedia | 499 | 1209 | https://en.wikipedia.org/wiki/Area | Mathematics | Geometry and topology | null |
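The inequality 4πA ≤ L² and the two extremes described above are easy to check numerically; the Python sketch below compares a circle (ratio exactly 1) with a thin rhombus of the same perimeter (ratio close to 0):

import math

def isoperimetric_ratio(area, perimeter):
    # 4*pi*A / L^2, which is at most 1, with equality only for a circle
    return 4 * math.pi * area / perimeter ** 2

# circle of radius 1: A = pi, L = 2*pi
print(isoperimetric_ratio(math.pi, 2 * math.pi))  # 1.0

# rhombus with side 1 and a small angle theta: A = sin(theta), L = 4
theta = 0.01
print(isoperimetric_ratio(math.sin(theta), 4.0))  # ≈ 0.008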
For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr² and the circumference formula 2πr.
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
Fractals
Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.
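For instance, the Koch curve consists of 4 copies of itself, each scaled down by a factor of 3, so its fractal (similarity) dimension is log 4 / log 3 ≈ 1.26; an ordinary filled square, which doubling turns into 4 copies at scale 2, recovers dimension 2. A minimal Python sketch:

import math

def similarity_dimension(copies, scale_factor):
    # the d solving copies = scale_factor**d
    return math.log(copies) / math.log(scale_factor)

print(similarity_dimension(4, 3))  # Koch curve: ≈ 1.2619
print(similarity_dimension(4, 2))  # filled square: 2.0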
Area bisectors
There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
Any line through the midpoint of a parallelogram bisects the area.
All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
Optimization
Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap bubbles.
The question of the filling area of the Riemannian circle remains open.
The circle has the largest area of any two-dimensional object having the same perimeter.
A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral. | Area | Wikipedia | 509 | 1209 | https://en.wikipedia.org/wiki/Area | Mathematics | Geometry and topology | null |
The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.
The ratio of the area of the incircle to the area of an equilateral triangle, π/(3√3), is larger than that of any non-equilateral triangle.
The ratio of the area to the square of the perimeter of an equilateral triangle, 1/(12√3), is larger than that for any other triangle. | Area | Wikipedia | 108 | 1209 | https://en.wikipedia.org/wiki/Area | Mathematics | Geometry and topology | null |
The astronomical unit (symbol: au or AU) is a unit of length defined to be exactly equal to 149,597,870,700 m. Historically, the astronomical unit was conceived as the average Earth-Sun distance (the average of Earth's aphelion and perihelion), before its modern redefinition in 2012.
The astronomical unit is used primarily for measuring distances within the Solar System or around other stars. It is also a fundamental component in the definition of another unit of astronomical length, the parsec. One au is equivalent to 499 light-seconds to within 10 parts per million.
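The 499-light-second figure follows directly from the defined values of the astronomical unit and of the speed of light; a quick check in Python (both constants below are the exact defined values):

AU_IN_METRES = 149_597_870_700   # exact, by the 2012 definition
SPEED_OF_LIGHT = 299_792_458     # metres per second, exact

light_seconds = AU_IN_METRES / SPEED_OF_LIGHT
print(light_seconds)                   # ≈ 499.00478
print(abs(light_seconds - 499) / 499)  # ≈ 9.6e-6, within 10 parts per million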
History of symbol usage
A variety of unit symbols and abbreviations have been in use for the astronomical unit. In a 1976 resolution, the International Astronomical Union (IAU) had used the symbol A to denote a length equal to the astronomical unit. In the astronomical literature, the symbol AU is common. In 2006, the International Bureau of Weights and Measures (BIPM) had recommended ua as the symbol for the unit, from the French "unité astronomique". In the non-normative Annex C to ISO 80000-3:2006 (later withdrawn), the symbol of the astronomical unit was also ua.
In 2012, the IAU, noting "that various symbols are presently in use for the astronomical unit", recommended the use of the symbol "au". The scientific journals published by the American Astronomical Society and the Royal Astronomical Society subsequently adopted this symbol. In the 2014 revision and 2019 edition of the SI Brochure, the BIPM used the unit symbol "au". ISO 80000-3:2019, which replaces ISO 80000-3:2006, does not mention the astronomical unit.
Development of unit definition | Astronomical unit | Wikipedia | 350 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
Earth's orbit around the Sun is an ellipse. The semi-major axis of this elliptic orbit is defined to be half of the straight line segment that joins the perihelion and aphelion. The centre of the Sun lies on this straight line segment, but not at its midpoint. Because ellipses are well-understood shapes, measuring the points of its extremes defined the exact shape mathematically, and made possible calculations for the entire orbit as well as predictions based on observation. In addition, it mapped out exactly the largest straight-line distance that Earth traverses over the course of a year, defining times and places for observing the largest parallax (apparent shifts of position) in nearby stars. Knowing Earth's shift and a star's shift enabled the star's distance to be calculated. But all measurements are subject to some degree of error or uncertainty, and the uncertainties in the length of the astronomical unit only increased uncertainties in the stellar distances. Improvements in precision have always been a key to improving astronomical understanding. Throughout the twentieth century, measurements became increasingly precise and sophisticated, and ever more dependent on accurate observation of the effects described by Einstein's theory of relativity and upon the mathematical tools it used.
Improving measurements were continually checked and cross-checked by means of improved understanding of the laws of celestial mechanics, which govern the motions of objects in space. The expected positions and distances of objects at an established time are calculated (in au) from these laws, and assembled into a collection of data called an ephemeris. NASA Jet Propulsion Laboratory HORIZONS System provides one of several ephemeris computation services. | Astronomical unit | Wikipedia | 334 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
In 1976, to establish a more precise measure for the astronomical unit, the IAU formally adopted a new definition. Although directly based on the then-best available observational measurements, the definition was recast in terms of the then-best mathematical derivations from celestial mechanics and planetary ephemerides. It stated that "the astronomical unit of length is that length (A) for which the Gaussian gravitational constant (k) takes the value 0.01720209895 when the units of measurement are the astronomical units of length, mass and time". Equivalently, by this definition, one au is "the radius of an unperturbed circular Newtonian orbit about the sun of a particle having infinitesimal mass, moving with an angular frequency of 0.01720209895 radians per day"; or alternatively that length for which the heliocentric gravitational constant (the product GM☉) is equal to (0.01720209895)² au³/d², when the length is used to describe the positions of objects in the Solar System.
Subsequent explorations of the Solar System by space probes made it possible to obtain precise measurements of the relative positions of the inner planets and other objects by means of radar and telemetry. As with all radar measurements, these rely on measuring the time taken for photons to be reflected from an object. Because all photons move at the speed of light in vacuum, a fundamental constant of the universe, the distance of an object from the probe is calculated as the product of the speed of light and the measured time. However, for precision the calculations require adjustment for things such as the motions of the probe and object while the photons are transiting. In addition, the measurement of the time itself must be translated to a standard scale that accounts for relativistic time dilation. Comparison of the ephemeris positions with time measurements expressed in Barycentric Dynamical Time (TDB) leads to a value for the speed of light in astronomical units per day (of about 173.14). By 2009, the IAU had updated its standard measures to reflect improvements, and calculated the speed of light at 173.1446326847 au/d (TDB). | Astronomical unit | Wikipedia | 422 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
In 1983, the CIPM modified the International System of Units (SI) to make the metre defined as the distance travelled in a vacuum by light in 1/299,792,458 of a second. This replaced the previous definition, valid between 1960 and 1983, which was that the metre equalled a certain number of wavelengths of a certain emission line of krypton-86. (The reason for the change was an improved method of measuring the speed of light.) The speed of light could then be expressed exactly as c0 = 299,792,458 m/s, a standard also adopted by the IERS numerical standards. From this definition and the 2009 IAU standard, the time for light to traverse an astronomical unit is found to be τA = 499.004784 s, which is slightly more than 8 minutes 19 seconds. By multiplication, the best IAU 2009 estimate was A = c0τA = 149,597,870,700 m (with an uncertainty of about 3 m), based on a comparison of Jet Propulsion Laboratory and IAA–RAS ephemerides.
In 2006, the BIPM reported a value of the astronomical unit as 1.495 978 706 91(6) × 10¹¹ m. In the 2014 revision of the SI Brochure, the BIPM recognised the IAU's 2012 redefinition of the astronomical unit as 149,597,870,700 m.
This estimate was still derived from observation and measurements subject to error, and based on techniques that did not yet standardize all relativistic effects, and thus were not constant for all observers. In 2012, finding that the equalization of relativity alone would make the definition overly complex, the IAU simply used the 2009 estimate to redefine the astronomical unit as a conventional unit of length directly tied to the metre (exactly 149,597,870,700 m). The new definition recognizes as a consequence that the astronomical unit has reduced importance, limited in use to a convenience in some applications.
1 astronomical unit
 = 149,597,870,700 metres (by definition)
 ≈ 92.956 million miles
 ≈ 499.005 light-seconds
 ≈ 8.317 light-minutes
 ≈ 1.5813 × 10⁻⁵ light-years
 ≈ 4.8481 × 10⁻⁶ parsecs
This definition makes the speed of light, defined as exactly 299,792,458 m/s, equal to exactly 299,792,458 × 86,400 ÷ 149,597,870,700, or about 173.144632674 au/d, some 60 parts per trillion less than the 2009 estimate. | Astronomical unit | Wikipedia | 448 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
Usage and significance
With the definitions used before 2012, the astronomical unit was dependent on the heliocentric gravitational constant, that is the product of the gravitational constant, G, and the solar mass, M☉. Neither G nor M☉ can be measured to high accuracy separately, but the value of their product is known very precisely from observing the relative positions of planets (Kepler's third law expressed in terms of Newtonian gravitation). Only the product GM☉ is required to calculate planetary positions for an ephemeris, so ephemerides are calculated in astronomical units and not in SI units.
The calculation of ephemerides also requires a consideration of the effects of general relativity. In particular, time intervals measured on Earth's surface (Terrestrial Time, TT) are not constant when compared with the motions of the planets: the terrestrial second (TT) appears to be longer near January and shorter near July when compared with the "planetary second" (conventionally measured in TDB). This is because the distance between Earth and the Sun is not fixed (it varies between about 147.1 million km and 152.1 million km) and, when Earth is closer to the Sun (perihelion), the Sun's gravitational field is stronger and Earth is moving faster along its orbital path. As the metre is defined in terms of the second and the speed of light is constant for all observers, the terrestrial metre appears to change in length compared with the "planetary metre" on a periodic basis.
The metre is defined to be a unit of proper length. Indeed, the International Committee for Weights and Measures (CIPM) notes that "its definition applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored". As such, quoting a distance within the Solar System without specifying the frame of reference for the measurement is problematic. The 1976 definition of the astronomical unit was incomplete because it did not specify the frame of reference in which to apply the measurement, but proved practical for the calculation of ephemerides: a fuller definition that is consistent with general relativity was proposed, and "vigorous debate" ensued until August 2012 when the IAU adopted the current definition of 1 astronomical unit = 149,597,870,700 metres. | Astronomical unit | Wikipedia | 442 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
The astronomical unit is typically used for stellar system scale distances, such as the size of a protostellar disk or the heliocentric distance of an asteroid, whereas other units are used for other distances in astronomy. The astronomical unit is too small to be convenient for interstellar distances, where the parsec and light-year are widely used. The parsec (parallax arcsecond) is defined in terms of the astronomical unit, being the distance of an object with a parallax of one arcsecond (1″). The light-year is often used in popular works, but is not an approved non-SI unit and is rarely used by professional astronomers.
When simulating a numerical model of the Solar System, the astronomical unit provides an appropriate scale that minimizes (overflow, underflow and truncation) errors in floating point calculations.
History
The book On the Sizes and Distances of the Sun and Moon, which is ascribed to Aristarchus, says the distance to the Sun is 18 to 20 times the distance to the Moon, whereas the true ratio is about 390. The latter estimate was based on the angle between the half-moon and the Sun, which he estimated as 87° (the true value being close to 89.85°). Depending on the distance that van Helden assumes Aristarchus used for the distance to the Moon, his calculated distance to the Sun would fall between 380 and 1,520 Earth radii.
Hipparchus gave an estimate of the distance of Earth from the Sun, quoted by Pappus as equal to 490 Earth radii. According to the conjectural reconstructions of Noel Swerdlow and G. J. Toomer, this was derived from his assumption of a "least perceptible" solar parallax of 7′ (7 arcminutes).
A Chinese mathematical treatise, the Zhoubi Suanjing (), shows how the distance to the Sun can be computed geometrically, using the different lengths of the noontime shadows observed at three places li apart and the assumption that Earth is flat. | Astronomical unit | Wikipedia | 401 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
According to Eusebius in the Praeparatio evangelica (Book XV, Chapter 53), Eratosthenes found the distance to the Sun to be "σταδιων μυριαδας τετρακοσιας και οκτωκισμυριας" (literally "of stadia myriads 400 and 80,000") but with the additional note that in the Greek text the grammatical agreement is between myriads (not stadia) on the one hand and both 400 and 80,000 on the other: all three are accusative plural, while σταδιων is genitive plural ("of stadia"). All three words (or all four including stadia) are inflected. This has been translated either as 4,080,000 stadia (1903 translation by Edwin Hamilton Gifford), or as 804,000,000 stadia (edition of Édouard des Places, dated 1974–1991). Using the Greek stadium of 185 to 190 metres, the former translation comes to 754,800 km to 775,200 km, which is far too low, whereas the second translation comes to 148.7 to 152.8 billion metres (accurate within 2%).
In the 2nd century CE, Ptolemy estimated the mean distance of the Sun as 1,210 times Earth's radius. To determine this value, Ptolemy started by measuring the Moon's parallax, finding what amounted to a horizontal lunar parallax of 1° 26′, which was much too large. He then derived a maximum lunar distance of 64 1⁄6 Earth radii. Because of cancelling errors in his parallax figure, his theory of the Moon's orbit, and other factors, this figure was approximately correct. He then measured the apparent sizes of the Sun and the Moon and concluded that the apparent diameter of the Sun was equal to the apparent diameter of the Moon at the Moon's greatest distance, and from records of lunar eclipses, he estimated this apparent diameter, as well as the apparent diameter of the shadow cone of Earth traversed by the Moon during a lunar eclipse. Given these data, the distance of the Sun from Earth can be trigonometrically computed to be 1,210 Earth radii. This gives a ratio of solar to lunar distance of approximately 19, matching Aristarchus's figure. Although Ptolemy's procedure is theoretically workable, it is very sensitive to small changes in the data, so much so that changing a measurement by a few per cent can make the solar distance infinite. | Astronomical unit | Wikipedia | 508 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
After Greek astronomy was transmitted to the medieval Islamic world, astronomers made some changes to Ptolemy's cosmological model, but did not greatly change his estimate of the Earth–Sun distance. For example, in his introduction to Ptolemaic astronomy, al-Farghānī gave a mean solar distance of Earth radii, whereas in his zij, al-Battānī used a mean solar distance of Earth radii. Subsequent astronomers, such as al-Bīrūnī, used similar values. Later in Europe, Copernicus and Tycho Brahe also used comparable figures ( and Earth radii), and so Ptolemy's approximate Earth–Sun distance survived through the 16th century.
Johannes Kepler was the first to realize that Ptolemy's estimate must be significantly too low (according to Kepler, at least by a factor of three) in his Rudolphine Tables (1627). Kepler's laws of planetary motion allowed astronomers to calculate the relative distances of the planets from the Sun, and rekindled interest in measuring the absolute value for Earth (which could then be applied to the other planets). The invention of the telescope allowed far more accurate measurements of angles than is possible with the naked eye. Flemish astronomer Godefroy Wendelin repeated Aristarchus’ measurements in 1635, and found that Ptolemy's value was too low by a factor of at least eleven.
A somewhat more accurate estimate can be obtained by observing the transit of Venus. By measuring the transit in two different locations, one can accurately calculate the parallax of Venus and from the relative distance of Earth and Venus from the Sun, the solar parallax (which cannot be measured directly due to the brightness of the Sun). Jeremiah Horrocks had attempted to produce an estimate based on his observation of the 1639 transit (published in 1662), giving a solar parallax of 15″, similar to Wendelin's figure. The solar parallax is related to the Earth–Sun distance as measured in Earth radii by A = cot(p), that is, roughly 206,265 Earth radii divided by the parallax p expressed in arcseconds.
The smaller the solar parallax, the greater the distance between the Sun and Earth: a solar parallax of 15″ is equivalent to an Earth–Sun distance of about 13,750 Earth radii. | Astronomical unit | Wikipedia | 451 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
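Since the solar parallax p is the angle subtended by one Earth radius at the Sun's distance, the distance in Earth radii is 1/tan(p), roughly 206,265 divided by p in arcseconds. A small Python sketch (the 15″ figure is the one quoted above; 8.794″ is close to the modern value):

import math

def distance_in_earth_radii(parallax_arcsec):
    # solar parallax: angle subtended by one Earth radius at the Sun's distance
    parallax_rad = math.radians(parallax_arcsec / 3600)
    return 1 / math.tan(parallax_rad)

print(distance_in_earth_radii(15))     # ≈ 13,750 Earth radii
print(distance_in_earth_radii(8.794))  # ≈ 23,455 Earth radii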
Christiaan Huygens believed that the distance was even greater: by comparing the apparent sizes of Venus and Mars, he estimated a value of about 24,000 Earth radii, equivalent to a solar parallax of 8.6″. Although Huygens' estimate is remarkably close to modern values, it is often discounted by historians of astronomy because of the many unproven (and incorrect) assumptions he had to make for his method to work; the accuracy of his value seems to be based more on luck than good measurement, with his various errors cancelling each other out.
Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of 9.5″, equivalent to an Earth–Sun distance of about 22,000 Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of Earth, which had been measured by their colleague Jean Picard in 1669 as toises. This same year saw another estimate for the astronomical unit by John Flamsteed, which he accomplished alone by measuring the Martian diurnal parallax. Another colleague, Ole Rømer, discovered the finite speed of light in 1676: the speed was so great that it was usually quoted as the time required for light to travel from the Sun to the Earth, or "light time per unit distance", a convention that is still followed by astronomers today.
A better method for observing Venus transits was devised by James Gregory and published in his Optica Promata (1663). It was strongly advocated by Edmond Halley and was applied to the transits of Venus observed in 1761 and 1769, and then again in 1874 and 1882. Transits of Venus occur in pairs, but less than one pair every century, and observing the transits in 1761 and 1769 was an unprecedented international scientific operation including observations by James Cook and Charles Green from Tahiti. Despite the Seven Years' War, dozens of astronomers were dispatched to observing points around the world at great expense and personal danger: several of them died in the endeavour. The various results were collated by Jérôme Lalande to give a figure for the solar parallax of 8.6″. Karl Rudolph Powalky had made an estimate of 8.83″ in 1864. | Astronomical unit | Wikipedia | 474 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
Another method involved determining the constant of aberration. Simon Newcomb gave great weight to this method when deriving his widely accepted value of 8.80″ for the solar parallax (close to the modern value of 8.794″), although Newcomb also used data from the transits of Venus. Newcomb also collaborated with A. A. Michelson to measure the speed of light with Earth-based equipment; combined with the constant of aberration (which is related to the light time per unit distance), this gave the first direct measurement of the Earth–Sun distance in metres. Newcomb's values for the solar parallax (and for the constant of aberration and the Gaussian gravitational constant) were incorporated into the first international system of astronomical constants in 1896, which remained in place for the calculation of ephemerides until 1964. The name "astronomical unit" appears first to have been used in 1903.
The discovery of the near-Earth asteroid 433 Eros and its passage near Earth in 1900–1901 allowed a considerable improvement in parallax measurement. Another international project to measure the parallax of 433 Eros was undertaken in 1930–1931.
Direct radar measurements of the distances to Venus and Mars became available in the early 1960s. Along with improved measurements of the speed of light, these showed that Newcomb's values for the solar parallax and the constant of aberration were inconsistent with one another.
Developments
The unit distance A (the value of the astronomical unit in metres) can be expressed in terms of other astronomical constants: A³ = G M☉ D² / k²,
where G is the Newtonian constant of gravitation, M☉ is the solar mass, k is the numerical value of the Gaussian gravitational constant and D is the time period of one day.
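A rough numerical check of this relation in Python; the heliocentric gravitational constant below is an approximate published value (quoted from memory), while k and the day length are the conventional defining values:

GM_SUN = 1.32712440018e20   # heliocentric gravitational constant G*M_sun, m^3/s^2 (approximate)
K_GAUSS = 0.01720209895     # Gaussian gravitational constant (defining value)
DAY = 86_400.0              # length of the day D, in seconds

unit_distance = (GM_SUN * DAY ** 2 / K_GAUSS ** 2) ** (1.0 / 3.0)
print(unit_distance)        # ≈ 1.49598e11 m, close to the defined 149,597,870,700 m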
The Sun is constantly losing mass by radiating away energy, so the orbits of the planets are steadily expanding outward from the Sun. This has led to calls to abandon the astronomical unit as a unit of measurement.
As the speed of light has an exact defined value in SI units and the Gaussian gravitational constant is fixed in the astronomical system of units, measuring the light time per unit distance is exactly equivalent to measuring the product × in SI units. Hence, it is possible to construct ephemerides entirely in SI units, which is increasingly becoming the norm.
A 2004 analysis of radiometric measurements in the inner Solar System suggested that the secular increase in the unit distance was much larger than can be accounted for by solar radiation, +15 ± 4 metres per century. | Astronomical unit | Wikipedia | 503 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
The measurements of the secular variations of the astronomical unit are not confirmed by other authors and are quite controversial.
Furthermore, since 2010, the astronomical unit has not been estimated by the planetary ephemerides.
Examples
The following table contains some distances given in astronomical units. It includes some examples with distances that are normally not given in astronomical units, because they are either too short or far too long. Distances normally change over time. Examples are listed by increasing distance. | Astronomical unit | Wikipedia | 92 | 1210 | https://en.wikipedia.org/wiki/Astronomical%20unit | Physical sciences | Length and distance | null |
Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than leaving them to surface as runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). As of 2023, the standard, called Ada 2022 informally, is ISO/IEC 8652:2023.
Ada was originally designed by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer.
Features
Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP).
Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch.
The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the identifier of the block it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks this avoids a dangling else that could pair with the wrong nested if-expression in other languages like C or Java. | Ada (programming language) | Wikipedia | 507 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts.
A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile-time, or otherwise during run-time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error.
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is sometimes used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology.
Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to. | Ada (programming language) | Wikipedia | 512 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide for a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool.
A double-dash ("--"), resembling an em dash, denotes comment text. Comments stop at end of line; there is intentionally no way to make a comment span multiple lines, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code therefore requires the prefixing of each line (or column) individually with "--". While this clearly denotes disabled code by creating a column of repeated "--" down the page, it also renders the experimental dis/re-enablement of large blocks a more drawn-out process in editors without block commenting support.
The semicolon (";") is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed.
Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written.
One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, and GNAT which is part of the GNU Compiler Collection.
Alire is a package and toolchain management tool for Ada. | Ada (programming language) | Wikipedia | 407 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
History
In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original straw-man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.
HOLWG crafted the Steelman language requirements , a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. The requirements were created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing.
It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly. | Ada (programming language) | Wikipedia | 344 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (Honeywell, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at Honeywell, was chosen and given the name Ada—after Augusta Ada King, Countess of Lovelace, usually known as Ada Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, Tony Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook.
Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. Compiler vendors expended most of their efforts in passing the massive, language-conformance-testing, government-required Ada Compiler Validation Capability (ACVC) validation suite that was required in another novel feature of the Ada language effort. | Ada (programming language) | Wikipedia | 400 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. Computer manufacturers who had a significant business in the defense, aerospace, or related industries, also offered Ada compilers and tools on their platforms; these included Concurrent Computer Corporation, Cray Research, Inc., Digital Equipment Corporation, Harris Computer Systems, and Siemens Nixdorf Informationssysteme AG.
In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada.
By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to. | Ada (programming language) | Wikipedia | 299 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking.
For example, the Primary Flight Control System, the fly-by-wire system software in the Boeing 777, was written in Ada, as were the fly-by-wire systems for the aerodynamically unstable Eurofighter Typhoon, Saab Gripen, Lockheed Martin F-22 Raptor and the DFCS replacement flight control system for the Grumman F-14 Tomcat. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support (iFACTS) air traffic control system is designed and implemented using SPARK Ada.
It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City.
The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming.
Standardization
Preliminary Ada can be found in ACM Sigplan Notices Vol 14, No 6, June 1979
Ada was first published in 1980 as an ANSI standard ANSI/MIL-STD 1815. As this very first version held many errors and inconsistencies, the revised edition was published in 1983 as ANSI/MIL-STD 1815A. Without any further changes, it became an ISO standard in 1987. This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes also referred to as Ada 87, from the date of its adoption by ISO. There is also a French translation; DIN translated it into German as DIN 66268 in 1988. | Ada (programming language) | Wikipedia | 450 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Ada 95, the joint ISO/IEC/ANSI standard ISO/IEC 8652:1995 was published in February 1995, making it the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection.
Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007 was published on March 9, 2007, commonly known as Ada 2005 because work on the new standard was finished that year.
At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to the ISO/IEC JTC 1/SC 22/WG 9 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval. ISO/IEC 8652:2012 (see Ada 2012 RM) was published in December 2012, known as Ada 2012. A technical corrigendum, ISO/IEC 8652:2012/COR 1:2016, was published (see RM 2012 with TC 1).
On May 2, 2023, the Ada community saw the formal approval of publication of the Ada 2022 edition of the programming language standard.
Despite the names Ada 83, 95 etc., legally there is only one Ada standard, the one of the last ISO/IEC standard: with the acceptance of a new standard version, the previous one becomes withdrawn. The other names are just informal ones referencing a certain edition.
Other related standards include ISO/IEC 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada.
Language constructs
Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, enumerations. Such constructs were in part inherited from or inspired by Pascal. | Ada (programming language) | Wikipedia | 490 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
"Hello, world!" in Ada
A common example of a language's syntax is the Hello world program:
(hello.adb)
with Ada.Text_IO;
procedure Hello is
begin
Ada.Text_IO.Put_Line ("Hello, world!");
end Hello;
This program can be compiled by using the freely available open source compiler GNAT, by executing
gnatmake hello.adb
Data types
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted.
Special types provided by the language are task types and protected types.
For example, a date might be represented as:
type Day_type is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);
type Date is
record
Day : Day_type;
Month : Month_type;
Year : Year_type;
end record;
It is important to note that Day_type, Month_type, Year_type and Hours are incompatible types, meaning that for instance the following expression is illegal:
Today: Day_type := 4;
Current_Month: Month_type := 10;
... Today + Current_Month ... -- illegal
The predefined plus-operator can only add values of the same type, so the expression is illegal.
Types can be refined by declaring subtypes:
subtype Working_Hours is Hours range 0 .. 12; -- at most 12 Hours to work a day
subtype Working_Day is Weekday range Monday .. Friday; -- Days to work | Ada (programming language) | Wikipedia | 485 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Work_Load: constant array(Working_Day) of Working_Hours -- implicit type declaration
:= (Friday => 6, Monday => 4, others => 10); -- lookup table for working hours with initialization
Types can have modifiers such as limited, abstract, private etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types.
Control structures
Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the "go to" statement, although also supported, is seldom needed.
-- while a is not equal to b, loop.
while a /= b loop
Ada.Text_IO.Put_Line ("Waiting");
end loop;
if a > b then
Ada.Text_IO.Put_Line ("Condition met");
else
Ada.Text_IO.Put_Line ("Condition not met");
end if;
for i in 1 .. 10 loop
Ada.Text_IO.Put ("Iteration: ");
Ada.Text_IO.Put (Integer'Image (i));
Ada.Text_IO.New_Line;
end loop;
loop
a := a + 1;
exit when a = 10;
end loop;
case i is
when 0 => Ada.Text_IO.Put ("zero");
when 1 => Ada.Text_IO.Put ("one");
when 2 => Ada.Text_IO.Put ("two");
-- case statements have to cover all possible cases:
when others => Ada.Text_IO.Put ("none of the above");
end case;
for aWeekday in Weekday'Range loop -- loop over an enumeration
Put_Line ( Weekday'Image(aWeekday) ); -- output string representation of an enumeration
if aWeekday in Working_Day then -- check of a subtype of an enumeration
Put_Line ( " to work for " &
Working_Hours'Image (Work_Load(aWeekday)) ); -- access into a lookup table
end if;
end loop;
Packages, procedures and functions
Among the parts of an Ada program are packages, procedures and functions. | Ada (programming language) | Wikipedia | 495 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Functions differ from procedures in that they must return a value. Function calls cannot be used "as a statement", and their result must be assigned to a variable. However, since Ada 2012, functions are not required to be pure and may mutate their suitably declared parameters or the global state.
Example:
Package specification (example.ads)
package Example is
type Number is range 1 .. 11;
procedure Print_and_Increment (j: in out Number);
end Example;
Package body (example.adb)
with Ada.Text_IO;
package body Example is
i : Number := Number'First;
procedure Print_and_Increment (j: in out Number) is
function Next (k: in Number) return Number is
begin
return k + 1;
end Next;
begin
Ada.Text_IO.Put_Line ( "The total is: " & Number'Image(j) );
j := Next (j);
end Print_and_Increment;
-- package initialization executed when the package is elaborated
begin
while i < Number'Last loop
Print_and_Increment (i);
end loop;
end Example;
This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing
gnatmake -z example.adb
Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block.
Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.
Pragmas
A pragma is a compiler directive that conveys information to the compiler to allow specific manipulating of compiled output. Certain pragmas are built into the language, while others are implementation-specific.
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions).
Generics | Ada (programming language) | Wikipedia | 440 | 1242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Technology | "Historical" languages | null |
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or "decays" into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2e and a mass of about 4 Da. For example, uranium-238 decays to form thorium-234.
While alpha particles have a charge of +2e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms.
Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles.
Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force.
Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, their electric charge of +2e, and their relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air.
Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production.
History
Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He²⁺ ions.
By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well
and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law.
Mechanism
The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of the nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size.
One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation
$$E = (m_\text{i} - m_\text{f} - m_\text{p})c^2,$$
where $m_\text{i}$ is the initial mass of the nucleus, $m_\text{f}$ is the mass of the nucleus after particle emission, and $m_\text{p}$ is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.
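To make the uranium-232 figure concrete, the following minimal Python sketch (added here, not part of the article) evaluates the equation above with standard tabulated atomic masses; using atomic rather than nuclear masses is a common shortcut because the electron masses cancel to a good approximation.

# Disintegration energy of U-232 -> Th-228 + alpha, masses in daltons (atomic mass units).
M_U232  = 232.037156   # parent atom
M_TH228 = 228.028741   # daughter atom
M_HE4   = 4.002602     # helium-4 atom (the emitted alpha plus two electrons)
MEV_PER_DA = 931.494   # mass-energy conversion factor, MeV per dalton

Q = (M_U232 - M_TH228 - M_HE4) * MEV_PER_DA
recoil_share = M_HE4 / (M_HE4 + M_TH228)   # fraction of Q carried by the recoiling nucleus

print(f"disintegration energy: {Q:.2f} MeV")   # about 5.4 MeV, as quoted above
print(f"recoil share: {recoil_share:.1%}")     # about 1.7%, i.e. less than 2%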
These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape.
Quantum tunneling
Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it:
It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed.
The theory supposes that the alpha particle can be considered an independent particle within a nucleus, that is in constant motion but held within the nucleus by the strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×10⁷ m/s within a nuclear diameter of approximately 10⁻¹⁴ m will collide with the barrier more than 10²¹ times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is about 2.01×10¹⁹ years.
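A rough back-of-the-envelope version of this picture, added here as a sketch under the simple one-body model described above, turns the quoted numbers into a per-collision escape probability:

import math

v = 1.5e7        # alpha speed inside the nucleus, m/s
d = 1e-14        # approximate nuclear diameter, m
t_half = 2.01e19 * 365.25 * 24 * 3600   # bismuth-209 half-life converted to seconds

collision_rate = v / d                  # about 1.5e21 collisions with the barrier per second
# If each collision tunnels out with probability p, the decay constant is
# lambda = p * collision_rate, and the half-life is ln(2) / lambda.
p_escape = math.log(2) / (collision_rate * t_half)

print(f"collision rate: {collision_rate:.1e} per second")
print(f"per-collision escape probability: {p_escape:.1e}")   # on the order of 1e-48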
The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except 146Sm.)
Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law.
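For reference, the law is commonly quoted in the form
$$\log_{10}\lambda = -a_1 \frac{Z}{\sqrt{E}} + a_2,$$
where $\lambda = \ln 2 / t_{1/2}$ is the decay constant, $Z$ is the atomic number, $E$ is the alpha particle's kinetic energy, and $a_1$, $a_2$ are empirical constants (this standard form is supplied here; the article's original markup for it was lost). Because $E$ appears under a square root inside a logarithm, a modest change in decay energy corresponds to an enormous change in half-life.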
Uses
Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm.
Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones).
Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes and were used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay.
Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly.
Toxicity
Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material and have a correspondingly very short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission.
Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons.
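As an illustration of how such a factor enters dose calculations (a standard radiological-protection relationship, added here rather than taken from the article), the equivalent dose $H$ is the absorbed dose $D$ multiplied by the radiation weighting factor $w_R$: with $w_R = 20$, an absorbed dose of 1 mGy of alpha radiation counts as $H = 20 \times 1\ \text{mGy} = 20\ \text{mSv}$, whereas the same absorbed dose of beta or gamma radiation would count as only 1 mSv.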
However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the mass of the alpha (about 4 Da) divided by the mass of the parent (typically about 200 Da) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collects on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations.
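A quick order-of-magnitude check of that estimate (arithmetic added here):
$$E_\text{recoil} \approx \frac{m_\alpha}{m_\text{parent}} E_\alpha \approx \frac{4}{200} \times 5\ \text{MeV} \approx 100\ \text{keV},$$
some four to five orders of magnitude above the electron-volt-scale energies of chemical bonds.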
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden.
The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's Difference Engine, which was a design for a simpler mechanical calculator.
The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837.
Design
Babbage's first attempt at a mechanical computing device, the Difference Engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project.
During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833.
The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.6 kB). An arithmetic unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially (1838) it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. Later drawings (1858) depict a regularised grid layout. Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
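The 16.6 kB figure is a straightforward unit conversion (worked here for clarity): $1000 \times 40$ decimal digits at $\log_2 10 \approx 3.32$ bits per digit gives roughly $132{,}900$ bits, or about 16.6 kB.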
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. Babbage developed some two dozen programs for the analytical engine between 1837 and 1840, and one program later. These programs treat polynomials, iterative formulas, Gaussian elimination, and Bernoulli numbers.
In 1842, the Italian mathematician Luigi Federico Menabrea published a description of the engine in French, based on lectures Babbage gave when he visited Turin in 1840. In 1843, the description was translated into English and extensively annotated by Ada Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine (widely considered to be the first complete computer program), she has been described as the first computer programmer.
Construction
Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871.
In 1878, a committee of the British Association for the Advancement of Science described the analytical engine as "a marvel of mechanical ingenuity", but recommended against constructing it. The committee acknowledged the usefulness and value of the machine, but could not estimate the cost of building it, and were unsure whether the machine would function correctly after being built.
Intermittently from 1880 to 1910, Babbage's son Henry Prevost Babbage was constructing a part of the mill and the printing apparatus. In 1910, it was able to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's "analytical engine mill" is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: "perhaps for a first machine ten (columns) would do, with fifteen wheels in each". Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. "It is only a question of cards and time", wrote Henry Babbage in 1888, "... and there is no reason why (twenty thousand) cards should not be used if necessary, in an analytical engine for the purposes of the mathematician".
In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the analytical engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time.
In October 2010, John Graham-Cumming started a "Plan 28" campaign to raise funds by "public subscription" to enable serious historical and academic study of Babbage's plans, with a view to then build and test a fully working virtual design which will then in turn enable construction of the physical analytical engine. As of May 2016, actual construction had not been attempted, since no consistent understanding could yet be obtained from Babbage's original design drawings. In particular it was unclear whether it could handle the indexed variables which were required for Lovelace's Bernoulli program. By 2017, the "Plan 28" effort reported that a searchable database of all catalogued material was available, and an initial review of Babbage's voluminous Scribbling Books had been completed.
Many of Babbage's original drawings have been digitised and are publicly available online.
Instruction set
Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided.
Allan G. Bromley has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete:
...the cards could be ordered to move forward and reverse (and hence to loop)...
The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs.
There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage's sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user.
In their emulator of the engine, Fourmilab say:
The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again.
This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as:
N0 6
N1 1
N2 1
×
L1
L0
S1
–
L0
L2
S0
L2
L0
CB?11
where the CB is the conditional branch instruction or "combination card" used to make the control flow jump, in this case backward by 11 cards.
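To see what the listing computes, the following rough Python rendering is added here for illustration; it assumes the Fourmilab emulator's convention that the run-up lever is set when a subtraction produces a negative result.

# Variables N0, N1, N2 as initialised by the three number cards.
v0, v1, v2 = 6, 1, 1
while True:
    v1 = v1 * v0              # x, L1, L0, S1 : multiply the running product by the counter
    v0 = v0 - v2              # -, L0, L2, S0 : decrement the counter by one
    run_up = (v2 - v0) < 0    # L2, L0        : lever set if this subtraction goes negative
    if not run_up:            # CB?11         : while the lever is set, jump back 11 cards
        break
print(v1)                     # 720, i.e. 6! - the factorial of the value on the first card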
Influence
Predicted influence
Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his Passages from the Life of a Philosopher, "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the shortest time?"
Computer science
From 1872, Henry continued diligently with his father's work, and then intermittently after his retirement in 1875.
Percy Ludgate wrote about the engine in 1914 and published his own design for an analytical engine in 1909. It was drawn up in detail, but never built, and the drawings have never been found. Ludgate's engine would be much smaller than Babbage's, and hypothetically would be capable of multiplying two 20-decimal-digit numbers in about six seconds.
In his work Essays on Automatics (1914) Leonardo Torres Quevedo, inspired by Babbage, designed a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also contains the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed and the results printed automatically.
Vannevar Bush's paper Instrumental Analysis (1936) included several references to Babbage's work. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Despite this groundwork, Babbage's work fell into historical obscurity, and the analytical engine was unknown to builders of electromechanical and electronic computing machines in the 1930s and 1940s when they began their work, resulting in the need to re-invent many of the architectural innovations Babbage had proposed. Howard Aiken, who built the quickly-obsoleted electromechanical calculator, the Harvard Mark I, between 1937 and 1945, praised Babbage's work likely as a way of enhancing his own stature, but knew nothing of the analytical engine's architecture during the construction of the Mark I, and considered his visit to the constructed portion of the analytical engine "the greatest disappointment of my life". The Mark I showed no influence from the analytical engine and lacked the analytical engine's most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly similarly were not aware of the details of Babbage's analytical engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC.
Comparison to other early computers
If the analytical engine had been built, it would have been digital, programmable and Turing-complete. It would, however, have been very slow. Luigi Federico Menabrea reported in Sketch of the Analytical Engine: "Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes".
By comparison, the Harvard Mark I could perform the same task in just six seconds (though it is debatable whether that computer was Turing-complete; the ENIAC, which was, would also have been faster). A modern CPU could do the same thing in under a billionth of a second.
In popular culture
The cyberpunk novelists William Gibson and Bruce Sterling co-authored a steampunk novel of alternative history titled The Difference Engine in which Babbage's difference and analytical engines became available to Victorian society. The novel explores the consequences and implications of the early introduction of computational technology.
Moriarty by Modem, a short story by Jack Nimersheim, describes an alternative history where Babbage's analytical engine was indeed completed and had been deemed highly classified by the British government. The characters of Sherlock Holmes and Moriarty had in reality been a set of prototype programs written for the analytical engine. This short story follows Holmes as his program is implemented on modern computers and he is forced to compete against his nemesis yet again in the modern counterparts of Babbage's analytical engine.
A similar setting to The Difference Engine is used by Sydney Padua in the webcomic The Thrilling Adventures of Lovelace and Babbage. It features an alternative history where Ada Lovelace and Babbage have built the analytical engine and use it to fight crime at Queen Victoria's request. The comic is based on thorough research on the biographies of and correspondence between Babbage and Lovelace, which is then twisted for humorous effect.
The Orion's Arm online project features the Machina Babbagenseii, fully sentient Babbage-inspired mechanical computers. Each is the size of a large asteroid, only capable of surviving in microgravity conditions, and processes data at 0.5% the speed of a human brain.
Charles Babbage and Ada Lovelace appear in an episode of Doctor Who, "Spyfall Part 2", where the engine is displayed and referenced.
Abalone (via Spanish abulón, from Rumsen aulón) is a common name for any small to very large marine gastropod mollusc in the family Haliotidae, which once contained six genera but now contains only one genus, Haliotis. Other common names are ear shells, sea ears, and, now rarely, muttonfish or muttonshells in parts of Australia, ormer in the United Kingdom, perlemoen in South Africa, and pāua in New Zealand. The number of abalone species recognized worldwide ranges between 30 and 130, with over 230 species-level taxa described. The most comprehensive treatment of the family considers 56 species valid, with 18 additional subspecies.
The shells of abalone have a low, open spiral structure, and are characterized by several open respiratory pores in a row near the shell's outer edge. The thick inner layer of the shell is composed of nacre, which in many species is highly iridescent, giving rise to a range of strong, changeable colors which make the shells attractive to humans as ornaments, jewelry, and as a source of colorful mother-of-pearl.
The flesh of abalone is widely considered to be a delicacy, and is consumed raw or cooked by a variety of cuisines.
Description
Most abalone vary in size from about 20 mm (Haliotis pulcherrima) to 200 mm; the largest species, Haliotis rufescens, reaches about 30 cm.
The shell of abalone is convex, rounded to oval in shape, and may be highly arched or very flattened. The shell of the majority of species has a small, flat spire and two to three whorls. The last whorl, known as the body whorl, is auriform, meaning that the shell resembles an ear, giving rise to the common name "ear shell". Haliotis asinina has a somewhat different shape, as it is more elongated and distended. The shell of Haliotis cracherodii cracherodii is also unusual as it has an ovate form, is imperforate, shows an exserted spire, and has prickly ribs.
A mantle cleft in the shell impresses a groove in the shell, in which are the row of holes characteristic of the genus. These holes are respiratory apertures for venting water from the gills and for releasing sperm and eggs into the water column. They make up what is known as the selenizone, which forms as the shell grows. This series of eight to 38 holes is near the anterior margin. Only a small number is generally open. The older holes are gradually sealed up as the shell grows and new holes form. Each species has a typical number of open holes, between four and 10, in the selenizone. An abalone has no operculum. The aperture of the shell is very wide and nacreous.
The exterior of the shell is striated and dull. The color of the shell is very variable from species to species, which may reflect the animal's diet. The iridescent nacre that lines the inside of the shell varies in color from silvery white, to pink, red and green-red to deep blue, green to purple.
The animal has fimbriated head lobes and side lobes that are fimbriated and cirrated. The radula has small median teeth, and the lateral teeth are single and beam-like. They have about 70 uncini, with denticulated hooks, the first four very large. The rounded foot is very large in comparison to most molluscs. The soft body is coiled around the columellar muscle, and its insertion, instead of being on the columella, is on the middle of the inner wall of the shell. The gills are symmetrical and both well developed.
These snails cling solidly with their broad, muscular foot to rocky surfaces at sublittoral depths, although some species such as Haliotis cracherodii used to be common in the intertidal zone. Abalone reach maturity at a relatively small size. Their fecundity is high and increases with size; they lay from 10,000 to 11 million eggs at a time. The spermatozoa are filiform and pointed at one end, and the anterior end is a rounded head.
Distribution
The haliotid family has a worldwide distribution, along the coastal waters of every continent, except the Pacific coast of South America, the Atlantic coast of North America, the Arctic, and Antarctica. The majority of abalone species are found in cold waters, such as off the coasts of New Zealand, South Africa, Australia, Western North America, and Japan.
Structure and properties of the shell
The shell of the abalone is exceptionally strong and is made of microscopic calcium carbonate tiles stacked like bricks. Between the layers of tiles is a clingy protein substance. When the abalone shell is struck, the tiles slide instead of shattering and the protein stretches to absorb the energy of the blow. Material scientists around the world are studying this tiled structure for insight into stronger ceramic products such as body armor. The dust created by grinding and cutting abalone shell is dangerous; appropriate safeguards must be taken to protect people from inhaling these particles.
Diseases and pests
Abalone are subject to various diseases. The Victorian Department of Primary Industries said in 2007 that ganglioneuritis killed up to 90% of stock in affected regions. Abalone are also severe hemophiliacs, as their fluids will not clot in the case of a laceration or puncture wound. Members of the Spionidae of the polychaetes are known as pests of abalone.
Human use
Abalone has been harvested worldwide for centuries as a source of food and decorative items. Abalone shells and associated materials, like their claw-like pearls and nacre, have been used as jewelry and for buttons, buckles, and inlay. These shells have been found in archaeological sites around the world, ranging from 100,000-year-old deposits at Blombos Cave in South Africa to historic Chinese abalone middens on California's Northern Channel Islands. For at least 12,000 years, abalone were harvested to such an extent around the Channel Islands that shells in the area decreased in size four thousand years ago.
Farming
Farming of abalone began in the late 1950s and early 1960s in Japan and China. Since the mid-1990s, there have been many increasingly successful endeavors to commercially farm abalone for the purpose of consumption. Overfishing and poaching have reduced wild populations to such an extent that farmed abalone now supplies most of the abalone meat consumed. The principal abalone farming regions are China, Taiwan, Japan, and Korea. Abalone is also farmed in Australia, Canada, Chile, France, Iceland, Ireland, Mexico, Namibia, New Zealand, South Africa, Spain, Thailand, and the United States.
After trials in 2012, a commercial "sea ranch" was set up in Flinders Bay, Western Australia to raise abalone. The ranch is based on an artificial reef made up of 5,000 separate concrete abalone habitat units, which can host 400 abalone each. The reef is seeded with young abalone from an onshore hatchery.
The abalone feed on seaweed that grows naturally on the habitats; the ecosystem enrichment of the bay also results in growing numbers of dhufish, pink snapper, wrasse, and Samson fish among other species.
Consumption
Abalone have long been a valuable food source for humans in every area of the world where a species is abundant. The meat of this mollusc is considered a delicacy in certain parts of Latin America (particularly Chile), France, New Zealand, East Asia and Southeast Asia. In the Greater China region and among Overseas Chinese communities, abalone is commonly known as bao yu, and sometimes forms part of a Chinese banquet. In the same way as shark fin soup or bird's nest soup, abalone is considered a luxury item, and is traditionally reserved for celebrations.
As abalone became more popular and less common, the prices adjusted accordingly. In the 1920s, a restaurant-served portion of abalone, about 4 ounces, would cost (in inflation adjusted dollars) about US$7; by 2004, the price had risen to US$75. In the United States, prior to this time, abalone was predominantly eaten, gathered, and prepared by Chinese immigrants. Before that, abalone were collected to be eaten, and used for other purposes by Native American tribes. By 1900, laws were passed in California to outlaw the taking of abalone above the intertidal zone. This forced the Chinese out of the market and the Japanese perfected diving, with or without gear, to enter the market. Abalone started to become popular in the US after the Panama–Pacific International Exposition in 1915, which exhibited 365 varieties of fish with cooking demonstrations, and a 1,300-seat dining hall.
In Japan, live and raw abalone are used in awabi sushi, or served steamed, salted, boiled, chopped, or simmered in soy sauce. Salted, fermented abalone entrails are the main component of tottsuru, a local dish from Honshū. Tottsuru is mainly enjoyed with sake.
In South Korea, abalone is called Jeonbok (/juhn-bok/) and used in various recipes. Jeonbok porridge and pan-fried abalone steak with butter are popular but also commonly used in soups or ramyeon.
In California, abalone meat can be found on pizza, sautéed with caramelized mango, or in steak form dusted with cracker meal and flour.
Sport harvesting
Australia
Tasmania supplies about 25% of the yearly world abalone harvest. Around 12,500 Tasmanians recreationally fish for blacklip and greenlip abalone. For blacklip abalone, the size limit varies between for the southern end of the state and for the northern end of the state. Greenlip abalone have a minimum size of , except for an area around Perkins Bay in the north of the state where the minimum size is . With a recreational abalone licence, the bag limit is 10 per day, with a total possession limit of 20. Scuba diving for abalone is allowed, and has a rich history in Australia. (Scuba diving for abalone in the states of New South Wales and Western Australia is illegal; a free-diving catch limit of two is allowed).
Victoria has had an active abalone fishery since the late 1950s. The state is sectioned into three fishing zones, Eastern, Central and Western, with each fisher required a zone-allocated licence. Harvesting is performed by divers using surface-supplied air "hookah" systems operating from runabout-style, outboard-powered boats. While the diver seeks out colonies of abalone amongst the reef beds, the deckhand operates the boat, known as working "live" and stays above where the diver is working. Bags of abalone pried from the rocks are brought to the surface by the diver or by way of "shot line", where the deckhand drops a weighted rope for the catch bag to be connected then retrieved. Divers measure each abalone before removing from the reef and the deckhand remeasures each abalone and removes excess weed growth from the shell. Since 2002, the Victorian industry has seen a significant decline in catches, with the total allowable catch reduced from 1440 to 787 tonnes for the 2011/12 fishing year, due to dwindling stocks and most notably the abalone virus ganglioneuritis, which is fast-spreading and lethal to abalone stocks.
United States
Sport harvesting of red abalone is permitted with a California fishing license and an abalone stamp card. In 2008, the abalone card also came with a set of 24 tags. This was reduced to 18 abalone per year in 2014, and as of 2017 the limit has been reduced to 12, only nine of which may be taken south of Mendocino County. Legal-size abalone must be tagged immediately. Abalone may only be taken using breath-hold techniques or shorepicking; scuba diving for abalone is strictly prohibited. Taking of abalone is not permitted south of the mouth of San Francisco Bay. A size minimum of measured across the shell is in place. A person may be in possession of only three abalone at any given time.
As of 2017, abalone season is May to October, excluding July. Transportation of abalone may only legally occur while the abalone is still attached in the shell. Sale of sport-obtained abalone is illegal, including the shell. Only red abalone may be taken, as black, white, pink, flat, green, and pinto abalone are protected by law. In 2018, the California Fish and Game Commission closed recreational abalone season due to dramatically declining populations. That year, they extended the moratorium to last through April 2021. Afterwards, they extended the ban for another 5 years until April 2026.
An abalone diver is normally equipped with a thick wetsuit, including a hood, bootees, and gloves, and usually also a mask, snorkel, weight belt, abalone iron, and abalone gauge. Alternatively, the rock picker can feel underneath rocks at low tides for abalone. Abalone are mostly taken in depths from a few inches up to ; less common are freedivers who can work deeper than . Abalone are normally found on rocks near food sources such as kelp. An abalone iron is used to pry the abalone from the rock before it has time to fully clamp down. Divers dive from boats, kayaks, tube floats, or directly off the shore.
The largest abalone recorded in California is , caught by John Pepper somewhere off the coast of San Mateo County in September 1993.
The mollusc Concholepas concholepas is often sold in the United States under the name "Chilean abalone", though it is not an abalone, but a muricid.
New Zealand
In New Zealand, abalone is called pāua (from the Māori language). Haliotis iris (or blackfoot pāua) is the ubiquitous New Zealand pāua, the highly polished nacre of which is extremely popular as souvenirs with its striking blue, green, and purple iridescence. Haliotis australis and Haliotis virginea are also found in New Zealand waters, but are less popular than H. iris. Haliotis pirimoana is a small species endemic to Manawatāwhi / the Three Kings Islands that superficially resembles H. virginea.
Like all New Zealand shellfish, recreational harvesting of pāua does not require a permit provided catch limits, size restrictions, and seasonal and local restrictions set by the Ministry for Primary Industries (MPI) are followed. The legal recreational daily limit is 10 per diver, with a minimum shell length of for H. iris and for H. australis. In addition, no person may be in possession, even on land, of more than 20 pāua or more than of pāua meat at any one time. Pāua can only be caught by free-diving; it is illegal to catch them using scuba gear.
An extensive global black market exists in collecting and exporting abalone meat. This can be a particularly awkward problem where the right to harvest pāua can be granted legally under Māori customary rights. When such permits to harvest are abused, it is frequently difficult to police. The limit is strictly enforced by roving Ministry for Primary Industries fishery officers with the backing of the New Zealand Police. Poaching is a major industry in New Zealand with many thousands being taken illegally, often undersized. Convictions have resulted in seizure of diving gear, boats, and motor vehicles and fines and in rare cases, imprisonment.
South Africa
There are five species endemic to South Africa, namely H. midae, H. parva, H. spadicea, H. queketti and H. speciosa.
The largest abalone in South Africa, Haliotis midae, occurs along roughly two-thirds of the country's coastline. Abalone-diving has been a recreational activity for many years, but stocks are currently being threatened by illegal commercial harvesting. In South Africa, all persons harvesting this shellfish need permits that are issued annually, and no abalone may be harvested using scuba gear.
For the last few years, however, no permits have been issued for collecting abalone, but commercial harvesting still continues as does illegal collection by syndicates.
In 2007, because of widespread poaching of abalone, the South African government listed abalone as an endangered species according to the CITES section III appendix, which requests member governments to monitor the trade in this species. This listing was removed from CITES in June 2010 by the South African government and South African abalone is no longer subject to CITES trade controls. Export permits are still required, however.
The abalone meat from South Africa is prohibited for sale in the country to help reduce poaching; however, much of the illegally harvested meat is sold in Asian countries. As of early 2008, the wholesale price for abalone meat was approximately US$40.00 per kilogram. There is an active trade in the shells, which sell for more than US$1,400 per tonne.
Channel Islands, Brittany and Normandy
Ormers (Haliotis tuberculata) are considered a delicacy in the British Channel Islands as well as in adjacent areas of France, and are pursued with great alacrity by the locals. This, and a recent lethal bacterial disease, has led to a dramatic depletion in numbers since the latter half of the 19th century, and "ormering" is now strictly regulated to preserve stocks. The gathering of ormers is now restricted to a number of 'ormering tides', from 1 January to 30 April, which occur on the full or new moon and two days following. No ormers may be taken from the beach that are under in shell length. Gatherers are not allowed to wear wetsuits or even put their heads underwater. Any breach of these laws is a criminal offence and can lead to a fine of up to £5,000 or six months in prison. The demand for ormers is such that they led to the world's first underwater arrest, when Mr. Kempthorne-Leigh of Guernsey was arrested by a police officer in full diving gear when illegally diving for ormers.
Decorative items
The highly iridescent inner nacre layer of the shell of abalone has traditionally been used as a decorative item, in jewelry, buttons, and as inlay in furniture and musical instruments, such as on fret boards and binding of guitars. See article Najeonchilgi regarding Korean handicraft.
Indigenous use
Abalone has been an important staple in a number of Indigenous cultures around the world, specifically in Africa and on the Northwest American coast. The meat is a traditional food, and the shell is used to make ornaments; historically, the shells were also used as currency in some communities.
Threat of extinction
Abalone are one of the many classes of organism threatened with extinction due to overfishing and the acidification of oceans from recent higher levels of carbon dioxide, as reduced pH erodes their shells. In the 21st century, white, pink, and green abalone are on the United States federal endangered species list, and possible restoration sites have been proposed for the San Clemente Island and Santa Barbara Island areas. The possibility of farming abalone to be reintroduced into the wild has also been proposed, with these abalone having special tags to help track the population.
Species
The number of species that are recognized within the genus Haliotis has fluctuated over time, and depends on the source that is consulted. The number of recognized species range from 30 to 130. This list finds a compromise using the WoRMS database, plus some species that have been added, for a total of 57. The majority of abalone have not been rated for conservation status. Those that have been reviewed tend to show that the abalone in general is an animal that is declining in numbers, and will need protection throughout the globe.
Synonyms
Aromatic compounds or arenes are organic compounds "with a chemistry typified by benzene" and "cyclically conjugated."
The word "aromatic" originates from the past grouping of molecules based on odor, before their general chemical properties were understood. The current definition of aromatic compounds does not have any relation to their odor. Aromatic compounds are now defined as cyclic compounds satisfying Hückel's Rule.
Aromatic compounds have the following general properties:
Typically unreactive
Often non-polar and hydrophobic
High carbon-hydrogen ratio
Burn with a strong sooty yellow flame, due to high C:H ratio
Undergo electrophilic substitution reactions and nucleophilic aromatic substitutions
Arenes are typically split into two categories: benzoids, which contain a benzene derivative and follow the benzene ring model, and non-benzoids, which contain other aromatic cyclic derivatives. Aromatic compounds are commonly used in organic synthesis and are involved in many reaction types, including both additions and removals, as well as saturation and dearomatization.
Heteroarenes
Heteroarenes are aromatic compounds, where at least one methine or vinylene (-C= or -CH=CH-) group is replaced by a heteroatom: oxygen, nitrogen, or sulfur. Examples of non-benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes a single oxygen atom, and pyridine, a heterocyclic compound with a six-membered ring containing one nitrogen atom. Hydrocarbons without an aromatic ring are called aliphatic. Approximately half of compounds known in 2000 are described as aromatic to some extent.
Applications
Aromatic compounds are pervasive in nature and industry. Key industrial aromatic hydrocarbons are benzene, toluene, xylene called BTX. Many biomolecules have phenyl groups including the so-called aromatic amino acids.
Benzene ring model
Benzene, C6H6, is the least complex aromatic hydrocarbon, and it was the first one defined as such. Its bonding nature was first recognized independently by Joseph Loschmidt and August Kekulé in the 19th century. Each carbon atom in the hexagonal cycle has four electrons to share. One electron forms a sigma bond with the hydrogen atom, and one is used in covalently bonding to each of the two neighboring carbons. This leaves six electrons, shared equally around the ring in delocalized pi molecular orbitals the size of the ring itself. This represents the equivalent nature of the six carbon-carbon bonds, all of bond order 1.5. This equivalency can also be explained by resonance forms. The electrons are visualized as floating above and below the ring, with the electromagnetic fields they generate acting to keep the ring flat.
The circle symbol for aromaticity was introduced by Sir Robert Robinson and his student James Armit in 1925 and popularized starting in 1959 by the Morrison & Boyd textbook on organic chemistry. The proper use of the symbol is debated: some publications use it for any cyclic π system, while others use it only for those π systems that obey Hückel's rule. Some argue that, in order to stay in line with Robinson's originally intended proposal, the use of the circle symbol should be limited to monocyclic 6 π-electron systems. In this way the circle symbol for a six-center six-electron bond can be compared to the Y symbol for a three-center two-electron bond.
Benzene and derivatives of benzene
Benzene derivatives have from one to six substituents attached to the central benzene core. Examples of benzene compounds with just one substituent are phenol, which carries a hydroxyl group, and toluene with a methyl group. When there is more than one substituent present on the ring, their spatial relationship becomes important for which the arene substitution patterns ortho, meta, and para are devised. When reacting to form more complex benzene derivatives, the substituents on a benzene ring can be described as either activated or deactivated, which are electron donating and electron withdrawing respectively. Activators are known as ortho-para directors, and deactivators are known as meta directors. Upon reacting, substituents will be added at the ortho, para or meta positions, depending on the directivity of the current substituents to make more complex benzene derivatives, often with several isomers. Electron flow leading to re-aromatization is key in ensuring the stability of such products.
For example, three isomers exist for cresol because the methyl group and the hydroxyl group (both ortho para directors) can be placed next to each other (ortho), one position removed from each other (meta), or two positions removed from each other (para). Given that both the methyl and hydroxyl group are ortho-para directors, the ortho and para isomers are typically favoured. Xylenol has two methyl groups in addition to the hydroxyl group, and, for this structure, 6 isomers exist.
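As a small check of that last count (the ring-position bookkeeping below is added here for illustration and is not part of the article), the six xylenol isomers can be enumerated by fixing the hydroxyl group at one ring position and counting methyl placements up to the ring's mirror symmetry:

from itertools import combinations

# Xylenol: one OH and two CH3 groups on a benzene ring.
# Fix the OH at position 0; the two methyls occupy two of positions 1-5.
# Two placements are the same isomer if related by the reflection that
# keeps position 0 fixed, i.e. position i maps to (6 - i) mod 6.
seen = set()
for pair in combinations(range(1, 6), 2):
    mirror = tuple(sorted((6 - i) % 6 for i in pair))
    seen.add(min(tuple(pair), mirror))
print(len(seen))  # prints 6, matching the six xylenol isomers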
Arene rings can stabilize charges, as seen in, for example, phenol (C6H5–OH), which is acidic at the hydroxyl (OH), as charge on the oxygen (alkoxide –O−) is partially delocalized into the benzene ring.
Non-benzylic arenes
Although benzylic arenes are common, non-benzylic compounds are also exceedingly important. Any compound containing a cyclic portion that conforms to Hückel's rule and is not a benzene derivative can be considered a non-benzylic aromatic compound.
Monocyclic arenes
Of annulenes larger than benzene, [12]annulene and [14]annulene are weakly aromatic compounds and [18]annulene, Cyclooctadecanonaene, is aromatic, though strain within the structure causes a slight deviation from the precisely planar structure necessary for aromatic categorization. Another example of a non-benzylic monocyclic arene is the cyclopropenyl (cyclopropenium cation), which satisfies Hückel's rule with an n equal to 0. Note, only the cationic form of this cyclic propenyl is aromatic, given that neutrality in this compound would violate either the octet rule or Hückel's rule.
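Hückel's rule itself states that a planar, fully conjugated ring is aromatic when it holds 4n + 2 π electrons for some non-negative integer n. The short Python check below (added for illustration) applies the electron-count part of the rule to the systems mentioned above; planarity and full conjugation still have to be judged separately.

def satisfies_huckel(pi_electrons: int) -> bool:
    # Electron count is 4n + 2 for some integer n >= 0.
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

for name, n_pi in [("benzene", 6), ("cyclopropenyl cation", 2),
                   ("[18]annulene", 18), ("cyclobutadiene", 4)]:
    print(f"{name}: {satisfies_huckel(n_pi)}")
# benzene: True; cyclopropenyl cation: True (n = 0);
# [18]annulene: True (n = 4); cyclobutadiene: False (antiaromatic)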
Other non-benzylic monocyclic arenes include the aforementioned heteroarenes, which replace carbon atoms with other heteroatoms such as N, O or S. Common examples are the five-membered pyrrole and the six-membered pyridine, both of which contain a ring nitrogen atom.
Polycyclic aromatic hydrocarbons
Polycyclic aromatic hydrocarbons, also known as polynuclear aromatic compounds (PAHs) are aromatic hydrocarbons that consist of fused aromatic rings and do not contain heteroatoms or carry substituents. Naphthalene is the simplest example of a PAH. PAHs occur in oil, coal, and tar deposits, and are produced as byproducts of fuel burning (whether fossil fuel or biomass). As pollutants, they are of concern because some compounds have been identified as carcinogenic, mutagenic, and teratogenic. PAHs are also found in cooked foods. Studies have shown that high levels of PAHs are found, for example, in meat cooked at high temperatures such as grilling or barbecuing, and in smoked fish. They are also a good candidate molecule to act as a basis for the earliest forms of life. In graphene the PAH motif is extended to large 2D sheets.
Reactions
Aromatic ring systems participate in many organic reactions.
Substitution
In aromatic substitution, one substituent on the arene ring, usually hydrogen, is replaced by another reagent. The two main types are electrophilic aromatic substitution, when the active reagent is an electrophile, and nucleophilic aromatic substitution, when the reagent is a nucleophile. In radical-nucleophilic aromatic substitution, the active reagent is a radical.
An example of electrophilic aromatic substitution is the nitration of salicylic acid, where a nitro group is added para to the hydroxyl substituent.
Nucleophilic aromatic substitution involves displacement of a leaving group, such as a halide, on an aromatic ring. Aromatic rings are usually nucleophilic, but in the presence of electron-withdrawing groups aromatic compounds undergo nucleophilic substitution. Mechanistically, this reaction differs from a common SN2 reaction, because it occurs at a trigonal carbon atom (sp2 hybridization).
Hydrogenation
Hydrogenation of arenes creates saturated rings. The compound 1-naphthol is completely reduced to a mixture of decalin-ol isomers.
The compound resorcinol, hydrogenated with Raney nickel in the presence of aqueous sodium hydroxide, forms an enolate that is alkylated with methyl iodide to give 2-methyl-1,3-cyclohexanedione:
Dearomatization
In dearomatization reactions the aromaticity of the reactant is lost. In this regard, dearomatization is related to hydrogenation. A classic approach is the Birch reduction. The methodology is used in synthesis. | Aromatic compound | Wikipedia | 348 | 1313 | https://en.wikipedia.org/wiki/Aromatic%20compound | Physical sciences | Hydrocarbons | null |
In modern physics, antimatter is defined as matter composed of the antiparticles (or "partners") of the corresponding particles in "ordinary" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. Antimatter occurs in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form antiatoms. Minuscule numbers of antiparticles can be generated at particle accelerators, but total artificial production has been only a few nanograms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. Nonetheless, antimatter is an essential component of widely available applications related to beta decay, such as positron emission tomography, radiation therapy, and industrial imaging.
In theory, a particle and its antiparticle (for example, a proton and an antiproton) have the same mass, but opposite electric charge, and other differences in quantum numbers.
A collision between any particle and its antiparticle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle–antiparticle pairs. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is usually proportional to the total mass of the collided matter and antimatter, in accordance with the notable mass–energy equivalence equation, E = mc2.
Antiparticles bind with each other to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced, albeit with difficulty, and are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. | Antimatter | Wikipedia | 472 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles is hypothesised to have occurred is called baryogenesis.
Definitions
Antimatter particles carry the same charge as matter particles, but of opposite sign. That is, an antiproton is negatively charged and an antielectron (positron) is positively charged. Neutrons do not carry a net charge, but their constituent quarks do. Protons and neutrons have a baryon number of +1, while antiprotons and antineutrons have a baryon number of –1. Similarly, electrons have a lepton number of +1, while that of positrons is –1. When a particle and its corresponding antiparticle collide, they are both converted into energy.
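As a minimal illustration of this sign-flipping bookkeeping (not from the article), the toy Python snippet below negates the additive quantum numbers of a particle to obtain its antiparticle and checks that a particle–antiparticle pair sums to zero; the particle data are ordinary textbook values.

```python
# Additive quantum numbers: (electric charge, baryon number, lepton number)
particles = {
    "proton":   (+1, +1, 0),
    "neutron":  ( 0, +1, 0),
    "electron": (-1,  0, +1),
}

def antiparticle(q):
    # An antiparticle carries the opposite sign of every additive quantum number.
    return tuple(-x for x in q)

for name, q in particles.items():
    anti = antiparticle(q)
    pair_total = tuple(a + b for a, b in zip(q, anti))
    print(f"{name}: {q}  anti: {anti}  pair total: {pair_total}")  # always (0, 0, 0)
```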
The French term for "made of or pertaining to antimatter", contraterrène, led to the initialism "C.T." and the science fiction term "seetee", as used in such novels as Seetee Ship.
Conceptual history
The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.
The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898. He hypothesized antiatoms as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and, like the previous ideas, they differed from the modern concept of antimatter in that they ascribed negative gravity to it. | Antimatter | Wikipedia | 467 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. Although Dirac had laid the groundwork for the existence of these "antielectrons", he initially failed to pick up on the implications contained within his own equation. He freely gave the credit for that insight to J. Robert Oppenheimer, whose seminal paper "On the Theory of Electrons and Protons" (14 February 1930) drew on Dirac's equation and argued for the existence of a positively charged electron (a positron), which, as a counterpart to the electron, should have the same mass as the electron itself. This meant that it could not be, as Dirac had in fact suggested, a proton. Dirac further postulated the existence of antimatter in a 1931 paper that referred to the positron as an "anti-electron". The positron was discovered by Carl D. Anderson in 1932 and named from "positive electron". Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.
The Feynman–Stueckelberg interpretation states that antimatter and antiparticles behave exactly like regular particles travelling backward in time. This concept is used in modern particle physics, notably in Feynman diagrams.
Notation
One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by positive and negative electric charge. Thus, the electron and positron are denoted simply as e− and e+, respectively. To prevent confusion, however, the two conventions are never mixed. | Antimatter | Wikipedia | 445 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Properties
There is no difference in the gravitational behavior of matter and antimatter; in other words, antimatter falls down when dropped, not up. This was confirmed using a thin, very cold gas of thousands of antihydrogen atoms confined in a vertical shaft surrounded by superconducting electromagnetic coils, which create a magnetic bottle that keeps the antimatter from coming into contact with matter and annihilating. The researchers then gradually weakened the magnetic fields and detected the antiatoms with two sensors as they escaped and annihilated. Most of the anti-atoms came out of the bottom opening, and only one-quarter out of the top.
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to those of hydrogen, confirmed the validity of quantum mechanics for antimatter.
Origin and asymmetry
Most things observable from the Earth seem to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable. | Antimatter | Wikipedia | 350 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays striking Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c2).
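The 511 keV figure is simply the electron rest energy. As a quick sanity check (an illustrative sketch, not from the article), the Python snippet below evaluates the electron's rest energy from standard constants:

```python
m_e = 9.109e-31      # electron mass in kg
c = 2.998e8          # speed of light in m/s
eV = 1.602e-19       # joules per electron volt

rest_energy_J = m_e * c ** 2
rest_energy_keV = rest_energy_J / eV / 1e3
print(f"{rest_energy_J:.3e} J  ~ {rest_energy_keV:.0f} keV")  # ~511 keV
```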
Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the Galactic Center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the Galactic Center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters.
In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. | Antimatter | Wikipedia | 467 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Antimatter quantum interferometry was first demonstrated in 2018 in the Positron Laboratory (L-NESS) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi.
Natural production
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research by the American Astronomical Society discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in terrestrial gamma ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism that produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction.
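As an order-of-magnitude illustration of that threshold (not from the article), the sketch below estimates the temperature at which typical thermal energies reach the electron rest energy, so that electron–positron pairs are produced copiously; the constants are standard values and the criterion kT of order the rest energy is only a rough rule of thumb.

```python
k_B = 1.381e-23      # Boltzmann constant, J/K
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s

# Rough threshold: thermal energy comparable to the electron rest energy.
T_threshold = m_e * c ** 2 / k_B
print(f"~{T_threshold:.1e} K")  # on the order of 6e9 K
```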
Recent observations indicate that black holes and neutron stars produce vast amounts of positron-electron plasma via their jets.
Observation in cosmic rays | Antimatter | Wikipedia | 369 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead attributed to production by cyclic processes at high energies. For instance, electron-positron pairs may be formed in pulsars, as the rotation cycle of a magnetized neutron star shears electron-positron pairs from the star's surface. There the antimatter forms a wind that crashes upon the ejecta of the progenitor supernova. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The former ejection of matter in the outer shock wave and the latter production of antimatter in the reverse shock wave are steps in a space weather cycle.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of the positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been interpreted as possibly being due to positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy. | Antimatter | Wikipedia | 503 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
There is an ongoing search for larger antimatter nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. The detection of natural antihelium could imply the existence of large antimatter structures such as an antistar. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10−6 for the antihelium-to-helium flux ratio. AMS-02 revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is trying to rule out contamination.
Artificial production
Positrons
Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in large numbers. A laser drove electrons through a gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; newer simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source.
In 2023, the production of the first electron-positron beam-plasma was reported by a collaboration led by researchers at the University of Oxford working with the High-Radiation to Materials (HRMT) facility at CERN. The beam demonstrated the highest positron yield achieved so far in a laboratory setting. The experiment used the 440 GeV proton beam from the Super Proton Synchrotron to irradiate a particle converter composed of carbon and tantalum, generating electron-positron pairs via a particle shower process. The produced pair beams have a volume that fills multiple Debye spheres and are thus able to sustain collective plasma oscillations.
Antiprotons, antineutrons, and antinuclei | Antimatter | Wikipedia | 448 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having electric charge and magnetic moment opposite to those of the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.
In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.
Antihydrogen atoms
In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP. | Antimatter | Wikipedia | 421 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
In 1999, CERN activated the Antiproton Decelerator, a device capable of slowing antiprotons to low energies – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that it had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, about 0.1% of the original amount.
The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons at much lower energies. While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.
In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series). | Antimatter | Wikipedia | 495 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.
On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of antihydrogen.
In 2016, a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the Antiproton Decelerator and cools them to 90 keV, which is "cold" enough to study; the machine slows the antiprotons further in a small storage ring before delivering them to the experiments. More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter.
The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, its facilities are capable of producing ten million antiprotons per minute. Assuming 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6×1023 atoms of antihydrogen). However, CERN produces only 1% of the antimatter that Fermilab does, and neither facility is designed to produce antimatter. According to Gerald Jackson, using technology already in use today we are capable of producing and capturing 20 grams of antimatter particles per year at a yearly cost of 670 million dollars per facility. | Antimatter | Wikipedia | 509 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
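The 100-billion-year figure follows directly from the quoted production rate. A minimal back-of-the-envelope check (illustrative only; the rate is the one quoted above and the target is one mole of hydrogen atoms):

```python
rate_per_minute = 1e7                 # antiprotons per minute (quoted CERN figure)
atoms_per_gram_H = 6.022e23           # one mole of hydrogen is about one gram

minutes = atoms_per_gram_H / rate_per_minute
years = minutes / (60 * 24 * 365.25)
print(f"~{years:.1e} years")          # on the order of 1e11 years (~100 billion)
```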
Antihelium
Antihelium-3 nuclei were first observed in the 1970s in proton–nucleus collision experiments by Y. Prockoshkin's group at the Institute for High Energy Physics (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) from such collisions.
The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3.
Preservation
Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.
In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation. | Antimatter | Wikipedia | 407 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Cost
Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated that $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about one billionth of a gram (the amount used so far for particle/antiparticle collisions). By comparison, the cost of the Manhattan Project, which produced the first atomic weapon, has been estimated at $23 billion in inflation-adjusted 2007 dollars.
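The per-gram figure quoted for positrons is simply a unit conversion of Smith's estimate. A tiny illustrative check (not from the article):

```python
cost_usd = 250e6          # estimated cost of the production campaign
mass_g = 10e-3            # 10 milligrams of positrons, in grams

cost_per_gram = cost_usd / mass_g
print(f"${cost_per_gram:,.0f} per gram")  # $25,000,000,000 per gram
```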
Several studies funded by NASA Innovative Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately the belts of gas giants like Jupiter, ideally at a lower cost per gram.
Uses
Medical
Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy.
Fuel
Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft. | Antimatter | Wikipedia | 431 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (about 9×1016 J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction, or about 8×1013 J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6×1014 J/kg for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×1017 J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc2), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
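A short worked check of these numbers (an illustrative sketch, not from the article; the constants are standard values and one megaton of TNT is taken as 4.184×1015 J):

```python
c = 2.998e8                  # speed of light, m/s
mass_total_kg = 2.0          # 1 kg of matter + 1 kg of antimatter
MT_TNT = 4.184e15            # joules per megaton of TNT

energy_J = mass_total_kg * c ** 2
print(f"{energy_J:.2e} J")                 # ~1.8e17 J (180 PJ)
print(f"{energy_J / MT_TNT:.0f} Mt TNT")   # ~43 megatons
```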
Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that, from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another is about 0.74).
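That final fraction follows from the percentages just quoted. A one-line illustrative check (not from the article):

```python
direct_to_neutrinos = 0.22        # fraction of charged-pion energy carried directly by decay neutrinos
to_muons = 0.78                   # fraction of charged-pion energy carried by the muons
muon_to_neutrinos = 2 / 3         # fraction of the muon energy that ends up in neutrinos

total = direct_to_neutrinos + to_muons * muon_to_neutrinos
print(f"{total:.2f}")             # ~0.74
```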
Weapons | Antimatter | Wikipedia | 428 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. Nonetheless, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself. | Antimatter | Wikipedia | 86 | 1317 | https://en.wikipedia.org/wiki/Antimatter | Physical sciences | Antimatter | null |
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in the ordinary matter encountered in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than a half-and-half mixture of matter and antimatter. The discovery of charge-parity (CP) violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, is only approximate. The question of how the formation of matter after the Big Bang produced a universe consisting almost entirely of matter remains open, and no explanation offered so far is fully satisfactory. | Antiparticle | Wikipedia | 433 | 1327 | https://en.wikipedia.org/wiki/Antiparticle | Physical sciences | Antimatter | null |