The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a definite flag, usually noting that reception was a problem and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag indicating that no problem was detected with the current element.
When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquiry character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT).
The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request that the sender stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most people do not realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. A sensible protocol for the use of such transmission flow control signals must be used, however, to avoid potential deadlock conditions.
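As a minimal sketch of how a receiver-driven XON/XOFF scheme can work (in Python; the transport object, buffer thresholds, and class name are illustrative assumptions, not part of ASCII or of DEC's implementation):

```python
# Hypothetical sketch of receiver-driven XON/XOFF flow control.
XON = 0x11   # DC1, control-Q: ask the sender to resume
XOFF = 0x13  # DC3, control-S: ask the sender to pause

class FlowControlledReceiver:
    HIGH_WATER = 1024  # pause the sender when the buffer grows past this
    LOW_WATER = 256    # resume the sender when the buffer drains below this

    def __init__(self, transport):
        self.transport = transport  # assumed to offer send_byte()
        self.buffer = bytearray()
        self.paused = False

    def on_byte(self, byte):
        self.buffer.append(byte)
        if not self.paused and len(self.buffer) >= self.HIGH_WATER:
            self.transport.send_byte(XOFF)  # signal: stop transmitting
            self.paused = True

    def drain(self, n):
        consumed, self.buffer = self.buffer[:n], self.buffer[n:]
        if self.paused and len(self.buffer) <= self.LOW_WATER:
            self.transport.send_byte(XON)   # signal: resume transmitting
            self.paused = False
        return bytes(consumed)
```

Real-world serial libraries expose the same convention directly; for example, pyserial enables it by passing xonxoff=True when opening a port.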
The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example, a packet may be framed as <DLE> <STX> <PAYLOAD> <DLE> <ETX>.
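A common refinement of this framing, sketched below under the assumption that any DLE occurring inside the payload is doubled ("byte stuffing") so it cannot be mistaken for a frame boundary:

```python
# Sketch of DLE byte stuffing for the framing described above.
DLE, STX, ETX = 0x10, 0x02, 0x03

def frame(payload: bytes) -> bytes:
    # Double every DLE in the payload, then add the frame delimiters.
    stuffed = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + stuffed + bytes([DLE, ETX])

def unframe(data: bytes) -> bytes:
    if not (data.startswith(bytes([DLE, STX])) and data.endswith(bytes([DLE, ETX]))):
        raise ValueError("missing frame delimiters")
    body = data[2:-2]
    return body.replace(bytes([DLE, DLE]), bytes([DLE]))  # undo the stuffing

assert unframe(frame(b"\x10\x02 raw data")) == b"\x10\x02 raw data"
```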
Miscellaneous codes.
Code 7 (BEL) is intended to cause an audible signal in the receiving terminal.
Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word; this is a feature of "asynchronous" communication. "Synchronous" communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.)
Code 0 (ASCII code name NUL) is a special case. On paper tape, it is the character formed when no holes are punched. It is convenient to treat this as a "fill" character with no meaning otherwise. Since a NUL position has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing, it is often used for padding in fixed-length records, to mark the end of a string, and formerly to give printing devices enough time to execute a control function.
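A small sketch of the fixed-length-record use (in Python; the 16-byte record size and field name are arbitrary assumptions for illustration):

```python
# Sketch: NUL as padding in fixed-length records, as described above.
RECORD_SIZE = 16

def pack_record(name: str) -> bytes:
    data = name.encode("ascii")
    if len(data) > RECORD_SIZE:
        raise ValueError("field too long")
    return data.ljust(RECORD_SIZE, b"\x00")  # pad with NUL bytes

def unpack_record(record: bytes) -> str:
    return record.rstrip(b"\x00").decode("ascii")  # strip NUL padding

rec = pack_record("ALICE")
assert len(rec) == RECORD_SIZE and unpack_record(rec) == "ALICE"
```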
Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is "all-bits-on" in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black "box" character useful for erasing text by overprinting with ink).
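The erasing behavior follows directly from the encoding: punching a hole sets a bit, and holes cannot be unpunched, so overpunching any character with DEL can only set every bit. A one-line check (in Python, purely illustrative):

```python
# Punching a hole corresponds to setting a bit; holes cannot be unpunched.
# Overpunching any 7-bit code with DEL (all bits set) therefore yields DEL.
DEL = 0b1111111  # 0x7F, code 127

for code in range(128):          # every 7-bit character
    assert code | DEL == DEL     # overpunching turns it into DEL
```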
Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL.
Many file systems do not allow control characters in filenames, as they may have reserved functions.
Carbon
Carbon is a chemical element; it has symbol C and atomic number 6. It is nonmetallic and tetravalent, meaning that its atoms are able to form up to four covalent bonds due to its valence shell exhibiting 4 electrons. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, carbon-12 and carbon-13 being stable, while carbon-14 is a radionuclide, decaying with a half-life of 5,700 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
Characteristics.
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. Carbon bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at 10.8 ± 0.2 MPa and 4,600 ± 300 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C₆(CO₂H)₆, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
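A representative reaction (taking magnetite as the ore; the exact stoichiometry depends on the oxide involved):

Fe₃O₄ + 4 C(s) → 3 Fe(s) + 4 CO(g)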
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
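Representative equations for these two reactions are:

C(s) + 2 S(s) → CS₂(l)
C(s) + H₂O(g) → CO(g) + H₂(g)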
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes.
Allotropes.
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen-based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0 Pa, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond for T < 400 K, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of flat hexagonal cells only, some of the cells of which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names "fullerene" and "buckyball" are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C₆₀ buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material, diamond.
In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds.
In the vapor phase, some of the carbon is in the form of highly reactive diatomic carbon, dicarbon (C₂). When excited, this gas glows green.
Occurrence.
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is 5.97 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
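As a rough check of that figure (a back-of-the-envelope calculation using the values quoted above):

\[
5.97 \times 10^{24}\ \mathrm{kg} \times 730 \times 10^{-6} \approx 4.36 \times 10^{21}\ \mathrm{kg} = 4.36 \times 10^{9}\ \mathrm{Gt},
\]

which matches the quoted 4360 million gigatonnes (1 Gt = 10¹² kg).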
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon; each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas are about 175 × 10¹² cubic metres (containing about 105 gigatonnes of carbon), but studies estimate another 900 × 10¹² cubic metres of "unconventional" deposits such as shale gas, representing about 540 gigatonnes of carbon.
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon between 500 and 2,500 Gt, or as high as 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, about one part per trillion of atmospheric carbon dioxide contains carbon-14.
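In standard notation, the neutron-capture reaction just described is:

¹⁴N + n → ¹⁴C + p

The resulting carbon-14 is then oxidized to carbon dioxide and mixed through the atmosphere.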
Carbon-rich asteroids are relatively abundant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids could be used in hypothetical space-based carbon mining, which may become possible in the future but is currently beyond existing technology.
Isotopes.
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 forms 98.93% of the carbon on Earth, while carbon-13 forms the remaining 1.07%. The concentration of carbon-12 is further increased in biological materials because biochemical reactions discriminate against carbon-13. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope carbon-13.
Carbon-14 is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β emission. Because of its relatively short half-life of 5,700 years, carbon-14 is virtually absent in ancient rocks. The amount of carbon-14 in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
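The dating calculation itself is a direct application of the exponential decay law. With N₀ the carbon-14 activity expected in a living organism, N the activity measured in the sample, and the half-life quoted above:

\[
t = \frac{T_{1/2}}{\ln 2}\,\ln\frac{N_0}{N}
\]

For example, a sample retaining one quarter of its original carbon-14 is two half-lives, or roughly 11,400 years, old.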
There are 15 known isotopes of carbon; the shortest-lived of these is carbon-8, which decays through proton emission and has a half-life of 3.5 × 10⁻²¹ s. The exotic carbon-19 exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation in stars.
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentrations that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
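In outline, the process runs through a fleeting beryllium-8 intermediate, which is why the three helium nuclei must effectively collide at once:

⁴He + ⁴He ⇌ ⁸Be
⁸Be + ⁴He → ¹²C + γ

The net conversion of three helium-4 nuclei into carbon-12 releases about 7.3 MeV.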
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernovae, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, CO, CO, and CO) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Carbon cycle.
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
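As a concrete instance of the fixation step, the familiar overall equation for oxygenic photosynthesis is:

6 CO₂ + 6 H₂O → C₆H₁₂O₆ + 6 O₂

with respiration and combustion running the conversion in the opposite direction and returning carbon dioxide to the atmosphere.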
Compounds.
Organic compounds.
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds ("see below"). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as "functional groups", confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable "organic" compounds), carbon obeys the octet rule and is "tetravalent", meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized "carbocations" (three bonds, positive charge), "radicals" (three bonds, neutral), "carbanions" (three bonds, negative charge) and "carbenes" (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen, it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965–1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Inorganic compounds.
Carbon-containing compounds that are associated with minerals, or that do not contain bonds to other carbon atoms, halogens, or hydrogen, are commonly treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO₂). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H₂CO₃), but, like most compounds with multiple single-bonded oxygens on a single carbon, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS₂) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an "organic" solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN⁻) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the cyanogen molecule ((CN)₂), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP⁻), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C₃O₂), the unstable dicarbon monoxide (C₂O), carbon trioxide (CO₃), cyclopentanepentone (C₅O₅), cyclohexanehexone (C₆O₆), and mellitic anhydride (C₁₂O₉). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C₂²⁻) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds.
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed "metalorganic" compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable icosahedral derivatives of the [B₁₂H₁₂]²⁻ unit, with one BH replaced with a CH⁺. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph₃PAu)₆C]²⁺ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η⁵-C₅Me₅)]²⁺, making it an "organic metallocene" in which an MeC³⁺ fragment is bonded to an η⁵-C₅Me₅⁻ fragment through all five of the carbons of the ring.
In the cases above, each of the bonds to carbon contains less than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
History and etymology.
The English name "carbon" comes from the Latin "carbo" for coal and charcoal, whence also comes the French "charbon", meaning charcoal. In German, Dutch and Danish, the names for carbon are "Kohlenstoff", "koolstof", and "kulstof" respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name "carbone" (Latin "carbonum") for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and to the realization that "amorphous carbon" is not strictly amorphous.
Production.
Graphite.
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite: amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Unlike in science, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous graphite is used for lower-value graphite products and is the lowest-priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
Diamond.
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, with care taken to prevent larger diamonds from being destroyed in the process, and the particles are then sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Applications.
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a moulding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers, is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have a structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall, and metal.
Diamonds.
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity. There is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as "bort"). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Precautions.
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to "Drosophila".
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant "Ricinus communis", cyanide (CN⁻), and carbon monoxide; and such essentials to life as glucose and protein.
Computer data storage
Computer data storage or digital data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation.
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate "the binary representation of the piece of information", or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
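To make the sizing arithmetic concrete, a small sketch (in Python; the sample string is an arbitrary stand-in):

```python
# Sketch: estimating storage for text at one byte per character, as in
# the Shakespeare example above. The 5 MB figure is taken from the text.
text = "To be, or not to be"
encoded = text.encode("ascii")   # one byte per character for plain ASCII
print(len(encoded), "bytes =", len(encoded) * 8, "bits")

# Five megabytes at one byte per character:
print(5_000_000 * 8)             # 40,000,000 bits, matching the text
```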
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, to "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value of 0 or 1), or to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried.
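A minimal sketch of the detect-and-retry idea (in Python, using the standard-library zlib.crc32; the 4-byte framing here is an illustrative assumption, not any particular storage format):

```python
import zlib

def attach_crc(payload: bytes) -> bytes:
    # Append a CRC-32 checksum so the reader can verify the payload.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(message: bytes) -> bytes:
    payload, received = message[:-4], int.from_bytes(message[-4:], "big")
    if zlib.crc32(payload) != received:
        raise ValueError("CRC mismatch: retry the transfer")
    return payload

msg = attach_crc(b"some stored data")
assert check_crc(msg) == b"some stored data"

corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip one bit
try:
    check_crc(corrupted)
except ValueError:
    pass  # the single-bit error is detected
```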
Data compression methods make it possible in many cases (such as a database) to represent a string of bits by a shorter bit string ("compress") and reconstruct the original string ("decompress") when needed. This uses substantially less storage (tens of percent) for many types of data at the cost of more computation (compressing and decompressing when needed). Analysis of the trade-off between storage cost savings and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
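A small illustration of the trade-off (in Python, using the standard-library zlib; the sample data is an arbitrary stand-in for repetitive database content):

```python
import zlib

original = b"row,value\n" * 10_000           # highly repetitive sample data
compressed = zlib.compress(original)          # costs CPU time, saves space

print(len(original), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == original  # lossless round trip
```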
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Hierarchy of storage.
Generally, the lower a storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, "memory" is usually fast but temporary semiconductor read-write memory, typically DRAM (dynamic RAM) or other such devices. "Storage" consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, "memory" has, depending on technology, been called "central memory", "core memory", "core storage", "drum", "main memory", "real storage", or "internal memory". Meanwhile, slower persistent storage devices have been referred to as "secondary storage", "external memory", or "auxiliary/peripheral storage".
Primary storage.
"Primary storage" (also known as "main memory", "internal memory", or "prime memory"), often referred to simply as "memory", is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage besides main large-capacity RAM: processor registers and processor cache.
Main memory is directly or indirectly connected to the central processing unit via a "memory bus". It is actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number, called the memory address, through the address bus to indicate the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
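A toy sketch of that address-translation step (in Python; the page size, page-table contents, and function names are illustrative assumptions, and a real MMU is hardware rather than code):

```python
PAGE_SIZE = 256

page_table = {0: 3, 1: 7}            # virtual page -> physical frame
memory = bytearray(PAGE_SIZE * 16)   # toy physical memory

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]         # a missing mapping (KeyError) is
    return frame * PAGE_SIZE + offset  # loosely analogous to a page fault

def write(virtual_addr: int, value: int) -> None:
    memory[translate(virtual_addr)] = value

def read(virtual_addr: int) -> int:
    return memory[translate(virtual_addr)]

write(0x0105, 42)                    # virtual page 1, offset 5
assert read(0x0105) == 42
assert translate(0x0105) == 7 * PAGE_SIZE + 5
```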
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile "secondary" storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of "random access").
Many types of "ROM" are not literally "read only", as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, "primary storage" and "secondary storage" in some uses refer to what was historically called, respectively, "secondary storage" and "tertiary storage".
Primary storage, including ROM, EEPROM, NOR flash, and RAM, is usually byte-addressable.
Secondary storage.
"Secondary storage" (also known as "external memory" or "auxiliary storage") differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time for HDDs is typically measured in milliseconds (thousandths of a second), and for SSDs in tens to hundreds of microseconds, while the access time for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the read/write head of an HDD reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
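The swapping behaviour can be illustrated with a toy least-recently-used (LRU) model; the frame count and the page reference sequence below are invented for the example, and real operating systems use far more elaborate policies:

```python
# Minimal sketch of demand paging with least-recently-used (LRU) eviction.
from collections import OrderedDict

class LRUPageCache:
    def __init__(self, frames: int):
        self.frames = frames
        self.resident = OrderedDict()  # page -> data; most recently used last

    def access(self, page: int) -> str:
        if page in self.resident:
            self.resident.move_to_end(page)        # mark as recently used
            return "hit"
        if len(self.resident) >= self.frames:
            self.resident.popitem(last=False)      # evict the LRU page
            # (a real OS would write a dirty page to the swap file here)
        self.resident[page] = object()             # "load" from secondary storage
        return "fault"

cache = LRUPageCache(frames=3)
for p in [1, 2, 3, 1, 4, 2]:
    print(p, cache.access(p))  # the final access to 2 faults: it was evicted for 4
```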
Secondary storage, including HDDs, ODDs and SSDs, is usually block-addressable.
Tertiary storage.
"Tertiary storage" or "tertiary memory" is a level below secondary storage. Typically, it involves a robotic mechanism which will "mount" (insert) and "dismount" removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as "nearline storage" because it is "near to online". The formal distinction between online, nearline, and offline storage is: online storage is immediately available for I/O; nearline storage is not immediately available, but can be brought online quickly without human intervention; offline storage is not immediately available, and requires human intervention to bring online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage.
"Off-line storage" is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful in case of disaster: if, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Characteristics of storage.
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Volatility.
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Performance.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
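As a rough illustration only, sequential read throughput can also be estimated from a short script; dedicated tools such as hdparm remain preferable, because the operating system's page cache can easily distort naive timings. The file path here is a placeholder:

```python
# Crude sequential-read throughput estimate; PATH must point to an existing
# large file. Results include page-cache effects unless the cache is cold.
import time

PATH = "/tmp/testfile"   # placeholder path
CHUNK = 1024 * 1024      # read in 1 MiB blocks

start, total = time.perf_counter(), 0
with open(PATH, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"{total / elapsed / 1e6:.1f} MB/s over {total} bytes")
```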
Security.
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Vulnerability and reliability.
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Error detection.
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience sharp drops in transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
Storage media.
Today, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
Semiconductor.
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both "volatile" and "non-volatile" forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic.
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is "non-volatile". The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes the form of hard disk drives and magnetic tape.
In early computers, magnetic storage was also used for primary storage, in forms such as magnetic drum memory and magnetic-core memory.
Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts.
Optical.
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is "non-volatile". The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use: read-only media (CD-ROM, DVD-ROM, BD-ROM), write-once media (CD-R, DVD-R, BD-R) and rewritable media (CD-RW, DVD-RW, DVD-RAM, BD-RE).
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is "non-volatile", "sequential access", slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.
Paper.
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Related technologies.
Redundancy.
While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), the malfunction of a whole storage device requires different solutions. The following solutions are commonly used and valid for most storage devices: device mirroring (replication) and RAID.
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
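The arithmetic behind that claim can be made concrete; the per-device failure probability below is an assumed figure chosen only for illustration:

```python
# If each device independently fails within a given repair window with
# probability p, a mirrored pair loses data only when both fail: roughly p*p.
p = 0.001                 # assumed per-device failure probability
print(f"single device: {p:.0e}, mirrored pair: {p * p:.0e}")  # 1e-03 vs 1e-06
```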
Network connectivity.
A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to the primary storage, which is shared between multiple processors to a lesser degree.
Robotic storage.
Large quantities of individual magnetic tapes, and optical or magneto-optical discs, may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, by analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy, automatically "migrating" long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are "retrieved" back to disk.
Conditional
Conditional (if then) may refer to:
Cone (disambiguation)
A cone is a basic geometrical shape.
Cone may also refer to:
Chemical equilibrium
In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
It is the subject of study of "equilibrium chemistry".
Historical introduction.
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and "α", "β", "σ", and "τ" are the stoichiometric coefficients of the respective reactants and products:
$$\alpha A + \beta B \rightleftharpoons \sigma S + \tau T$$
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:

$$\text{forward rate} = k_+ \{A\}^\alpha \{B\}^\beta$$
$$\text{backward rate} = k_- \{S\}^\sigma \{T\}^\tau$$

where A, B, S and T are active masses and $k_+$ and $k_-$ are rate constants. Since at equilibrium forward and backward rates are equal:

$$k_+ \{A\}^\alpha \{B\}^\beta = k_- \{S\}^\sigma \{T\}^\tau$$

and the ratio of the rate constants is also a constant, now known as an equilibrium constant:

$$K_c = \frac{k_+}{k_-} = \frac{\{S\}^\sigma \{T\}^\tau}{\{A\}^\alpha \{B\}^\beta}$$

By convention, the products form the numerator.
However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,

$$\mathrm{CH_3CO_2H + H_2O \rightleftharpoons CH_3CO_2^- + H_3O^+}$$

a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid, leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. "If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change". For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:

$$K_a = \frac{\{\mathrm{CH_3CO_2^-}\}\{\mathrm{H_3O^+}\}}{\{\mathrm{CH_3CO_2H}\}}$$

If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
The equilibrium constant is related to the standard Gibbs energy change for the reaction:

$$K = e^{-\Delta_\mathrm{r} G^\ominus / (RT)}$$
where "R" is the universal gas constant and "T" the temperature.
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, "K"c,

$$K_c = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta}$$

where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. "Kc" varies with ionic strength, temperature and pressure (or volume). Likewise "Kp" for gases depends on partial pressure. These constants are easier to measure and are the ones encountered in high-school chemistry courses.
Thermodynamics.
At constant temperature and pressure, one must consider the Gibbs free energy, "G", while at constant temperature and volume, one must consider the Helmholtz free energy, "A", for the reaction; and at constant internal energy and volume, one must consider the entropy, "S", for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, "G", for the reaction depends only on the extent of reaction: "ξ" (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of "G" with respect to "ξ" must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of "G" with respect to the extent of reaction, "ξ", must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products:
$$\alpha \mu_A + \beta \mu_B = \sigma \mu_S + \tau \mu_T$$

where "μ" is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent:

$$\mu_A = \mu_A^\ominus + RT \ln \{A\}$$

(where $\mu_A^\ominus$ is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce

$$dG = V\,dp - S\,dT + \sum_{i=1}^{k} \mu_i\,dN_i$$

Inserting $dN_i = \nu_i\,d\xi$ into the above equation gives a stoichiometric coefficient ($\nu_i$) and a differential that denotes the reaction occurring to an infinitesimal extent ($d\xi$). At constant pressure and temperature the above equations can be written as

$$\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = \sum_{i=1}^{k} \nu_i \mu_i = \Delta_\mathrm{r} G_{T,p}$$

which is the Gibbs free energy change for the reaction. This results in:

$$\Delta_\mathrm{r} G_{T,p} = \sigma \mu_S + \tau \mu_T - \alpha \mu_A - \beta \mu_B$$

By substituting the chemical potentials:

$$\Delta_\mathrm{r} G_{T,p} = (\sigma \mu_S^\ominus + \tau \mu_T^\ominus - \alpha \mu_A^\ominus - \beta \mu_B^\ominus) + RT \ln \frac{\{S\}^\sigma \{T\}^\tau}{\{A\}^\alpha \{B\}^\beta}$$

the relationship becomes:

$$\Delta_\mathrm{r} G_{T,p} = \Delta_\mathrm{r} G^\ominus_{T,p} + RT \ln \frac{\{S\}^\sigma \{T\}^\tau}{\{A\}^\alpha \{B\}^\beta}$$

where $\Delta_\mathrm{r} G^\ominus = \sigma \mu_S^\ominus + \tau \mu_T^\ominus - \alpha \mu_A^\ominus - \beta \mu_B^\ominus$, which is the standard Gibbs energy change for the reaction that can be calculated using thermodynamical tables.
The reaction quotient is defined as:

$$Q_\mathrm{r} = \frac{\{S\}^\sigma \{T\}^\tau}{\{A\}^\alpha \{B\}^\beta}$$

Therefore,

$$\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = \Delta_\mathrm{r} G^\ominus_{T,p} + RT \ln Q_\mathrm{r}$$

At equilibrium:

$$\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = 0$$

leading to:

$$0 = \Delta_\mathrm{r} G^\ominus_{T,p} + RT \ln K_\mathrm{eq}$$

and

$$\Delta_\mathrm{r} G^\ominus_{T,p} = -RT \ln K_\mathrm{eq}$$

Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
Addition of reactants or products.
For a reactional system at equilibrium: "Q"r = "K"eq; "ξ" = "ξ"eq.
In simplifications where the change in reaction quotient is solely due to the concentration changes, "Q"r is referred to as the mass-action ratio, and the ratio "Q"r/"K"eq is referred to as the disequilibrium ratio.
Note that activities and equilibrium constants are dimensionless numbers.
Treatment of activity.
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, "K"c, and an activity coefficient quotient, "Γ":

$$K = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta} \times \frac{\gamma_\mathrm{S}^\sigma \gamma_\mathrm{T}^\tau}{\gamma_\mathrm{A}^\alpha \gamma_\mathrm{B}^\beta} = K_c \Gamma$$

[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation, specific ion interaction theory or Pitzer equations, may be used. However this is not always possible. It is common practice to assume that "Γ" is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term "equilibrium constant" instead of the more accurate "concentration quotient". This practice will be followed here.
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, "f", is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by

$$\mu = \mu^\ominus + RT \ln \frac{f}{p^\ominus}$$

so the general expression defining an equilibrium constant is valid for both solution and gas phases.
Concentration quotients.
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by

$$I = \tfrac{1}{2} \sum_{i=1}^{N} c_i z_i^2$$

where "ci" and "zi" stand for the concentration and ionic charge of ion type "i", and the sum is taken over all the "N" types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that "Γ" is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
However, "K"c will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.
Metastable mixtures.
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3:

$$2\,\mathrm{SO_2} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{SO_3}$$
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions

$$\mathrm{CO_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{HCO_3^-} + \mathrm{H^+}$$

but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
Pure substances.
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains

$$\mathrm{CH_3CO_2H + H_2O \rightleftharpoons CH_3CO_2^- + H_3O^+}, \qquad K = \frac{\{\mathrm{CH_3CO_2^-}\}\{\mathrm{H_3O^+}\}}{\{\mathrm{CH_3CO_2H}\}\{\mathrm{H_2O}\}}$$

For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as

$$K = \frac{\{\mathrm{CH_3CO_2^-}\}\{\mathrm{H_3O^+}\}}{\{\mathrm{CH_3CO_2H}\}} = K_a$$

A particular case is the self-ionization of water

$$2\,\mathrm{H_2O} \rightleftharpoons \mathrm{H_3O^+} + \mathrm{OH^-}$$

Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as

$$K_w = [\mathrm{H^+}][\mathrm{OH^-}]$$
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. "K"w varies with variation in ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by "K"w[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:

$$2\,\mathrm{CO} \rightleftharpoons \mathrm{CO_2} + \mathrm{C}$$

for which the equation (without solid carbon) is written as:

$$K_c = \frac{[\mathrm{CO_2}]}{[\mathrm{CO}]^2}$$
Multiple equilibria.
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated:

$$\mathrm{H_2A \rightleftharpoons HA^- + H^+}, \qquad K_1 = \frac{[\mathrm{HA^-}][\mathrm{H^+}]}{[\mathrm{H_2A}]}$$
$$\mathrm{HA^- \rightleftharpoons A^{2-} + H^+}, \qquad K_2 = \frac{[\mathrm{A^{2-}}][\mathrm{H^+}]}{[\mathrm{HA^-}]}$$

"K"1 and "K"2 are examples of "stepwise" equilibrium constants. The "overall" equilibrium constant, "β"D, is the product of the stepwise constants:

$$\mathrm{H_2A \rightleftharpoons A^{2-} + 2\,H^+}, \qquad \beta_D = \frac{[\mathrm{A^{2-}}][\mathrm{H^+}]^2}{[\mathrm{H_2A}]} = K_1 K_2$$

Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants:

$$\mathrm{A^{2-} + H^+ \rightleftharpoons HA^-}, \qquad \beta_1 = \frac{[\mathrm{HA^-}]}{[\mathrm{A^{2-}}][\mathrm{H^+}]}$$
$$\mathrm{A^{2-} + 2\,H^+ \rightleftharpoons H_2A}, \qquad \beta_2 = \frac{[\mathrm{H_2A}]}{[\mathrm{A^{2-}}][\mathrm{H^+}]^2}$$

"β"1 and "β"2 are examples of association constants. Clearly $\beta_1 = 1/K_2$ and $\beta_2 = 1/\beta_D$; and $\log \beta_1 = \mathrm{p}K_2$ and $\log \beta_2 = \mathrm{p}K_1 + \mathrm{p}K_2$.
For multiple equilibrium systems, also see: theory of Response reactions.
Effect of temperature.
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation

$$\frac{d \ln K}{dT} = \frac{\Delta H^\ominus}{RT^2}$$

Thus, for exothermic reactions (Δ"H" is negative), "K" decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), "K" increases with an increase in temperature. An alternative formulation is

$$\frac{d \ln K}{d(1/T)} = -\frac{\Delta H^\ominus}{R}$$
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of "K" with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
Effect of electric and magnetic fields.
The effect of electric field on equilibrium has been studied by Manfred Eigen among others.
Types of equilibrium.
Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases.
In applications of equilibrium constants, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
Composition of a mixture.
When the only equilibrium is the formation of a 1:1 adduct, there are many ways in which the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
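As an illustrative sketch of such a calculation, the following computes the pH of a weak monoprotic acid HA from its equilibrium constant; the "K"a value (roughly that of acetic acid) and the concentration are assumptions chosen for the example:

```python
# ICE-style calculation for HA <=> H+ + A-: at equilibrium [H+] = [A-] = x
# and [HA] = C - x, so Ka = x^2 / (C - x), a quadratic in x.
import math

Ka = 1.76e-5   # assumed: approximately acetic acid
C = 0.10       # assumed analytical concentration, mol/L

x = (-Ka + math.sqrt(Ka ** 2 + 4 * Ka * C)) / 2   # positive root
print(f"[H+] = {x:.3e} M, pH = {-math.log10(x):.2f}")   # pH ≈ 2.88
```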
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
Mass-balance equations.
In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:

$$T_\mathrm{A} = [\mathrm{A}] + [\mathrm{HA}] + [\mathrm{H_2A}]$$
$$T_\mathrm{H} = [\mathrm{H}] + [\mathrm{HA}] + 2[\mathrm{H_2A}] - [\mathrm{OH}]$$

with $T_\mathrm{A}$ the total concentration of species A and $T_\mathrm{H}$ the total concentration of (acidic) hydrogen. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that $[\mathrm{HA}] = \beta_1[\mathrm{A}][\mathrm{H}]$, $[\mathrm{H_2A}] = \beta_2[\mathrm{A}][\mathrm{H}]^2$ and $[\mathrm{OH}] = K_w[\mathrm{H}]^{-1}$
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants.
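As a numerical sketch of this procedure, the mass balance in A can be used to eliminate [A], leaving a single equation in [H] that is easy to solve by bisection. The association constants below (carbonate-like, log β1 = 10, log β2 = 16) and the total concentrations are assumptions chosen for illustration:

```python
import math

b1, b2, Kw = 1e10, 1e16, 1e-14   # assumed association constants and Kw
TA, TH = 1.0e-3, 1.5e-3          # assumed total concentrations

def residual(H):
    A = TA / (1 + b1 * H + b2 * H * H)           # mass balance in A
    return H + b1 * A * H + 2 * b2 * A * H * H - Kw / H - TH

# The residual is increasing in [H] for these parameters, so bisection
# (on a logarithmic scale) is safe.
lo, hi = 1e-14, 1.0
for _ in range(200):
    mid = math.sqrt(lo * hi)
    lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)
print(f"free [H] = {lo:.3e} M, pH = {-math.log10(lo):.2f}")  # pH ≈ 6.0
```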
General expressions applicable to all systems with two reagents, A and B, would be

$$T_\mathrm{A} = [\mathrm{A}] + \sum_i p_i \beta_i [\mathrm{A}]^{p_i} [\mathrm{B}]^{q_i}$$
$$T_\mathrm{B} = [\mathrm{B}] + \sum_i q_i \beta_i [\mathrm{A}]^{p_i} [\mathrm{B}]^{q_i}$$

It is easy to see how this can be extended to three or more reagents.
Polybasic acids.
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq): it shows the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
Solution and precipitation.
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are hydroxo complexes such as $\mathrm{AlOH^{2+}}$ and $\mathrm{Al(OH)_2^+}$, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, $\mathrm{Al(OH)_4^-}$, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
Minimization of Gibbs energy.
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy "G" is at a minimum:

$$G = \sum_{j=1}^{m} \mu_j N_j$$

where μj is the chemical potential of molecular species "j", and "Nj" is the amount of molecular species "j". It may be expressed in terms of thermodynamic activity as:

$$\mu_j = \mu_j^\ominus + RT \ln A_j$$

where $\mu_j^\ominus$ is the chemical potential in the standard state, "R" is the gas constant, "T" is the absolute temperature, and "Aj" is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:

$$\sum_{j=1}^{m} a_{ij} N_j = b_i^0$$

where $a_{ij}$ is the number of atoms of element "i" in molecule "j" and $b_i^0$ is the total number of atoms of element "i", which is a constant, since the system is closed. If there are a total of "k" types of atoms in the system, then there will be "k" such equations. If ions are involved, an additional row is added to the $a_{ij}$ matrix specifying the respective charge on each molecule which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:

$$\mathcal{G} = G + \sum_{i=1}^{k} \lambda_i \left( \sum_{j=1}^{m} a_{ij} N_j - b_i^0 \right)$$

where the "λi" are the Lagrange multipliers, one for each element. This allows each of the "Nj" and "λi" to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by

$$0 = \frac{\partial \mathcal{G}}{\partial N_j} = \mu_j + \sum_{i=1}^{k} \lambda_i a_{ij} \qquad \text{and} \qquad 0 = \frac{\partial \mathcal{G}}{\partial \lambda_i} = \sum_{j=1}^{m} a_{ij} N_j - b_i^0$$

(For proof see Lagrange multipliers.) This is a set of ("m" + "k") equations in ("m" + "k") unknowns (the "Nj" and the "λi") and may, therefore, be solved for the equilibrium concentrations "Nj" as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
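As a numerical sketch of this approach, a general-purpose constrained minimizer can stand in for the Lagrange machinery. The example below uses SciPy's SLSQP method on an assumed two-species ideal-gas system, N2O4 ⇌ 2 NO2, with invented dimensionless standard potentials and feed; for this reaction the oxygen balance is proportional to the nitrogen balance, so a single constraint suffices:

```python
import numpy as np
from scipy.optimize import minimize

species = ["N2O4", "NO2"]
mu0 = np.array([-1.0, 0.0])   # assumed mu_j/(RT) in the standard state
a = np.array([2.0, 1.0])      # nitrogen atoms per molecule
b = 2.0                       # nitrogen atoms in a feed of 1 mol N2O4

def gibbs(n):                 # G/(RT) for an ideal-gas mixture at 1 bar
    n = np.clip(n, 1e-12, None)          # keep the logarithms finite
    return float(n @ (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(1e-12, None)] * 2,
               constraints={"type": "eq", "fun": lambda n: np.array([a @ n - b])})
print(dict(zip(species, res.x.round(3))))  # roughly {'N2O4': 0.710, 'NO2': 0.580}
```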
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of "k" atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:

$$\sum_{j=1}^{m} \nu_j R_j = 0$$

where νj is the stoichiometric coefficient for the "j"th molecule (negative for reactants, positive for products) and "Rj" is the symbol for the "j"th molecule, a properly balanced equation will obey:

$$\sum_{j=1}^{m} a_{ij} \nu_j = 0$$

Multiplying the first equilibrium condition by νj and using the above equation yields:

$$0 = \sum_{j=1}^{m} \nu_j \mu_j = \sum_{j=1}^{m} \nu_j \left( \mu_j^\ominus + RT \ln A_j \right)$$

As above, defining $\Delta G = \sum_j \nu_j \mu_j$ and $\Delta G^\ominus = \sum_j \nu_j \mu_j^\ominus$, the equilibrium condition becomes:

$$\Delta G^\ominus = -RT \ln K_c$$

where "Kc" is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials.
Combination
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a "k"-combination of a set "S" is a subset of "k" distinct elements of "S". So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has "n" elements, the number of "k"-combinations, denoted by $C(n,k)$ or $C_k^n$, is equal to the binomial coefficient

$$\binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots 1},$$

which can be written using factorials as $\frac{n!}{k!(n-k)!}$ whenever $k \le n$, and which is zero when $k > n$. This formula can be derived from the fact that each "k"-combination of a set "S" of "n" members has $k!$ permutations, so $P(n,k) = C(n,k)\,k!$ or $C(n,k) = P(n,k)/k!$. The set of all "k"-combinations of a set "S" is often denoted by $\binom{S}{k}$.
A combination is a selection of "n" things taken "k" at a time "without repetition". To refer to combinations in which repetition is allowed, the terms "k"-combination with repetition, "k"-multiset, or "k"-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination ("k" = 5) of cards from a 52 card deck ("n" = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
Number of "k"-combinations.
The number of "k"-combinations from a given set "S" of "n" elements is often denoted in elementary combinatorics texts by $C(n,k)$, or by a variation such as $C_k^n$, ${}_nC_k$, ${}^nC_k$, $C_{n,k}$ or even $C_n^k$ (the last form is standard in French, Romanian, Russian, and Chinese texts). The same number however occurs in many other mathematical contexts, where it is denoted by $\binom{n}{k}$ (often read as ""n" choose "k""); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define $\binom{n}{k}$ for all natural numbers "k" at once by the relation

$$(1+X)^n = \sum_{k \ge 0} \binom{n}{k} X^k,$$

from which it is clear that

$$\binom{n}{0} = \binom{n}{n} = 1,$$

and further

$$\binom{n}{k} = 0$$

for $k > n$.
To see that these coefficients count "k"-combinations from "S", one can first consider a collection of "n" distinct variables "X""s" labeled by the elements "s" of "S", and expand the product over all elements of "S":

$$\prod_{s \in S} (1 + X_s);$$

it has 2"n" distinct terms corresponding to all the subsets of "S", each subset giving the product of the corresponding variables "X""s". Now setting all of the "X""s" equal to the unlabeled variable "X", so that the product becomes $(1+X)^n$, the term for each "k"-combination from "S" becomes "X""k", so that the coefficient of that power in the result equals the number of such "k"-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to $(1+X)^n$, one can use (in addition to the basic cases already given) the recursion relation

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$

for 0 < "k" < "n", which follows from $(1+X)^n = (1+X)^{n-1}(1+X)$; this leads to the construction of Pascal's triangle.
For determining an individual binomial coefficient, it is more practical to use the formula
$$\binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)(k-2)\cdots 1}$$
The numerator gives the number of "k"-permutations of "n", i.e., of sequences of "k" distinct elements of "S", while the denominator gives the number of such "k"-permutations that give the same "k"-combination when the order is ignored.
When "k" exceeds "n"/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation

$$\binom{n}{k} = \binom{n}{n-k},$$

for 0 ≤ "k" ≤ "n". This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of "k"-combinations by taking the complement of such a combination, which is an ("n" − "k")-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:

$$\binom{n}{k} = \frac{n!}{k!(n-k)!},$$

where "n"! denotes the factorial of "n". It is obtained from the previous formula by multiplying denominator and numerator by $(n-k)!$, so it is certainly computationally less efficient than that formula.
The last formula can be understood directly, by considering the "n"! permutations of all the elements of "S". Each such permutation gives a "k"-combination by selecting its first "k" elements. There are many duplicate selections: any combined permutation of the first "k" elements among each other, and of the final ("n" − "k") elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:

$$\binom{n}{k} = \binom{n}{k-1}\,\frac{n-k+1}{k}, \qquad \binom{n}{k} = \binom{n-1}{k}\,\frac{n}{n-k}, \qquad \binom{n}{k} = \binom{n-1}{k-1}\,\frac{n}{k}.$$

Together with the basic cases $\binom{n}{0} = \binom{n}{n} = 1$, these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of "k"-combinations of sets of growing sizes, and of combinations with a complement of fixed size "n" − "k".
Example of counting combinations.
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
$$\binom{52}{5} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5 \times 4 \times 3 \times 2 \times 1} = 2{,}598{,}960.$$
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
$$\binom{52}{5} = \frac{52!}{5!\,47!} = \frac{52 \times 51 \times 50 \times 49 \times 48 \times 47!}{5 \times 4 \times 3 \times 2 \times 1 \times 47!} = \frac{52 \times 51 \times 50 \times 49 \times 48}{120} = 2{,}598{,}960.$$
Another alternative computation, equivalent to the first, is based on writing

$$\binom{n}{k} = \frac{n}{1} \times \frac{n-1}{2} \times \cdots \times \frac{n-k+1}{k},$$

which gives

$$\binom{52}{5} = \frac{52}{1} \times \frac{51}{2} \times \frac{50}{3} \times \frac{49}{4} \times \frac{48}{5} = 2{,}598{,}960.$$

When evaluated in the order 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur.
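An illustrative implementation of this evaluation order:

```python
# Multiply by (n - i) and divide by (i + 1) at each step; every intermediate
# value is itself a binomial coefficient, so the integer divisions are exact.
def binomial(n: int, k: int) -> int:
    if not 0 <= k <= n:
        return 0
    k = min(k, n - k)                 # symmetry: C(n, k) = C(n, n - k)
    result = 1
    for i in range(k):
        result = result * (n - i) // (i + 1)
    return result

print(binomial(52, 5))                # 2598960
```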
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
$$\binom{52}{5} = \frac{52!}{5!\,47!} = 2{,}598{,}960,$$

where the intermediate factorials are enormous (52! alone is a 68-digit number, about $8.07 \times 10^{67}$).
Enumerating "k"-combinations.
One can enumerate all "k"-combinations of a given set "S" of "n" elements in some fixed order, which establishes a bijection from an interval of $\binom{n}{k}$ integers with the set of those "k"-combinations. Assuming "S" is itself ordered, for instance "S" = { 1, 2, ..., "n" }, there are two natural possibilities for ordering its "k"-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to "S" will not change the initial part of the enumeration, but just add the new "k"-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with "k"-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the "k"-combination at a given place "i" in the enumeration can be computed easily from "i", and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
There are many ways to enumerate "k" combinations. One way is to track "k" index numbers of the elements selected, starting with {0 .. "k"−1} (zero-based) or {1 .. "k"} (one-based) as the first allowed "k"-combination. Then, repeatedly move to the next allowed "k"-combination by incrementing the smallest index number for which this would not create two equal index numbers, at the same time resetting all smaller index numbers to their initial values.
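An illustrative implementation of this index-incrementing scheme, with zero-based indices (assuming 1 ≤ "k" ≤ "n"):

```python
def k_combinations(n: int, k: int):
    idx = list(range(k))                 # first combination: (0, 1, ..., k-1)
    while True:
        yield tuple(idx)
        # increment the smallest index that will not collide with its
        # neighbor, resetting all smaller indices to their initial values
        for i in range(k):
            limit = idx[i + 1] if i + 1 < k else n
            if idx[i] + 1 < limit:
                idx[i] += 1
                idx[:i] = range(i)
                break
        else:
            return                       # no index can move: enumeration done

print(list(k_combinations(4, 2)))
# [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
```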
Number of combinations with repetition.
A "k"-combination with repetitions, or "k"-multicombination, or multisubset of size "k" from a set "S" of size "n" is given by a set of "k" not necessarily distinct elements of "S", where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of "k" elements from a set of "n" elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of "S" and think of the elements of "S" as "types" of objects, then we can let $x_i$ denote the number of elements of type "i" in a multisubset. The number of multisubsets of size "k" is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation:
$$x_1 + x_2 + \cdots + x_n = k.$$
If "S" has "n" elements, the number of such "k"-multisubsets is denoted by

$$\left(\!\!\binom{n}{k}\!\!\right),$$

a notation that is analogous to the binomial coefficient which counts "k"-subsets. This expression, "n" multichoose "k", can also be given in terms of binomial coefficients:

$$\left(\!\!\binom{n}{k}\!\!\right) = \binom{n+k-1}{k}.$$
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by $x_1$ "stars", a separator (a "bar"), then $x_2$ more stars, another separator, and so on. The total number of stars in this representation is "k" and the number of bars is "n" − 1 (since a separation into "n" parts needs "n" − 1 separators). Thus, a string of "k" + "n" − 1 (or "n" + "k" − 1) symbols (stars and bars) corresponds to a solution if there are "k" stars in the string. Any solution can be represented by choosing "k" out of "k" + "n" − 1 positions to place stars and filling the remaining positions with bars. For example, the solution $x_1 = 3$, $x_2 = 2$, $x_3 = 0$, $x_4 = 5$ of the equation $x_1 + x_2 + x_3 + x_4 = 10$ ("n" = 4 and "k" = 10) can be represented by
★★★|★★||★★★★★
The number of such strings is the number of ways to place 10 stars in 13 positions, $\binom{13}{10} = 286$, which is the number of 10-multisubsets of a set with 4 elements.
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for $n \ge 1$, $k \ge 0$,

$$\left(\!\!\binom{n}{k}\!\!\right) = \left(\!\!\binom{k+1}{n-1}\!\!\right).$$
This identity follows from interchanging the stars and bars in the above representation.
Example of counting multisubsets.
For example, if you have four types of donuts ("n" = 4) on a menu to choose from and you want three donuts ("k" = 3), the number of ways to choose the donuts with repetition can be calculated as
$$\left(\!\!\binom{4}{3}\!\!\right) = \binom{4+3-1}{3} = \binom{6}{3} = 20.$$
This result can be verified by listing all the 3-multisubsets of the set "S" = {1,2,3,4}: each multisubset corresponds to a choice of donuts, to a nonnegative integer solution $(x_1, x_2, x_3, x_4)$ of the equation $x_1 + x_2 + x_3 + x_4 = 3$, and to a stars-and-bars representation of that solution, as the sketch below illustrates.
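As an illustrative sketch, the enumeration can be reproduced with the Python standard library:

```python
# All 3-multisubsets (combinations with repetition) of {1, 2, 3, 4}.
from itertools import combinations_with_replacement

multisubsets = list(combinations_with_replacement([1, 2, 3, 4], 3))
print(len(multisubsets))   # 20, matching multichoose(4, 3) = C(6, 3)
print(multisubsets[:4])    # [(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 1, 4)]
```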
Number of "k"-combinations for all "k".
The number of "k"-combinations for all "k" is the number of subsets of a set of "n" elements. There are several ways to see that this number is 2"n". In terms of combinations, $\sum_{0 \le k \le n} \binom{n}{k} = 2^n$, which is the sum of the "n"th row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2"n" − 1, where each digit position is an item from the set of "n".
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:
{}; {1}; {2}; {3}; {1,2}; {1,3}; {2,3}; {1,2,3}

Representing these subsets (in the same order) as base 2 numerals, with the digit for card 1 in the least significant position:

000; 001; 010; 100; 011; 101; 110; 111
Probability: sampling a random combination.
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a "k"-combination efficiently from a population of size "n" is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of $\frac{k - \#\text{chosen}}{n - \#\text{visited}}$ (see Reservoir sampling). Another is to pick a random non-negative integer less than $\binom{n}{k}$ and convert it into a combination using the combinatorial number system.
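The sequential method just described can be sketched as follows; the population and sample size are arbitrary choices for the example:

```python
import random

def random_combination(population, k):
    """Keep each visited element with probability (still needed)/(still left)."""
    chosen, n = [], len(population)
    for visited, item in enumerate(population):
        if random.random() < (k - len(chosen)) / (n - visited):
            chosen.append(item)
    return chosen

print(random_combination(range(10), 3))  # e.g. [0, 4, 7]; every 3-subset equally likely
```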
Number of ways to put objects into bins.
A combination can also be thought of as a selection of "two" sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put "n" objects into "m" bins is given by the multinomial coefficient
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\,k_2! \cdots k_m!},$$
where "n" is the number of items, "m" is the number of bins, and $k_i$ is the number of items that go into bin "i".
One way to see why this equation holds is to first number the objects arbitrarily from "1" to "n" and put the objects with numbers $1, \ldots, k_1$ into the first bin in order, the objects with numbers $k_1 + 1, \ldots, k_1 + k_2$ into the second bin in order, and so on. There are $n!$ distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bin's contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of $k_1!\,k_2! \cdots k_m!$ distinct numberings, and the number of equivalence classes is $\frac{n!}{k_1!\,k_2! \cdots k_m!}$.
The binomial coefficient is the special case where "k" items go into the chosen bin and the remaining $n - k$ items go into the unchosen bin:

$$\binom{n}{k} = \binom{n}{k,\,n-k} = \frac{n!}{k!\,(n-k)!}$$
Software
Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications.
The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.
Software can generally be categorized into two main types: system software, which manages the computer's resources and provides a platform for other programs, and application software, which performs specific tasks for users.
The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.
The process of developing software involves several stages. The stages include software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.
History.
The first use of the word "software" to describe computer programs is credited to mathematician John Wilder Tukey in 1958.
The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers. Initially, hardware resources were more expensive than human resources. As programs became complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and expressed the underlying algorithms in more human-readable code. Early languages include Fortran, Lisp, and COBOL.
Types.
There are two main types of software: system software and application software.
Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the software to customers, often in exchange for a subscription fee. By 2023, SaaS products—which are usually delivered via a web application—had become the primary method that companies deliver applications.
Software development and maintenance.
Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.
|
Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time. New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.
Completing a software project involves various forms of expertise, not just in programming but also in testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.
Quality and security.
Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can be very serious including death. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.
The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable to exploitation. Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware.
Encoding and execution.
Programming languages.
Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse. Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.
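As a small, hedged illustration of that conciseness, the complete Python program below sums the squares of the integers 1 through 10; the equivalent hand-written machine code would run to dozens of processor-specific numeric instructions.

    # A complete high-level program: sum the squares of 1..10.
    total = sum(n * n for n in range(1, 11))
    print(total)  # prints 385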
Compilation, interpretation, and execution.
The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file, and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which typically makes interpreted programs 10 to 100 times slower than compiled ones.
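As a concrete, hedged illustration: CPython, the reference Python implementation, first compiles source code to an intermediate bytecode and then interprets that bytecode at run time; the standard dis module makes the intermediate instructions visible.

    import dis

    def area(width, height):
        return width * height

    # Show the bytecode instructions the interpreter will execute.
    dis.dis(area)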
Legal issues.
Liability.
Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.
Licenses.
Since the mid-1970s, software and its source code have been protected by copyright law, which vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but they are sometimes treated as a trade secret and concealed by such methods as non-disclosure agreements. A software copyright is often owned by the person or company that financed or made the software (depending on their contracts with the employees or contractors who helped to write it). Some software is in the public domain and has no restrictions on who can use it, copy or share it, or modify it; a notable example is software written by the United States Government. Free and open-source software also allows free use, sharing, and modification, perhaps with a few specified conditions. The use of some software is governed by an agreement (a software license) written by the copyright holder and imposed on the user. Proprietary software is usually sold under a restrictive license that limits its use and sharing. Some free software licenses require that modified versions be released under the same license, which prevents the software from being sold or distributed under proprietary restrictions.
Patents.
Patents give an inventor an exclusive, time-limited right to a novel product or process. Ideas about what software could accomplish are not protected by law; concrete implementations are instead covered by copyright law. In some countries, the claimed invention must also have an effect on the physical world for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case "State Street Bank & Trust Co. v. Signature Financial Group, Inc.", software patents were generally not recognized in the United States. In that case, the United States Court of Appeals for the Federal Circuit decided that business processes could be patented. Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.
Impact.
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". Software has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software.
Computer programming
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
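As a minimal illustration of an algorithm implemented in a high-level language, the sketch below expresses Euclid's greatest-common-divisor algorithm in Python.

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
        # until the remainder is zero; the survivor is the GCD.
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # prints 6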
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term "software development" is used for this larger overall process – with the terms "programming", "implementation", and "coding" reserved for the writing and editing of code per se. Sometimes software development is known as "software engineering", especially when it employs formal methods or follows an engineering design process.
History.
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the "Book of Ingenious Devices". In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in "A Manuscript on Deciphering Cryptographic Messages". He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
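The counting step at the heart of frequency analysis is easily expressed in modern terms; the Python sketch below is a present-day illustration, not Al-Kindi's original procedure.

    from collections import Counter

    def letter_frequencies(ciphertext: str):
        # In a simple substitution cipher, the most frequent ciphertext
        # letter likely stands for the most frequent plaintext letter.
        letters = [c for c in ciphertext.lower() if c.isalpha()]
        return Counter(letters).most_common()

    # "The quick brown fox" enciphered with a Caesar shift of three.
    print(letter_frequencies("Wkh txlfn eurzq ira"))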
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. The algorithm, conveyed through notes on a translation of Luigi Federico Menabrea's paper on the Analytical Engine, was mainly conceived by Lovelace, as can be discerned from her correspondence with Babbage. However, Babbage himself had written a program for the Analytical Engine in 1837. Lovelace was also the first to see a broader application for the Analytical Engine beyond mathematical calculations.
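A modern, hedged sketch of the underlying computation (not Lovelace's actual diagram, which targeted the Analytical Engine's mill and store) can be written in a few lines of Python using the standard recurrence for Bernoulli numbers.

    from fractions import Fraction
    from math import comb

    def bernoulli(n: int) -> Fraction:
        # B_0 = 1; for m >= 1,
        # B_m = -(sum of C(m+1, k) * B_k for k < m) / (m + 1).
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-s / (m + 1))
        return B[n]

    print([str(bernoulli(i)) for i in range(7)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']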
In the 1880s, Herman Hollerith invented the concept of storing "data" in machine-readable form. Later, a control panel (plugboard) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 was programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine language.
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
Compiler languages.
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler-related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed; in particular, COBOL was aimed at commercial data processing and Lisp at computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer and better at abstracting the code, making it easier to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
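As a minimal, hedged illustration, the programmer states a formula in infix notation and the language implementation decides which machine operations to emit.

    # The temperature conversion is written as in mathematics; the
    # compiler or interpreter translates it into arithmetic instructions.
    celsius = 100.0
    fahrenheit = celsius * 9 / 5 + 32  # 212.0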
Source code entry.
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming.
Quality requirements.
Whatever the approach to development may be, the final program must satisfy some fundamental properties; among the most important are reliability, robustness, usability, portability, maintainability, and efficiency. Using automated tests and fitness functions can help to maintain some of these attributes.
Readability of source code.
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability, and, most importantly, maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing, and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
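As a hedged example of the kind of simple transformation such studies consider, the hypothetical snippet below replaces terse names and a compound condition with descriptive names and a chained comparison.

    # Before: terse names and a compound condition obscure intent.
    def f(x, y, z):
        return x >= y and x <= z

    # After: the same logic, stated so it reads like the rule it encodes.
    def is_within_range(value, lower, upper):
        return lower <= value <= upper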
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include indentation style and use of whitespace, naming conventions, comments, and the decomposition of code into manageable units.